metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | opentelemetry-instrumentation-crewai | 0.52.4 | OpenTelemetry crewAI instrumentation | # OpenTelemetry CrewAI Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-crewai/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-crewai.svg">
</a>
This library allows tracing agentic workflows implemented with the [crewAI framework](https://github.com/crewAIInc/crewAI).
## Installation
```bash
pip install opentelemetry-instrumentation-crewai
```
## Example usage
```python
from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
CrewAIInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these payloads may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
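A minimal sketch of how an application might read this flag before deciding whether to record content on spans. The helper name and the exact parsing rule ("anything but an explicit `false` counts as enabled") are assumptions for illustration, not the library's actual implementation:

```python
import os

def should_trace_content() -> bool:
    # Hypothetical helper: treat any value other than an explicit "false"
    # as enabled, mirroring the opt-out behavior described above.
    return os.getenv("TRACELOOP_TRACE_CONTENT", "true").strip().lower() != "false"

os.environ["TRACELOOP_TRACE_CONTENT"] = "false"
print(should_trace_content())  # prints False: content capture disabled
```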
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"crewai; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-crewai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:46.559138 | opentelemetry_instrumentation_crewai-0.52.4.tar.gz | 338,576 | 2e/8f/1b0a6ea53bad7467d57d8ae4fbb006b70ce6e8eb9fe40402e06aaa3799c5/opentelemetry_instrumentation_crewai-0.52.4.tar.gz | source | sdist | null | false | 880881c20d8288aa7d97b075ce20238e | 524356efc0b457c6451596b776e56ae5d689aafc8078c69b7c3cc4e4ae76a2fa | 2e8f1b0a6ea53bad7467d57d8ae4fbb006b70ce6e8eb9fe40402e06aaa3799c5 | Apache-2.0 | [] | 54,500 |
2.4 | opentelemetry-instrumentation-cohere | 0.52.4 | OpenTelemetry Cohere instrumentation | # OpenTelemetry Cohere Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-cohere/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-cohere.svg">
</a>
This library allows tracing calls to any of Cohere's endpoints sent with the official [Cohere library](https://github.com/cohere-ai/cohere-python).
## Installation
```bash
pip install opentelemetry-instrumentation-cohere
```
## Example usage
```python
from opentelemetry.instrumentation.cohere import CohereInstrumentor
CohereInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these payloads may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"cohere; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-cohere"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:44.689370 | opentelemetry_instrumentation_cohere-0.52.4.tar.gz | 103,715 | 58/46/8325e7abb58cfd563cf48d91a9702942ee255e4686ca7d0aa8e2626a0f3a/opentelemetry_instrumentation_cohere-0.52.4.tar.gz | source | sdist | null | false | 64dcc24ae2f87db110111fe880eaa604 | a35bd9f55638d78f8cf231398852975c383a42f7f87a1c748b01965814b8ce3f | 58468325e7abb58cfd563cf48d91a9702942ee255e4686ca7d0aa8e2626a0f3a | Apache-2.0 | [] | 53,552 |
2.4 | opentelemetry-instrumentation-chromadb | 0.52.4 | OpenTelemetry Chroma DB instrumentation | # OpenTelemetry Chroma Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-chromadb/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-chromadb.svg">
</a>
This library allows tracing client-side calls to Chroma vector DB sent with the official [Chroma library](https://github.com/chroma-core/chroma).
## Installation
```bash
pip install opentelemetry-instrumentation-chromadb
```
## Example usage
```python
from opentelemetry.instrumentation.chromadb import ChromaInstrumentor
ChromaInstrumentor().instrument()
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"chromadb; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-chromadb"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:43.479368 | opentelemetry_instrumentation_chromadb-0.52.4.tar.gz | 142,118 | e1/6e/02224b625f7e82fc5d9cadbe80d96f41180a427d7ba6f1134b5e646a471e/opentelemetry_instrumentation_chromadb-0.52.4.tar.gz | source | sdist | null | false | 19cab3f8f63e6c2310ac8acc68385c2b | 1a359323a923ae959de3d8c283c564888a0dd7094f3b4c8347782d5e5a7dfec7 | e16e02224b625f7e82fc5d9cadbe80d96f41180a427d7ba6f1134b5e646a471e | Apache-2.0 | [] | 53,285 |
2.4 | opentelemetry-instrumentation-bedrock | 0.52.4 | OpenTelemetry Bedrock instrumentation | # OpenTelemetry Bedrock Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-bedrock/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-bedrock.svg">
</a>
This library allows tracing prompts and completions sent with [Boto3](https://github.com/boto/boto3) to any of AWS Bedrock's models.
## Installation
```bash
pip install opentelemetry-instrumentation-bedrock
```
## Example usage
```python
from opentelemetry.instrumentation.bedrock import BedrockInstrumentor
BedrockInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these payloads may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"anthropic>=0.17.0",
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"tokenizers>=0.13.0",
"boto3; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-bedrock"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:42.301522 | opentelemetry_instrumentation_bedrock-0.52.4.tar.gz | 149,890 | 7d/ea/1cdf7a8caa624043fe02eb90248e73269fae9ab275816f6fabcc34f60538/opentelemetry_instrumentation_bedrock-0.52.4.tar.gz | source | sdist | null | false | f49302f05a5636151fce0e3517098920 | d785b14338d475e85e3d2074840e71771c430f55efbad7ed27a9856e6d835771 | 7dea1cdf7a8caa624043fe02eb90248e73269fae9ab275816f6fabcc34f60538 | Apache-2.0 | [] | 53,955 |
2.4 | opentelemetry-instrumentation-anthropic | 0.52.4 | OpenTelemetry Anthropic instrumentation | # OpenTelemetry Anthropic Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-anthropic/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-anthropic.svg">
</a>
This library allows tracing Anthropic prompts and completions sent with the official [Anthropic library](https://github.com/anthropics/anthropic-sdk-python).
## Installation
```bash
pip install opentelemetry-instrumentation-anthropic
```
## Example usage
```python
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
AnthropicInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these payloads may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"anthropic; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-anthropic"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:40.254760 | opentelemetry_instrumentation_anthropic-0.52.4.tar.gz | 682,756 | d2/e1/d3ed879e317f307718b21da6962edc0da20f0e82bad810e837793a985720/opentelemetry_instrumentation_anthropic-0.52.4.tar.gz | source | sdist | null | false | c61353eab4531d4236aba3d6ed50be02 | 88bb08755c400de020698c8c5a06c10089819f07187f2abc4ebfee93b0e223eb | d2e1d3ed879e317f307718b21da6962edc0da20f0e82bad810e837793a985720 | Apache-2.0 | [] | 35,519 |
2.4 | opentelemetry-instrumentation-alephalpha | 0.52.4 | OpenTelemetry Aleph Alpha instrumentation | # OpenTelemetry Aleph Alpha Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-alephalpha/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-alephalpha.svg">
</a>
This library allows tracing calls to any of Aleph Alpha's endpoints sent with the official [Aleph Alpha Client](https://github.com/Aleph-Alpha/aleph-alpha-client).
## Installation
```bash
pip install opentelemetry-instrumentation-alephalpha
```
## Example usage
```python
from opentelemetry.instrumentation.alephalpha import AlephAlphaInstrumentor
AlephAlphaInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these payloads may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Benedikt Wolf <bene25@web.de> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"aleph-alpha-client; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-alephalpha"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:37.019024 | opentelemetry_instrumentation_alephalpha-0.52.4.tar.gz | 141,427 | 56/cb/3ecc6b075a85f1d9c845e6be3f126f9d4904ce640f9d9c80817230e0a3ab/opentelemetry_instrumentation_alephalpha-0.52.4.tar.gz | source | sdist | null | false | c35dbe345613106ae45fd3b50a3716c4 | 0e0e76d21a18b6e9c1103ce689ad81ff00325b2781070cec477be735d04ade7d | 56cb3ecc6b075a85f1d9c845e6be3f126f9d4904ce640f9d9c80817230e0a3ab | Apache-2.0 | [] | 53,250 |
2.4 | opentelemetry-instrumentation-agno | 0.52.4 | OpenTelemetry Agno instrumentation | # OpenTelemetry Agno Instrumentation
This library provides automatic instrumentation for the [Agno](https://github.com/agno-agi/agno) framework.
## Installation
```bash
pip install opentelemetry-instrumentation-agno
```
## Usage
```python
from opentelemetry.instrumentation.agno import AgnoInstrumentor
AgnoInstrumentor().instrument()
```
## Supported Features
This instrumentation captures:
- Agent execution (sync and async)
- Team operations
- Model invocations
- Function calls
- Streaming responses
## Links
- [Agno Framework](https://github.com/agno-agi/agno)
- [OpenTelemetry](https://opentelemetry.io/)
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.28.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"agno; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-agno"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:35.934588 | opentelemetry_instrumentation_agno-0.52.4.tar.gz | 82,791 | 8d/2a/96723487fc8d7e9a76ad948c46bfa189f850e20fe91e9c374db53981e8d5/opentelemetry_instrumentation_agno-0.52.4.tar.gz | source | sdist | null | false | 3dc2e9dfa210fc1e73b78bc744d1cecb | f22d7425ec4fbda84a712e1e7f7195258e6250a5ac67e45ae24147ab20a3f70f | 8d2a96723487fc8d7e9a76ad948c46bfa189f850e20fe91e9c374db53981e8d5 | Apache-2.0 | [] | 30,448 |
2.4 | aeo-cli | 1.0.0 | Agentic Engine Optimization CLI — audit URLs for AI crawler readiness | # AEO-CLI
[](https://github.com/hanselhansel/aeo-cli/actions/workflows/test.yml)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://pypi.org/project/aeo-cli/)
**Audit any URL for AI crawler readiness. Get a 0-100 AEO score.**
## What is AEO?
Agentic Engine Optimization (AEO) is the practice of making your website discoverable and accessible to AI agents and LLM-powered search engines. As AI crawlers like GPTBot, ClaudeBot, and PerplexityBot become major traffic sources, AEO ensures your content is structured, permitted, and optimized for these systems.
AEO-CLI checks how well a URL is prepared for AI consumption and returns a structured score.
## Features
- **Robots.txt AI bot access** — checks 13 AI crawlers (GPTBot, ClaudeBot, DeepSeek-AI, Grok, and more)
- **llms.txt & llms-full.txt** — detects both standard and extended LLM instruction files
- **Schema.org JSON-LD** — extracts and evaluates structured data with high-value type weighting (Product, Article, FAQ, HowTo)
- **Content density** — measures useful content vs. boilerplate with readability scoring, heading structure analysis, and answer-first detection
- **Batch mode** — audit multiple URLs from a file with `--file` and configurable `--concurrency`
- **Custom bot list** — override default bots with `--bots` for targeted checks
- **Verbose output** — detailed per-pillar breakdown with scoring explanations and recommendations
- **Rich CLI output** — formatted tables and scores via Rich
- **JSON / CSV / Markdown output** — machine-readable results for pipelines
- **MCP server** — expose the audit as a tool for AI agents via FastMCP
- **AEO Compiler** — LLM-powered `llms.txt` and `schema.jsonld` generation, with batch mode for multiple URLs
- **CI/CD integration** — `--fail-under` threshold, `--fail-on-blocked-bots`, per-pillar thresholds, baseline regression detection, GitHub Step Summary
- **GitHub Action** — composite action for CI pipelines with baseline support
- **Citation Radar** — query AI models to see what they cite and recommend, with brand tracking and domain classification
- **Share-of-Recommendation Benchmark** — track how often AI models mention and recommend your brand vs competitors, with LLM-as-judge analysis
- **Retail AI-Readiness Auditor** — audit product listings on Amazon, Shopee, Lazada, Tokopedia, TikTok Shop, Blibli, Zalora with 5-pillar scoring and OpenAI Feed Spec compliance
## Installation
```bash
pip install aeo-cli
```
AEO-CLI uses a headless browser for content extraction. After installing, run:
```bash
crawl4ai-setup
```
### Development install
```bash
git clone https://github.com/your-org/aeo-cli.git
cd aeo-cli
pip install -e ".[dev]"
crawl4ai-setup
```
## Quick Start
```bash
aeo-cli audit example.com
```
This runs a full audit and prints a Rich-formatted report with your AEO score.
## CLI Usage
### Single Page Audit
Audit only the specified URL (skip multi-page discovery):
```bash
aeo-cli audit example.com --single
```
### Multi-Page Site Audit (default)
Discover pages via sitemap/spider and audit up to 10 pages:
```bash
aeo-cli audit example.com
```
### Limit Pages
```bash
aeo-cli audit example.com --max-pages 5
```
### JSON Output
Get structured JSON for CI pipelines, dashboards, or scripting:
```bash
aeo-cli audit example.com --json
```
### CSV / Markdown Output
```bash
aeo-cli audit example.com --format csv
aeo-cli audit example.com --format markdown
```
### Verbose Mode
Show detailed per-pillar breakdown with scoring explanations:
```bash
aeo-cli audit example.com --single --verbose
```
### Timeout
Set the HTTP timeout (default: 15 seconds):
```bash
aeo-cli audit example.com --timeout 30
```
### Custom Bot List
Override the default 13 bots with a custom list:
```bash
aeo-cli audit example.com --bots "GPTBot,ClaudeBot,PerplexityBot"
```
### Batch Mode
Audit multiple URLs from a file (one URL per line, `.txt` or `.csv`):
```bash
aeo-cli audit --file urls.txt
aeo-cli audit --file urls.txt --concurrency 5
aeo-cli audit --file urls.txt --format csv
```
### CI Mode
Fail the build if the AEO score is below a threshold:
```bash
aeo-cli audit example.com --fail-under 60
```
Fail if any AI bot is blocked:
```bash
aeo-cli audit example.com --fail-on-blocked-bots
```
#### Per-Pillar Thresholds
Gate CI on individual pillar scores:
```bash
aeo-cli audit example.com --robots-min 20 --content-min 30 --overall-min 60
```
Available: `--robots-min`, `--schema-min`, `--content-min`, `--llms-min`, `--overall-min`.
#### Baseline Regression Detection
Save a baseline and detect score regressions in future audits:
```bash
# Save current scores as baseline
aeo-cli audit example.com --single --save-baseline .aeo-baseline.json
# Compare against baseline (exit 1 if any pillar drops > 5 points)
aeo-cli audit example.com --single --baseline .aeo-baseline.json
# Custom regression threshold
aeo-cli audit example.com --single --baseline .aeo-baseline.json --regression-threshold 10
```
Exit codes: 0 = pass, 1 = score below threshold or regression detected, 2 = bots blocked.
When running in GitHub Actions, a markdown summary is automatically written to `$GITHUB_STEP_SUMMARY`.
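The exit-code contract and the step-summary behavior above can be sketched as a small gate function. The function name, the ordering of checks, and the summary format are illustrative assumptions, not aeo-cli's internals:

```python
import os

def ci_gate(score: int, fail_under: int = 0, blocked_bots: int = 0) -> int:
    """Map audit results to the documented exit codes:
    0 = pass, 1 = score below threshold, 2 = bots blocked."""
    if blocked_bots:
        return 2
    if score < fail_under:
        return 1
    # In GitHub Actions, append a markdown summary to the job page.
    summary_path = os.getenv("GITHUB_STEP_SUMMARY")
    if summary_path:
        with open(summary_path, "a") as f:
            f.write(f"## AEO Audit\n\nScore: **{score}**/100\n")
    return 0

os.environ.pop("GITHUB_STEP_SUMMARY", None)  # keep the demo self-contained
print(ci_gate(72, fail_under=60))  # prints 0
```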
### Quiet Mode
Suppress all output; exit with code 0 if the score is at least 50, otherwise 1:
```bash
aeo-cli audit example.com --quiet
```
Use `--fail-under` with `--quiet` to override the default threshold:
```bash
aeo-cli audit example.com --quiet --fail-under 70
```
### Start MCP server
```bash
aeo-cli mcp
```
Launches a FastMCP stdio server exposing the audit as a tool for AI agents.
## MCP Integration
To use AEO-CLI as a tool in Claude Desktop, add this to your Claude Desktop config (`claude_desktop_config.json`):
```json
{
"mcpServers": {
"aeo-cli": {
"command": "aeo-cli",
"args": ["mcp"]
}
}
}
```
Once configured, Claude can call the `audit_url` tool directly to check any URL's AEO readiness.
## AEO Compiler (Generate)
Generate `llms.txt` and `schema.jsonld` files from any URL using LLM analysis:
```bash
pip install aeo-cli[generate]
aeo-cli generate example.com
```
This crawls the URL, sends the content to an LLM, and writes optimized files to `./aeo-output/`.
### Batch Generate
Generate assets for multiple URLs from a file:
```bash
aeo-cli generate-batch urls.txt
aeo-cli generate-batch urls.txt --concurrency 5 --profile ecommerce
aeo-cli generate-batch urls.txt --json
```
Each URL's output goes to a subdirectory under `--output-dir`.
### BYOK (Bring Your Own Key)
The generate command auto-detects your LLM provider from environment variables:
| Priority | Env Variable | Model Used |
|----------|-------------|------------|
| 1 | `OPENAI_API_KEY` | gpt-4o-mini |
| 2 | `ANTHROPIC_API_KEY` | claude-3-haiku-20240307 |
| 3 | Ollama running locally | ollama/llama3.2 |
Override with `--model`:
```bash
aeo-cli generate example.com --model gpt-4o
```
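The priority order in the table can be sketched as a small detection function. The function name and the Ollama fallback check are assumptions for illustration; the real detection logic may differ:

```python
import os

def detect_model(override=None):
    # Pick a default model by provider priority, as documented above.
    if override:                      # --model always wins
        return override
    if os.getenv("OPENAI_API_KEY"):
        return "gpt-4o-mini"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "claude-3-haiku-20240307"
    return "ollama/llama3.2"          # assume a local Ollama as the fallback

os.environ.pop("OPENAI_API_KEY", None)
os.environ.pop("ANTHROPIC_API_KEY", None)
print(detect_model("gpt-4o"))  # prints gpt-4o: the override wins
```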
### Industry Profiles
Tailor the output with `--profile`:
```bash
aeo-cli generate example.com --profile saas
aeo-cli generate example.com --profile ecommerce
```
Available: `generic`, `cpg`, `saas`, `ecommerce`, `blog`.
## Citation Radar
Query AI models to see what they cite and recommend for any search prompt:
```bash
pip install aeo-cli[generate]
aeo-cli radar "best project management tools" --brand Asana --brand Monday --model gpt-4o-mini
```
Options:
- `--brand/-b`: Brand name to track (repeatable)
- `--model/-m`: LLM model to query (repeatable, default: gpt-4o-mini)
- `--runs/-r`: Runs per model for statistical significance
- `--json`: Output as JSON
## Retail AI-Readiness Auditor
Audit product listings on marketplaces for AI optimization readiness:
```bash
aeo-cli retail "https://www.amazon.com/dp/B07L123456"
aeo-cli retail "https://shopee.sg/product/123" --json
aeo-cli retail "https://www.lazada.co.id/products/example-i123.html" --verbose
```
Supported marketplaces: Amazon (all TLDs), Shopee, Lazada, Tokopedia, TikTok Shop, Blibli, Zalora, and a Generic fallback (Schema.org/OpenGraph).
Scoring pillars:
- **Product Schema** (25): JSON-LD Product, Offer, AggregateRating
- **Content Quality** (30): Bullet points, description, A+ content, spec charts
- **Visual Assets** (15): Image count, alt text, video
- **Social Proof** (20): Reviews, rating, Q&A
- **Feed Compliance** (10): OpenAI Product Feed Spec alignment
## Share-of-Recommendation Benchmark
Track how AI models mention and recommend your brand across multiple prompts:
```bash
pip install aeo-cli[generate]
aeo-cli benchmark prompts.txt -b "YourBrand" -c "Competitor1" -c "Competitor2"
```
Options:
- `prompts.txt`: CSV (with `prompt,category,intent` columns) or plain text (one prompt per line)
- `--brand/-b`: Target brand to track (required)
- `--competitor/-c`: Competitor brand (repeatable)
- `--model/-m`: LLM model to query (repeatable, default: gpt-4o-mini)
- `--runs/-r`: Runs per model per prompt (default: 3)
- `--yes/-y`: Skip cost confirmation prompt
- `--json`: Output as JSON
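As a rough illustration of the metric, share-of-recommendation can be computed from one judged recommendation per (model, prompt, run). This is a sketch of the idea only; the tool's LLM-as-judge pipeline may weight runs and prompts differently:

```python
from collections import Counter

def share_of_recommendation(recommendations):
    """recommendations: the brand recommended in each individual run.
    Returns each brand's fraction of all recommendations."""
    counts = Counter(recommendations)
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.items()}

runs = ["YourBrand", "Competitor1", "YourBrand",
        "Competitor2", "YourBrand", "Competitor1"]
print(share_of_recommendation(runs)["YourBrand"])  # prints 0.5
```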
## GitHub Action
Use AEO-CLI in your CI pipeline:
```yaml
- name: Run AEO Audit
uses: hanselhansel/aeo-cli@main
with:
url: 'https://your-site.com'
fail-under: '60'
```
With baseline regression detection:
```yaml
- name: Run AEO Audit
uses: hanselhansel/aeo-cli@main
with:
url: 'https://your-site.com'
baseline-file: '.aeo-baseline.json'
save-baseline: '.aeo-baseline.json'
regression-threshold: '5'
```
The action sets up Python, installs aeo-cli, and runs the audit. Outputs `score` and `report-json` for downstream steps. See [docs/ci-integration.md](docs/ci-integration.md) for full documentation.
## Score Breakdown
AEO-CLI returns a score from 0 to 100, composed of four pillars:
| Pillar | Max Points | What it measures |
|---|---|---|
| Content density | 40 | Quality and depth of extractable text content |
| Robots.txt AI bot access | 25 | Whether AI crawlers are allowed in robots.txt |
| Schema.org JSON-LD | 25 | Structured data markup (Product, Article, FAQ, etc.) |
| llms.txt presence | 10 | Whether a /llms.txt file exists for LLM guidance |
### Scoring rationale (2026-02-18)
The weights reflect how AI search engines (ChatGPT, Perplexity, Claude) actually consume web content:
- **Content density (40 pts)** is weighted highest because it's what LLMs extract and cite when answering questions. Rich, well-structured content with headings and lists gives AI better material to work with.
- **Robots.txt (25 pts)** is the gatekeeper — if a bot is blocked, it literally cannot crawl. It's critical but largely binary (either you're blocking or you're not).
- **Schema.org (25 pts)** provides structured "cheat sheets" that help AI understand entities. High-value types (Product, Article, FAQ, HowTo, Recipe) receive bonus weighting. Valuable but not required for citation.
- **llms.txt (10 pts)** is an emerging standard. Both `/llms.txt` and `/llms-full.txt` are checked. No major AI search engine heavily weights it yet, but it signals forward-thinking AI readiness.
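The four pillar weights combine into the 0-100 total by simple summation. A sketch of that aggregation, assuming each pillar is reported out of its documented maximum (this mirrors the table above, not aeo-cli's source):

```python
# Pillar caps from the score-breakdown table.
PILLAR_MAX = {"content": 40, "robots": 25, "schema": 25, "llms": 10}

def overall_score(pillar_scores):
    # Clamp each pillar to its maximum, then sum to the 0-100 total.
    return sum(min(pillar_scores.get(p, 0), cap) for p, cap in PILLAR_MAX.items())

print(overall_score({"content": 32, "robots": 25, "schema": 10, "llms": 0}))  # prints 67
```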
## AI Bots Checked
AEO-CLI checks access rules for 13 AI crawlers:
- GPTBot
- ChatGPT-User
- Google-Extended
- ClaudeBot
- PerplexityBot
- Amazonbot
- OAI-SearchBot
- DeepSeek-AI
- Grok
- Meta-ExternalAgent
- cohere-ai
- AI2Bot
- ByteSpider
## Development
```bash
# Install with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Lint
ruff check src/ tests/
```
## License
MIT
| text/markdown | Hansel Wahjono | null | null | null | null | aeo, seo, ai, llm, crawler, audit, robots-txt, schema-org | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Internet :: WWW/HTT... | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.9",
"rich>=13.0",
"httpx>=0.27",
"beautifulsoup4>=4.12",
"pydantic>=2.0",
"crawl4ai>=0.4",
"fastmcp>=2.0",
"pyyaml>=6.0",
"litellm>=1.40; extra == \"generate\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"mypy>=1... | [] | [] | [] | [
"Homepage, https://github.com/hanselhansel/aeo-cli",
"Repository, https://github.com/hanselhansel/aeo-cli",
"Issues, https://github.com/hanselhansel/aeo-cli/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:20:30.280664 | aeo_cli-1.0.0.tar.gz | 210,102 | 7a/a5/02b0dba0da1d091207620f2e531c11448e25d24888671b4e05609126a459/aeo_cli-1.0.0.tar.gz | source | sdist | null | false | 726d1e538ec380c5669c8a60c102747c | 465f33399819f17de69609b0257eb1e8dac008b97070951f4c7d1db5b694ef7f | 7aa502b0dba0da1d091207620f2e531c11448e25d24888671b4e05609126a459 | MIT | [
"LICENSE"
] | 163 |
2.4 | brainseg-containers | 1.0.2 | wrapper for brain segmentation tools | # BrainSeg-container
This repository provides a streamlined Python wrapper and CLI tool to automatically download, run, and standardize outputs from state-of-the-art brain segmentation tools using Apptainer / Singularity containers.
Brain segmentation tools often have conflicting dependencies, complex installation steps, or require specific versions of system libraries. This package solves that problem by containerizing the tools and handling the execution, file binding, and label standardization for you.

## The Tools
This pipeline currently supports the following deep-learning-based segmentation tools. We may add more in the future.
1. [**GOUHFI**](https://github.com/mafortin/GOUHFI)
This tool was designed to handle the challenges of Ultra-High Field MRI (7T+). It utilizes "domain randomization" during training, which allows it to remain robust across different MRI contrasts and resolutions, including standard clinical scans.
* **Resolution:** Native (preserves input resolution).
* **CSF Availability:** Yes (segments ventricles and subarachnoid space CSF).
2. [**SynthSeg**](https://github.com/BBillot/SynthSeg)
Developed by the FreeSurfer team, this tool is famous for working "out of the box" on almost any kind of MRI scan (different contrasts, resolutions, or messy clinical data).
* **Resolution:** Fixed 1mm isotropic (always resamples input to 1mm).
* **CSF Availability:** Yes.
3. [**FastSurfer**](https://github.com/Deep-MI/FastSurfer)
A rapid deep-learning-based segmentation tool.
* **Resolution:** Native (but experimental below 0.7mm).
* **CSF Availability:** No (segments ventricles, but ignores subarachnoid space CSF).
4. [**SimNIBS (Charm)**](https://github.com/simnibs/simnibs)
The "Complete Head Anatomy Reconstruction Method" from the SimNIBS suite. While designed for modeling brain stimulation (TMS/TES), it produces high-quality segmentation of extra-cerebral tissues (skull, scalp, etc.) in addition to the brain.
* **Resolution:** Native (pipeline uses the upsampled output to match input).
* **CSF Availability:** Yes.
* **Segmented regions:** Charm provides the following segmentation labels: White-Matter, Gray-Matter, CSF, Bone, Scalp, Eye_bals, Compact_bone, Spongy_bone, Blood, Muscle, Cartilage, Fat, Electrode, Saline_or_gel
## Comparison Output
The pipeline can automatically generate a comparison grid so you can quickly inspect the differences between the tools.
## Getting Started
### Prerequisites
You must have Apptainer (or Singularity) installed on your system to run the containers.
* Ubuntu/Debian: `sudo apt install apptainer`
* Conda/Mamba: `conda install -c conda-forge apptainer`
### Installation
You can install the package directly via pip:
```bash
pip install brainseg-containers
```
*(Optional) If you want to use the plotting and comparison features, install with the `plot` extras:*
```bash
pip install brainseg-containers[plot]
```
---
## Usage
The package provides a simple command-line interface. The first time you run a specific tool, the wrapper will automatically download the corresponding container from the GitHub Container Registry and store it in `~/.brainseg_containers/`.
### Basic Command
```bash
brainseg -t <tool_name> -i <input_file.nii.gz> -o <output_file.nii.gz>
```
**Available Tools:** `synthseg`, `gouhfi`, `fastsurfer`, `simnibs`
### Examples
**Run GOUHFI on a single subject:**
```bash
brainseg -t gouhfi -i inputs/sub-01_T1w.nii.gz -o results/sub-01_gouhfi.nii.gz
```
**Run SynthSeg on the same subject:**
```bash
brainseg -t synthseg -i inputs/sub-01_T1w.nii.gz -o results/sub-01_synthseg.nii.gz
```
*Note: You can optionally provide a custom path to a pre-downloaded `.sif` image using the `--container` flag.*
### Note on Labels
Different tools use different numbers to represent brain regions. To make comparison easier, this pipeline automatically **remaps** the output labels of FastSurfer and GOUHFI to match the standard FreeSurfer lookup table.
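At its core, this kind of remapping is an integer lookup table. A toy sketch of the idea (the tool-side IDs on the left are hypothetical, not the pipeline's actual tables; the right-hand values follow the standard FreeSurfer color LUT):

```python
# Hypothetical tool-specific label IDs mapped to FreeSurfer LUT IDs.
# 2 = Left-Cerebral-White-Matter, 3 = Left-Cerebral-Cortex,
# 41 = Right-Cerebral-White-Matter (standard FreeSurferColorLUT values).
remap = {1: 2, 2: 3, 3: 41}

segmentation = [0, 1, 1, 2, 3]  # flat list of voxel labels (illustrative)
remapped = [remap.get(v, 0) for v in segmentation]  # unknown labels fall back to 0
print(remapped)  # [0, 2, 2, 3, 41]
```

In practice the pipeline applies such mappings to whole volumes; its dependencies include `nibabel` and `fastremap`, which perform the same lookup efficiently on arrays.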
| text/markdown | null | Marius Causemann <mariusca@simula.no> | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"nibabel",
"fastremap",
"pdbpp; extra == \"dev\"",
"ipython; extra == \"dev\"",
"mypy; extra == \"dev\"",
"ruff; extra == \"dev\"",
"matplotlib; extra == \"plot\"",
"nilearn; extra == \"plot\"",
"numpy; extra == \"plot\"",
"pandas; extra == \"plot\"",
"jupyter-book<2.0.0; extra == \"docs\"",
"... | [] | [] | [] | [
"homepage, https://github.com/MariusCausemann/brainseg"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:20:26.925685 | brainseg_containers-1.0.2.tar.gz | 28,852 | 31/75/c8a1e3e5bb38ef8d32c3782425514e6438a4b455d26748c0d758b7c1b558/brainseg_containers-1.0.2.tar.gz | source | sdist | null | false | 8502c6bff867f71ff869cb66c203239b | 0b0423a5c9aa56219c552620563c296c9d06a2ef60ded69d48dcdb1c12142967 | 3175c8a1e3e5bb38ef8d32c3782425514e6438a4b455d26748c0d758b7c1b558 | null | [] | 209 |
2.4 | epinterface | 1.3.0 | This is a repository for dynamically generating energy models within Python, relying on Archetypal and Eppy for most of its functionality. | # epinterface
[](https://img.shields.io/github/v/release/szvsw/epinterface)
[](https://github.com/szvsw/epinterface/actions/workflows/main.yml?query=branch%3Amain)
[](https://codecov.io/gh/szvsw/epinterface)
[](https://img.shields.io/github/commit-activity/m/szvsw/epinterface)
[](https://img.shields.io/github/license/szvsw/epinterface)
This is a repository for dynamically generating energy models within Python, relying on Archetypal and Eppy for most of its functionality.
- **Github repository**: <https://github.com/szvsw/epinterface/>
## Configuration
The EnergyPlus version used when creating IDF objects can be configured via the `EPINTERFACE_ENERGYPLUS_VERSION` environment variable. It defaults to `22.2.0`. Both dotted (`22.2.0`) and hyphenated (`22-2-0`) version formats are accepted.
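As a sketch of what accepting both formats implies (a hypothetical helper, not epinterface's actual code), the two spellings normalize to the same version:

```python
def parse_ep_version(value: str) -> tuple[int, int, int]:
    """Accept '22.2.0' or '22-2-0' and return a (major, minor, patch) tuple."""
    major, minor, patch = value.replace("-", ".").split(".")
    return int(major), int(minor), int(patch)

print(parse_ep_version("22.2.0"))  # (22, 2, 0)
print(parse_ep_version("22-2-0"))  # (22, 2, 0)
```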
- **Documentation**: <https://szvsw.github.io/epinterface/>
## Getting started with your project
First, create a repository on GitHub with the same name as this project, and then run the following commands:
```bash
git init -b main
git add .
git commit -m "init commit"
git remote add origin git@github.com:szvsw/epinterface.git
git push -u origin main
```
Finally, install the environment and the pre-commit hooks with
```bash
make install
```
You are now ready to start development on your project!
The CI/CD pipeline will be triggered when you open a pull request, merge to main, or when you create a new release.
To finalize the set-up for publishing to PyPI or Artifactory, see [here](https://fpgmaas.github.io/cookiecutter-uv/features/publishing/#set-up-for-pypi).
For activating the automatic documentation with MkDocs, see [here](https://fpgmaas.github.io/cookiecutter-uv/features/mkdocs/#enabling-the-documentation-on-github).
To enable the code coverage reports, see [here](https://fpgmaas.github.io/cookiecutter-uv/features/codecov/).
## Releasing a new version
- Create an API Token on [PyPI](https://pypi.org/).
- Add the API Token to your projects secrets with the name `PYPI_TOKEN` by visiting [this page](https://github.com/szvsw/epinterface/settings/secrets/actions/new).
- Create a [new release](https://github.com/szvsw/epinterface/releases/new) on Github.
- Create a new tag in the form `*.*.*`.
- For more details, see [here](https://fpgmaas.github.io/cookiecutter-uv/features/cicd/#how-to-trigger-a-release).
---
Repository initiated with [fpgmaas/cookiecutter-uv](https://github.com/fpgmaas/cookiecutter-uv).
| text/markdown | null | Sam Wolk <mail@samwolk.info> | null | null | null | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"archetypal==2.18.10",
"click==8.1.7",
"geopandas~=1.0.1",
"httpx~=0.27.2",
"ladybug-core>=0.44.30",
"openpyxl~=3.1.5",
"pandas<2.3,>=2.2",
"prisma~=0.15.0",
"pydantic-settings<3,>=2.0",
"pydantic<3,>=2.9",
"pythermalcomfort>=3.8.0"
] | [] | [] | [] | [
"Homepage, https://github.com/szvsw/epinterface",
"Documentation, https://szvsw.github.io/epinterface",
"Repository, https://github.com/szvsw/epinterface",
"Issues, https://github.com/szvsw/epinterface/issues"
] | uv/0.6.14 | 2026-02-19T13:20:07.994238 | epinterface-1.3.0.tar.gz | 5,980,782 | 64/ec/2088293d4017a4bafb48ae4a5d586f462024a53aac2de28af954ebe1ae3e/epinterface-1.3.0.tar.gz | source | sdist | null | false | cdd64148a45979bee0833f25987b1a7d | 517aac62534cad83e9ffa19ed3cc364f6745eb04089b1bfd99c63f2425380d26 | 64ec2088293d4017a4bafb48ae4a5d586f462024a53aac2de28af954ebe1ae3e | null | [
"LICENSE"
] | 259 |
2.4 | pydantic-market-data | 0.1.11 | Shared models and interfaces for finance datasources | # pydantic-market-data
Shared Pydantic models and interfaces for financial data sources.
Defines a standard contract (`DataSource`) and data structures (`OHLCV`, `Symbol`, `History`) to enable interoperability between finance packages.
## Installation
```bash
pip install pydantic-market-data
```
## Usage
### Models
Standardized data models for financial entities.
```python
from pydantic_market_data.models import Symbol, OHLCV, History, SecurityCriteria
# Symbol Definition
s = Symbol(
ticker="AAPL",
name="Apple Inc.",
exchange="NASDAQ",
currency="USD"
)
# Historical Data Point
candle = OHLCV(
date="2023-12-01",
open=150.0,
high=155.0,
low=149.0,
close=154.0,
volume=50000000
)
# Security Lookup Criteria
criteria = SecurityCriteria(
symbol="AAPL",
target_date="2023-12-01"
)
```
### Protocol
Implement the `DataSource` protocol to create compatible data providers.
```python
from typing import Optional, List
from pydantic_market_data.interfaces import DataSource
from pydantic_market_data.models import SecurityCriteria, Symbol, History
class MySource(DataSource):
def resolve(self, criteria: SecurityCriteria) -> Optional[Symbol]:
# Implementation...
pass
def history(self, ticker: str, period: str = "1mo") -> History:
# Implementation...
pass
```
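If `DataSource` is a `typing.Protocol`, subclassing is optional: any class whose methods match the expected signatures satisfies it structurally. A self-contained sketch of that mechanism (hypothetical names, standard library only — not this package's API):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quoter(Protocol):
    def quote(self, ticker: str) -> float: ...

class StubSource:  # note: no inheritance from Quoter
    def quote(self, ticker: str) -> float:
        return 100.0

# runtime_checkable isinstance() checks that the method exists on the class
assert isinstance(StubSource(), Quoter)
```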
## License
MIT
| text/markdown | null | Roman Medvedev <pypi@romavm.dev> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python... | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=2.2.0",
"pycountry>=24.6.1",
"pydantic-extra-types>=2.11.0",
"pydantic>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/romamo/pydantic-market-data",
"Repository, https://github.com/romamo/pydantic-market-data",
"Issues, https://github.com/romamo/pydantic-market-data/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:19:08.410824 | pydantic_market_data-0.1.11-py3-none-any.whl | 5,359 | 30/41/95e26f2757102e31966768fd3fdb4374c0793e391c1cdc54409464a7eb53/pydantic_market_data-0.1.11-py3-none-any.whl | py3 | bdist_wheel | null | false | fd9902d5e9d36b10109240142e3b7fd1 | 68a9635faec674ec395ffb7e1f7d491af4bdd06763259f739f7a73afffaa3953 | 304195e26f2757102e31966768fd3fdb4374c0793e391c1cdc54409464a7eb53 | null | [
"LICENSE"
] | 197 |
2.4 | compas-timber | 2.0.0.dev0 | COMPAS package for modeling, designing and fabricating timber assemblies. | <h1>
<img src="docs\_logo\PNG_tranparent-background.png" alt="Logo" width="70" height="70" style="vertical-align:middle">
COMPAS Timber
</h1>
[](https://github.com/gramaziokohler/compas_timber/actions)
[](https://codecov.io/gh/gramaziokohler/compas_timber)
[](https://pypi.python.org/pypi/compas_timber)
[](https://pypi.python.org/project/compas_timber)
[](https://pypi.python.org/pypi/compas_timber)
[](https://pypi.python.org/pypi/compas_timber)
[](https://doi.org/10.5281/zenodo.7934267)
[](https://twitter.com/compas_dev)
[](https://compas.dev/#/)

`compas_timber` is a user-friendly open-source software toolkit to streamline the design of timber frame structures. Despite its advances in digitalization compared to other building techniques, timber construction is often perceived as a challenging field, involving intricate processes in design, planning, coordination, and fabrication. We aim to increase the use of timber in architecture by lowering the threshold of creating versatile and resource-aware designs.
## Installation
> It is recommended that you install `compas_timber` inside a virtual environment.
```bash
pip install compas_timber
```
## First Steps
* [Documentation](https://gramaziokohler.github.io/compas_timber/)
* [COMPAS TIMBER Grasshopper Tutorial](https://gramaziokohler.github.io/compas_timber/latest/tutorials.html)
* [COMPAS TIMBER API Reference](https://gramaziokohler.github.io/compas_timber/latest/api.html)
## Questions and feedback
We encourage the use of the [COMPAS framework forum](https://forum.compas-framework.org/)
for questions and discussions.
## Issue tracker
If you found an issue or have a suggestion for a dandy new feature, please file a new issue in our [issue tracker](https://github.com/gramaziokohler/compas_timber/issues).
## Contributing
We love contributions!
Check the [Contributor's Guide](https://github.com/gramaziokohler/compas_timber/blob/main/CONTRIBUTING.md)
for more details.
## Credits
`compas_timber` is currently developed by Gramazio Kohler Research. See the [list of authors](https://github.com/gramaziokohler/compas_timber/blob/main/AUTHORS.md) for a complete overview.
| text/markdown | null | Aleksandra Anna Apolinarska <apolinarska@arch.ethz.ch>, Chen Kasirer <kasirer@arch.ethz.ch>, Gonzalo Casas <casas@arch.ethz.ch>, Jonas Haldemann <haldemann@arch.ethz.ch>, Oliver Appling Bucklin <bucklin@arch.ethz.ch>, "Aurèle L. Gheyselinck" <gheyselinck@arch.ethz.ch>, Panayiotis Papacharalambous <papacharalambous@arch.ethz.ch>, Anastasiia Stryzhevska <astryzhevska@arch.ethz.ch>, Jelle Feringa <jelleferinga@gmail.com>, Joseph Kenny <jk6372@princeton.edu>, Beverly Lytle <lytle@arch.ethz.ch>, Eric Gozzi <eric.gozzi@arch.ethz.ch>, Rodrigo Arca Zimmermann <rodrigo.arca@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Topic :: Scientific/Engineering",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"rtree",
"compas<3.0,>=2.0",
"compas_model==0.9.1",
"bump-my-version; extra == \"dev\"",
"compas_invocations2; extra == \"dev\"",
"invoke>=0.14; extra == \"dev\"",
"ruff; extra == \"dev\"",
"sphinx_compas2_theme; extra == \"dev\"",
"sphinxcontrib-mermaid; extra == \"dev\"",
"twine; extra == \"dev\... | [] | [] | [] | [
"Homepage, https://gramaziokohler.github.io/compas_timber/latest/",
"Repository, https://github.com/gramaziokohler/compas_timber"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T13:18:00.519280 | compas_timber-2.0.0.dev0.tar.gz | 155,796 | 7b/0c/75dcc59ab81078aef19a0b1c75b8e278c3f9a647743a96a4cf2b10d7e097/compas_timber-2.0.0.dev0.tar.gz | source | sdist | null | false | ee61a9dcaf6e94867a5bb98fdf77496d | 9ec363e20e037dea654e5c5c0c9814592fb5fb9d8c76a44a7a507334378f7bdd | 7b0c75dcc59ab81078aef19a0b1c75b8e278c3f9a647743a96a4cf2b10d7e097 | MIT | [
"LICENSE"
] | 177 |
2.4 | scikit-bayes | 0.1.3 | A Python package for AnDE classifiers. | # scikit-bayes
[](https://github.com/ptorrijos99/scikit-bayes/actions/workflows/python-app.yml)
[](https://codecov.io/gh/ptorrijos99/scikit-bayes)
[](https://ptorrijos99.github.io/scikit-bayes/)
[](https://www.python.org/downloads/)
[](LICENSE)
**scikit-bayes** is a Python package that extends `scikit-learn` with a suite of Bayesian Network Classifiers.
The primary goal of this package is to provide robust, `scikit-learn`-compatible implementations of advanced Bayesian classifiers that are not available in the core library.
## Key Features
- **MixedNB**: Naive Bayes for mixed data types (Gaussian + Categorical + Bernoulli) in a single model
- **AnDE**: Averaged n-Dependence Estimators (AODE, A2DE) that relax the independence assumption
- **ALR**: Accelerated Logistic Regression - hybrid generative-discriminative models with 4 weight granularity levels
- **WeightedAnDE**: Discriminatively-weighted ensemble models
- **Full scikit-learn API**: Compatible with pipelines, cross-validation, and grid search
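For context on the AnDE entry above: AODE (the n = 1 member of the family; Webb et al., 2005, cited in the references below) relaxes the naive Bayes independence assumption by averaging one-dependence estimators, each of which conditions every attribute on the class and one "super-parent" attribute:

$$
\hat{P}(y, \mathbf{x}) = \frac{1}{|\{i : F(x_i) \ge m\}|} \sum_{i \,:\, F(x_i) \ge m} \hat{P}(y, x_i) \prod_{j} \hat{P}(x_j \mid y, x_i)
$$

where $F(x_i)$ is the training frequency of attribute value $x_i$ and $m$ is a minimum-frequency threshold. (Notation summarized from the Webb et al. reference, not from this package's documentation.)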
## Quick Start
```python
import numpy as np
from skbn import MixedNB, AnDE
# MixedNB: Handle mixed data types automatically
X = np.array([[1.5, 0, 2], [-0.5, 1, 0], [2.1, 1, 1], [-1.2, 0, 2]])
y = np.array([0, 1, 1, 0])
clf = MixedNB()
clf.fit(X, y)
print(clf.predict([[0.5, 1, 1]])) # Automatically handles Gaussian, Bernoulli, Categorical
# AnDE: Solve problems Naive Bayes cannot (XOR)
X_xor = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y_xor = np.array([0, 1, 1, 0])
clf = AnDE(n_dependence=1, n_bins=2)
clf.fit(X_xor, y_xor)
print(clf.predict(X_xor)) # [0, 1, 1, 0] ✓
```
## Installation
```bash
pip install scikit-bayes
```
Or install from source:
```bash
pip install git+https://github.com/ptorrijos99/scikit-bayes.git
```
## Documentation
- 📖 [User Guide](https://ptorrijos99.github.io/scikit-bayes/user_guide.html) - Detailed documentation
- 📚 [API Reference](https://ptorrijos99.github.io/scikit-bayes/api.html) - Complete API docs
- 🎨 [Examples Gallery](https://ptorrijos99.github.io/scikit-bayes/auto_examples/index.html) - Visual examples
## Development
This project uses [pixi](https://pixi.sh) for environment management.
```bash
# Run tests
pixi run test
# Run linter
pixi run lint
# Build documentation
pixi run build-doc
# Activate development environment
pixi shell -e dev
```
## Citation
If you use scikit-bayes in a scientific publication, please cite:
```bibtex
@software{scikit_bayes,
author = {Torrijos, Pablo},
title = {scikit-bayes: Bayesian Network Classifiers for Python},
year = {2025},
url = {https://github.com/ptorrijos99/scikit-bayes}
}
```
## References
- Webb, G. I., Boughton, J., & Wang, Z. (2005). *Not so naive Bayes: Aggregating one-dependence estimators*. Machine Learning, 58(1), 5-24.
- Flores, M. J., Gámez, J. A., Martínez, A. M., & Puerta, J. M. (2009). *GAODE and HAODE: Two proposals based on AODE to deal with continuous variables*. ICML '09, 313-320.
- Zaidi, N. A., Webb, G. I., Carman, M. J., & Petitjean, F. (2017). *Efficient parameter learning of Bayesian network classifiers*. Machine Learning, 106(9-10), 1289-1329.
## License
BSD-3-Clause. See [LICENSE](LICENSE) for details.
| text/markdown | null | Pablo Torrijos <pablo.torrijos@uclm.es> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: BSD License",
"Operating S... | [] | null | null | >=3.9 | [] | [] | [] | [
"scikit-learn>=1.4.2",
"pyyaml>=6.0",
"liac-arff>=2.5"
] | [] | [] | [] | [
"Homepage, https://github.com/ptorrijos99/scikit-bayes",
"Issues, https://github.com/ptorrijos99/scikit-bayes/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:17:16.862779 | scikit_bayes-0.1.3.tar.gz | 142,951 | 93/fe/3267bc45c5357c08ffb3c3867d2b1479a2aca97e288061fb5f97105224cf/scikit_bayes-0.1.3.tar.gz | source | sdist | null | false | 3a71bfb7801636ffca53d4a60bc575ca | efa18592b8f52e47b09c2b6f847c9239ba34d8edfed33685dbb8f276ee429859 | 93fe3267bc45c5357c08ffb3c3867d2b1479a2aca97e288061fb5f97105224cf | null | [
"LICENSE"
] | 197 |
2.1 | odoo-addon-auth-jwt-demo | 18.0.1.0.1 | Test/demo module for auth_jwt. | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=============
Auth JWT Test
=============
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:9107748d0e8fe356b5dbf0e6a8af05342460d05d3d8f5b123ea96eb8db7b26ae
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
:target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
:alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fserver--auth-lightgray.png?logo=github
:target: https://github.com/OCA/server-auth/tree/18.0/auth_jwt_demo
:alt: OCA/server-auth
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/server-auth-18-0/server-auth-18-0-auth_jwt_demo
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/server-auth&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
A test/demo module for ``auth_jwt``.
**Table of contents**
.. contents::
:local:
Usage
=====
This modules creates a JWT validator named ``demo``, and adds a
``/auth_jwt_demo/whoami`` route which returns information about the
partner identified in the token.
The ``whoami`` endpoint can be invoked as such, assuming
`python-jose <https://pypi.org/project/python-jose/>`__ is installed.
.. code:: python
#!/usr/bin/env python3
import time
import requests
from jose import jwt
token = jwt.encode(
{
"aud": "auth_jwt_test_api",
"iss": "some issuer",
"exp": time.time() + 60,
"email": "mark.brown23@example.com",
},
key="thesecret",
algorithm=jwt.ALGORITHMS.HS256,
)
r = requests.get(
"http://localhost:8069/auth_jwt_demo/whoami",
headers={"Authorization": "Bearer " + token},
)
r.raise_for_status()
print(r.json())
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/server-auth/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/server-auth/issues/new?body=module:%20auth_jwt_demo%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* ACSONE SA/NV
Contributors
------------
- Stéphane Bidoul <stephane.bidoul@acsone.eu>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-sbidoul| image:: https://github.com/sbidoul.png?size=40px
:target: https://github.com/sbidoul
:alt: sbidoul
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-sbidoul|
This module is part of the `OCA/server-auth <https://github.com/OCA/server-auth/tree/18.0/auth_jwt_demo>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | ACSONE SA/NV,Odoo Community Association (OCA) | support@odoo-community.org | null | null | LGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)"
] | [] | https://github.com/OCA/server-auth | null | >=3.10 | [] | [] | [] | [
"odoo-addon-auth_jwt==18.0.*",
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T13:17:15.499513 | odoo_addon_auth_jwt_demo-18.0.1.0.1-py3-none-any.whl | 649,921 | 14/3e/b36e5c09f44fc8f2f7b7b8b8120b910e6c50b616e8c7f8e1b6498b46c3df/odoo_addon_auth_jwt_demo-18.0.1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 3d3fe94f12b9288c9014a1eaf0bdebc9 | 891a6ab470d27a8e34d8c8ed47a2fa10d508ee7674cfe30b52077f00829a5350 | 143eb36e5c09f44fc8f2f7b7b8b8120b910e6c50b616e8c7f8e1b6498b46c3df | null | [] | 99 |
2.4 | langchain-blindfold | 0.1.0 | LangChain integration for Blindfold PII detection and protection | # LangChain Blindfold
LangChain integration for [Blindfold](https://blindfold.dev) PII detection and protection. Tokenize PII before it reaches your LLM, then restore originals in the response.
| | |
|---|---|
| Developed by | [Blindfold](https://blindfold.dev) |
| License | MIT |
| Input/Output | String, Document |
## Installation
```bash
pip install langchain-blindfold
```
Set your Blindfold API key:
```bash
export BLINDFOLD_API_KEY=your-api-key
```
Get a free API key at [app.blindfold.dev](https://app.blindfold.dev).
## Quick Start
### Protect a LangChain Chain
```python
from langchain_blindfold import blindfold_protect
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
tokenize, detokenize = blindfold_protect(policy="basic")
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant."),
("user", "{input}"),
])
llm = ChatOpenAI(model="gpt-4o-mini")
chain = tokenize | prompt | llm | (lambda msg: msg.content) | detokenize
# PII is tokenized before the LLM sees it, then restored in the response
result = chain.invoke("Write a follow-up email to John Doe at john@example.com")
```
### Transform Documents for RAG
```python
from langchain_blindfold import BlindfoldPIITransformer
from langchain_core.documents import Document
transformer = BlindfoldPIITransformer(pii_method="redact", policy="hipaa_us", region="us")
docs = [Document(page_content="Patient John Smith, SSN 123-45-6789")]
safe_docs = transformer.transform_documents(docs)
# safe_docs[0].page_content → "Patient [REDACTED], SSN [REDACTED]"
```
## Components
### `blindfold_protect()`
Convenience function that returns a paired tokenizer and detokenizer:
```python
tokenize, detokenize = blindfold_protect(
api_key=None, # Falls back to BLINDFOLD_API_KEY env var
region=None, # "eu" or "us" for data residency
policy="basic", # Detection policy
entities=None, # Specific entity types to detect
score_threshold=None, # Confidence threshold (0.0-1.0)
)
```
### `BlindfoldTokenizer`
A LangChain `Runnable` that tokenizes PII in text:
```python
from langchain_blindfold import BlindfoldTokenizer
tokenizer = BlindfoldTokenizer(policy="gdpr_eu", region="eu")
safe_text = tokenizer.invoke("Contact Hans at hans@example.de")
# → "Contact <Person_1> at <Email Address_1>"
```
### `BlindfoldDetokenizer`
A LangChain `Runnable` that restores original PII from tokenized text:
```python
from langchain_blindfold import BlindfoldTokenizer, BlindfoldDetokenizer
tokenizer = BlindfoldTokenizer(api_key="...")
detokenizer = BlindfoldDetokenizer(tokenizer=tokenizer)
tokenizer.invoke("Hi John") # stores mapping
result = detokenizer.invoke("Response to <Person_1>")
# → "Response to John"
```
### `BlindfoldPIITransformer`
A LangChain `DocumentTransformer` for protecting PII in documents:
```python
from langchain_blindfold import BlindfoldPIITransformer
transformer = BlindfoldPIITransformer(
api_key=None, # Falls back to BLINDFOLD_API_KEY env var
region=None, # "eu" or "us" for data residency
policy="basic", # Detection policy
pii_method="tokenize",# tokenize, redact, mask, hash, synthesize, encrypt
entities=None, # Specific entity types to detect
score_threshold=None, # Confidence threshold (0.0-1.0)
)
```
When `pii_method="tokenize"`, the mapping is stored in `doc.metadata["blindfold_mapping"]`.
## Policies
| Policy | Entities | Best For |
|---|---|---|
| `basic` | Names, emails, phones, locations | General PII protection |
| `gdpr_eu` | EU-specific: IBANs, addresses, dates of birth | GDPR compliance |
| `hipaa_us` | PHI: SSNs, MRNs, medical terms | HIPAA compliance |
| `pci_dss` | Card numbers, CVVs, expiry dates | PCI DSS compliance |
| `strict` | All entity types, lower threshold | Maximum detection |
## PII Methods
| Method | Output | Reversible |
|---|---|---|
| `tokenize` | `<Person_1>`, `<Email Address_1>` | Yes |
| `redact` | PII removed entirely | No |
| `mask` | `J****oe`, `j****om` | No |
| `hash` | `HASH_abc123` | No |
| `synthesize` | `Jane Smith`, `jane@example.org` | No |
| `encrypt` | AES-256 encrypted value | Yes (with key) |
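The reversibility of `tokenize` in the table above comes down to keeping a token-to-original mapping. A toy pure-Python sketch of that mechanism (regex "detection" is a stub here — nothing like the library's real detection engine):

```python
import re

def tokenize(text: str, patterns: dict[str, str]):
    """Replace each regex match with a numbered token; remember the original."""
    mapping, counters = {}, {}
    for label, pattern in patterns.items():
        def replace(match, label=label):
            counters[label] = counters.get(label, 0) + 1
            token = f"<{label}_{counters[label]}>"
            mapping[token] = match.group(0)
            return token
        text = re.sub(pattern, replace, text)
    return text, mapping

def detokenize(text: str, mapping: dict[str, str]) -> str:
    """Restore originals by substituting tokens back."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

patterns = {"Email Address": r"[\w.]+@[\w.]+"}
safe, mapping = tokenize("Contact hans@example.de", patterns)
print(safe)  # Contact <Email Address_1>
print(detokenize("Reply to <Email Address_1>", mapping))  # Reply to hans@example.de
```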
## Data Residency
Use the `region` parameter to ensure PII is processed in a specific jurisdiction:
- `region="eu"` — processed in Frankfurt, Germany
- `region="us"` — processed in Virginia, US
```python
tokenize, detokenize = blindfold_protect(policy="gdpr_eu", region="eu")
```
## Links
- [Blindfold Documentation](https://docs.blindfold.dev)
- [Blindfold Dashboard](https://app.blindfold.dev)
- [LangChain Documentation](https://python.langchain.com)
- [GitHub](https://github.com/blindfold-dev/langchain-blindfold)
| text/markdown | null | Blindfold <hello@blindfold.dev> | null | null | MIT | langchain, pii, blindfold, privacy, gdpr, hipaa, ai-safety, llm | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"langchain-core>=0.3.0",
"blindfold-sdk>=1.3.0"
] | [] | [] | [] | [
"Homepage, https://blindfold.dev",
"Documentation, https://docs.blindfold.dev",
"Repository, https://github.com/blindfold-dev/langchain-blindfold"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-19T13:16:10.251602 | langchain_blindfold-0.1.0.tar.gz | 9,054 | 8b/f9/da95a68403a6d243c82f27be3281b3ff133dba5d81308a72fbcd5454aeb3/langchain_blindfold-0.1.0.tar.gz | source | sdist | null | false | 5d22cc214ea03937686ee9b9851f3d36 | a9308f20d2383989399a7588b1dddd72b360a47f5502bdaceff0f0f2f3ddc8e2 | 8bf9da95a68403a6d243c82f27be3281b3ff133dba5d81308a72fbcd5454aeb3 | null | [
"LICENSE"
] | 227 |
2.4 | iec-api | 0.5.6 | A Python wrapper for Israel Electric Company API | # iec-api
A Python wrapper for the Israel Electric Company API
## Module Usage
```python
import asyncio
import logging

from iec_api import iec_client as iec

logger = logging.getLogger(__name__)


async def main():
    client = iec.IecClient("123456789")

    try:
        await client.manual_login()  # login with user inputs
    except iec.exceptions.IECError as err:
        logger.error(f"Failed Login: (Code {err.code}): {err.error}")
        raise

    customer = await client.get_customer()
    print(customer)

    contracts = await client.get_contracts()
    for contract in contracts:
        print(contract)

    reading = await client.get_last_meter_reading(customer.bp_number, contracts[0].contract_id)
    print(reading)


asyncio.run(main())
```
## Postman
To use the API manually through Postman - read [Postman Collection Guide](POSTMAN.md)
| text/markdown | GuyKh | null | Guy Khmelnitsky | guykhmel@gmail.com | MIT | python, poetry, api, iec, israel, electric | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"aiofiles<25.2.0,>=25.1.0",
"aiohttp<4.0.0,>=3.9.1",
"cryptography<100.0.0,>=44.0.0",
"mashumaro<4.0,>=3.13",
"pkce<2.0.0,>=1.0.3",
"pyjwt<3.0.0,>=2.8.0",
"pytz<2025.0,>=2024.1",
"requests<3.0.0,>=2.31.0"
] | [] | [] | [] | [
"Repository, https://github.com/GuyKh/py-iec-api"
] | poetry/2.3.2 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-19T13:15:00.352002 | iec_api-0.5.6-py3-none-any.whl | 67,280 | 38/1f/ec541309ecd7a3bd3f1e250607adfa8ee3d18afa8daf0981d9ea3971f4d4/iec_api-0.5.6-py3-none-any.whl | py3 | bdist_wheel | null | false | 2bc162d6e91b49b4ce6d15d366d0c40f | 57bbaf187cff10b16d559b9a3f9669961c03f6baeace7f0146d7dd1bd3b0e528 | 381fec541309ecd7a3bd3f1e250607adfa8ee3d18afa8daf0981d9ea3971f4d4 | null | [
"LICENSE"
] | 224 |
2.4 | memora-mcp | 0.2.22 | MCP-compatible memory server backed by SQLite | <h1 align="center"><img src="media/memora_new.gif" width="60" alt="Memora Logo" align="absmiddle"> Memora</h1>
<p align="center"><sub><sub><i>"You never truly know the value of a moment until it becomes a memory."</i></sub></sub></p>
<p align="center">
<b>Give your AI agents persistent memory</b><br>
A lightweight MCP server for semantic memory storage, knowledge graphs, conversational recall, and cross-session context.
</p>
<p align="center">
<a href="https://github.com/agentic-mcp-tools/memora/releases"><img src="https://img.shields.io/github/v/tag/agentic-mcp-tools/memora?label=version&color=blue" alt="Version"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License"></a>
<a href="https://github.com/thedotmack/awesome-claude-code"><img src="https://awesome.re/mentioned-badge.svg" alt="Mentioned in Awesome Claude Code"></a>
</p>
<p align="center">
<img src="media/demo.gif" alt="Memora Demo" width="800">
</p>
<p align="center">
<b><a href="#features">Features</a></b> · <b><a href="#install">Install</a></b> · <b><a href="#usage">Usage</a></b> · <b><a href="#configuration">Config</a></b> · <b><a href="#live-graph-server">Live Graph</a></b> · <b><a href="#cloud-graph">Cloud Graph</a></b> · <b><a href="#chat-with-memories">Chat</a></b> · <b><a href="#semantic-search--embeddings">Semantic Search</a></b> · <b><a href="#llm-deduplication">LLM Dedup</a></b> · <b><a href="#memory-linking">Linking</a></b> · <b><a href="#neovim-integration">Neovim</a></b>
</p>
## Features
**Core Storage**
- 💾 **Persistent Storage** - SQLite with optional cloud sync (S3, R2, D1)
- 📂 **Hierarchical Organization** - Section/subsection structure with auto-hierarchy assignment
- 📦 **Export/Import** - Backup and restore with merge strategies
**Search & Intelligence**
- 🔍 **Semantic Search** - Vector embeddings (TF-IDF, sentence-transformers, OpenAI)
- 🎯 **Advanced Queries** - Full-text, date ranges, tag filters (AND/OR/NOT), hybrid search
- 🔀 **Cross-references** - Auto-linked related memories based on similarity
- 🤖 **LLM Deduplication** - Find and merge duplicates with AI-powered comparison
- 🔗 **Memory Linking** - Typed edges, importance boosting, and cluster detection
**Tools & Visualization**
- ⚡ **Memory Automation** - Structured tools for TODOs, issues, and sections
- 🕸️ **Knowledge Graph** - Interactive visualization with Mermaid rendering and cluster overlays
- 🌐 **Live Graph Server** - Built-in HTTP server with cloud-hosted option (D1/Pages)
- 💬 **Chat with Memories** - RAG-powered chat panel that searches relevant memories and streams LLM responses
- 📡 **Event Notifications** - Poll-based system for inter-agent communication
- 📊 **Statistics & Analytics** - Tag usage, trends, and connection insights
- 🧠 **Memory Insights** - Activity summary, stale detection, consolidation suggestions, and LLM-powered pattern analysis
- 📜 **Action History** - Track all memory operations (create, update, delete, merge, boost, link) with grouped timeline view
## Install
```bash
pip install git+https://github.com/agentic-mcp-tools/memora.git
```
Includes cloud storage (S3/R2) and OpenAI embeddings out of the box.
```bash
# Optional: local embeddings (offline, ~2GB for PyTorch)
pip install "memora[local] @ git+https://github.com/agentic-mcp-tools/memora.git"
```
<details id="usage">
<summary><big><big><strong>Usage</strong></big></big></summary>
The server runs automatically when configured in Claude Code. Manual invocation:
```bash
# Default (stdio mode for MCP)
memora-server
# With graph visualization server
memora-server --graph-port 8765
# HTTP transport (alternative to stdio)
memora-server --transport streamable-http --host 127.0.0.1 --port 8080
```
</details>
<details id="configuration">
<summary><big><big><strong>Configuration</strong></big></big></summary>
### Claude Code
Add to `.mcp.json` in your project root:
**Local DB:**
```json
{
"mcpServers": {
"memora": {
"command": "memora-server",
"args": [],
"env": {
"MEMORA_DB_PATH": "~/.local/share/memora/memories.db",
"MEMORA_ALLOW_ANY_TAG": "1",
"MEMORA_GRAPH_PORT": "8765"
}
}
}
}
```
**Cloud DB (Cloudflare D1) - Recommended:**
```json
{
"mcpServers": {
"memora": {
"command": "memora-server",
"args": ["--no-graph"],
"env": {
"MEMORA_STORAGE_URI": "d1://<account-id>/<database-id>",
"CLOUDFLARE_API_TOKEN": "<your-api-token>",
"MEMORA_ALLOW_ANY_TAG": "1"
}
}
}
}
```
With D1, use `--no-graph` to disable the local visualization server. Instead, use the hosted graph at your Cloudflare Pages URL (see [Cloud Graph](#cloud-graph)).
**Cloud DB (S3/R2) - Sync mode:**
```json
{
"mcpServers": {
"memora": {
"command": "memora-server",
"args": [],
"env": {
"AWS_PROFILE": "memora",
"AWS_ENDPOINT_URL": "https://<account-id>.r2.cloudflarestorage.com",
"MEMORA_STORAGE_URI": "s3://memories/memories.db",
"MEMORA_CLOUD_ENCRYPT": "true",
"MEMORA_ALLOW_ANY_TAG": "1",
"MEMORA_GRAPH_PORT": "8765"
}
}
}
}
```
### Codex CLI
Add to `~/.codex/config.toml`:
```toml
[mcp_servers.memora]
command = "memora-server" # or full path: /path/to/bin/memora-server
args = ["--no-graph"]
env = {
AWS_PROFILE = "memora",
AWS_ENDPOINT_URL = "https://<account-id>.r2.cloudflarestorage.com",
MEMORA_STORAGE_URI = "s3://memories/memories.db",
MEMORA_CLOUD_ENCRYPT = "true",
MEMORA_ALLOW_ANY_TAG = "1",
}
```
</details>
<details id="environment-variables">
<summary><big><big><strong>Environment Variables</strong></big></big></summary>
| Variable | Description |
|------------------------|-----------------------------------------------------------------------------|
| `MEMORA_DB_PATH` | Local SQLite database path (default: `~/.local/share/memora/memories.db`) |
| `MEMORA_STORAGE_URI` | Storage URI: `d1://<account>/<db-id>` (D1) or `s3://bucket/memories.db` (S3/R2) |
| `CLOUDFLARE_API_TOKEN` | API token for D1 database access (required for `d1://` URI) |
| `MEMORA_CLOUD_ENCRYPT` | Encrypt database before uploading to cloud (`true`/`false`) |
| `MEMORA_CLOUD_COMPRESS`| Compress database before uploading to cloud (`true`/`false`) |
| `MEMORA_CACHE_DIR` | Local cache directory for cloud-synced database |
| `MEMORA_ALLOW_ANY_TAG` | Allow any tag without validation against allowlist (`1` to enable) |
| `MEMORA_TAG_FILE` | Path to file containing allowed tags (one per line) |
| `MEMORA_TAGS` | Comma-separated list of allowed tags |
| `MEMORA_GRAPH_PORT` | Port for the knowledge graph visualization server (default: `8765`) |
| `MEMORA_EMBEDDING_MODEL` | Embedding backend: `openai` (default), `sentence-transformers`, or `tfidf` |
| `SENTENCE_TRANSFORMERS_MODEL` | Model for sentence-transformers (default: `all-MiniLM-L6-v2`) |
| `OPENAI_API_KEY` | API key for OpenAI embeddings and LLM deduplication |
| `OPENAI_BASE_URL` | Base URL for OpenAI-compatible APIs (OpenRouter, Azure, etc.) |
| `OPENAI_EMBEDDING_MODEL` | OpenAI embedding model (default: `text-embedding-3-small`) |
| `MEMORA_LLM_ENABLED` | Enable LLM-powered deduplication comparison (`true`/`false`, default: `true`) |
| `MEMORA_LLM_MODEL` | Model for deduplication comparison (default: `gpt-4o-mini`) |
| `CHAT_MODEL` | Model for the chat panel (default: `deepseek/deepseek-chat`, falls back to `MEMORA_LLM_MODEL`) |
| `AWS_PROFILE` | AWS credentials profile from `~/.aws/credentials` (useful for R2) |
| `AWS_ENDPOINT_URL` | S3-compatible endpoint for R2/MinIO |
| `R2_PUBLIC_DOMAIN` | Public domain for R2 image URLs |
</details>
<details id="semantic-search--embeddings">
<summary><big><big><strong>Semantic Search & Embeddings</strong></big></big></summary>
Memora supports three embedding backends:
| Backend | Install | Quality | Speed |
|---------|---------|---------|-------|
| `openai` (default) | Included | High quality | API latency |
| `sentence-transformers` | `pip install memora[local]` | Good, runs offline | Medium |
| `tfidf` | Included | Basic keyword matching | Fast |
**Automatic:** Embeddings and cross-references are computed automatically when you `memory_create`, `memory_update`, or `memory_create_batch`.
**Manual rebuild required** when:
- Changing `MEMORA_EMBEDDING_MODEL` after memories exist
- Switching to a different sentence-transformers model
```bash
# After changing embedding model, rebuild all embeddings
memory_rebuild_embeddings
# Then rebuild cross-references to update the knowledge graph
memory_rebuild_crossrefs
```
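Regardless of backend, retrieval reduces to ranking stored embedding vectors by cosine similarity against the query embedding. A minimal sketch of that ranking step (illustrative only, not memora's internal code):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query_vec, memories, top_k=3):
    # memories: list of (memory_id, embedding) pairs; highest similarity first.
    scored = sorted(((cosine(query_vec, emb), mid) for mid, emb in memories), reverse=True)
    return [mid for _, mid in scored[:top_k]]
```

This is why a rebuild is needed after switching backends: vectors produced by different embedding models are not comparable to each other.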
</details>
<details id="live-graph-server">
<summary><big><big><strong>Live Graph Server</strong></big></big></summary>
A built-in HTTP server starts automatically with the MCP server, serving an interactive knowledge graph visualization.
<table>
<tr>
<td align="center"><img src="media/ui_details.png" alt="Details Panel" width="400"><br><em>Details Panel</em></td>
<td align="center"><img src="media/ui_timeline.png" alt="Timeline Panel" width="400"><br><em>Timeline Panel</em></td>
</tr>
</table>
**Access locally:**
```
http://localhost:8765/graph
```
**Remote access via SSH:**
```bash
ssh -L 8765:localhost:8765 user@remote
# Then open http://localhost:8765/graph in your browser
```
**Configuration:**
```json
{
"env": {
"MEMORA_GRAPH_PORT": "8765"
}
}
```
To disable: add `"--no-graph"` to args in your MCP config.
### Graph UI Features
- **Details Panel** - View memory content, metadata, tags, and related memories
- **Timeline Panel** - Browse memories chronologically, click to highlight in graph
- **History Panel** - Action log of all operations with grouped consecutive entries and clickable memory references (deleted memories shown as strikethrough)
- **Chat Panel** - Ask questions about your memories using RAG-powered LLM chat with streaming responses and clickable `[Memory #ID]` references
- **Time Slider** - Filter memories by date range, drag to explore history
- **Real-time Updates** - Graph, timeline, and history update via SSE when memories change
- **Filters** - Tag/section dropdowns, zoom controls
- **Mermaid Rendering** - Code blocks render as diagrams
### Node Colors
- 🟣 **Tags** - Purple shades by tag
- 🔴 **Issues** - Red (open), Orange (in progress), Green (resolved), Gray (won't fix)
- 🔵 **TODOs** - Blue (open), Orange (in progress), Green (completed), Red (blocked)
Node size reflects connection count.
</details>
<details id="cloud-graph">
<summary><big><big><strong>Cloud Graph (Recommended for D1)</strong></big></big></summary>
When using Cloudflare D1 as your database, the graph visualization is hosted on Cloudflare Pages - no local server needed.
**Benefits:**
- Access from anywhere (no SSH tunneling)
- Real-time updates via WebSocket
- Multi-database support via `?db=` parameter
- Secure access with Cloudflare Zero Trust
**Setup:**
1. **Create D1 database:**
```bash
npx wrangler d1 create memora-graph
npx wrangler d1 execute memora-graph --file=memora-graph/schema.sql
```
2. **Deploy Pages:**
```bash
cd memora-graph
npx wrangler pages deploy ./public --project-name=memora-graph
```
3. **Configure bindings** in Cloudflare Dashboard:
- Pages → memora-graph → Settings → Bindings
- Add D1: `DB_MEMORA` → your database
- Add R2: `R2_MEMORA` → your bucket (for images)
4. **Configure MCP** with D1 URI:
```json
{
"env": {
"MEMORA_STORAGE_URI": "d1://<account-id>/<database-id>",
"CLOUDFLARE_API_TOKEN": "<your-token>"
}
}
```
**Access:** `https://memora-graph.pages.dev`
**Secure with Zero Trust:**
1. Cloudflare Dashboard → Zero Trust → Access → Applications
2. Add application for `memora-graph.pages.dev`
3. Create policy with allowed emails
4. Pages → Settings → Enable Access Policy
See [`memora-graph/`](memora-graph/) for detailed setup and multi-database configuration.
</details>
<details id="chat-with-memories">
<summary><big><big><strong>Chat with Memories</strong></big></big></summary>
Ask questions about your knowledge base directly from the graph UI. The chat panel uses RAG (Retrieval-Augmented Generation) to search relevant memories and stream LLM responses.
- **Toggle** via the floating chat icon at bottom-right
- **Semantic search** finds the most relevant memories as context
- **Streaming responses** with clickable `[Memory #ID]` references that focus the graph node
- Works on both the local server and Cloudflare Pages deployment
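The retrieve-then-generate flow above can be sketched as follows; `build_rag_prompt` is a hypothetical helper, not part of memora's API:

```python
def build_rag_prompt(question, memories):
    # memories: (memory_id, content) pairs returned by semantic search.
    context = "\n".join(f"[Memory #{mid}] {text}" for mid, text in memories)
    return (
        "Answer using only the memories below, citing sources as [Memory #ID].\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The `[Memory #ID]` markers in the context are what allow the streamed answer to contain clickable references back to graph nodes.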
**Configure the chat model:**
| Backend | Variable | Default |
|---------|----------|---------|
| Local server | `CHAT_MODEL` env var | Falls back to `MEMORA_LLM_MODEL` |
| Cloudflare Pages | `CHAT_MODEL` in `wrangler.toml` | `deepseek/deepseek-chat` |
Requires an OpenAI-compatible API (`OPENAI_API_KEY` + `OPENAI_BASE_URL` for local, `OPENROUTER_API_KEY` secret for Cloudflare).
</details>
<details id="llm-deduplication">
<summary><big><big><strong>LLM Deduplication</strong></big></big></summary>
Find and merge duplicate memories using AI-powered semantic comparison:
```python
# Find potential duplicates (uses cross-refs + optional LLM analysis)
memory_find_duplicates(min_similarity=0.7, max_similarity=0.95, limit=10, use_llm=True)
# Merge duplicates (append, prepend, or replace strategies)
memory_merge(source_id=123, target_id=456, merge_strategy="append")
```
**LLM Comparison** analyzes memory pairs and returns:
- `verdict`: "duplicate", "similar", or "different"
- `confidence`: 0.0-1.0 score
- `reasoning`: Brief explanation
- `suggested_action`: "merge", "keep_both", or "review"
Works with any OpenAI-compatible API (OpenAI, OpenRouter, Azure, etc.) via `OPENAI_BASE_URL`.
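The three merge strategies (`append`, `prepend`, `replace`) can be illustrated with a small sketch; `merge_content` is a hypothetical helper, not memora's implementation:

```python
def merge_content(source, target, merge_strategy="append"):
    # append: keep target, add source after it; prepend: source first;
    # replace: discard the target content entirely.
    if merge_strategy == "append":
        return f"{target}\n\n{source}"
    if merge_strategy == "prepend":
        return f"{source}\n\n{target}"
    if merge_strategy == "replace":
        return source
    raise ValueError(f"unknown merge_strategy: {merge_strategy}")
```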
</details>
<details id="memory-automation-tools">
<summary><big><big><strong>Memory Automation Tools</strong></big></big></summary>
Structured tools for common memory types:
```python
# Create a TODO with status and priority
memory_create_todo(content="Implement feature X", status="open", priority="high", category="backend")
# Create an issue with severity
memory_create_issue(content="Bug in login flow", status="open", severity="major", component="auth")
# Create a section placeholder (hidden from graph)
memory_create_section(content="Architecture", section="docs", subsection="api")
```
</details>
<details id="memory-insights">
<summary><big><big><strong>Memory Insights</strong></big></big></summary>
Analyze stored memories and surface actionable insights:
```python
# Full analysis with LLM-powered pattern detection
memory_insights(period="7d", include_llm_analysis=True)
# Quick summary without LLM (faster, no API key needed)
memory_insights(period="1m", include_llm_analysis=False)
```
Returns:
- **Activity summary** — memories created in the period, grouped by type and tag
- **Open items** — open TODOs and issues with stale detection (configurable via `MEMORA_STALE_DAYS`, default 14)
- **Consolidation candidates** — similar memory pairs that could be merged
- **LLM analysis** — themes, focus areas, knowledge gaps, and a summary (requires `OPENAI_API_KEY`)
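The stale-detection rule described above (open items untouched for `MEMORA_STALE_DAYS`) can be sketched as follows, with a made-up item shape for illustration:

```python
from datetime import datetime, timedelta

def stale_items(items, stale_days=14, now=None):
    # Flag open items not updated within the stale window
    # (configurable via MEMORA_STALE_DAYS; default 14 days).
    now = now or datetime.now()
    cutoff = now - timedelta(days=stale_days)
    return [i for i in items if i["status"] == "open" and i["updated_at"] < cutoff]
```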
</details>
<details id="memory-linking">
<summary><big><big><strong>Memory Linking</strong></big></big></summary>
Manage relationships between memories:
```python
# Create typed edges between memories
memory_link(from_id=1, to_id=2, edge_type="implements", bidirectional=True)
# Edge types: references, implements, supersedes, extends, contradicts, related_to
# Remove links
memory_unlink(from_id=1, to_id=2)
# Boost memory importance for ranking
memory_boost(memory_id=42, boost_amount=0.5)
# Detect clusters of related memories
memory_clusters(min_cluster_size=2, min_score=0.3)
```
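Cluster detection over the link graph can be approximated with union-find (a simplified sketch, not memora's actual algorithm):

```python
def clusters(edges, min_cluster_size=2):
    # Union-find over link edges; each surviving group is a cluster.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)

    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), set()).add(node)
    return sorted(sorted(g) for g in groups.values() if len(g) >= min_cluster_size)
```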
</details>
<details id="knowledge-graph-export">
<summary><big><big><strong>Knowledge Graph Export (Optional)</strong></big></big></summary>
For offline viewing, export memories as a static HTML file:
```python
memory_export_graph(output_path="~/memories_graph.html", min_score=0.25)
```
This is optional - the Live Graph Server provides the same visualization with real-time updates.
</details>
<details id="neovim-integration">
<summary><big><big><strong>Neovim Integration</strong></big></big></summary>
Browse memories directly in Neovim with Telescope. Copy the plugin to your config:
```bash
# For kickstart.nvim / lazy.nvim
cp nvim/memora.lua ~/.config/nvim/lua/kickstart/plugins/
```
**Usage:** Press `<leader>sm` to open the memory browser with fuzzy search and preview.
Requires: `telescope.nvim`, `plenary.nvim`, and `memora` installed in your Python environment.
</details>
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"Pillow>=10.0.0",
"boto3>=1.28.0",
"filelock>=3.12.0",
"openai>=1.0.0",
"starlette>=0.37.0",
"sse-starlette>=2.1.0",
"uvicorn>=0.30.0",
"sentence-transformers>=2.2.0; extra == \"local\"",
"moto[s3]>=5.0.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; ... | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-19T13:14:20.980401 | memora_mcp-0.2.22.tar.gz | 129,647 | 39/61/d95f1ed1f5ff9591d21c7cb32e2c080e8bb15931b4aa816c881f08d2031f/memora_mcp-0.2.22.tar.gz | source | sdist | null | false | bcef959562c1ae63c6c730bd4751f16b | f6570ed8826f5aedb245c6f4019269e3202acd07b29d9feeff2d268e388cfc5f | 3961d95f1ed1f5ff9591d21c7cb32e2c080e8bb15931b4aa816c881f08d2031f | MIT | [
"LICENSE"
] | 236 |
2.4 | pyan3 | 2.0.0 | Generate approximate call graphs for Python programs | # Pyan3
Offline call graph generator for Python 3
[Codecov](https://codecov.io/gh/Technologicat/pyan) · [PRs Welcome](http://makeapullrequest.com/)
Pyan takes one or more Python source files, performs a (rather superficial) static analysis, and constructs a directed graph of the objects in the combined source, and how they define or use each other. The graph can be output for rendering by GraphViz or yEd.
This project has 2 official repositories:
- The original stable [davidfraser/pyan](https://github.com/davidfraser/pyan).
- The development repository [Technologicat/pyan](https://github.com/Technologicat/pyan)
> The PyPI package [pyan3](https://pypi.org/project/pyan3/) is built from the development repository.
# Revived! [February 2026]
Pyan3 is back in active development. The analyzer has been modernized and tested on **Python 3.10–3.14**, with fixes for all modern syntax (walrus operator, `match` statements, `async with`, type aliases, inlined comprehension scopes in 3.12+, and more).
**What's new in the revival:**
- Full support for Python 3.10–3.14 syntax
- Module-level import dependency analysis (`--module-level` flag and `create_modulegraph()` API), with import cycle detection
- Comprehensive test suite (80+ tests)
- Modernized build system and dependencies
This revival was carried out by [Technologicat](https://github.com/Technologicat) with [Claude](https://claude.ai/) (Anthropic) as AI pair programmer. See [AUTHORS.md](AUTHORS.md) for the full contributor history.
## About
[![Example Pyan output](graph0.svg)](graph0.svg)
**Defines** relations are drawn with _dotted gray arrows_.
**Uses** relations are drawn with _black solid arrows_. Recursion is indicated by an arrow from a node to itself. [Mutual recursion](https://en.wikipedia.org/wiki/Mutual_recursion#Basic_examples) between nodes X and Y is indicated by a pair of arrows, one pointing from X to Y, and the other from Y to X.
**Nodes** are always filled, and made translucent to clearly show any arrows passing underneath them. This is especially useful for large graphs with GraphViz's `fdp` filter. If colored output is not enabled, the fill is white.
In **node coloring**, the [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) color model is used. The **hue** is determined by the _filename_ the node comes from. The **lightness** is determined by _depth of namespace nesting_, with darker meaning more deeply nested. Saturation is constant. The spacing between different hues depends on the number of files analyzed; better results are obtained for fewer files.
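The coloring scheme just described can be sketched with the standard library's `colorsys`; the constants here are made up for illustration and are not Pyan's exact values:

```python
import colorsys

def node_color(file_index, num_files, depth, max_depth=5):
    # Hue: spaced evenly per analyzed file (fewer files => wider spacing).
    hue = file_index / max(num_files, 1)
    # Lightness: darker with deeper namespace nesting; saturation constant.
    lightness = 0.8 - 0.5 * min(depth, max_depth) / max_depth
    r, g, b = colorsys.hls_to_rgb(hue, lightness, 0.5)
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))
```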
**Groups** are filled with translucent gray to avoid clashes with any node color.
The nodes can be **annotated** by _filename and source line number_ information.
## Note
The static analysis approach Pyan takes is different from running the code and seeing which functions are called and how often. There are various tools that will generate a call graph that way, usually using a debugger or profiling trace hooks, such as [Python Call Graph](https://pycallgraph.readthedocs.org/).
In Pyan3, the analyzer was ported from `compiler` ([good riddance](https://stackoverflow.com/a/909172)) to a combination of `ast` and `symtable`, and slightly extended.
# Install
```bash
pip install pyan3
```
Pyan3 requires Python 3.10 or newer.
For SVG and HTML output, you need the `dot` command from [Graphviz](https://graphviz.org/) installed on your system (e.g. `sudo apt-get install graphviz` on Debian/Ubuntu, `brew install graphviz` on macOS). Dot output requires no extra system dependencies.
## Development setup
This repository uses [uv](https://github.com/astral-sh/uv) for local builds and releases.
```bash
# install uv if needed
curl -LsSf https://astral.sh/uv/install.sh | sh
# set up a development environment (editable install + dev/test extras)
uv sync --extra dev --extra test
# alternatively, use the helper wrapper
scripts/uv-dev.sh setup
# run the CLI locally
uv run pyan3 --help
# build distribution artifacts
uv build
# run the default test suite
uv run pytest tests -q
```
Helper scripts are provided for common workflows:
- `./makedist.sh` – builds wheels and source distributions via `uv build`.
- `./uploaddist.sh <version>` – publishes artifacts, preferring `uv publish` when available.
- `scripts/test-python-versions.sh` – smoke-tests the package across the Python interpreters detected on your system.
- `scripts/uv-dev.sh` – wraps the most common uv commands (setup, test, lint, build, matrix tests). Run with no arguments for an interactive menu.
If you are new to uv, read [CONTRIBUTING.md](CONTRIBUTING.md) for a concise
onboarding guide that covers:
- Installing uv and managing Python versions.
- Creating project environments, installing an editable copy, and running
tests/builds/lint.
- Using helper scripts such as `scripts/uv-dev.sh` and `scripts/test-python-versions.sh`.
- Links to the [ROADMAP](ROADMAP.md) and open issues (e.g.,
[#105](https://github.com/Technologicat/pyan/issues/105)) if you are looking
for contribution ideas.
# Usage
See `pyan3 --help`.
Example:
`pyan *.py --uses --no-defines --colored --grouped --annotated --dot >myuses.dot`
Then render using your favorite GraphViz filter, mainly `dot` or `fdp`:
`dot -Tsvg myuses.dot >myuses.svg`
Or render the SVG directly:
`pyan *.py --uses --no-defines --colored --grouped --annotated --svg >myuses.svg`
You can also export as interactive HTML:
`pyan *.py --uses --no-defines --colored --grouped --annotated --html > myuses.html`
Alternatively, you can call Pyan from Python:
```python
import pyan
from IPython.display import HTML
HTML(pyan.create_callgraph(filenames="**/*.py", format="html"))
```
#### Sphinx integration
You can integrate callgraphs into Sphinx.
Install graphviz (e.g. via `sudo apt-get install graphviz`) and modify `source/conf.py` so that
```python
# modify extensions
extensions = [
    ...,
    "sphinx.ext.graphviz",
    "pyan.sphinx",
]
# add graphviz options
graphviz_output_format = "svg"
```
Now, there is a callgraph directive which has all the options of the [graphviz directive](https://www.sphinx-doc.org/en/master/usage/extensions/graphviz.html)
and in addition:
- **:no-groups:** (boolean flag): do not group nodes
- **:no-defines:** (boolean flag): do not draw edges showing which functions, methods, and classes are defined by a class or module
- **:no-uses:** (boolean flag): do not draw edges showing how a function uses other functions
- **:no-colors:** (boolean flag): do not color the callgraph (colored by default)
- **:nested-groups:** (boolean flag): group by modules and submodules
- **:annotated:** (boolean flag): annotate the callgraph with file names
- **:direction:** (string): "horizontal" or "vertical" callgraph layout
- **:toctree:** (string): path to a toctree (as used with autosummary) to link callgraph elements to their documentation (makes all nodes clickable)
- **:zoomable:** (boolean flag): let users zoom and pan the callgraph
Example: create a callgraph for the function `pyan.create_callgraph` that is zoomable, is laid out from left to right, and links each node to the API documentation generated at the toctree path `api`.
```
.. callgraph:: pyan.create_callgraph
:toctree: api
:zoomable:
:direction: horizontal
```
#### Troubleshooting
If GraphViz says _trouble in init_rank_, try adding `-Gnewrank=true`, as in:
`dot -Gnewrank=true -Tsvg myuses.dot >myuses.svg`
Usually either old or new rank (but often not both) works; this is a long-standing GraphViz issue with complex graphs.
## Too much detail?
If the graph is visually unreadable due to too much detail, consider visualizing only a subset of the files in your project. Any references to files outside the analyzed set will be considered as undefined, and will not be drawn.
For a higher-level view, use `--module-level` mode (see below).
## Module-level analysis
The `--module-level` flag switches pyan3 from call-graph mode to **module-level import dependency analysis**. Instead of graphing individual functions and methods, it shows which modules import which other modules.
### CLI usage
```
pyan3 --module-level pkg/**/*.py --dot -c -e >modules.dot
pyan3 --module-level pkg/**/*.py --dot -c -e | dot -Tsvg >modules.svg
```
The module-level mode has its own set of options (separate from the call-graph mode). Use `pyan3 --module-level --help` for the full list. Key options:
- `--dot`, `--svg`, `--html`, `--tgf`, `--yed` — output format (default: dot)
- `-c`, `--colored` — color by package
- `-g`, `--grouped` — group by namespace
- `-e`, `--nested-groups` — nested subgraph clusters (implies `-g`)
- `-C`, `--cycles` — detect and report import cycles to stdout
- `--dot-rankdir` — layout direction (`TB`, `LR`, `BT`, `RL`)
- `--root` — project root directory (file paths are made relative to this before deriving module names; if omitted, cwd is assumed)
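The path-to-module-name derivation mentioned for `--root` can be illustrated as follows (a simplification, not Pyan's exact logic):

```python
def module_name(relpath):
    # Strip the .py suffix, split on path separators, and drop a
    # trailing __init__ so that pkg/__init__.py maps to "pkg".
    parts = relpath[:-3].split("/") if relpath.endswith(".py") else relpath.split("/")
    if parts and parts[-1] == "__init__":
        parts.pop()
    return ".".join(parts)

print(module_name("pkg/sub/mod.py"))   # pkg.sub.mod
print(module_name("pkg/__init__.py"))  # pkg
```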
### Cycle detection
The `-C` flag performs exhaustive import cycle detection using depth-first search (DFS) from every module:
```
pyan3 --module-level pkg/**/*.py -C
```
This finds all unique import cycles in the analyzed module set, and reports statistics (count, min/average/median/max cycle length). Note that for large codebases, the number of cycles can be large — most are harmless consequences of cross-package imports.
If a cycle is actually causing an `ImportError`, you usually already know which cycle from the traceback. The `-C` flag provides a broader view of what other cycles exist.
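The DFS cycle search can be sketched as follows (a simplified illustration, not Pyan's implementation):

```python
def find_cycles(graph):
    # graph: module name -> set of imported module names.
    cycles = set()

    def dfs(node, path):
        for nxt in graph.get(node, ()):
            if nxt in path:
                cycle = path[path.index(nxt):]
                # Canonicalize the rotation so each cycle is counted once.
                k = cycle.index(min(cycle))
                cycles.add(tuple(cycle[k:] + cycle[:k]))
            else:
                dfs(nxt, path + [nxt])

    for start in graph:
        dfs(start, [start])
    return sorted(cycles)

print(find_cycles({"a": {"b"}, "b": {"a"}, "c": {"a"}}))  # [('a', 'b')]
```

Canonicalizing each cycle's rotation is what keeps the same cycle, discovered from different starting modules, from being counted multiple times.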
### Python API
```python
import pyan
# Generate a module dependency graph as a DOT string
dot_source = pyan.create_modulegraph(
filenames="pkg/**/*.py",
root=".", # project root; paths made relative to this
format="dot", # also: "svg", "html", "tgf", "yed"
colored=True,
nested_groups=True,
)
```
See `pyan.create_modulegraph()` for the full list of parameters.
# Features
_Items tagged with ☆ are new in Pyan3._
**Graph creation**:
- Nodes for functions and classes
- Edges for defines
- Edges for uses
- This includes recursive calls ☆
- Grouping to represent defines, with or without nesting
- Coloring of nodes by filename
- Unlimited number of hues ☆
**Analysis**:
- Name lookup across the given set of files
- Nested function definitions
- Nested class definitions ☆
- Nested attribute accesses like `self.a.b` ☆
- Inherited attributes ☆
  - Pyan3 also looks in base classes when resolving attributes. In the old Pyan, calls to inherited methods used to be picked up by `contract_nonexistents()` followed by `expand_unknowns()`, but that often generated spurious uses edges (because the wildcard `*.name` expands to `X.name` _for all_ `X` that have an attribute called `name`).
- Resolution of `super()` based on the static type at the call site ☆
- MRO is (statically) respected in looking up inherited attributes and `super()` ☆
- Assignment tracking with lexical scoping
- E.g. if `self.a = MyFancyClass()`, the analyzer knows that any references to `self.a` point to `MyFancyClass`
- All binding forms are supported (assign, augassign, for, comprehensions, generator expressions, with) ☆
- Name clashes between `for` loop counter variables and functions or classes defined elsewhere no longer confuse Pyan.
- `self` is defined by capturing the name of the first argument of a method definition, like Python does. ☆
- Simple item-by-item tuple assignments like `x,y,z = a,b,c` ☆
- Chained assignments `a = b = c` ☆
- Local scope for lambda, listcomp, setcomp, dictcomp, genexpr ☆
- Keep in mind that list comprehensions gained a local scope (being treated like a function) only in Python 3. Thus, Pyan3, when applied to legacy Python 2 code, will give subtly wrong results if the code uses list comprehensions.
- Source filename and line number annotation ☆
- The annotation is appended to the node label. If grouping is off, namespace is included in the annotation. If grouping is on, only source filename and line number information is included, because the group title already shows the namespace.
## TODO
For the full list of planned improvements and known limitations, see [TODO_DEFERRED.md](TODO_DEFERRED.md).
- Determine confidence of detected edges (probability that the edge is correct)
- Improve the wildcard resolution mechanism, see discussion [here](https://github.com/johnyf/pyan/issues/5)
- Type inference for function arguments (would reduce wildcard noise)
- Prefix methods by class name in the graph; create a legend for annotations. See the discussion [here](https://github.com/johnyf/pyan/issues/4)
The analyzer **does not currently support**:
- Tuples/lists as first-class values (currently ignores any assignment of a tuple/list to a single name)
- Starred assignment `a,*b,c = d,e,f,g,h` (basic tuple unpacking works; starred targets overapproximate)
- Slicing and indexing in assignment (`ast.Subscript`)
- Additional unpacking generalizations ([PEP 448](https://www.python.org/dev/peps/pep-0448/), Python 3.5+)
- Any **uses** on the RHS _at the binding site_ in all of the above are already detected by the name and attribute analyzers, but the binding information from assignments of these forms will not be recorded (at least not correctly).
- Enums; need to mark the use of any of their attributes as use of the Enum
- Resolving results of function calls, except for a very limited special case for `super()`
- Distinguishing between different Lambdas in the same namespace
- Type inference for function arguments
# How it works
From the viewpoint of graphing the defines and uses relations, the interesting parts of the [AST](https://en.wikipedia.org/wiki/Abstract_syntax_tree) are bindings (defining new names, or assigning new values to existing names), and any name that appears in an `ast.Load` context (i.e. a use). The latter includes function calls; the function's name then appears in a load context inside the `ast.Call` node that represents the call site.
Bindings are tracked, with lexical scoping, to determine which type of object, or which function, each name points to at any given point in the source code being analyzed. This allows tracking things like:
```python
def some_func():
    pass

class MyClass:
    def __init__(self):
        self.f = some_func

    def dostuff(self):
        self.f()
```
By tracking the name `self.f`, the analyzer will see that `MyClass.dostuff()` uses `some_func()`.
The analyzer also needs to keep track of what type of object `self` currently points to. In a method definition, the literal name representing `self` is captured from the argument list, as Python does; then in the lexical scope of that method, that name points to the current class (since Pyan cares only about object types, not instances).
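Capturing the `self` name from a method's argument list is straightforward with the standard library's `ast` module (an illustrative sketch):

```python
import ast

def self_name(method_src):
    # Capture the name of the first argument of a method definition,
    # as Python itself does; Pyan points that name at the current class.
    func = ast.parse(method_src).body[0]
    return func.args.args[0].arg

print(self_name("def dostuff(self, x): pass"))  # self
```

This also handles unconventional first-argument names like `this` or `cls`.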
Of course, this simple approach cannot correctly track cases where the current binding of `self.f` depends on the order in which the methods of the class are executed. To keep things simple, Pyan decides to ignore this complication, just reads through the code in a linear fashion (twice so that any forward-references are picked up), and uses the most recent binding that is currently in scope.
When a binding statement is encountered, the current namespace determines in which scope to store the new value for the name. Similarly, when encountering a use, the current namespace determines which object type or function to tag as the user.
# Authors
See [AUTHORS.md](AUTHORS.md).
# License
[GPL v2](LICENSE.md), as per [comments here](https://ejrh.wordpress.com/2012/08/18/coloured-call-graphs/).
| text/markdown | null | Juha Jeronen <juha.m.jeronen@gmail.com> | null | null | GPL-2.0-or-later | call-graph, code-visualization, dependency-analysis, static-code-analysis | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Lang... | [] | null | null | >=3.10 | [] | [] | [] | [
"jinja2",
"cryptography>=46.0.5; extra == \"dev\"",
"ruff>=0.14.0; extra == \"dev\"",
"twine>=6.0.0; extra == \"dev\"",
"urllib3>=2.6.3; extra == \"dev\"",
"docutils; extra == \"sphinx\"",
"sphinx; extra == \"sphinx\"",
"coverage>=5.3; extra == \"test\"",
"docutils; extra == \"test\"",
"pytest-cov... | [] | [] | [] | [
"Homepage, https://github.com/Technologicat/pyan",
"Documentation, https://github.com/Technologicat/pyan",
"Repository, https://github.com/Technologicat/pyan",
"Bug Tracker, https://github.com/Technologicat/pyan/issues"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T13:13:25.660900 | pyan3-2.0.0.tar.gz | 62,721 | e2/e6/204c1a23b9250effd9e74c7ad1ba66c86340d1f311e1d4734363db114919/pyan3-2.0.0.tar.gz | source | sdist | null | false | 82ad0f9860932a80bf27cd729c5b5ccb | f4f81d2c629cbc62b7e5ad8d828a0d9bf79229b3b8598c3d39725a8540565c79 | e2e6204c1a23b9250effd9e74c7ad1ba66c86340d1f311e1d4734363db114919 | null | [
"AUTHORS.md",
"LICENSE.md"
] | 1,051 |
2.4 | stockprice-mcp | 0.1.0 | Stock price & FX MCP server for Claude Desktop, powered by yfinance — no API key required | # stockprice-mcp
Stock price & FX rate MCP server for Claude Desktop, powered by [yfinance](https://github.com/ranaroussi/yfinance). No API key required.
> **Note**: An unrelated package named `yfinance-mcp` exists on PyPI — it is not affiliated with this project.
> This project is published as **`stockprice-mcp`**.
## Setup (Claude Desktop)
```bash
uvx stockprice-mcp serve
```
Add to `claude_desktop_config.json`:
```json
{
"mcpServers": {
"stockprice": {
"command": "uvx",
"args": ["stockprice-mcp", "serve"]
}
}
}
```
## Tools
| Tool | Description |
|------|-------------|
| `get_stock_price` | Latest price + fundamentals for TSE-listed stocks (code.T) |
| `get_stock_history` | OHLCV history for a date range |
| `get_fx_rates` | JPY FX rates (USDJPY, EURJPY, GBPJPY, CNYJPY) |
| `search_ticker` | Search ticker by company name or keyword |
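The `code.T` convention maps a TSE security code to a Yahoo Finance symbol; a sketch of that mapping (`to_yahoo_symbol` is a hypothetical helper, not part of this package's API):

```python
def to_yahoo_symbol(code):
    # TSE-listed stocks take a ".T" suffix on Yahoo Finance
    # (Toyota: "7203" -> "7203.T"); other symbols pass through.
    return f"{code}.T" if code.isdigit() else code

print(to_yahoo_symbol("7203"))      # 7203.T
print(to_yahoo_symbol("USDJPY=X"))  # USDJPY=X
```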
## Usage in Claude Desktop
```text
Using stockprice, tell me the latest stock price of Toyota (7203)
```
```text
Using stockprice, check the USDJPY trend over the past week
```
```text
Using stockprice, search for Sony's ticker
```
## CLI
```bash
pip install stockprice-mcp
yfinance-mcp price 7203                        # latest stock price
yfinance-mcp history 7203 --start 2025-01-01   # price history
yfinance-mcp fx                                # FX rates
yfinance-mcp search Toyota                     # ticker search
yfinance-mcp test                              # connectivity check
yfinance-mcp serve                             # start the MCP server
```
## Python
```python
import asyncio
from yfinance_mcp import YfinanceClient
async def main():
client = YfinanceClient()
price = await client.get_stock_price("7203")
print(price.close, price.trailing_pe)
asyncio.run(main())
```
## Disclaimer
This package uses [yfinance](https://github.com/ranaroussi/yfinance) (Apache 2.0) to access Yahoo Finance data.
yfinance is not affiliated with or endorsed by Yahoo.
Users are responsible for complying with [Yahoo Finance's Terms of Service](https://legal.yahoo.com/us/en/yahoo/terms/otos/).
Data is intended for personal, educational, and research use.
## License
Apache-2.0
| text/markdown | null | null | null | null | null | claude, fx, japan, mcp, stock, yahoo-finance, yfinance | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"fastmcp>=2.0",
"loguru>=0.7",
"pydantic>=2.0",
"yfinance>=0.2"
] | [] | [] | [] | [
"Homepage, https://github.com/ajtgjmdjp/stockprice-mcp",
"Repository, https://github.com/ajtgjmdjp/stockprice-mcp",
"Issues, https://github.com/ajtgjmdjp/stockprice-mcp/issues"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:12:44.100939 | stockprice_mcp-0.1.0.tar.gz | 6,916 | 38/83/c6b50309fe23a9e503afbd75d9fff48f86512f9f8281b543da7b70b52a00/stockprice_mcp-0.1.0.tar.gz | source | sdist | null | false | f7706328d39f9182b966612f3b6d47b6 | 1a707c89336afefb4190740848b050772ddd7f0ca6a89b781073df3670b6e1f1 | 3883c6b50309fe23a9e503afbd75d9fff48f86512f9f8281b543da7b70b52a00 | Apache-2.0 | [] | 173 |
2.4 | wigglystuff | 0.2.30 | Collection of Anywidget Widgets | # wigglystuff
> "A collection of creative AnyWidgets for Python notebook environments."
The project uses [anywidget](https://anywidget.dev/) under the hood, so our tools should work in [marimo](https://marimo.io/), [Jupyter](https://jupyter.org/), [Shiny for Python](https://shiny.posit.co/py/docs/jupyter-widgets.html), [VSCode](https://code.visualstudio.com/docs/datascience/jupyter-notebooks), [Colab](https://colab.google/), [Solara](https://solara.dev/), etc. Thanks to the anywidget integration, you should also be able to interact with [ipywidgets](https://ipywidgets.readthedocs.io/en/stable/) natively.
## Install
```
uv pip install wigglystuff
```
## Widget Gallery
<table>
<tr>
<td align="center"><b>Slider2D</b><br><a href="https://koaning.github.io/wigglystuff/examples/slider2d/"><img src="./mkdocs/assets/gallery/slider2d.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/slider2d/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/slider2d/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/slider2d.md">MD</a></td>
<td align="center"><b>Matrix</b><br><a href="https://koaning.github.io/wigglystuff/examples/matrix/"><img src="./mkdocs/assets/gallery/matrix.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/matrix/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/matrix/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/matrix.md">MD</a></td>
<td align="center"><b>Paint</b><br><a href="https://koaning.github.io/wigglystuff/examples/paint/"><img src="./mkdocs/assets/gallery/paint.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/paint/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/paint/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/paint.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>EdgeDraw</b><br><a href="https://koaning.github.io/wigglystuff/examples/edgedraw/"><img src="./mkdocs/assets/gallery/edgedraw.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/edgedraw/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/edge-draw/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/edge-draw.md">MD</a></td>
<td align="center"><b>SortableList</b><br><a href="https://koaning.github.io/wigglystuff/examples/sortlist/"><img src="./mkdocs/assets/gallery/sortablelist.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/sortlist/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/sortable-list/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/sortable-list.md">MD</a></td>
<td align="center"><b>ColorPicker</b><br><a href="https://koaning.github.io/wigglystuff/examples/colorpicker/"><img src="./mkdocs/assets/gallery/colorpicker.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/colorpicker/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/color-picker/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/color-picker.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>GamepadWidget</b><br><a href="https://koaning.github.io/wigglystuff/examples/gamepad/"><img src="./mkdocs/assets/gallery/gamepad.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/gamepad/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/gamepad/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/gamepad.md">MD</a></td>
<td align="center"><b>KeystrokeWidget</b><br><a href="https://koaning.github.io/wigglystuff/examples/keystroke/"><img src="./mkdocs/assets/gallery/keystroke.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/keystroke/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/keystroke/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/keystroke.md">MD</a></td>
<td align="center"><b>SpeechToText</b><br><a href="https://koaning.github.io/wigglystuff/examples/talk/"><img src="./mkdocs/assets/gallery/speechtotext.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/talk/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/talk/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/talk.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>CopyToClipboard</b><br><a href="https://koaning.github.io/wigglystuff/examples/copytoclipboard/"><img src="./mkdocs/assets/gallery/copytoclipboard.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/copytoclipboard/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/copy-to-clipboard/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/copy-to-clipboard.md">MD</a></td>
<td align="center"><b>CellTour</b><br><a href="https://koaning.github.io/wigglystuff/examples/celltour/"><img src="./mkdocs/assets/gallery/celltour.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/celltour/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/cell-tour/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/cell-tour.md">MD</a></td>
<td align="center"><b>WebcamCapture</b><br><a href="https://koaning.github.io/wigglystuff/examples/webcam_capture/"><img src="./mkdocs/assets/gallery/webcam-capture.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/webcam_capture/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/webcam-capture/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/webcam-capture.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>ThreeWidget</b><br><a href="https://koaning.github.io/wigglystuff/examples/threewidget/"><img src="./mkdocs/assets/gallery/threewidget.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/threewidget/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/three-widget/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/three-widget.md">MD</a></td>
<td align="center"><b>ImageRefreshWidget</b><br><a href="https://koaning.github.io/wigglystuff/examples/htmlwidget/"><img src="./mkdocs/assets/gallery/imagerefresh.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/htmlwidget/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/image-refresh/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/image-refresh.md">MD</a></td>
<td align="center"><b>HTMLRefreshWidget</b><br><a href="https://koaning.github.io/wigglystuff/examples/htmlwidget/"><img src="./mkdocs/assets/gallery/htmlwidget.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/htmlwidget/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/html-refresh/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/html-refresh.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>ProgressBar</b><br><a href="https://koaning.github.io/wigglystuff/examples/htmlwidget/"><img src="./mkdocs/assets/gallery/progressbar.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/htmlwidget/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/progress-bar/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/progress-bar.md">MD</a></td>
<td align="center"><b>PulsarChart</b><br><a href="https://koaning.github.io/wigglystuff/examples/pulsarchart/"><img src="./mkdocs/assets/gallery/pulsarchart.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/pulsarchart/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/pulsar-chart/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/pulsar-chart.md">MD</a></td>
<td align="center"><b>TextCompare</b><br><a href="https://koaning.github.io/wigglystuff/examples/textcompare/"><img src="./mkdocs/assets/gallery/textcompare.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/textcompare/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/text-compare/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/text-compare.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>EnvConfig</b><br><a href="https://koaning.github.io/wigglystuff/examples/envconfig/"><img src="./mkdocs/assets/gallery/envconfig.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/envconfig/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/env-config/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/env-config.md">MD</a></td>
<td align="center"><b>Tangle</b><br><a href="https://koaning.github.io/wigglystuff/examples/tangle/"><img src="./mkdocs/assets/gallery/tangle.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/tangle/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/tangle/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/tangle.md">MD</a></td>
<td align="center"><b>ChartPuck</b><br><a href="https://koaning.github.io/wigglystuff/examples/chartpuck/"><img src="./mkdocs/assets/gallery/chartpuck.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/chartpuck/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/chart-puck/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/chart-puck.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>ChartSelect</b><br><a href="https://koaning.github.io/wigglystuff/examples/chartselect/"><img src="./mkdocs/assets/gallery/chartselect.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/chartselect/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/chart-select/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/chart-select.md">MD</a></td>
<td align="center"><b>ScatterWidget</b><br><a href="https://koaning.github.io/wigglystuff/examples/scatterwidget/"><img src="./mkdocs/assets/gallery/scatterwidget.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/scatterwidget/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/scatter-widget/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/scatter-widget.md">MD</a></td>
<td align="center"><b>DiffViewer</b><br><a href="https://koaning.github.io/wigglystuff/examples/diffviewer/"><img src="./mkdocs/assets/gallery/diffviewer.png" width="330"></a><br><a href="https://koaning.github.io/wigglystuff/examples/diffviewer/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/diff-viewer/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/diff-viewer.md">MD</a></td>
</tr>
</table>
## 3rd party widgets
These widgets depend on 3rd party packages. They still ship with wigglystuff but have demos hosted on [molab](https://molab.marimo.io) because many of the dependencies are not compatible with WASM.
<table>
<tr>
<td align="center"><b>ModuleTreeWidget</b><br><a href="https://molab.marimo.io/notebooks/nb_K7QvvoASZErgKxwD8XSMWi"><img src="./mkdocs/assets/gallery/moduletree.png" width="330"></a><br><a href="https://molab.marimo.io/notebooks/nb_K7QvvoASZErgKxwD8XSMWi">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/module-tree/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/module-tree.md">MD</a></td>
<td align="center"><b>WandbChart</b><br><a href="https://molab.marimo.io/notebooks/nb_pbN8i6DyggB26Xrzw9Bztw"><img src="./mkdocs/assets/gallery/wandbchart.png" width="330"></a><br><a href="https://molab.marimo.io/notebooks/nb_pbN8i6DyggB26Xrzw9Bztw">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/wandb-chart/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/wandb-chart.md">MD</a></td>
<td align="center"><b>Neo4jWidget</b><br><a href="https://molab.marimo.io/notebooks/nb_ghifaw8nRCuDAgc1UTajXU"><img src="./mkdocs/assets/gallery/neo4j-widget.png" width="330"></a><br><a href="https://molab.marimo.io/notebooks/nb_ghifaw8nRCuDAgc1UTajXU">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/neo4j-widget/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/neo4j-widget.md">MD</a></td>
</tr>
<tr>
<td align="center"><b>AltairWidget</b><br><a href="https://koaning.github.io/wigglystuff/examples/altairwidget/"><img src="./mkdocs/assets/gallery/altairwidget.png" width="200"></a><br><a href="https://koaning.github.io/wigglystuff/examples/altairwidget/">Demo</a> · <a href="https://koaning.github.io/wigglystuff/reference/altair-widget/">API</a> · <a href="https://koaning.github.io/wigglystuff/reference/altair-widget.md">MD</a></td>
</tr>
</table>
| text/markdown | Vincent D. Warmerdam | null | null | null | MIT License Copyright (c) 2022 Vincent D. Warmerdam Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anywidget>=0.9.2",
"drawdata",
"numpy",
"pillow",
"python-dotenv>=1.2.1",
"altair>=6.0.0; extra == \"docs\"",
"black>=24.8.0; extra == \"docs\"",
"marimo>=0.18.0; extra == \"docs\"",
"mike>=2.1.0; extra == \"docs\"",
"mkdocs-git-revision-date-localized-plugin>=1.2.6; extra == \"docs\"",
"mkdocs... | [] | [] | [] | [] | uv/0.9.20 {"installer":{"name":"uv","version":"0.9.20","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:12:37.295401 | wigglystuff-0.2.30.tar.gz | 2,423,247 | fe/6a/ef0342b851056ef562ebc481b4115a3edbb0f6d0084e6b0ece9935541141/wigglystuff-0.2.30.tar.gz | source | sdist | null | false | b652bee8971db1678f9c5697bf827a49 | a15b1caccf071f8a50323aac46f0a6f229f1328589215f5055e04a6d6e44c29a | fe6aef0342b851056ef562ebc481b4115a3edbb0f6d0084e6b0ece9935541141 | null | [
"LICENSE"
] | 925 |
2.4 | agent-lab-sdk | 0.1.53.dev3 | SDK для работы с Agent Lab | # Agent Lab SDK
A set of utilities and wrappers that simplify working with LLMs, the Agent Gateway, and metrics in Giga Labs projects.
## Installation
```bash
pip install agent_lab_sdk
```
## Changelog
The list of changes between agent-lab-sdk versions is available [here](/CHANGELOG.md).
## Contents
1. [Module `agent_lab_sdk.llm`](#1-module-agent_lab_sdkllm)
2. [Module `agent_lab_sdk.llm.throttled`](#2-module-agent_lab_sdkllmthrottled)
3. [Module `agent_lab_sdk.metrics`](#3-module-agent_lab_sdkmetrics)
4. [Storage](#4-storage)
5. [Schema](#5-schema)
6. [Building and publishing](#6-building-and-publishing)
---
## 1. Module `agent_lab_sdk.llm`
### 1.1. Getting a model
```python
from agent_lab_sdk.llm import get_model, RetryConfig
# Uses the token from the environment by default
model = get_model()
# Get a GigaChat model (throttled by default); uses the token from the environment
model = get_model("chat")
# Get a GigaChat model without the throttled wrapper
model = get_model("chat", throttled=False)
# Get a GigaChat embeddings model (throttled by default); uses the token from the environment
model = get_model("embeddings")
# Get a GigaChat embeddings model without the throttled wrapper
model = get_model("embeddings", throttled=False)
# Pass explicit GigaChat parameters; the environment token is not used
model = get_model(
    access_token="YOUR_TOKEN",
    timeout=60,
    scope="GIGACHAT_API_CORP"
)
# Enable retry (disabled by default) and configure the backoff
model = get_model(
    "chat",
    retry=True,
    retry_config=RetryConfig(
        retry_attempts_count=5,
        wait_min=0.2,
        wait_max=8,
    ),
)
```
> If `access_token` is not passed, the token is obtained via `GigaChatTokenManager` or `AgsTokenManager`, depending on the `use_ags_token` setting.
> `throttled` is enabled by default and adds concurrency limits and metrics for GigaChat and GigaChatEmbeddings. See [section 2](#2-module-agent_lab_sdkllmthrottled) for details.
> `retry` is disabled by default. Enable it with `retry=True` and tune it via `retry_attempts_count` or `retry_config=RetryConfig(...)` (throttled wrappers only).
### 1.2. Token managers
| Class                  | Description                                                              | Usage example                                                |
|------------------------|--------------------------------------------------------------------------|--------------------------------------------------------------|
| `AgwTokenManager`      | Caches and fetches a token via the Agent Gateway                         | `token = AgwTokenManager.get_token("provider")`              |
| `GigaChatTokenManager` | Caches and fetches a token via GigaChat OAuth using user credentials     | `token = GigaChatTokenManager.get_token()`                   |
| `AgsTokenManager`      | Caches and fetches a token and its limits via the Agent Service          | `token = AgsTokenManager.get_token(agent_id, credential_id)` |
### 1.3. Environment variables
| Variable | Description | Default / example |
|---|---|---|
| `GIGACHAT_SCOPE` | GigaChat API scope | `GIGACHAT_API_PERS` |
| `GIGACHAT_TIMEOUT` | GigaChat request timeout (seconds) | `120` |
| `USE_TOKEN_PROVIDER_AGW` | Use `AgwTokenManager` to obtain the GigaChat token | `true` |
| `GIGACHAT_CREDENTIALS` | Base GigaChat credentials (`b64(clientId:secretId)`) | `Y2xpZW50SWQ6c2VjcmV0SWQ=` |
| `GIGACHAT_USER` | GigaChat advanced username | `user` |
| `GIGACHAT_PASSWORD` | GigaChat advanced password | `password` |
| `GIGACHAT_TOKEN_PATH` | Path to the GigaChat token cache file (if unset, the path is derived from a hash of the credentials and parameters) | `/tmp/gigachat_token_<hash>.json` |
| `GIGACHAT_TOKEN_PATH_SALT` | Optional salt used when deriving the token cache path | `my-salt` |
| `GIGACHAT_TOKEN_FETCH_RETRIES` | Number of token fetch attempts (GigaChat) | `3` |
| `USE_GIGACHAT_ADVANCED` | Request the GigaChat API token in advanced mode | `true` |
| `GIGACHAT_BASE_URL` | GigaChat base URL (must end with `/`) | `https://gigachat.sberdevices.ru/v1/` |
| `TOKEN_PROVIDER_AGW_URL` | Agent Gateway URL for obtaining the AGW token | `https://agent-gateway.apps.advosd.sberdevices.ru` |
| `TOKEN_PROVIDER_AGW_DEFAULT_MAX_RETRIES` | Maximum token request attempts (AGW) | `3` |
| `TOKEN_PROVIDER_AGW_TIMEOUT_SEC` | AGW request timeout (seconds) | `5` |
| `GIGACHAT_MODEL` | GigaChat model | `GigaChat` |
| `AGENT_LAB_SDK_GIGACHAT_CREDENTIAL_ID` | Credential ID for obtaining a token from the Agent Service | - |
| `AGENT_SERVICE_NAME` | Agent service name (required) | - |
| `AGENT_LAB_SDK_USE_TOKEN_PROVIDER_AGS` | Use `AgsTokenManager` to obtain the GigaChat token and limits | `false` |
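For illustration, the per-credentials cache path mentioned for `GIGACHAT_TOKEN_PATH` can be sketched as follows. The function name and the exact hash inputs below are assumptions made for this sketch, not the SDK's actual implementation:

```python
import hashlib
import os

def token_cache_path(credentials: str, scope: str, salt: str = "") -> str:
    # Hypothetical sketch: derive a stable cache path of the form
    # /tmp/gigachat_token_<hash>.json from the credentials and parameters,
    # mixing in an optional salt (cf. GIGACHAT_TOKEN_PATH_SALT).
    digest = hashlib.sha256(f"{credentials}:{scope}:{salt}".encode()).hexdigest()[:16]
    return os.path.join("/tmp", f"gigachat_token_{digest}.json")
```

Setting `GIGACHAT_TOKEN_PATH` explicitly bypasses any such derivation.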
---
## 2. Module `agent_lab_sdk.llm.throttled`
Limits the number of concurrent calls to GigaChat and the embeddings service, automatically collecting the corresponding metrics.
```python
from agent_lab_sdk.llm import GigaChatTokenManager
from agent_lab_sdk.llm.throttled import ThrottledGigaChat, ThrottledGigaChatEmbeddings
from agent_lab_sdk.llm import RetryConfig
access_token = GigaChatTokenManager.get_token()
# Chat with concurrency limits applied
chat = ThrottledGigaChat(access_token=access_token)
response = chat.invoke("Hello!")
# Embeddings with concurrency limits applied
emb = ThrottledGigaChatEmbeddings(access_token=access_token)
vectors = emb.embed_documents(["Text1", "Text2"])
# Retry is disabled by default; enable it explicitly when needed
chat_with_retry = ThrottledGigaChat(
    access_token=access_token,
    retry=True,
    retry_config=RetryConfig(retry_attempts_count=3, wait_min=0.5, wait_max=4),
)
```
### 2.1. Throttling environment variables
| Variable                               | Description                                     | Default  |
|----------------------------------------|-------------------------------------------------|----------|
| `MAX_CHAT_CONCURRENCY`                 | Maximum number of concurrent chat requests      | `100000` |
| `MAX_EMBED_CONCURRENCY`                | Maximum number of concurrent embedding requests | `100000` |
| `EMBEDDINGS_MAX_BATCH_SIZE_PARTS`      | Maximum embeddings batch size (in parts)        | `90`     |
| `AGENT_LAB_SDK_REDIS_THROTTLE_ENABLED` | Enables global request throttling               | `false`  |
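The SDK reads these variables internally; for illustration, the lookup with the documented defaults can be sketched like this (the helper name and return shape are ours, not part of the SDK):

```python
import os

def read_throttle_config() -> dict:
    # Hypothetical helper mirroring the documented defaults above;
    # the SDK performs an equivalent lookup internally.
    return {
        "max_chat_concurrency": int(os.getenv("MAX_CHAT_CONCURRENCY", "100000")),
        "max_embed_concurrency": int(os.getenv("MAX_EMBED_CONCURRENCY", "100000")),
        "embeddings_max_batch_size_parts": int(os.getenv("EMBEDDINGS_MAX_BATCH_SIZE_PARTS", "90")),
        "redis_throttle_enabled": os.getenv("AGENT_LAB_SDK_REDIS_THROTTLE_ENABLED", "false").lower() == "true",
    }
```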
### 2.2. Metrics
Metrics are available via `agent_lab_sdk.metrics.get_metric`:
| Metric                    | Description                                        | Type      |
| ------------------------- | -------------------------------------------------- | --------- |
| `chat_slots_in_use`       | Number of chat slots in use                        | Gauge     |
| `chat_waiting_tasks`      | Number of tasks waiting for a free chat slot       | Gauge     |
| `chat_wait_time_seconds`  | Time spent waiting for a chat slot (seconds)       | Histogram |
| `embed_slots_in_use`      | Number of embedding slots in use                   | Gauge     |
| `embed_waiting_tasks`     | Number of tasks waiting for an embedding slot      | Gauge     |
| `embed_wait_time_seconds` | Time spent waiting for an embedding slot (seconds) | Histogram |
---
## 3. Module `agent_lab_sdk.metrics`
Provides a convenient interface for creating and managing metrics via the Prometheus client.
### 3.1. Core functions
```python
from agent_lab_sdk.metrics import get_metric
# Create a metric
g = get_metric(
    metric_type="gauge",            # type: "gauge", "counter", or "histogram"
    name="my_gauge",                # metric name in Prometheus
    documentation="My gauge metric" # description
)
# Increment the value
g.inc()
# Set a specific value
g.set(42)
```
### 3.2. Usage example
```python
from agent_lab_sdk.metrics import get_metric
import time
# HTTP request counter with labels
reqs = get_metric(
    metric_type="counter",
    name="http_requests_total",
    documentation="Total HTTP requests",
    labelnames=["method", "endpoint"]
)
reqs.labels("GET", "/api").inc()
# Latency histogram
lat = get_metric(
    metric_type="histogram",
    name="http_request_latency_seconds",
    documentation="HTTP request duration",
    buckets=[0.1, 0.5, 1.0, 5.0]
)
with lat.time():
    time.sleep(0.5)
print(reqs.collect())
print(lat.collect())
```
## 4. Storage
### 4.1 SD asset storage
The `store_file_in_sd_asset` function saves a base64-encoded file to S3 storage and returns a public link to the file.
```python
from agent_lab_sdk.storage import store_file_in_sd_asset
store_file_in_sd_asset("my-agent-name-filename.png", file_b64, "giga-agents")
```
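Preparing the `file_b64` argument needs only the standard library. A minimal helper (the name `file_to_b64` is ours, not part of the SDK):

```python
import base64

def file_to_b64(path: str) -> str:
    # Read a local file and encode it as base64 text,
    # the form store_file_in_sd_asset expects for its file argument.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```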
### 4.2 V2 File Upload
A new v2 API for uploading files via the Agent Gateway, with support for binary data and automatic selection of the storage service.
```python
from agent_lab_sdk.storage import upload_file, FileUploadResponse
# Upload from bytes
with open("document.pdf", "rb") as f:
    file_bytes = f.read()
result: FileUploadResponse = upload_file("document.pdf", file_bytes)
# The result is a Pydantic model describing the file
print(f"File ID: {result.id}")
print(f"Absolute Path: {result.absolute_path}")
print(f"Storage: {result.storage}")
```
#### V2 upload environment variables
| Variable                   | Description                   | Default            |
| -------------------------- | ----------------------------- | ------------------ |
| `AGENT_SERVICE_NAME`       | Agent service name (required) | -                  |
| `STORAGE_PROVIDER_AGW_URL` | Agent Gateway URL             | `http://localhost` |
### 4.3 AGW Checkpointer
AGW supports the LangGraph checkpoint API, and the SDK provides `AsyncAGWCheckpointSaver`, which saves graph state directly to AGW.
## 5. Schema
### 5.1. Input types
The `agent_lab_sdk.schema.input_types` module provides factory functions for creating annotated field types that can be used in Pydantic models to describe an agent's interface.
#### Core field types
```python
from typing import List, Annotated
from pydantic import BaseModel, Field
from agent_lab_sdk.schema import (
    MainInput, StringInput, StringArrayInput, NumberInput,
    SelectInput, CheckboxInput, FileInput, FilesInput, SelectOption, Visibility
)
class AgentState(BaseModel):
    # Main input field
    query: Annotated[str, MainInput(placeholder="Enter your query")]
    # String field
    title: Annotated[str, StringInput(
        default="Untitled",
        title="Title",
        description="A name for your request",
        visibility=Visibility.ALWAYS  # or visibility="always"
    )]
    # Array of strings
    keywords: Annotated[List[str], StringArrayInput(
        placeholder="Add keywords...",
        title="Keywords",
        description="A list of keywords to search for",
        group="Parameters"
    )]
    # Numeric field
    temperature: Annotated[float, NumberInput(
        default=0.7,
        title="Temperature",
        description="Model creativity parameter (0.0 - 1.0)",
        hidden=True
    )]
    # Drop-down list
    mode: Annotated[str, SelectInput(
        title="Mode",
        items=[
            SelectOption(label="Fast", value="fast").model_dump(),
            SelectOption(label="Precise", value="precise").model_dump()
        ],
        default="fast",
        group="Settings"
    )]
    # Checkbox
    save_history: Annotated[bool, CheckboxInput(
        title="Save history",
        description="Keep the dialogue for later analysis",
        default=True,
        group="Options"
    )]
    # Single file upload
    document: Annotated[str, FileInput(
        title="Document",
        file_extensions=".pdf,.docx,.txt",
        view="button",     # or "dropzone" for drag-and-drop
        max_size_mb=15.0   # maximum size limit for a single file
    )]
    # Multiple file upload
    attachments: Annotated[List[str], FilesInput(
        title="Attachments",
        file_extensions=".pdf,.csv,.xlsx",
        group="Files",
        view="dropzone",   # drag-and-drop area
        max_size_mb=15.0   # maximum combined size limit for all files
    )]
```
#### Available factory functions
| Type                     | Description                             | Main parameters                                                                                        |
|--------------------------|-----------------------------------------|--------------------------------------------------------------------------------------------------------|
| `MainInput`              | Main input field                        | `placeholder`, `visibility`                                                                            |
| `StringInput`            | Text field                              | `default`, `title`, `description`, `hidden`, `depends`, `visibility`                                   |
| `StringArrayInput`       | Array of strings                        | `placeholder`, `title`, `description`, `group`, `hidden`, `depends`, `visibility`                      |
| `StringArrayInputInline` | Array of strings on a single input line | `placeholder`, `title`, `description`, `group`, `hidden`, `depends`, `visibility`                      |
| `NumberInput`            | Numeric field                           | `default`, `title`, `description`, `hidden`, `depends`, `visibility`                                   |
| `SelectInput`            | Drop-down list                          | `items`, `title`, `group`, `default`, `hidden`, `depends`, `visibility`                                |
| `CheckboxInput`          | Checkbox                                | `title`, `group`, `description`, `default`, `hidden`, `depends`, `visibility`                          |
| `SwitchInput`            | Switch                                  | `title`, `group`, `description`, `default`, `hidden`, `depends`, `visibility`                          |
| `FileInput`              | Single file upload                      | `title`, `file_extensions`, `group`, `hidden`, `depends`, `view`, `visibility`, `max_size_mb`          |
| `FilesInput`             | Multiple file upload                    | `title`, `file_extensions`, `group`, `hidden`, `depends`, `limit`, `view`, `visibility`, `max_size_mb` |
#### Grouping fields
Use the `group` parameter to group fields logically in the interface:
```python
class TaskConfig(BaseModel):
    # "Main parameters" group
    task_type: Annotated[str, SelectInput(
        title="Task type",
        items=[...],
        group="Main parameters"
    )]
    priority: Annotated[str, SelectInput(
        title="Priority",
        items=[...],
        group="Main parameters"
    )]
    # "Additional" group
    notifications: Annotated[bool, CheckboxInput(
        title="Notifications",
        group="Additional"
    )]
    tags: Annotated[List[str], StringArrayInput(
        placeholder="Tags...",
        group="Additional"
    )]
```
#### Controlling field visibility
The `visibility` parameter controls when a field is shown in the interface. Available values:
```python
from agent_lab_sdk.schema import Visibility
# An enum with three values:
Visibility.ALWAYS       # "always" - the field is always editable (default)
Visibility.START        # "start" - the field is editable only at start
Visibility.AFTER_START  # "after_start" - the field becomes editable after start
```
**Usage example:**
```python
class AgentConfig(BaseModel):
    # Field that is always editable
    query: Annotated[str, MainInput(
        placeholder="Enter a query",
        visibility=Visibility.ALWAYS
    )]
    # Field that is editable only on the first run
    api_key: Annotated[str, StringInput(
        title="API key",
        description="Key for accessing an external API",
        visibility=Visibility.START
    )]
    # Field that appears after the first message
    session_id: Annotated[str, StringInput(
        title="Session ID",
        description="Identifier of the current session",
        visibility=Visibility.AFTER_START,
        hidden=True
    )]
```
String values can also be passed directly:
```python
title: Annotated[str, StringInput(
    title="Title",
    visibility="always"  # equivalent to Visibility.ALWAYS
)]
```
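Why the plain-string form works: the `Visibility` enum is presumably a `str`-based `Enum`, so its members compare equal to their raw string values. A minimal illustration of the idea (this stand-in class is an assumption, not the SDK's actual definition):

```python
from enum import Enum

# Illustrative stand-in for agent_lab_sdk.schema.Visibility (assumed shape).
class Visibility(str, Enum):
    ALWAYS = "always"
    START = "start"
    AFTER_START = "after_start"

# A str-based Enum member equals its raw string value, which is why
# visibility="always" behaves the same as visibility=Visibility.ALWAYS.
print(Visibility.ALWAYS == "always")                        # True
print(Visibility("after_start") is Visibility.AFTER_START)  # True
```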
### 5.2. LogMessage
`LogMessage` is a helper message type for streaming logs from LangGraph / LangChain nodes. Instances are created like regular chat messages but receive the type `log`, so the frontend can render them separately from model responses.
- Imported from `agent_lab_sdk.schema`.
- By default it inherits from `langchain.schema.AIMessage` and sets `additional_kwargs={"type": "log"}`.
- If the environment variable `IS_LOG_MESSAGE_CUSTOM=true` is set, a `BaseMessage` subclass with the explicit type `log` is used instead.
The environment variable `IS_LOG_MESSAGE_CUSTOM` is currently set to `true` for all agents.
#### Example usage with `StreamWriter`
```python
from langgraph.graph import MessagesState
from langgraph.types import StreamWriter
from agent_lab_sdk.schema import LogMessage
async def run(state: MessagesState, writer: StreamWriter) -> MessagesState:
    writer(LogMessage("Starting to process the request"))
    # ... useful work here ...
    writer(LogMessage("Processing finished"))
    return state
```
Calling `writer(LogMessage(...))` emits a log while the graph step is still running, so the client can see progress immediately.
## 6. Building and publishing
1. Install build tooling
```bash
pip install --upgrade build twine
```
2. Build and upload to PyPI. Before building a new release, remember to bump the version in [pyproject.toml](/pyproject.toml):
```bash
python -m build && python -m twine upload dist/*
```
3. Project page on PyPI
> https://pypi.org/project/agent-lab-sdk/
4. Install locally in editable mode (you may first need to activate the appropriate environment):
```bash
pip install -e .
```
| text/markdown | null | Andrew Ohurtsov <andermirik@yandex.com> | null | null | Proprietary and Confidential — All Rights Reserved | agent, lab, sdk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"requests",
"langgraph",
"langchain_gigachat",
"prometheus-client",
"langchain",
"httpx",
"orjson",
"cloudpickle",
"tenacity",
"pydantic",
"pydantic-settings",
"redis; extra == \"redis\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T13:11:43.249561 | agent_lab_sdk-0.1.53.dev3.tar.gz | 43,924 | f7/80/53427e254b7d69faed44d8e763e1738296ab985d253bebe6a8b04f1a04e4/agent_lab_sdk-0.1.53.dev3.tar.gz | source | sdist | null | false | b987f878498f7210248a2dac2825d318 | d56030e7d6bb3c799b360e0c3992e8f801abf41e2b4512d0b4b2cde5a5ff0d5c | f78053427e254b7d69faed44d8e763e1738296ab985d253bebe6a8b04f1a04e4 | null | [
"LICENSE"
] | 185 |
2.4 | articlealpha | 0.1.1 | Python client for ArticleAlpha Wikipedia-based market attention data. | # ArticleAlpha Python Client
The official Python wrapper for the [ArticleAlpha](https://articlealpha.com) API.
ArticleAlpha tracks investor curiosity by monitoring Wikipedia pageview trends for hundreds of stocks. This library allows data scientists and analysts to pull that "Attention Data" directly into Pandas for behavioral finance research.
## Installation
```bash
pip install articlealpha
```
## Usage
```python
from articlealpha import ArticleAlpha
# Get daily time-series for all stocks
df = ArticleAlpha.get_timeseries()
print(df.head())
# Get deep details for a specific ticker
nvda = ArticleAlpha.get_ticker_details("NVDA")
```
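A typical next step with this attention data is flagging unusual pageview spikes. The sketch below is hypothetical: it assumes `get_timeseries()` returns a tidy DataFrame with `date`, `ticker`, and `pageviews` columns (check the actual schema), and runs on synthetic data so it is self-contained:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for ArticleAlpha.get_timeseries() output.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "date": pd.date_range("2025-01-01", periods=60),
    "ticker": "NVDA",
    "pageviews": rng.poisson(1000, 60),
})
df.loc[55, "pageviews"] = 5000  # inject an attention spike

# Rolling z-score of pageviews: large values mark unusual attention.
roll = df["pageviews"].rolling(30)
df["zscore"] = (df["pageviews"] - roll.mean()) / roll.std()
spikes = df[df["zscore"] > 3]
print(spikes[["date", "pageviews"]])
```

A rolling z-score above 3 is a simple, common anomaly threshold; for actual research you would tune the window and threshold per ticker.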
## Data Source & Attribution
This project utilizes data from the **Wikimedia Metrics API**.
- **Source:** [Wikimedia API](https://doc.wikimedia.org/generated-data-platform/aqs/analytics-api/concepts/page-views.html)
- **Trademark Notice:** Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. ArticleAlpha is an independent project and is not affiliated with or endorsed by the Wikimedia Foundation.
## Disclaimer
This data is for informational purposes only. ArticleAlpha does not provide investment advice. Wikipedia pageviews represent public interest and curiosity, which may or may not correlate with market performance.
## License
MIT
| text/markdown | ArticleAlpha | info@articlealpha.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Office/Business :: Financial :: Investment"
] | [] | https://github.com/articlealpha/articlealpha | null | >=3.6 | [] | [] | [] | [
"pandas",
"requests"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T13:11:36.459868 | articlealpha-0.1.1.tar.gz | 3,010 | b7/bf/d2e638d10549c75cad9ba79fe22c437fe29cd6ac3bb3bff5deb19844122f/articlealpha-0.1.1.tar.gz | source | sdist | null | false | 5817b8f4eef6be3d5e81a0fb3659c926 | f0a365b52f18b328584377f78697e3ecf403f8636dd05b6f0e9eb696202b9d57 | b7bfd2e638d10549c75cad9ba79fe22c437fe29cd6ac3bb3bff5deb19844122f | null | [] | 217 |
2.4 | snakemake-software-deployment-plugin-envmodules | 0.1.6 | Software deployment plugin for Snakemake using environment modules. | # snakemake-software-deployment-plugin-envmodules
A snakemake software deployment plugin using [environment modules](https://modules.readthedocs.io). | text/markdown | null | Johannes Köster <johannes.koester@uni-due.de> | null | null | null | null | [] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"snakemake-interface-common<2.0.0,>=1.17.4",
"snakemake-interface-software-deployment-plugins<1.0,>=0.10.2"
] | [] | [] | [] | [
"repository, https://github.com/snakemake/snakemake-software-deployment-plugin-envmodules",
"documentation, https://snakemake.github.io/snakemake-plugin-catalog/plugins/software-deployment/envmodules.html"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:11:27.426834 | snakemake_software_deployment_plugin_envmodules-0.1.6.tar.gz | 5,933 | 65/50/39394f3657c62b7ec4f583018a84585a5fac4661f562418c95a8c3e50ec8/snakemake_software_deployment_plugin_envmodules-0.1.6.tar.gz | source | sdist | null | false | 6accf6235f8f8f5580f71b6772baa630 | ef8982a45b7ca488f397f74d9336a5483cf6dad6a6b9ea1b745206249f90dc4f | 655039394f3657c62b7ec4f583018a84585a5fac4661f562418c95a8c3e50ec8 | null | [] | 204 |
2.3 | crewmaster | 0.1.22b0 | A powerful and flexible framework for building, orchestrating, and deploying multi-agent systems. | 
# **crewmaster: The Ultimate Framework for Building AI Teams 🚀**
crewmaster is a powerful and flexible framework for building, orchestrating, and deploying multi-agent systems. It provides a structured, type-safe, and modular approach to creating intelligent crews that can solve complex, multi-step problems.
### **Features**
* **Modular & Scalable**: Build complex systems by combining simple, single-purpose agents.
* **Type-Safe & Robust**: A strong emphasis on Python's type system ensures predictable and maintainable code.
* **Asynchronous by Design**: Built to handle complex, concurrent tasks efficiently.
* **Production-Ready**: Comes with built-in tools to effortlessly serve your crews as API endpoints.
### **Getting Started for Developers**
We recommend starting with our comprehensive documentation, which includes guides on installation, core concepts, and step-by-step tutorials.
[**View the official documentation here**](https://crewmaster-2cc7a6.gitlab.io/) | text/markdown | Imolko | info@imolko.com | null | null | null | ai, llm, agent | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Framework :: FastAPI",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | <3.14,>=3.12.3 | [] | [] | [] | [
"langchain<0.4.0,>=0.3.18",
"langgraph<0.3.0,>=0.2.71",
"langserve<0.4.0,>=0.3.1",
"langchain-community<0.4.0,>=0.3.17",
"langchain-postgres<0.0.14,>=0.0.13",
"langchain-openai<0.4.0,>=0.3.5",
"langchain-cli<0.0.36,>=0.0.35",
"langchain-core<0.4.0,>=0.3.35",
"pydantic<3.0.0,>=2.10.6",
"pydantic-se... | [] | [] | [] | [] | poetry/2.1.3 CPython/3.12.11 Linux/5.15.154+ | 2026-02-19T13:11:00.152302 | crewmaster-0.1.22b0.tar.gz | 107,355 | 75/74/9a93c936084f507685cb4b441ee2c3bdb7ae3a30e2b9a2fa3196bad8e770/crewmaster-0.1.22b0.tar.gz | source | sdist | null | false | a3583215ee62a0d3401e8d5bbf7b7fd4 | 0c83337bdf232b46b1689d88b77f5b1c244f8876ac3d07028ca914213e4cb8b0 | 75749a93c936084f507685cb4b441ee2c3bdb7ae3a30e2b9a2fa3196bad8e770 | null | [] | 196 |
2.4 | nexat-trace | 1.1.1 | Nexat Terrain Routing and Coverage Engine | # Nexat Terrain Routing And Coverage Engine
A sophisticated complete coverage path planning library developed for controlled traffic farming applications. It is especially useful for vehicles with nonholonomic steering kinematics whose turning radius is similar to their track / working width.
## Features
### Complete coverage path planning
This library excels in robust & intelligent route optimization and curve planning for complex field geometries.
### Flexible route / task specification
The route planner has a lot of options and parameters that change the way the route is planned and how the curves are calculated.
Planner parameters include:
- Start / finish location
- Variable working width (multiple of track width, e.g. for spraying applications)
- Block working configuration (group sets of neighboring ab lines together)
- Reusing existing paths on a track system to minimize soil compaction
- Working corridor error avoidance
- Multiple turning maneuvers
- Weighted prioritization of overall distance vs overall coverage
## Installation
Releases of this library are published on PyPI:
```
pip install nexat-trace
```
## Usage
When using this library, you should start with generating a track system. The ```TrackSystem``` class provides basic track system generation from an outer field border:
> [!CAUTION]
> All geometry should be in a metric coordinate system e.g. UTM projection.
```python
from nexat_trace import TrackSystem
track_system = TrackSystem.from_border(
    field_border,         # your outer field border as a shapely Polygon with holes as obstacles
    14.0,                 # desired track width in meters
    reference_ab_line,    # reference LineString within your field border
    [0.5, 1.0, 1.0, 0.5]  # headland widths configuration
)
```
To use this library most effectively, you should generate your own specific track systems with field border, headlands, obstacles, AB lines and obstacle avoidance segments.
Now it is time to configure the planner parameters to match the desired task definition. Here is a basic example:
```python
from nexat_trace import RoutePlanner, CorridorStrategy
planner = RoutePlanner()
# should be whole multiple of track width
planner.route_params.working_width = 14.0
# ignore working corridor errors for now
planner.route_params.corridor_strategy = CorridorStrategy.DRIVE_NONE
# neutral distance optimizing weights
planner.route_params.weights.headland_distance_factor = 1.0
planner.route_params.weights.headland_cost_exponent = 1.0
```
Now a route can be planned using the prepared track system and configured planner:
```python
route = planner.plan_route_from_track_system(
    track_system,  # prepared track system instance
    5,             # time in s spent doing guided local search optimization
)
path = route.get_linestring()
```
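`route.get_linestring()` presumably returns a shapely `LineString` (the library is built on shapely), so `path.length` gives the route length in the metric units of your projection. For intuition, planar length is just the sum of segment distances; a dependency-free sketch:

```python
from math import hypot

def polyline_length(coords):
    """Planar length of a polyline given as (x, y) coordinate pairs."""
    return sum(
        hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(coords, coords[1:])
    )

# A 3-unit leg followed by a 4-unit leg gives a 7-unit path.
print(polyline_length([(0, 0), (3, 0), (3, 4)]))  # 7.0
```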
The route can be plotted using the utility functions:
> [!NOTE]
> For this you need to have the dev requirements installed. See dev_requirements.txt or setup_venv.sh for info
```python
from nexat_trace.util import plot_geometry as pg
pg.plot_linestring_rainbow(path)
pg.show_plot()
```
## Developing
When developing on linux you can use
```bash
source setup_venv.sh
```
to setup & activate a python venv with all development dependencies. The script also builds and installs the pydubins extension.
Now files in the root of the repo like ```example_basic.py``` or ```example_complex.py``` can be run and debugged against the code in this repo.
## Credits
This library depends on [shapely](https://github.com/shapely/shapely), [ortools](https://developers.google.com/optimization) and [numpy](https://numpy.org/) as well as [pydubins](https://github.com/rdesc/pydubins).
The pydubins module is redistributed in the nexat-trace package. See THIRD_PARTY file for license info of the pydubins software.
| text/markdown | Fabian Tepe | fabiantepe1.2@gmail.com | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"ortools>=9.11.4210",
"shapely>=2.1.1",
"numpy>=1.26.4",
"wheel; extra == \"debugging\"",
"setuptools; extra == \"debugging\"",
"matplotlib; extra == \"debugging\"",
"cython; extra == \"debugging\"",
"ruff; extra == \"debugging\"",
"build; extra == \"debugging\"",
"pytest; extra == \"debugging\""
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T13:10:52.389074 | nexat_trace-1.1.1-cp313-cp313-win_amd64.whl | 118,772 | 79/28/5e92b6cf364010b95b1359fd264fced6b22cc5df8b22ac8cda6f1996a6fc/nexat_trace-1.1.1-cp313-cp313-win_amd64.whl | cp313 | bdist_wheel | null | false | 6b33c524c8e06657163263183f021732 | 924acfa334cf688c27bb2d28bf725cbdeb6098dde61f140676c2d46a96f3e381 | 79285e92b6cf364010b95b1359fd264fced6b22cc5df8b22ac8cda6f1996a6fc | null | [
"LICENSE",
"THIRD_PARTY"
] | 1,540 |
2.3 | lithic | 0.115.0 | The official Python library for the lithic API | # Lithic Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/lithic/)
The Lithic Python library provides convenient access to the Lithic REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
## MCP Server
Use the Lithic MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.
[](https://cursor.com/en-US/install-mcp?name=lithic-mcp&config=eyJuYW1lIjoibGl0aGljLW1jcCIsInRyYW5zcG9ydCI6Imh0dHAiLCJ1cmwiOiJodHRwczovL2xpdGhpYy5zdGxtY3AuY29tIiwiaGVhZGVycyI6eyJ4LWxpdGhpYy1hcGkta2V5IjoiTXkgTGl0aGljIEFQSSBLZXkifX0)
[](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22lithic-mcp%22%2C%22type%22%3A%22http%22%2C%22url%22%3A%22https%3A%2F%2Flithic.stlmcp.com%22%2C%22headers%22%3A%7B%22x-lithic-api-key%22%3A%22My%20Lithic%20API%20Key%22%7D%7D)
> Note: You may need to set environment variables in your MCP client.
## Documentation
The REST API documentation can be found on [docs.lithic.com](https://docs.lithic.com). The full API of this library can be found in [api.md](https://github.com/lithic-com/lithic-python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install lithic
```
## Usage
The full API of this library can be found in [api.md](https://github.com/lithic-com/lithic-python/tree/main/api.md).
```python
import os
from lithic import Lithic
client = Lithic(
api_key=os.environ.get("LITHIC_API_KEY"), # This is the default and can be omitted
# defaults to "production".
environment="sandbox",
)
card = client.cards.create(
type="SINGLE_USE",
)
print(card.token)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `LITHIC_API_KEY="My Lithic API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncLithic` instead of `Lithic` and use `await` with each API call:
```python
import os
import asyncio
from lithic import AsyncLithic
client = AsyncLithic(
api_key=os.environ.get("LITHIC_API_KEY"), # This is the default and can be omitted
# defaults to "production".
environment="sandbox",
)
async def main() -> None:
card = await client.cards.create(
type="SINGLE_USE",
)
print(card.token)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install lithic[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from lithic import DefaultAioHttpClient
from lithic import AsyncLithic
async def main() -> None:
    async with AsyncLithic(
        api_key=os.environ.get("LITHIC_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        card = await client.cards.create(
            type="SINGLE_USE",
        )
        print(card.token)

asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination
List methods in the Lithic API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from lithic import Lithic
client = Lithic()
all_cards = []
# Automatically fetches more pages as needed.
for card in client.cards.list():
    # Do something with card here
    all_cards.append(card)
print(all_cards)
```
Or, asynchronously:
```python
import asyncio
from lithic import AsyncLithic
client = AsyncLithic()
async def main() -> None:
    all_cards = []
    # Iterate through items across all pages, issuing requests as needed.
    async for card in client.cards.list():
        all_cards.append(card)
    print(all_cards)

asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.cards.list()
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.cards.list()
print(f"next page cursor: {first_page.starting_after}") # => "next page cursor: ..."
for card in first_page.data:
    print(card.product_id)
# Remove `await` for non-async usage.
```
## Nested params
Nested parameters are dictionaries. The SDK uses TypedDict for type validation, but you can pass regular dictionaries as shown below:
```python
from lithic import Lithic
client = Lithic()
card = client.cards.create(
    type="PHYSICAL",
    shipping_address={
        "address1": "123",
        "city": "NEW YORK",
        "country": "USA",
        "first_name": "Johnny",
        "last_name": "Appleseed",
        "postal_code": "10001",
        "state": "NY",
    },
)
```
## Webhooks
Lithic uses webhooks to notify your application when events happen. The library provides signature verification via the optional `standardwebhooks` package.
### Parsing and verifying webhooks
```py
from lithic.types import CardCreatedWebhookEvent
# Verifies signature and returns typed event
event = client.webhooks.parse(
    request.body,  # raw request body as string
    headers=request.headers,
    secret=os.environ["LITHIC_WEBHOOK_SECRET"],  # optional, reads from env by default
)

# Use isinstance to narrow the type
if isinstance(event, CardCreatedWebhookEvent):
    print(f"Card created: {event.card_token}")
```
### Parsing without verification
```py
# Parse only - skips signature verification (not recommended for production)
event = client.webhooks.parse_unsafe(request.body)
```
### Verifying signatures only
```py
# Verify signature without parsing (raises exception if invalid)
client.webhooks.verify_signature(request.body, headers=request.headers, secret=secret)
```
### Installing standardwebhooks (optional)
To use signature verification, install the webhooks extra:
```sh
pip install lithic[webhooks]
```
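Conceptually, `standardwebhooks` verification is an HMAC comparison over the message id, timestamp, and payload (the Standard Webhooks scheme). The sketch below illustrates the idea only; it is not a substitute for the library, which also handles key decoding and timestamp tolerance:

```python
import base64
import hashlib
import hmac

def sign(secret: bytes, msg_id: str, timestamp: str, payload: str) -> str:
    """Standard-Webhooks-style signature: HMAC-SHA256 over 'id.timestamp.payload'."""
    signed_content = f"{msg_id}.{timestamp}.{payload}".encode()
    digest = hmac.new(secret, signed_content, hashlib.sha256).digest()
    return "v1," + base64.b64encode(digest).decode()

def verify(secret: bytes, msg_id: str, timestamp: str, payload: str, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(secret, msg_id, timestamp, payload), signature)

sig = sign(b"test-secret", "msg_1", "1700000000", '{"type":"card.created"}')
print(verify(b"test-secret", "msg_1", "1700000000", '{"type":"card.created"}', sig))  # True
```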
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `lithic.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `lithic.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `lithic.APIError`.
```python
import lithic
from lithic import Lithic
client = Lithic()
try:
    client.cards.create(
        type="MERCHANT_LOCKED",
    )
except lithic.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except lithic.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except lithic.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from lithic import Lithic
# Configure the default for all requests:
client = Lithic(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).cards.list(
    page_size=10,
)
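The "short exponential backoff" works roughly like this: each retry waits about twice as long as the previous one, randomized with jitter so many clients don't retry in lockstep. An illustrative sketch (the SDK's exact constants may differ):

```python
import random

def backoff_delays(max_retries: int = 2, base: float = 0.5, cap: float = 8.0):
    """Yield per-retry sleep times: exponential growth with full jitter."""
    for attempt in range(max_retries):
        exp = min(cap, base * (2 ** attempt))
        yield random.uniform(0, exp)

random.seed(0)
for i, delay in enumerate(backoff_delays(max_retries=4), start=1):
    print(f"retry {i}: sleep {delay:.2f}s")
```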
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from lithic import Lithic
# Configure the default for all requests:
client = Lithic(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Lithic(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).cards.list(
    page_size=10,
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/lithic-com/lithic-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `LITHIC_LOG` to `info`.
```shell
$ export LITHIC_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from lithic import Lithic
client = Lithic()
response = client.cards.with_raw_response.create(
    type="SINGLE_USE",
)
print(response.headers.get('X-My-Header'))
card = response.parse() # get the object that `cards.create()` would have returned
print(card.token)
```
These methods return a [`LegacyAPIResponse`](https://github.com/lithic-com/lithic-python/tree/main/src/lithic/_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.
For the sync client this will mostly be the same with the exception
of `content` & `text` will be methods instead of properties. In the
async client, all methods will be async.
A migration script will be provided & the migration in general should
be smooth.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/lithic-com/lithic-python/tree/main/src/lithic/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/lithic-com/lithic-python/tree/main/src/lithic/_response.py) object.
```python
with client.cards.with_streaming_response.create(
    type="SINGLE_USE",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from lithic import Lithic, DefaultHttpxClient
client = Lithic(
    # Or use the `LITHIC_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from lithic import Lithic
with Lithic() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/lithic-com/lithic-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import lithic
print(lithic.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/lithic-com/lithic-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Lithic <sdk-feedback@lithic.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\"",
"standardwebhooks; extra == \"webhooks\""
] | [] | [] | [] | [
"Homepage, https://github.com/lithic-com/lithic-python",
"Repository, https://github.com/lithic-com/lithic-python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-19T13:10:08.731761 | lithic-0.115.0.tar.gz | 413,041 | de/0e/388e54ee4c948aef40253f0745265fe7c5837bf9cfe2c2aa90b0a9c09334/lithic-0.115.0.tar.gz | source | sdist | null | false | a5e0ec02fe15f583503656cc977a9119 | f963a45bac63aef3d4d8f776e52b2cac9ad74d6998c882a3dc16d788006b9fc0 | de0e388e54ee4c948aef40253f0745265fe7c5837bf9cfe2c2aa90b0a9c09334 | null | [] | 458 |
2.4 | ixoncdkingress | 0.0.22 | IXON CDK Ingress used in Cloud Functions for the IXON Cloud | # IXON CDK Ingress
The ixoncdkingress package provides the interface between the IXON Cloud and your Cloud Function.
Learn more about how to develop Cloud Functions on https://developer.ixon.cloud/docs/.
| text/markdown | null | IXON <development@ixon.cloud> | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | ~=3.10.0 | [] | [] | [] | [
"cryptography~=42.0.5",
"docker~=7.0.0",
"load-environ-typed~=0.3.0",
"pymongo~=4.7.0",
"requests~=2.31.0",
"pydantic~=2.10.6"
] | [] | [] | [] | [
"homepage, https://www.ixon.cloud/",
"documentation, https://developer.ixon.cloud/"
] | uv/0.7.20 | 2026-02-19T13:09:50.162045 | ixoncdkingress-0.0.22.tar.gz | 23,644 | 24/d7/01b1263685971f843bbee4431392e8aca6632046713178c17cf43774af2d/ixoncdkingress-0.0.22.tar.gz | source | sdist | null | false | 13ee9851fad1858c57b166c2abab318b | 3f70e9b4d4f82b890096d8d8d4444779d705dc5cc50ded72ebc69272e47bd007 | 24d701b1263685971f843bbee4431392e8aca6632046713178c17cf43774af2d | null | [] | 203 |
2.4 | jpstock-mcp | 0.1.0 | Yahoo Finance MCP server — free stock prices & FX rates for Claude Desktop | # yfinance-mcp
Yahoo Finance MCP server for Claude Desktop — free stock prices, price history, and FX rates. No API key required.
> **Note**: The PyPI package for this project is published as **`jpstock-mcp`** (not `yfinance-mcp`).
> An unrelated package named `yfinance-mcp` exists on PyPI — it is not affiliated with this project or with [yfinance](https://github.com/ranaroussi/yfinance).
> Please install via `pip install jpstock-mcp` or `uvx jpstock-mcp serve`.
## Setup (Claude Desktop)
```bash
uvx jpstock-mcp serve
```
Add to `claude_desktop_config.json`:
```json
{
"mcpServers": {
"yfinance": {
"command": "uvx",
"args": ["jpstock-mcp", "serve"]
}
}
}
```
## Tools
| Tool | Description |
|------|-------------|
| `get_stock_price` | Latest price + fundamentals for TSE-listed stocks (code.T) |
| `get_stock_history` | OHLCV history for a date range |
| `get_fx_rates` | JPY FX rates (USDJPY, EURJPY, GBPJPY, CNYJPY) |
| `search_ticker` | Search ticker by company name or keyword |
## Usage in Claude Desktop
```text
Using yfinance, tell me the latest stock price for Toyota (7203)
```
```text
Using yfinance, check USDJPY's movement over the last week
```
```text
Using yfinance, search for Sony's ticker
```
## CLI
```bash
pip install jpstock-mcp
yfinance-mcp price 7203                        # latest stock price
yfinance-mcp history 7203 --start 2025-01-01   # price history
yfinance-mcp fx                                # FX rates
yfinance-mcp search Toyota                     # ticker search
yfinance-mcp test                              # connectivity check
yfinance-mcp serve                             # start the MCP server
```
## Python
```python
import asyncio
from yfinance_mcp import YfinanceClient
async def main():
client = YfinanceClient()
price = await client.get_stock_price("7203")
print(price.close, price.trailing_pe)
asyncio.run(main())
```
## Disclaimer
This package uses [yfinance](https://github.com/ranaroussi/yfinance) (Apache 2.0) to access Yahoo Finance data.
yfinance is not affiliated with or endorsed by Yahoo.
Users are responsible for complying with [Yahoo Finance's Terms of Service](https://legal.yahoo.com/us/en/yahoo/terms/otos/).
Data is intended for personal, educational, and research use.
## License
Apache-2.0
| text/markdown | null | null | null | null | null | claude, fx, japan, mcp, stock, yahoo-finance, yfinance | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"fastmcp>=2.0",
"loguru>=0.7",
"pydantic>=2.0",
"yfinance>=0.2"
] | [] | [] | [] | [
"Homepage, https://github.com/ajtgjmdjp/yfinance-mcp",
"Repository, https://github.com/ajtgjmdjp/yfinance-mcp",
"Issues, https://github.com/ajtgjmdjp/yfinance-mcp/issues"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:08:46.909869 | jpstock_mcp-0.1.0.tar.gz | 6,957 | 74/2f/78901ea254c881fe4485a872d92367315792ddee9c0c497593bf28d91321/jpstock_mcp-0.1.0.tar.gz | source | sdist | null | false | 29ab4b0133bf04a1149263bbd31a4219 | 7ad86ae121eb67696017dd1afed8a44eecd85b01bf255938be75270c4b4f1548 | 742f78901ea254c881fe4485a872d92367315792ddee9c0c497593bf28d91321 | Apache-2.0 | [] | 128 |
2.4 | isobuilder | 0.9.0 | This is a tool for building custom ISO images used for IMRS | # Isobuilder
Tool for building custom ISO images for IMRS hosts
| text/markdown | null | John Bond <john.bond@icann.org> | null | null | null | null | [] | [] | null | null | >=3.6 | [] | [] | [] | [
"dataclasses",
"passlib",
"PyYAML"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T13:08:34.741154 | isobuilder-0.9.0.tar.gz | 9,987 | 75/76/0595d4ea5f40d3bb2edb860905c003cc837953143779636d74111bb56785/isobuilder-0.9.0.tar.gz | source | sdist | null | false | a6e9d3ed6d0a186c1242b6140b8bc56b | 6fe392e3663a7ef10d7a37ed2b201d10be6af9103eed2ef6d72b03167ed89271 | 75760595d4ea5f40d3bb2edb860905c003cc837953143779636d74111bb56785 | null | [
"LICENCE"
] | 257 |
2.4 | bluer-journal | 5.153.1 | 📜 A journal for the age of AI. | # 📜 bluer-journal
📜 `@journal` with command access maintained in a github repo.
## installation
```bash
pip install bluer-journal
```
## aliases
[@journal](https://github.com/kamangir/bluer-journal/blob/main/bluer_journal/docs/aliases/journal.md).
---
> 📜 For the [Global South](https://github.com/kamangir/bluer-south).
---
[](https://github.com/kamangir/bluer-journal/actions/workflows/pylint.yml) [](https://github.com/kamangir/bluer-journal/actions/workflows/pytest.yml) [](https://github.com/kamangir/bluer-journal/actions/workflows/bashtest.yml) [](https://pypi.org/project/bluer-journal/) [](https://pypistats.org/packages/bluer-journal)
built by 🌀 [`bluer README`](https://github.com/kamangir/bluer-objects/tree/main/bluer_objects/docs/bluer-README), based on 📜 [`bluer_journal-5.153.1`](https://github.com/kamangir/bluer-journal).
built by 🌀 [`blueness-3.122.1`](https://github.com/kamangir/blueness).
| text/markdown | Arash Abadpour (Kamangir) | arash.abadpour@gmail.com | null | null | CC0-1.0 | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Unix Shell",
"Operating System :: OS Independent"
] | [] | https://github.com/kamangir/bluer-journal | null | null | [] | [] | [] | [
"bluer_ai"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.9 | 2026-02-19T13:08:25.290439 | bluer_journal-5.153.1.tar.gz | 16,138 | 8e/16/0b3447c6f37b43c0392203240e1f11270b33a381a86d03358a583e7ea2ff/bluer_journal-5.153.1.tar.gz | source | sdist | null | false | b27405aa8fec18ccb8774ad8519e42a9 | 462c98db25b3fb30f58386a0263de6affd78a0fafda98dd9c92c446d18d4a90e | 8e160b3447c6f37b43c0392203240e1f11270b33a381a86d03358a583e7ea2ff | null | [
"LICENSE"
] | 239 |
2.1 | envisionhgdetector | 3.0.1 | Hand gesture detection using MediaPipe and CNN, kinematic analysis, and visualization. | # EnvisionHGDetector: Co-speech Hand Gesture Detection Python Package
A Python package for detecting and classifying hand gestures using MediaPipe Holistic and deep learning.
<div align="center">Wim Pouw (wim.pouw@donders.ru.nl), Bosco Yung, Sharjeel Shaikh, James Trujillo, Gerard de Melo, Babajide Owoyele</div>
<div align="center">
<img src="images/ex.gif" alt="Hand Gesture Detection Demo">
</div>
## Info
Please go to [envisionbox.org](https://www.envisionbox.org) for notebook tutorials on how to use this package. This package provides a straightforward way to detect hand gestures in a variety of videos using a combination of MediaPipe Holistic features and a convolutional neural network (CNN). We plan to update this package with a better-performing network in the near future, and to publish an evaluation report so that it is clear how it performs for several types of videos. For now, feel free to experiment. If you are looking to quickly isolate some gestures into ELAN, this is the package for you. Do note that annotation by human raters will still be much superior to this gesture coder.
The package performs:
* Feature extraction using MediaPipe Holistic (hand, body, and face features)
* Post-hoc gesture detection using a pre-trained CNN or LightGBM model, trained on the SAGA, SAGA+, ECOLANG, TED M3D, and Zhubo open gesture-annotated datasets
* Real-time Webcam Detection: Live gesture detection with configurable parameters
* Automatic annotation of videos with gesture classifications
* Output generation as CSV files, ELAN files, and labeled videos
* Kinematic analysis: DTW distance matrices and gesture similarity visualization
* Interactive dashboard: Explore gesture spaces and kinematic features
Currently, the detector can identify:
- A general hand gesture ("Gesture" vs. "NoGesture")
- Movement patterns ("Move"; this is trained only on SAGA, which also annotated movements that were not gestures, like nose scratching, so this category may be less reliable)
## Installation
Consider creating a conda environment first:
```bash
conda create -n envision python==3.9
conda activate envision
(envision) pip install envisionhgdetector
```
Otherwise, install directly:
```bash
pip install envisionhgdetector
```
Note: This package is CPU-only for wider compatibility and ease of use.
## Quick Start
### Batch Video Processing
```python
from envisionhgdetector import GestureDetector
# Initialize detector with model selection
detector = GestureDetector(
model_type="lightgbm", # "cnn" or "lightgbm"
motion_threshold=0.5, # CNN only: sensitivity to motion
gesture_threshold=0.6, # Confidence threshold for gestures
min_gap_s=0.3, # Minimum gap between gestures (post-hoc)
min_length_s=0.5, # Minimum gesture duration (post-hoc)
gesture_class_bias=0.0 # CNN only: bias toward gesture vs move
)
# Process multiple videos
results = detector.process_folder(
input_folder="path/to/videos",
output_folder="path/to/output"
)
```
### Real-time Webcam Detection
```python
from envisionhgdetector import RealtimeGestureDetector
# Initialize real-time detector
detector = RealtimeGestureDetector(
confidence_threshold=0.2, # Applied during detection
min_gap_s=0.3, # Applied post-hoc
min_length_s=0.5 # Applied post-hoc
)
# Process webcam feed
raw_results, segments = detector.process_webcam(
duration=None, # Unlimited (press 'q' to quit)
save_video=True, # Save annotated video
apply_post_processing=True # Apply segment refinement
)
# Analyze previous sessions
detector.load_and_analyze_session("output_realtime/session_20240621_143022/")
```
### Advanced Processing
```python
from envisionhgdetector import utils
import os
# Step 1: Cut videos by detected segments
segments = utils.cut_video_by_segments(output_folder)
# Step 2: Set up analysis folders
gesture_segments_folder = os.path.join(output_folder, "gesture_segments")
retracked_folder = os.path.join(output_folder, "retracked")
analysis_folder = os.path.join(output_folder, "analysis")
# Step 3: Retrack gestures with world landmarks
tracking_results = detector.retrack_gestures(
input_folder=gesture_segments_folder,
output_folder=retracked_folder
)
# Step 4: Compute DTW distances and kinematic features
analysis_results = detector.analyze_dtw_kinematics(
landmarks_folder=tracking_results["landmarks_folder"],
output_folder=analysis_folder
)
# Step 5: Create interactive dashboard
detector.prepare_gesture_dashboard(
data_folder=analysis_folder
)
# Then run: python app.py (in output folder)
```
## CNN Model (29 features)
The detector uses 29 features extracted from MediaPipe Holistic, including:
- Head rotations
- Hand positions and movements
- Body landmark distances
- Normalized feature metrics
## LightGBM Model (69 features)
- Key joint positions (shoulders, elbows, wrists)
- Velocities
- Movement ranges and patterns
- Index Thumb Middle Finger Distances and Positions
## Output
The detector generates comprehensive output in organized folder structures depending on the processing mode:
### Batch Video Processing Output
When processing videos with `GestureDetector.process_folder()`, the following files are generated for each video:
1. **Prediction Data**
- `video_name_predictions.csv` - Frame-by-frame predictions with confidence scores
- `video_name_segments.csv` - Refined gesture segments after post-processing
- `video_name_features.npy` - Extracted feature arrays for further analysis
2. **Annotation Files**
- `video_name.eaf` - ELAN annotation file with time-aligned segments
- Useful for manual verification, research, and integration with ELAN software
3. **Visual Output**
- `labeled_video_name.mp4` - Processed video with gesture annotations and confidence graphs
- Shows real-time detection results with temporal confidence visualization
### Advanced Analysis Pipeline Output
When using the complete analysis pipeline, additional structured outputs are created:
4. **Gesture Segments** (`/gesture_segments/`)
- Individual video clips for each detected gesture
- Organized by source video with timing information
- Format: `video_segment_N_Gesture_start_end.mp4`
5. **Retracked Data** (`/retracked/`)
- `tracked_videos/` - Videos with MediaPipe world landmark visualization
- `*_world_landmarks.npy` - 3D world coordinate arrays for each gesture
- `*_visibility.npy` - Landmark visibility scores
6. **Kinematic Analysis** (`/analysis/`)
- `dtw_distances.csv` - Dynamic Time Warping distance matrix between all gestures
- `kinematic_features.csv` - Comprehensive kinematic metrics per gesture
- Number of submovements, peak speeds, accelerations
- Spatial features (McNeillian space usage, volume, height)
- Temporal features (duration, holds, gesture rate)
- `gesture_visualization.csv` - UMAP projection of DTW distances for dashboard
7. **Interactive Dashboard** (`/app.py`)
- Web application for exploring gesture similarity space
- Click gestures to view videos and kinematic features
- Visualizes gesture relationships and feature distributions
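The DTW distance matrix above comes from pairwise trajectory comparisons. As an illustration only (the package itself relies on `shapedtw`; this is not its implementation), a textbook DTW between two 1-D trajectories can be sketched as:

```python
# Minimal textbook Dynamic Time Warping over two 1-D sequences.
# Illustrative sketch of the pairwise comparison behind dtw_distances.csv.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend from insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Identical trajectories yield distance 0; the farther apart two movement profiles are in shape and timing, the larger the value.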
### Real-time Webcam Processing Output
When using `RealtimeGestureDetector.process_webcam()`, outputs are organized in timestamped session folders:
**Session Structure** (`/output_realtime/session_YYYYMMDD_HHMMSS/`):
1. **Raw Detection Data**
- `raw_frame_results.csv` - Frame-by-frame detection results during recording
- Contains gesture names, confidence scores, and timestamps
2. **Processed Segments**
- `gesture_segments.csv` - Refined segments after applying gap and length filters
- `gesture_segments.eaf` - ELAN annotation file for the session
3. **Session Recording**
- `webcam_session.mp4` - Annotated video of the entire session
- Shows real-time detection with overlay information
4. **Session Metadata**
- `session_summary.json` - Complete session parameters and statistics
- Includes detection settings, processing results, and performance metrics
### File Format Details
**CSV Prediction Files** contain:
- `time` - Timestamp in seconds
- `has_motion` - Motion detection confidence
- `Gesture_confidence` - Gesture classification confidence
- `Move_confidence` - Movement classification confidence
- `label` - Final classification after thresholding
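The `label` column follows from thresholding the confidence columns. A minimal sketch of such a rule (illustrative only; the package's exact thresholding logic may differ):

```python
# Hedged sketch: derive a frame label from confidence values, mirroring the
# gesture_threshold idea from the Quick Start (not the package's exact rule).
def label_frame(gesture_conf, move_conf, threshold=0.6):
    # pick the class with the highest confidence...
    best, conf = max((("Gesture", gesture_conf), ("Move", move_conf)),
                     key=lambda kv: kv[1])
    # ...and only accept it if it clears the threshold
    return best if conf >= threshold else "NoGesture"
```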
**Segment Files** contain:
- `start_time`, `end_time` - Segment boundaries in seconds
- `duration` - Segment length
- `label` - Gesture classification
- `labelid` - Unique segment identifier
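The `min_gap_s` and `min_length_s` parameters from the Quick Start drive the segment refinement. A hedged sketch of that idea (merge same-label segments across short gaps, then drop segments that are too short; the package's internal logic may differ):

```python
# Illustrative post-processing of raw segments, assuming each segment is a
# dict with start_time, end_time, and label keys as in the CSV output.
def refine_segments(segments, min_gap_s=0.3, min_length_s=0.5):
    refined = []
    for seg in sorted(segments, key=lambda s: s["start_time"]):
        if (refined and seg["label"] == refined[-1]["label"]
                and seg["start_time"] - refined[-1]["end_time"] < min_gap_s):
            # gap is too small: merge into the previous segment
            refined[-1]["end_time"] = seg["end_time"]
        else:
            refined.append(dict(seg))
    # drop segments shorter than the minimum duration
    return [s for s in refined
            if s["end_time"] - s["start_time"] >= min_length_s]
```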
**Kinematic Features** include:
- Spatial metrics: gesture space usage, volume, height
- Temporal metrics: duration, holds, submovement counts
- Dynamic metrics: peak speeds, accelerations, jerk values
- Shape descriptors: DTW distances, movement patterns
All outputs are designed for integration with research workflows, ELAN annotation software, and further analysis pipelines.
The detector generates six types of output in your specified output folder:
1. Automated Annotations (`/output/automated_annotations/`)
- CSV files with frame-by-frame predictions
- Contains confidence values and classifications for each frame
- Format: `video_name_confidence_timeseries.csv`
2. ELAN Files (`/output/elan_files/`)
- ELAN-compatible annotation files (.eaf)
- Contains time-aligned gesture segments
- Useful for manual verification and research purposes
- Format: `video_name.eaf`
3. Labeled Videos (`/output/labeled_videos/`)
- Processed videos with visual annotations
- Shows real-time gesture detection and confidence scores
- Useful for quick verification of detection quality
- Format: `labeled_video_name.mp4`
4. Retracked Videos (`/output/retracked/`)
- rendered tracked videos and pose world landmarks
5. Kinematic analysis (`/output/analysis/`)
- DTW distance matrix (.csv) between all gesture comparisons
- Kinematic features (.csv) per gesture (e.g., number of submovements, max speed, max acceleration)
- Gesture visualization (.csv; UMAP of DTW distance matrix, for input for Dashboard)
6. Dashboard (`/output/app.py`)
   - This app visualizes the gesture similarity space and shows the kinematic features; the user can click on gestures to view the videos and inspect their metrics
## Technical Background
The package builds on previous work in gesture detection, particularly focused on using MediaPipe Holistic for comprehensive feature extraction. The CNN model is designed to handle complex temporal patterns in the extracted features.
## Requirements
- Python 3.7+
- tensorflow-cpu
- mediapipe
- opencv-python
- numpy
- pandas
## Citation
If you use this package, please cite:
Pouw, W., Yung, B., Shaikh, S., Trujillo, J., Rueda-Toicen, A., de Melo, G., Owoyele, B. (2024). envisionhgdetector: Hand Gesture Detection Using a Convolutional Neural Network (Version 0.0.5.0) [Computer software]. https://pypi.org/project/envisionhgdetector/
### Additional Citations
Zhubo dataset (used for training):
* Bao, Y., Weng, D., & Gao, N. (2024). Editable Co-Speech Gesture Synthesis Enhanced with Individual Representative Gestures. Electronics, 13(16), 3315.
SAGA dataset (used for training)
* Lücking, A., Bergmann, K., Hahn, F., Kopp, S., & Rieser, H. (2010). The Bielefeld speech and gesture alignment corpus (SaGA). In LREC 2010 workshop: Multimodal corpora–advances in capturing, coding and analyzing multimodality.
TED M3D:
* Rohrer, Patrick. A temporal and pragmatic analysis of gesture-speech association: A corpus-based approach using the novel MultiModal MultiDimensional (M3D) labeling system. Diss. Nantes Université; Universitat Pompeu Fabra (Barcelone, Espagne), 2022.
MediaPipe:
* Lugaresi, C., Tang, J., Nash, H., McClanahan, C., Uboweja, E., Hays, M., ... & Grundmann, M. (2019). MediaPipe: A framework for building perception pipelines. arXiv preprint arXiv:1906.08172.
Adapted CNN Training and inference code:
* Pouw, W. (2024). EnvisionBOX modules for social signal processing (Version 1.0.0) [Computer software]. https://github.com/WimPouw/envisionBOX_modulesWP
Original Noddingpigeon Training code:
* Yung, B. (2022). Nodding Pigeon (Version 0.6.0) [Computer software]. https://github.com/bhky/nodding-pigeon
Some code I reused for creating ELAN files came from Cravotta et al., 2022:
* Ienaga, N., Cravotta, A., Terayama, K., Scotney, B. W., Saito, H., & Busa, M. G. (2022). Semi-automation of gesture annotation by machine learning and human collaboration. Language Resources and Evaluation, 56(3), 673-700.
## Contributing
Feel free to help improve this code. As this is primarily aimed at making automatic gesture detection easily accessible for research purposes, contributions focusing on usability and reliability are especially welcome (happy to collaborate, just reach out to wim.pouw@donders.ru.nl).
| text/markdown | Wim Pouw, Bosco Yung, Sharjeel Shaikh, James Trujillo, Antonio Rueda-Toicen, Gerard de Melo, Babajide Owoyele | wim.pouw@donders.ru.nl | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/wimpouw/envisionhgdetector | null | >=3.10 | [] | [] | [] | [
"moviepy==2.1.1",
"protobuf<4,>=3.11",
"numpy<2.0.0,>=1.21.0",
"opencv-python>=4.7.0.72",
"tensorflow-cpu==2.15.1",
"mediapipe==0.10.7",
"pandas<2.0.0,>=1.5.0",
"pytest>=7.0.0",
"black>=22.0.0",
"tqdm<5.0.0,>=4.65.0",
"shapedtw==1.0.3",
"umap-learn>=0.5.5",
"numba>=0.56.0",
"scipy>=1.10.0"... | [] | [] | [] | [] | twine/6.0.1 CPython/3.11.5 | 2026-02-19T13:06:26.387129 | envisionhgdetector-3.0.1.tar.gz | 51,308,493 | 35/9b/0610ea9b1fe845d506175cda1a49f3300b097d8e95be973bf913a8a6ba6f/envisionhgdetector-3.0.1.tar.gz | source | sdist | null | false | 86a49190bdd12f04d86637f5bcf1f560 | 28436d3c974079ac43f9e2f60235d6cf9f3994bd79276ff73892e06e7de64084 | 359b0610ea9b1fe845d506175cda1a49f3300b097d8e95be973bf913a8a6ba6f | null | [] | 262 |
2.4 | nucliadb-utils | 6.12.0.post5907 | NucliaDB util library | # nucliadb util python library
- Nats driver
- FastAPI fixes
- S3/GCS drivers
# Install and run tests
```bash
uv sync
make test
```
| text/markdown | null | Nuclia <nucliadb@nuclia.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Python Modules"
... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"pydantic>=2.6",
"pydantic-settings>=2.2",
"aiohttp>=3.9.4",
"httpx>=0.27.0",
"prometheus-client>=0.12.0",
"types-requests>=2.27.7",
"mmh3>=3.0.0",
"nats-py[nkeys]>=2.6.0",
"PyNaCl",
"pyjwt>=2.4.0",
"mrflagly>=0.2.14",
"nidx-protos>=6.12.0.post5907",
"nucliadb-protos>=6.12.0.post5907",
"nu... | [] | [] | [] | [
"Homepage, https://nuclia.com",
"Repository, https://github.com/nuclia/nucliadb"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T13:06:10.655117 | nucliadb_utils-6.12.0.post5907-py3-none-any.whl | 101,628 | 31/d7/494ad5fcdcba29edb3ca70ddbdc58fc9984f36b6d07a1c22f6ba222bfde7/nucliadb_utils-6.12.0.post5907-py3-none-any.whl | py3 | bdist_wheel | null | false | b385a1caa353d119976ff72fcaee7a77 | 8b9fb39d6d55bc21f2b9570991f3132453738cde0927916f7c91e43fe9fa1475 | 31d7494ad5fcdcba29edb3ca70ddbdc58fc9984f36b6d07a1c22f6ba222bfde7 | AGPL-3.0-or-later | [] | 140 |
2.3 | sqlmodel-translation | 0.1.2 | Translation library for SQLModel and FastAPI | # SQLModel-translation
SQLModel-translation is a translation library for [SQLModel](https://sqlmodel.tiangolo.com) and [FastAPI](https://fastapi.tiangolo.com).
Documentation: [https://dnafivuq.github.io/sqlmodel-translation](https://dnafivuq.github.io/sqlmodel-translation)
This project uses [uv](https://docs.astral.sh/uv/) for package management.
To generate the documentation run `make docs` and visit http://127.0.0.1:8000/.
For more actions see the Makefile in this directory. Running `make` will print out all the targets with descriptions.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"fastapi>=0.119.1",
"sqlalchemy>=2.0.44",
"sqlmodel>=0.0.27"
] | [] | [] | [] | [
"Repository, https://github.com/Dnafivuq/sqlmodel-translation",
"Documentation, https://dnafivuq.github.io/sqlmodel-translation/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:06:09.205811 | sqlmodel_translation-0.1.2-py3-none-any.whl | 6,696 | e6/f5/35c3c2e1e9b4e0b8482d70efe44d15c941a789923effbf5577ed60fed834/sqlmodel_translation-0.1.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 664f8e5145b080d4cfd91ecf815043bd | bcf358431a705471be02bcfdf3bcf4e39123f59799f6e255217bc79eed209dfa | e6f535c3c2e1e9b4e0b8482d70efe44d15c941a789923effbf5577ed60fed834 | null | [] | 214 |
2.4 | nucliadb-telemetry | 6.12.0.post5907 | NucliaDB Telemetry Library Python process | # NucliaDB Telemetry
Open telemetry compatible plugin to propagate traceid on FastAPI, Nats and GRPC with Asyncio.
ENV vars:
```
JAEGER_ENABLED = True
JAEGER_HOST = "127.0.0.1"
JAEGER_PORT = server.port
```
On FastAPI you should add:
```python
tracer_provider = get_telemetry("HTTP_SERVICE")
app = FastAPI(title="Test API") # type: ignore
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
FastAPIInstrumentor.instrument_app(app, tracer_provider=tracer_provider)
..
await init_telemetry(tracer_provider) # To start asyncio task
..
```
On GRPC Server you should add:
```python
tracer_provider = get_telemetry("GRPC_SERVER_SERVICE")
telemetry_grpc = GRPCTelemetry("GRPC_CLIENT_SERVICE", tracer_provider)
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
server = telemetry_grpc.init_server()
helloworld_pb2_grpc.add_GreeterServicer_to_server(SERVICER, server)
..
await init_telemetry(tracer_provider) # To start asyncio task
..
```
On GRPC Client you should add:
```python
tracer_provider = get_telemetry("GRPC_CLIENT_SERVICE")
telemetry_grpc = GRPCTelemetry("GRPC_CLIENT_SERVICE", tracer_provider)
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
channel = telemetry_grpc.init_client(f"localhost:{grpc_service}")
stub = helloworld_pb2_grpc.GreeterStub(channel)
..
await init_telemetry(tracer_provider) # To start asyncio task
..
```
On Nats jetstream push subscriber you should add:
```python
nc = await nats.connect(servers=[self.natsd])
js = nc.jetstream()
tracer_provider = get_telemetry("NATS_SERVICE")
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
jsotel = JetStreamContextTelemetry(
js, "NATS_SERVICE", tracer_provider
)
subscription = await jsotel.subscribe(
subject="testing.telemetry",
stream="testing",
cb=handler,
)
```
On Nats publisher you should add:
```python
nc = await nats.connect(servers=[self.natsd])
js = nc.jetstream()
tracer_provider = get_telemetry("NATS_SERVICE")
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
jsotel = JetStreamContextTelemetry(
js, "NATS_SERVICE", tracer_provider
)
await jsotel.publish("testing.telemetry", request.name.encode())
```
On a Nats jetstream pull subscription you can use different patterns, depending on whether you want to get just one message and exit or pull several. For just one message:
```python
nc = await nats.connect(servers=[self.natsd])
js = nc.jetstream()
tracer_provider = get_telemetry("NATS_SERVICE")
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
jsotel = JetStreamContextTelemetry(
js, "NATS_SERVICE", tracer_provider
)
# You can use either pull_subscribe or pull_subscribe_bind
subscription = await jsotel.pull_subscribe(
subject="testing.telemetry",
    durable="consumer_name",
stream="testing",
)
async def callback(message):
# Do something with your message
# and optionally return something
return True
try:
result = await jsotel.pull_one(subscription, callback)
except errors.TimeoutError:
pass
```
For multiple messages just wrap it in a loop:
```python
while True:
try:
result = await jsotel.pull_one(subscription, callback)
    except errors.TimeoutError:
pass
```
On a Nats client (NO JetStream!) publisher you should add:
```python
nc = await nats.connect(servers=[self.natsd])
tracer_provider = get_telemetry("NATS_SERVICE")
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
ncotel = NatsClientTelemetry(
nc, "NATS_SERVICE", tracer_provider
)
await ncotel.publish("testing.telemetry", request.name.encode())
```
On a Nats client (NO JetStream!) subscriber you should add:
```python
nc = await nats.connect(servers=[self.natsd])
js = nc.jetstream()
tracer_provider = get_telemetry("NATS_SERVICE")
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
ncotel = NatsClientTelemetry(
    nc, "NATS_SERVICE", tracer_provider
)
subscription = await ncotel.subscribe(
subject="testing.telemetry",
    queue="queue_name",
cb=handler,
)
```
On a Nats client (NO JetStream!) request you should add:
```python
nc = await nats.connect(servers=[self.natsd])
tracer_provider = get_telemetry("NATS_SERVICE")
if not tracer_provider.initialized:
await init_telemetry(tracer_provider)
set_global_textmap(B3MultiFormat())
ncotel = NatsClientTelemetry(
nc, "NATS_SERVICE", tracer_provider
)
response = await ncotel.request("testing.telemetry", request.name.encode())
```
And to handle responses on the other side, you can use the same pattern as the plain Nats client
subscriber, just adding a `msg.respond()` call in the handler when done.
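For illustration, a responder handler in that pattern could look like this (a minimal sketch using the nats-py `msg.respond()` API; the payload logic is hypothetical):

```python
# Minimal sketch of a request handler: do the work, then reply so the
# caller's request() call resolves. msg.respond() answers on the
# request's reply inbox (nats-py API).
async def handler(msg):
    result = b"done: " + msg.data  # hypothetical payload logic
    await msg.respond(result)
```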
| text/markdown | null | Nuclia <nucliadb@nuclia.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Framework :: AsyncIO",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"pydantic>=2.6",
"pydantic-settings>=2.2",
"prometheus-client>=0.12.0",
"orjson>=3.6.7",
"wrapt>=1.14.1",
"opentelemetry-sdk>=1.21.0; extra == \"otel\"",
"opentelemetry-api>=1.21.0; extra == \"otel\"",
"opentelemetry-proto>=1.21.0; extra == \"otel\"",
"opentelemetry-exporter-jaeger-thrift>=1.21.0; e... | [] | [] | [] | [
"Homepage, https://nuclia.com",
"Repository, https://github.com/nuclia/nucliadb"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T13:06:08.593915 | nucliadb_telemetry-6.12.0.post5907-py3-none-any.whl | 53,976 | 39/23/abe3d736bf046f0c07f04b9034f6d3a2a237830568112976b0f4ab49e169/nucliadb_telemetry-6.12.0.post5907-py3-none-any.whl | py3 | bdist_wheel | null | false | f4e915ce33968b8ec99fc88ce623d885 | 670b398a4f53f7dbb8d4c65d84d7e22f45ae517879642b3bf7c58165c7e91508 | 3923abe3d736bf046f0c07f04b9034f6d3a2a237830568112976b0f4ab49e169 | AGPL-3.0-or-later | [] | 183 |
2.4 | nucliadb-sdk | 6.12.0.post5907 | NucliaDB SDK | # NucliaDB SDK
The NucliaDB SDK is a Python library designed as a thin wrapper around the [NucliaDB HTTP API](https://docs.nuclia.dev/docs/api). It is tailored for developers who wish to create low-level scripts to interact with NucliaDB.
## WARNING
⚠ If it's your first time using Nuclia or you want a simple way to push your unstructured data to Nuclia with a script or a CLI, we highly recommend using the [Nuclia CLI/SDK](https://github.com/nuclia/nuclia.py) instead, as it is much more user-friendly and use-case focused. ⚠
## Installation
To install it, simply with pip:
```bash
pip install nucliadb-sdk
```
## How to use it?
To connect to a Nuclia-hosted NucliaDB instance, just use the `NucliaDB` constructor method with the `api_key`:
```python
from nucliadb_sdk import NucliaDB, Region
ndb = NucliaDB(region=Region.EUROPE1, api_key="my-api-key")
```
Alternatively, to connect to a NucliaDB local installation, use:
```python
ndb = NucliaDB(region=Region.ON_PREM, url="http://localhost:8080/api")
```
Then, each method of the `NucliaDB` class maps to an HTTP endpoint of the NucliaDB API. The parameters it accepts correspond to the Pydantic models associated with the request body schema of the endpoint.
The method-to-endpoint mappings for the SDK are declared in code [in the _NucliaDBBase class](https://github.com/nuclia/nucliadb/blob/main/nucliadb_sdk/src/nucliadb_sdk/v2/sdk.py).
For instance, to create a resource in your Knowledge Box, the endpoint is defined [here](https://docs.nuclia.dev/docs/api#tag/Resources/operation/Create_Resource_kb__kbid__resources_post).
It has a `{kbid}` path parameter and expects a JSON payload with some optional string keys such as `slug` or `title`. With `curl`, the command would be:
```bash
curl -XPOST http://localhost:8080/api/v1/kb/my-kbid/resources -H 'x-nucliadb-roles: WRITER' --data-binary '{"slug":"my-resource","title":"My Resource"}' -H "Content-Type: application/json"
{"uuid":"fbdb10a79abc45c0b13400f5697ea2ba","seqid":1}
```
and with the NucliaDB SDK:
```python
>>> from nucliadb_sdk import NucliaDB
>>>
>>> ndb = NucliaDB(region="on-prem", url="http://localhost:8080/api")
>>> ndb.create_resource(kbid="my-kbid", slug="my-resource", title="My Resource")
ResourceCreated(uuid='fbdb10a79abc45c0b13400f5697ea2ba', elapsed=None, seqid=1)
```
Note that path parameters are mapped as required keyword arguments of the `NucliaDB` class methods: hence the `kbid="my-kbid"`. Any other keyword arguments specified in the method call will be sent along in the JSON request body of the HTTP request.
Alternatively, you can also define the `content` parameter and pass an instance of the Pydantic model that the endpoint expects:
```python
>>> from nucliadb_sdk import NucliaDB
>>> from nucliadb_models.writer import CreateResourcePayload
>>>
>>> ndb = NucliaDB(region="on-prem", url="http://localhost:8080/api")
>>> content = CreateResourcePayload(slug="my-resource", title="My Resource")
>>> ndb.create_resource(kbid="my-kbid", content=content)
ResourceCreated(uuid='fbdb10a79abc45c0b13400f5697ea2ba', elapsed=None, seqid=1)
```
Query parameters can also be passed to each method with the `query_params` argument. For instance:
```python
>>> ndb.get_resource_by_id(kbid="my-kbid", rid="rid", query_params={"show": ["values"]})
```
### Example Usage
The following is a sample script that fetches the HTML of a website, extracts all links found on the page, and pushes them to NucliaDB so that they are processed by Nuclia's processing engine.
```python
from nucliadb_models.link import LinkField
from nucliadb_models.writer import CreateResourcePayload
import nucliadb_sdk
import requests
from bs4 import BeautifulSoup


def extract_links_from_url(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    unique_links = set()
    for link in soup.find_all("a"):
        unique_links.add(link.get("href"))
    return unique_links


def upload_link_to_nuclia(ndb, *, kbid, link, tags):
    # Compute title and slug before the try block so they are always
    # defined in the exception handlers below
    title = link.replace("-", " ")
    slug = "-".join(tags) + "-" + link.split("/")[-1]
    try:
        content = CreateResourcePayload(
            title=title,
            slug=slug,
            links={
                "link": LinkField(
                    uri=link,
                    language="en",
                )
            },
        )
        ndb.create_resource(kbid=kbid, content=content)
        print(f"Resource created from {link}. Title={title} Slug={slug}")
    except nucliadb_sdk.exceptions.ConflictError:
        print(f"Resource already exists: {link} {slug}")
    except Exception as ex:
        print(f"Failed to create resource: {link} {slug}: {ex}")


def main(site):
    # Define the NucliaDB instance with region and URL
    ndb = nucliadb_sdk.NucliaDB(region="on-prem", url="http://localhost:8080")
    # Loop through extracted links and upload to NucliaDB
    for link in extract_links_from_url(site):
        upload_link_to_nuclia(ndb, kbid="my-kb-id", link=link, tags=["news"])


if __name__ == "__main__":
    main(site="https://en.wikipedia.org/wiki/The_Lion_King")
```
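Note that `extract_links_from_url` returns raw `href` values, which may be relative URLs or `None`. A small normalization step can resolve them against the page URL before upload; the sketch below uses only the standard library (`normalize_links` is a hypothetical helper, not part of the SDK):

```python
from urllib.parse import urljoin, urlparse


def normalize_links(base_url, hrefs):
    """Resolve relative hrefs against base_url and keep only http(s) links."""
    normalized = set()
    for href in hrefs:
        if not href:
            continue  # skip None or empty hrefs
        absolute = urljoin(base_url, href)
        if urlparse(absolute).scheme in ("http", "https"):
            normalized.add(absolute)
    return normalized
```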
After the data is pushed, the NucliaDB SDK can also be used to find answers on top of the extracted links.
```python
>>> import nucliadb_sdk
>>>
>>> ndb = nucliadb_sdk.NucliaDB(region="on-prem", url="http://localhost:8080")
>>> resp = ndb.ask(kbid="my-kb-id", query="What does Hakuna Matata mean?")
>>> print(resp.answer)
'Hakuna matata is actually a phrase in the East African language of Swahili that literally means “no trouble” or “no problems”.'
```
| text/markdown | null | Nuclia <nucliadb@nuclia.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"httpx",
"orjson",
"pydantic>=2.6",
"nuclia-models>=0.50.0",
"nucliadb-models>=6.12.0.post5907"
] | [] | [] | [] | [
"Homepage, https://nuclia.com",
"Repository, https://github.com/nuclia/nucliadb"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T13:06:06.963902 | nucliadb_sdk-6.12.0.post5907-py3-none-any.whl | 17,638 | 7f/55/a6cd62a839bf52444a95e2f895cac69ffc1232e62dd01a4f04039b6ac096/nucliadb_sdk-6.12.0.post5907-py3-none-any.whl | py3 | bdist_wheel | null | false | bfa3c2db25d3d5cfca2d17359ceac76c | 4bc77443f4d1afdd25032d60efe22af62393120015c6f528e1b6b679061f53ef | 7f55a6cd62a839bf52444a95e2f895cac69ffc1232e62dd01a4f04039b6ac096 | Apache-2.0 | [] | 235 |
2.4 | open-otk | 1.0.7 | Open Ollama Toolkit - Professional Python library for building AI applications with Ollama | # Open OTK (Open Ollama Toolkit)

A professional Python toolkit for building AI applications with Ollama.
[](LICENSE)
[](https://www.python.org/downloads/)
[](https://aiextension.github.io/otk)
## Features
- Visual GUI for model browsing and template generation
- Comprehensive API for chat, streaming, and embeddings
- Automatic response processing for thinking models (DeepSeek-R1, Qwen)
- Model management and comparison tools
- Production-ready with proper error handling
- Works with all Ollama models
---
## Installation
### Prerequisites
1. Install [Ollama](https://ollama.ai)
2. Install a model: `ollama pull llama2`
3. Ensure Ollama is running
### Install Open OTK
**From PyPI (Recommended):**
```bash
# Install the package
pip install open-otk
# Launch GUI - Method 1 (if Python Scripts in PATH)
otk
# Launch GUI - Method 2 (always works, no PATH needed)
python -m otk
# Check version
otk --version
python -m otk --version
# Get help
otk --help
python -m otk --help
```
**From Source (For Development):**
```bash
# 1. Clone the repository
git clone https://github.com/aiextension/open-otk.git
cd open-otk
# 2. Install in editable mode
pip install -e .
# 3. Launch from anywhere
otk
# OR
python -m otk
```
### Launch GUI
```bash
# Method 1: Direct command (recommended)
otk
# Method 2: Python module (always works, no PATH needed)
python -m otk
# Check version
otk --version
python -m otk --version
```
Or run directly from source:
```bash
python otk.py
```
## Quick Start
### Basic Usage
```python
from otk import OllamaClient
client = OllamaClient()
response = client.generate("llama2", "Tell me a joke")
print(response)
```
### Chat Session
```python
from otk import ChatSession
session = ChatSession("llama2", system_message="You are a helpful assistant")
response = session.send("Hello!")
print(response)
```
### Streaming Responses
```python
from otk import OllamaClient
client = OllamaClient()
for chunk in client.stream_generate("llama2", "Write a story"):
    print(chunk, end='', flush=True)
```
### Model Management
```python
from otk import ModelManager
manager = ModelManager()
# List models
models = manager.list_models()
for model in models:
    print(f"{model['name']} - {model['size']}")
# Pull a model
manager.pull_model("mistral")
# Check if model exists
if manager.model_exists("llama2"):
    print("Model is ready!")
```
### Automatic Response Processing
```python
from otk import ChatSession
session = ChatSession("deepseek-r1:8b", auto_process=True)
response = session.send("Solve 234 + 567")
print(response) # Clean answer
# Access reasoning
thinking = session.get_last_thinking()
```
```python
from otk import clean_thinking_tags, ModelResponseHandler, ModelType
clean_text, thinking = clean_thinking_tags(raw_response)
handler = ModelResponseHandler(ModelType.THINKING)
processed = handler.process(raw_response)
```
### Customization
```python
from otk import ModelBuilder, HookType
model = (ModelBuilder("llama2")
.with_preset("creative")
.with_temperature(0.85)
.with_hook(HookType.POST_PROCESS, my_logger)
.build())
```
### Experimentation
```python
from otk import ModelExperiment
experiment = ModelExperiment()
result = experiment.compare_models(
models=["llama2", "mistral"],
prompt="Explain quantum computing"
)
experiment.print_comparison(result)
```
## Examples
The `examples/` directory contains ready-to-run examples:
| Example | Description |
|---------|-------------|
| [`simple_chat.py`](examples/simple_chat.py) | Basic chat with models |
| [`streaming_chat.py`](examples/streaming_chat.py) | Real-time streaming responses |
| [`chat_session.py`](examples/chat_session.py) | Interactive chat with history |
| [`model_manager.py`](examples/model_manager.py) | Manage models interactively |
| [`embeddings.py`](examples/embeddings.py) | Generate and compare embeddings |
| [`model_comparison.py`](examples/model_comparison.py) | Compare different models |
| [`advanced_model_handling.py`](examples/advanced_model_handling.py) | Different model format handling |
| [`efficient_response_processing.py`](examples/efficient_response_processing.py) | Efficient response processing |
| [`creative_integrations.py`](examples/creative_integrations.py) | **Real-world integration patterns** |
| [`experimentation_playground.py`](examples/experimentation_playground.py) | **Interactive experimentation tool** |
Run any example:
```bash
python examples/simple_chat.py
```
## Generate Your Starter Template (Interactive)
**NEW! Create custom templates with a beautiful interactive wizard:**
```bash
python create_starter.py
```
### What You Get:
1. **Pick Your Model** - Select from installed models or install one interactively
2. **Choose Template Type:**
- **Simple Chat** - Basic conversational interface
- **Custom Model** - Hooks, callbacks, preprocessing
- **Streaming Chat** - Real-time responses
- **Experimentation** - Compare and test settings
- **Integration** - Template for integrating into your app
- **Tkinter GUI** - Desktop app with custom UI (no dependencies!)
- **Tkinter Advanced** - Multi-tab desktop app with styling
3. **Name Your File** - Get ready-to-run code!
### GUI Templates Preview:
**Tkinter Desktop GUI:**
```python
# Auto-generated code with:
# - Beautiful custom styling
# - Real-time chat interface
# - Threaded operations
# - Native desktop app
# - NO extra dependencies!
```
**Run with:**
```bash
python your_app.py
# Window opens immediately!
```
**Tkinter Advanced:**
```python
# Auto-generated code with:
# - Multiple tabs (Chat, Generate, Settings)
# - Professional dark theme
# - Parameter controls
# - Content generation tools
# - Production-ready
```
**Want web/API?** Use the Integration template and add Flask/FastAPI/whatever you prefer!
### No Models Installed?
No problem! The wizard will:
1. Detect you have no models
2. Show you recommended models with sizes
3. Install the model for you interactively
4. Generate your template ready to use!
## Starter Templates
Ready-to-use templates for common applications:
### 1. Chatbot
```bash
cd templates/chatbot
python simple_chatbot.py
```
A complete chatbot with conversation history and commands.
### 2. RAG System
```bash
cd templates/rag_system
python simple_rag.py
```
Retrieval Augmented Generation for question-answering with custom knowledge.
### 3. Text Analyzer
```bash
cd templates/text_analyzer
python text_analyzer.py
```
Analyze text for sentiment, keywords, entities, and more.
### 4. Code Assistant
```bash
cd templates/code_assistant
python code_assistant.py
```
AI-powered coding assistant for generation, debugging, and review.
## API Reference
### OllamaClient
Main client for interacting with Ollama:
```python
client = OllamaClient(host="http://localhost:11434")
# Generate text
response = client.generate(model, prompt, system=None, temperature=0.7)
# Stream generation
for chunk in client.stream_generate(model, prompt):
    print(chunk)
# Chat completion
response = client.chat(model, messages, temperature=0.7)
# Stream chat
for chunk in client.stream_chat(model, messages):
    print(chunk)
# Generate embeddings
embedding = client.embeddings(model, text)
# Check if running
is_running = client.is_running()
```
### ChatSession
Maintain conversation context with automatic response processing:
```python
session = ChatSession(
    model="llama2",
    system_message="You are helpful",
    temperature=0.7,
    max_history=50,
    auto_process=True  # Automatically handle different model formats
)
# Send message (automatically cleaned!)
response = session.send("Hello")
# Stream message
for chunk in session.send_stream("Tell me more"):
    print(chunk)
# Access thinking/reasoning (if available)
thinking = session.get_last_thinking()
metadata = session.get_last_metadata()
# Clear history
session.clear_history()
# Get history
history = session.get_history()
# Export/import
session.export_history("chat.json")
session.load_history("chat.json")
```
### Response Handlers
Handle different model formats automatically:
```python
from otk import (
    AutoModelHandler,
    ModelResponseHandler,
    ModelType,
    clean_thinking_tags
)
# Automatic handler (detects model type)
auto_handler = AutoModelHandler()
processed = auto_handler.process_response(raw_text, "deepseek-r1")
# Manual handler for specific type
handler = ModelResponseHandler(ModelType.THINKING)
processed = handler.process(raw_text)
# Quick utility functions
clean_text, thinking = clean_thinking_tags(response)
# Custom patterns
custom_handler = ModelResponseHandler(
    ModelType.CUSTOM,
    custom_patterns={'tag': r'<tag>(.*?)</tag>'}
)
```
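Under the hood, stripping `<think>` blocks is essentially a regular-expression exercise. A minimal stdlib-only sketch of the same idea (`strip_thinking` is illustrative, not the library's actual implementation):

```python
import re

# Non-greedy match so multiple <think> blocks are handled independently
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)


def strip_thinking(raw: str) -> tuple[str, str]:
    """Split a thinking-model response into (clean_answer, reasoning)."""
    thinking = "\n".join(m.strip() for m in THINK_RE.findall(raw))
    clean = THINK_RE.sub("", raw).strip()
    return clean, thinking
```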
### ModelManager
Manage Ollama models:
```python
manager = ModelManager()
# List models
models = manager.list_models()
# Pull model
manager.pull_model("llama2", stream=True)
# Delete model
manager.delete_model("old-model")
# Check existence
exists = manager.model_exists("llama2")
# Get model info
info = manager.show_model_info("llama2")
# Get recommendations
recommendations = manager.recommend_models()
```
## Utility Functions
```python
from otk import (
    format_response,
    estimate_tokens,
    chunk_text,
    create_prompt_template,
    extract_code_blocks,
    clean_response
)
# Format for readability
formatted = format_response(long_text, max_width=80)
# Estimate tokens
tokens = estimate_tokens(text)
# Chunk text
chunks = chunk_text(text, chunk_size=1000, overlap=100)
# Use templates
prompt = create_prompt_template(
    "Translate {text} to {language}",
    {"text": "Hello", "language": "Spanish"}
)
# Extract code
code_blocks = extract_code_blocks(markdown_text)
```
## Recommended Models
### General Chat
- `llama2` - Meta's general-purpose model
- `mistral` - Fast and capable
- `phi` - Small but powerful
### Coding
- `codellama` - Code generation and explanation
- `deepseek-coder` - Excellent for code
- `starcoder2` - Strong coding capabilities
### Embeddings
- `nomic-embed-text` - Text embeddings
- `all-minilm` - Lightweight embeddings
Pull models with:
```bash
ollama pull llama2
ollama pull codellama
ollama pull nomic-embed-text
```
## Testing
```bash
python test_quick.py
```
### Test Features
```python
from otk import clean_thinking_tags, ModelBuilder
clean, thinking = clean_thinking_tags("<think>x</think>answer")
model = ModelBuilder("llama2").with_temperature(0.8).build()
```
## Troubleshooting
**Issue: 'otk' command not recognized (after pip install)**
This happens when Python's Scripts directory is not in your system PATH.
```bash
# Quick Fix - Use Python module (always works, no PATH needed):
python -m otk
# Check if it's working:
python -m otk --version
# Permanent Fix - Add Scripts to PATH:
# Step 1: Find your Python Scripts directory
python -c "import sys, os; print(os.path.join(sys.prefix, 'Scripts'))"
# Step 2: Add to PATH (Windows PowerShell as Administrator):
$scriptsPath = python -c "import sys, os; print(os.path.join(sys.prefix, 'Scripts'))"
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";$scriptsPath", "User")
# Step 3: Restart your terminal and try:
otk
# Alternative: Install with pipx (manages PATH automatically)
pip install pipx
pipx install open-otk
otk # Now works!
```
**Manual PATH Setup (Windows):**
1. Search "Environment Variables" in Start menu
2. Click "Environment Variables" button
3. Under "User variables", select "Path" and click "Edit"
4. Click "New" and add: `C:\Users\<YourUsername>\AppData\Local\Programs\Python\Python3X\Scripts`
5. Click "OK" on all windows
6. **Restart your terminal/Command Prompt**
7. Run `otk`
**Issue: Ollama not running**
```bash
# Solution: Make sure Ollama is running
# Windows: Start Ollama app
# Linux/Mac: ollama serve
```
**Issue: Model not found**
```bash
# Solution: Pull the model
ollama pull llama2
# Or list available models
ollama list
```
**Issue: Import errors**
```bash
# Solution: Install dependencies
pip install ollama
# Or install from requirements
pip install -r requirements.txt
```
**Full Testing Guide:** [TESTING_GUIDE.md](TESTING_GUIDE.md)
## Contributing
Contributions are welcome! Feel free to:
- Report bugs
- Suggest features
- Submit pull requests
- Improve documentation
## License
MIT License - feel free to use in your projects!
## Acknowledgments
- Built on top of [Ollama](https://ollama.ai)
- Uses the official [ollama-python](https://github.com/ollama/ollama-python) library
## Documentation
- [Getting Started Guide](docs/guides/GETTING_STARTED.md)
- [GUI Documentation](docs/gui/GUI_APP_README.md)
- [API Reference](docs/reference/QUICK_REFERENCE.md)
## Author
**Md. Abid Hasan Rafi**
- Email: [ahr16.abidhasanrafi@gmail.com](mailto:ahr16.abidhasanrafi@gmail.com)
- GitHub: [@abidhasanrafi](https://github.com/abidhasanrafi)
- Portfolio: [abidhasanrafi.github.io](https://abidhasanrafi.github.io)
- Organization: [AI Extension](https://aiextension.org)
## Links
- [OTK Website](https://aiextension.github.io/otk)
- [Project Repository](https://github.com/aiextension/open-otk)
- [Report Issues](https://github.com/aiextension/open-otk/issues)
- [Ollama Documentation](https://github.com/ollama/ollama)
| text/markdown | Md. Abid Hasan Rafi | ahr16.abidhasanrafi@gmail.com | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Pyt... | [] | https://github.com/aiextension/open-otk | null | >=3.8 | [] | [] | [] | [
"ollama>=0.1.0",
"requests>=2.31.0",
"beautifulsoup4>=4.12.0",
"pytest>=7.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"flake8>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T13:06:05.800318 | open_otk-1.0.7.tar.gz | 41,574 | f2/39/b3edbc67f9bb60b271d77ec03848ced96c84595aa0881bd8cefa0f516907/open_otk-1.0.7.tar.gz | source | sdist | null | false | 866b7bf9bb3c9f332876679bb732c9ba | f5141f24623a1c26d792683d3751b44e88aaa3dfef76c4f129f756ee940f942e | f239b3edbc67f9bb60b271d77ec03848ced96c84595aa0881bd8cefa0f516907 | null | [
"LICENSE"
] | 224 |
2.4 | nucliadb-dataset | 6.12.0.post5907 | NucliaDB Train Python client | # NUCLIADB TRAIN CLIENT
Library to connect NucliaDB to Training APIs
## INSTALL
```bash
pip install nucliadb_dataset
```
| text/markdown | null | Nuclia <nucliadb@nuclia.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"protobuf",
"types-protobuf",
"grpcio",
"requests",
"aiohttp",
"argdantic",
"pydantic-settings>=2.2",
"pyarrow",
"nucliadb-protos>=6.12.0.post5907",
"nucliadb-sdk>=6.12.0.post5907",
"nucliadb-models>=6.12.0.post5907"
] | [] | [] | [] | [
"Homepage, https://nuclia.com",
"Repository, https://github.com/nuclia/nucliadb"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T13:06:00.633424 | nucliadb_dataset-6.12.0.post5907-py3-none-any.whl | 17,811 | f1/45/bb6133e0af9f5dd4703375ef61002b6abd407a630844fce43d56d43e3fb5/nucliadb_dataset-6.12.0.post5907-py3-none-any.whl | py3 | bdist_wheel | null | false | 168fb1d3c6ee0a6cfec7967224929cad | 09d299e41b43056c4ebc1d5c4e8299b281ef9cc03d7b60cdcbefb9761267a9b8 | f145bb6133e0af9f5dd4703375ef61002b6abd407a630844fce43d56d43e3fb5 | Apache-2.0 | [] | 111 |
2.4 | nucliadb | 6.12.0.post5907 | NucliaDB | # nucliadb
This module contains most of the Python components for NucliaDB:
- ingest
- reader
- writer
- search
- train
# NucliaDB Migrations
This module is used to manage NucliaDB Migrations.
All migrations will be provided in the `migrations` folder and have a filename
that follows the structure: `[sequence]_[migration name].py`.
Where `sequence` is the order the migration should be run in with zero padding.
Example: `0001_migrate_data.py`.
Each migration should have the following:
```python
from nucliadb.migrator.context import ExecutionContext
async def migrate(context: ExecutionContext) -> None:
    """
    Non-kb type of migration. Migrate global data.
    """


async def migrate_kb(context: ExecutionContext, kbid: str) -> None:
    """
    Migrate kb.

    Must have both types of migrations.
    """
```
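The zero-padded `sequence` prefix makes the run order a plain sort. A sketch of discovering and ordering migration files (the `ordered_migrations` helper is hypothetical, not part of NucliaDB):

```python
import re
from pathlib import Path

# Matches filenames like 0001_migrate_data.py
MIGRATION_RE = re.compile(r"^(\d+)_(\w+)\.py$")


def ordered_migrations(folder):
    """Return (sequence, name) pairs for migration files, sorted by sequence."""
    found = []
    for path in Path(folder).glob("*.py"):
        match = MIGRATION_RE.match(path.name)
        if match:
            found.append((int(match.group(1)), match.group(2)))
    return sorted(found)
```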
## How migrations are managed
- All migrations utilize a distributed lock to prevent simultaneously running jobs
- Global migration state:
- current version
- target version
- KBs to migrate
- KB Migration State:
- current version
- Migrations are currently run in a dedicated deployment and are continuously retried on failure.
- Running migrations in a deployment ensures that a migration does not block code deployment.
| text/markdown | null | Nuclia <nucliadb@nuclia.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"nucliadb-telemetry[all]>=6.12.0.post5907",
"nucliadb-utils[cache,fastapi,storages]>=6.12.0.post5907",
"nucliadb-protos[grpc]>=6.12.0.post5907",
"nucliadb-models>=6.12.0.post5907",
"nidx-protos[grpc]>=6.12.0.post5907",
"nucliadb-admin-assets>=1.0.0.post1224",
"nuclia-models>=0.50.0",
"uvicorn[standard... | [] | [] | [] | [
"Nuclia, https://nuclia.com",
"Github, https://github.com/nuclia/nucliadb",
"Slack, https://nuclia-community.slack.com",
"API Reference, https://docs.nuclia.dev/docs/api"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T13:05:58.505082 | nucliadb-6.12.0.post5907-py3-none-any.whl | 739,572 | 99/b2/95e4c1c98397c2aea0027db1471842b877834c2cf95584ff70232f11e230/nucliadb-6.12.0.post5907-py3-none-any.whl | py3 | bdist_wheel | null | false | d6259fc577a7419858aa1c3e7be9982b | 67d53058579d3fe81dc4b438e29b2b0afde11b08ea83cab544436c85d263450b | 99b295e4c1c98397c2aea0027db1471842b877834c2cf95584ff70232f11e230 | AGPL-3.0-or-later | [] | 107 |
2.4 | aiotrade-sdk | 0.8.0 | High-performance async trading API client for Python supporting BingX and Bybit exchanges with intelligent session and cache management | # aiotrade
[](https://pypi.org/project/aiotrade-sdk/) [](https://pypi.org/project/aiotrade-sdk/) [](https://opensource.org/licenses/MIT)
[](https://github.com/vispar-tech/aiotrade/actions/workflows/release.yml)
High-performance async trading API client for Python supporting BingX and Bybit exchanges with intelligent session and cache management.
## Architecture
The library uses a sophisticated architecture for optimal performance:
### Session Management
- **Shared Session**: `SharedSessionManager` creates a single aiohttp session with high-performance connection pooling
- **Individual Sessions**: Clients automatically create individual sessions if shared session isn't initialized
- **Connection Pooling**: Up to 2000 concurrent connections with smart distribution per host
### Client Caching
- **TTL Cache**: `BingxClientsCache` and `BybitClientsCache` cache client instances with 10-minute lifetime
- **Lock-Free**: No blocking operations for maximum performance
- **Lazy Cleanup**: Expired entries removed on access, not proactively
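The caching behaviour described above — fixed lifetime, no locks, expiry checked lazily on access — can be sketched in a few lines of plain Python (this `TTLCache` is illustrative, not the library's implementation):

```python
import time


class TTLCache:
    """Minimal lazy-expiry cache: entries are dropped on access, not proactively."""

    def __init__(self, lifetime_seconds=600.0):
        self.lifetime = lifetime_seconds
        self._entries = {}  # key -> (value, created_at)

    def get_or_create(self, key, factory):
        """Return the cached value for key, recreating it if missing or expired."""
        entry = self._entries.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.lifetime:
            return entry[0]
        value = factory()
        self._entries[key] = (value, now)
        return value

    def cleanup_expired(self):
        """Optional manual sweep; returns how many entries were removed."""
        now = time.monotonic()
        expired = [k for k, (_, t) in self._entries.items() if now - t >= self.lifetime]
        for k in expired:
            del self._entries[k]
        return len(expired)
```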
#### Implemented methods
```text
BybitClient methods (43):
batch_cancel_order get_server_time
batch_place_order get_smp_group_id
batch_set_collateral_coin get_trade_behaviour_setting
cancel_all_orders get_transaction_log
cancel_order get_transferable_amount
decode_str get_wallet_balance
get_account_info manual_borrow
get_account_instruments_info manual_repay
get_api_key_info manual_repay_without_asset_conversion
get_borrow_history place_order
get_closed_pnl repay_liability
get_coin_greeks reset_mmp
get_collateral_info set_collateral_coin
get_dcp_info set_leverage
get_fee_rate set_limit_price_behaviour
get_instruments_info set_margin_mode
get_kline set_mmp
get_mmp_state set_spot_hedging
get_open_and_closed_orders set_trading_stop
get_order_history switch_position_mode
get_position_info upgrade_to_unified_account_pro
get_risk_limit
BingxClient methods (48):
cancel_all_spot_open_orders get_spot_profit_details
cancel_all_swap_open_orders get_spot_profit_overview
cancel_spot_batch_orders get_spot_symbols
cancel_swap_batch_orders get_spot_trade_details
change_swap_margin_type get_swap_account_balance
close_perpetual_trader_position_by_order get_swap_contracts
close_swap_position get_swap_full_orders
decode_str get_swap_klines
get_account_asset_overview get_swap_leverage_and_available_positions
get_account_uid get_swap_margin_type
get_api_permissions get_swap_open_orders
get_perpetual_copy_trading_pairs get_swap_order_details
get_perpetual_current_trader_order get_swap_order_history
get_perpetual_personal_trading_overview get_swap_position_history
get_perpetual_profit_details get_swap_position_mode
get_perpetual_profit_overview get_swap_positions
get_server_time place_spot_order
get_spot_account_assets place_swap_batch_orders
get_spot_history_orders place_swap_order
get_spot_klines sell_spot_asset_by_order
get_spot_open_orders set_perpetual_commission_rate
get_spot_order_details set_perpetual_trader_tpsl_by_order
get_spot_order_history set_swap_leverage
get_spot_personal_trading_overview set_swap_position_mode
```
## Installation
```bash
poetry add aiotrade-sdk
```
## Quick Start
### Option 1: Shared Session (Recommended for Production)
```python
from aiotrade import SharedSessionManager, BingxClient, BybitClient
# Initialize shared session at startup (once per application)
SharedSessionManager.setup(max_connections=2000)
# Create clients for different exchanges - they automatically use the shared session
bingx_client = BingxClient(api_key="bingx_key", api_secret="bingx_secret", demo=True)
bybit_client = BybitClient(api_key="bybit_key", api_secret="bybit_secret", testnet=True)
try:
    # Use clients for API calls
    bingx_assets = await bingx_client.get_spot_account_assets()
    bybit_tickers = await bybit_client.get_tickers(category="spot")
finally:
    # Close shared session at shutdown
    await SharedSessionManager.close()
```
### Option 2: Individual Sessions
```python
from aiotrade import BingxClient, BybitClient
# BingX client with individual session
async with BingxClient(api_key="your_key", api_secret="your_secret", demo=True) as client:
    assets = await client.get_spot_account_assets()
    print(f"BingX assets: {assets}")

# Bybit client with individual session
async with BybitClient(api_key="your_key", api_secret="your_secret", testnet=True) as client:
    tickers = await client.get_tickers(category="spot")
    print(f"Bybit tickers: {tickers}")
```
### Option 3: Cached Clients
```python
from aiotrade import BingxClientsCache, BybitClientsCache
# Get cached BingX client (creates new if doesn't exist)
bingx_client = BingxClientsCache.get_or_create(
    api_key="your_key",
    api_secret="your_secret",
    demo=True
)
# Get cached Bybit client
bybit_client = BybitClientsCache.get_or_create(
    api_key="your_key",
    api_secret="your_secret",
    testnet=True
)
# Use clients (session management is automatic)
async with bingx_client:
    assets = await bingx_client.get_spot_account_assets()

async with bybit_client:
    tickers = await bybit_client.get_tickers(category="spot")

# Same parameters return the same cached instance
cached_bingx = BingxClientsCache.get_or_create(
    api_key="your_key",
    api_secret="your_secret",
    demo=True
)
assert bingx_client is cached_bingx  # True
```
## Session Behavior
| Scenario | Session Type | When Used |
| ------------------------------------- | ------------------------- | ---------------------- |
| `SharedSessionManager.setup()` called | Shared session | All clients |
| No shared session initialized | Individual session | Each client |
| Cached clients | Depends on initialization | Cached per credentials |
## Cache Features
- **Automatic TTL**: 10 minutes default, configurable
- **Memory Safe**: Prevents client accumulation
- **High Performance**: Lock-free operations
- **Background Cleanup**: Optional periodic cleanup task
```python
# Configure cache lifetime for each exchange
BingxClientsCache.configure(lifetime_seconds=1800) # 30 minutes
BybitClientsCache.configure(lifetime_seconds=1800) # 30 minutes
# Start background cleanup
bingx_cleanup = BingxClientsCache.create_cleanup_task(interval_seconds=300)
bybit_cleanup = BybitClientsCache.create_cleanup_task(interval_seconds=300)
# Manual cleanup
bingx_removed = BingxClientsCache.cleanup_expired()
bybit_removed = BybitClientsCache.cleanup_expired()
```
## API Methods
### BingX Client Methods
```python
from aiotrade import BingxClient
client = BingxClient(api_key="your_key", api_secret="your_secret", demo=True)
# Market data
server_time = await client.get_server_time()
# Spot trading
assets = await client.get_spot_account_assets()
tickers = await client.get_spot_tickers()
# Swap trading (perpetual futures)
await client.place_swap_order({
    "symbol": "BTC-USDT",
    "side": "BUY",
    "positionSide": "BOTH",
    "type": "MARKET",
    "quantity": 0.001
})
```
### Bybit Client Methods
```python
from aiotrade import BybitClient
client = BybitClient(api_key="your_key", api_secret="your_secret", testnet=True)
# Market data
server_time = await client.get_server_time()
tickers = await client.get_tickers(category="spot")
klines = await client.get_kline("BTCUSDT", "1h", category="linear")
# Trading
await client.place_order({
    "category": "linear",
    "symbol": "BTCUSDT",
    "side": "Buy",
    "orderType": "Market",
    "qty": "0.001"
})
```
## Requirements
- Python >= 3.12
- aiohttp
- High-performance connection pooling for production use
## Performance Tips
1. **Use Shared Session** for applications creating many clients
2. **Enable Caching** for repeated API credential usage
3. **Configure Connection Limits** based on your throughput needs
4. **Use Background Cleanup** for long-running applications
| text/markdown | Daniil Pavlovich | layred.dota2@mail.ru | null | null | null | trading, api, async, cryptocurrency, exchange, bingx, bybit, finance, asyncio, aiohttp | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"aiohttp<4.0.0,>=3.13.3",
"orjson<4.0.0,>=3.11.5"
] | [] | [] | [] | [
"Changelog, https://github.com/vispar-tech/aiotrade/blob/main/CHANGELOG.md",
"Documentation, https://github.com/vispar-tech/aiotrade#readme",
"Homepage, https://github.com/vispar-tech/aiotrade",
"Issues, https://github.com/vispar-tech/aiotrade/issues",
"Repository, https://github.com/vispar-tech/aiotrade"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:05:38.560968 | aiotrade_sdk-0.8.0.tar.gz | 39,863 | a5/cb/a4475cdf3bb1f3c95c524f11a12e6295e0cceb5a4f16129caca6afd379ae/aiotrade_sdk-0.8.0.tar.gz | source | sdist | null | false | 516d14c93fd6d1c7f8616bbe48c621ea | a3e4da73fd8f1ce62a58d6e221ac69f452da34114ae0cb1868c014b02228a6b8 | a5cba4475cdf3bb1f3c95c524f11a12e6295e0cceb5a4f16129caca6afd379ae | MIT | [
"LICENSE"
] | 254 |
2.4 | laser-learning-environment | 2.6.5 | Laser Learning Environment (LLE) for Multi-Agent Reinforcement Learning | # Laser Learning Environment (LLE)
Documentation: [https://yamoling.github.io/lle/](https://yamoling.github.io/lle/)
LLE is a fast Multi-Agent Reinforcement Learning environment written in Rust that has proven to be a challenging exploration benchmark. The agents start on the start tiles, must collect the gems, and finish the game by reaching the exit tiles. There are five actions: North, South, East, West and Stay.
When an agent enters a laser of its own colour, it blocks it. Otherwise, it dies and the game ends.

# Quick start
## Installation
Install the Laser Learning Environment with uv, pip, poetry, ...
```bash
pip install laser-learning-environment # Latest stable release with pip
pip install git+https://github.com/yamoling/lle # latest push on master
```
## Usage
LLE can be used at two levels of abstraction: as a `MARLEnv` for cooperative multi-agent reinforcement learning, or as a `World` for many other purposes.
### For cooperative multi-agent reinforcement learning
The `LLE` class inherits from the `MARLEnv` class in the [marlenv](https://github.com/yamoling/multi-agent-rlenv) framework. Here is an example with the following map: 
```python
from lle import LLE

env = LLE.from_str("S0 G X").build()
done = False
obs, state = env.reset()
while not done:
    # env.render()  # Uncomment to render
    actions = env.sample_action()
    step = env.step(actions)
    # Access the step data with `step.obs`, `step.reward`, ...
    done = step.is_terminal  # Either done or truncated
```
### For other purposes or fine-grained control
The `World` class provides fine-grained control over the environment by exposing the state of the world and the events that happen when the agents move.
```python
from lle import World, Action, EventType
world = World("S0 G X") # Linear world with start S0, gem G and exit X
world.reset()
available_actions = world.available_actions()[0] # [Action.STAY, Action.EAST]
events = world.step([Action.EAST])
assert events[0].event_type == EventType.GEM_COLLECTED
events = world.step([Action.EAST])
assert events[0].event_type == EventType.AGENT_EXIT
```
You can also access and force the state of the world
```python
state = world.get_state()
...
events = world.set_state(state)
```
You can query the world on the tiles with `world.start_pos`, `world.exit_pos`, `world.gem_pos`, ...
## Citing our work
The environment has been presented at [EWRL 2023](https://openreview.net/pdf?id=IPfdjr4rIs) and at [BNAIC 2023](https://bnaic2023.tudelft.nl/static/media/BNAICBENELEARN_2023_paper_124.c9f5d29e757e5ee27c44.pdf) where it received the best paper award.
```
@inproceedings{molinghen2023lle,
  title={Laser Learning Environment: A new environment for coordination-critical multi-agent tasks},
  author={Molinghen, Yannick and Avalos, Raphaël and Van Achter, Mark and Nowé, Ann and Lenaerts, Tom},
  year={2023},
  series={BeNeLux Artificial Intelligence Conference},
  booktitle={BNAIC 2023}
}
```
## Development
If you want to modify the environment, you can clone the repo, install the Python dependencies, then compile it with `maturin`. The example below assumes you are using `uv` as your package manager, but it should work with `conda`, `poetry`, or plain `pip` as well.
```bash
git clone https://github.com/yamoling/lle
uv venv # create a virtual environment
source .venv/bin/activate
uv sync # install python dependencies
maturin dev # build and install lle in the venv
```
You can also re-generate the python bindings in the folder `python/lle` with
```bash
cargo run --bin stub-gen
```
## Tests
Run the unit tests in Rust and Python with
```bash
cargo test
maturin develop
pytest
```
| text/markdown; charset=UTF-8; variant=GFM | null | Yannick Molinghen <yannick.molinghen@ulb.be> | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | https://github.com/yamoling/lle | null | <4,>=3.10 | [] | [] | [] | [
"numpy>=2.0.0",
"multi-agent-rlenv>=3.5.0",
"opencv-python>=4.0.0",
"orjson>=3.10.15"
] | [] | [] | [] | [] | maturin/1.12.2 | 2026-02-19T13:04:44.430894 | laser_learning_environment-2.6.5-cp314-cp314t-win_amd64.whl | 1,356,426 | 49/c7/625502a14f48e2fc5b9eaf22503518b390ce9078b31fca63839ae364323c/laser_learning_environment-2.6.5-cp314-cp314t-win_amd64.whl | cp314 | bdist_wheel | null | false | 74616a113f33dead999e460af7d48c44 | 3e84f9c412501f85669eee7f940b06e705528eb810d04f451ddbe2ea076983d4 | 49c7625502a14f48e2fc5b9eaf22503518b390ce9078b31fca63839ae364323c | null | [] | 3,069 |
2.4 | taggedLog | 1.0.5 | structured log for python | # Doc
Thank you for using the tag-logger package!
This log module is meant to be as small and simple as possible, and to work on every project you want it on.
Feel free to modify the code to fit your use case, and if you think something is missing or could be improved, please say so on the project's GitHub page.
## To use it
The module is built to allow only one hidden instance of the `Log` object, so you do not have to instantiate `Log` yourself.
To import the module:
```python
from taggedLog.log import Log
```
To open the log:
```python
Log.start_log(...)
```
Then you can use these methods:
```python
Log.info(...)     # writes an information message to the log
Log.warning(...)  # writes a warning to the log
Log.error(...)    # writes an error to the log
```
Finally, close your log at the end of your program:
```python
Log.close_log()
```
| text/markdown | epsilonkn | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/epsilonkn/Log-Manager/tree/main",
"Issues, https://github.com/epsilonkn/Log-Manager/issues"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-19T13:04:19.528158 | taggedlog-1.0.5.tar.gz | 4,330 | 2b/3c/cbb4ba829e20cca99f978b4c8965d8cf52d0419cfb06270a20cb5a6a4627/taggedlog-1.0.5.tar.gz | source | sdist | null | false | cd92870bcb0689a6238de3037a90d603 | 1bcbfb513e23135726c6e16d2787378e40bce560cdfb5ca76a4227fc2aed73f7 | 2b3ccbb4ba829e20cca99f978b4c8965d8cf52d0419cfb06270a20cb5a6a4627 | MIT | [
"LICENSE"
] | 0 |
2.4 | arcane-mct | 2.1.0 | Package description | # Arcane MCT
Internal package for **Merchant Center (MCT)** access. It talks to Google Shopping via **Content API for Shopping v2.1** by default, with an optional migration path to the **Merchant API** (Accounts + Products client libraries).
---
## Installation
Install from the monorepo (Poetry):
```bash
poetry add ../arcane-mct # or your path to this package
```
Dependencies include `arcane-core`, `arcane-datastore`, `arcane-credentials`, and the Google Shopping client libraries (see `pyproject.toml`).
---
## Usage
### MctClient (service account only)
Use a GCP service account JSON to create a client and call MCT:
```python
from arcane.mct import MctClient
client = MctClient(
gcp_service_account="/path/to/service-account.json",
user_email="user@example.com", # optional; needed for user-delegated credentials
secret_key_file="/path/to/secret.key", # required when user_email is set
gcp_project="my-gcp-project",
)
# Get the account display name for a merchant ID
name = client.get_mct_account_details(merchant_id=123456789)
print(name) # e.g. "My Store"
# Check that the account is not a multi-client account (MCA); raises if it is
client.check_if_multi_client_account(merchant_id=123456789)
```
When you already have an MCT account dict (e.g. from your API), you can pass it instead of `user_email`:
```python
client = MctClient(
gcp_service_account="/path/to/service-account.json",
mct_account={"creator_email": "user@example.com", ...},
secret_key_file="/path/to/secret.key",
gcp_project="my-gcp-project",
)
name = client.get_mct_account_details(merchant_id=123456789)
```
Using only the service account (no user credentials):
```python
client = MctClient(
gcp_service_account="/path/to/service-account.json",
)
name = client.get_mct_account_details(merchant_id=123456789)
```
### check_access_before_creation
Before linking or creating an MCT account, you can verify that the user has access and that the account is not an MCA:
```python
from arcane.mct import check_access_before_creation
should_use_user_access = check_access_before_creation(
mct_account_id=123456789,
user_email="user@example.com",
gcp_service_account="/path/to/arcane-service-account.json",
secret_key_file="/path/to/secret.key",
gcp_project="my-gcp-project",
)
# Returns True if the user's credentials have access (use user access when linking).
# Raises MctAccountLostAccessException or MerchantCenterServiceDownException on failure.
```
### Exceptions
- **`MctAccountLostAccessException`** — The account is not accessible (e.g. wrong merchant ID, revoked access, or MCA when only sub-accounts are allowed).
- **`MerchantCenterServiceDownException`** — The Merchant Center API is unavailable or returned a server error.
---
## Merchant API migration
Content API for Shopping is being sunset (August 2026). You can switch to the **Merchant API** in two ways:
1. **Environment**: set `USE_MERCHANT_API=true` (or `1`) so all clients use the Merchant API when no explicit flag is passed.
2. **In code**: pass `use_merchant_api=True` (or `False`) where supported:
- `MctClient(..., use_merchant_api=True)`
- `check_access_before_creation(..., use_merchant_api=True)`
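The two options above can be condensed into a single resolution rule. Here is a hypothetical sketch of that behaviour (`resolve_use_merchant_api` is a made-up name for illustration, not part of the package's API):

```python
import os

def resolve_use_merchant_api(use_merchant_api=None):
    """Hypothetical helper: an explicit flag wins; otherwise fall back to
    the USE_MERCHANT_API environment variable (default: Content API)."""
    if use_merchant_api is not None:
        return use_merchant_api
    return os.environ.get("USE_MERCHANT_API", "").lower() in ("true", "1")

os.environ["USE_MERCHANT_API"] = "true"
assert resolve_use_merchant_api() is True        # env var applies
assert resolve_use_merchant_api(False) is False  # explicit flag wins
```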
If you omit `use_merchant_api` (or pass `None`), the backend is chosen from the `USE_MERCHANT_API` environment variable. Default is Content API (with a deprecation warning).
| text/markdown | Arcane | product@wearcane.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"arcane-core<2,>=1",
"arcane-credentials<0.2,>=0.1",
"arcane-datastore<2,>=1",
"arcane-requests<2,>=1",
"backoff>=1.10.0",
"google-api-python-client>=2.149.0",
"google-shopping-merchant-accounts<2.0,>=1.0",
"google-shopping-merchant-products<2.0,>=1.0"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.12 Linux/6.14.0-1017-azure | 2026-02-19T13:04:16.074614 | arcane_mct-2.1.0-py3-none-any.whl | 7,968 | f3/e8/719e07fd796d48ee174d4f2cc71c8a28604fcf0c5a5e614356124e49722a/arcane_mct-2.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b5e6873c4969a19849949b5fbacc2146 | b81a3a287f4d41da440e8a1e2b0092676f222b7774144dc846b9482e1bf810df | f3e8719e07fd796d48ee174d4f2cc71c8a28604fcf0c5a5e614356124e49722a | null | [] | 248 |
2.4 | pyveb | 4.0.7 | Package containing common code and reusable components for pipelines and dags | # General
Package containing reusable code components for data pipelines and DAGs, deployed to PyPI.
# Usage
Install/Upgrade locally:
```
$ pip3 install pyveb
$ pip3 install pyveb --upgrade
```
Import in python
```
import pyveb
from pyveb import selenium_client
```
# Update package
The package is automatically deployed to PyPI via GitHub Actions. Just commit and open a pull request. During the workflow, the version is automatically bumped and the updated pyproject.toml is committed back.
Note: if a dependency is added to pyproject.toml, no workflow is started unless there are also changes under src/pyveb/**
| text/markdown | pieter | pieter.de.petter@veb.be | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | ==3.13.12 | [] | [] | [] | [
"Office365-REST-Python-Client==2.6.2",
"boto3==1.42.42",
"botocore==1.42.42",
"numpy==2.4.2",
"pandas==2.3.3",
"psutil==7.2.2",
"psycopg2-binary==2.9.11",
"pyarrow==23.0.0",
"pyodbc==5.3.0",
"pyspark==4.1.1",
"pyyaml==6.0.3",
"requests==2.32.5",
"s3fs==2026.2.0",
"selenium==4.40.0",
"sim... | [] | [] | [] | [
"Homepage, https://vlaamsenergiebedrijf.visualstudio.com/Terra/_git/terra-etl?path=/common_code"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:04:11.030273 | pyveb-4.0.7.tar.gz | 51,593 | 12/ee/dcb8795071f82d697b2443d9e44ebfdd0115c1b8ee752e2854132581bbb6/pyveb-4.0.7.tar.gz | source | sdist | null | false | cd61d6befdcdc770dfef2bc0f1638570 | 3b334a82466e18f65bbe180d1abdf30bc0e261ce18748425faa2535c9ffbceb4 | 12eedcb8795071f82d697b2443d9e44ebfdd0115c1b8ee752e2854132581bbb6 | null | [
"LICENSE"
] | 255 |
2.1 | meld-fourdiff | 0.0.5 | A repackaged version of Meld, including the Fourdiff feature. | # Fourdiff: A new way to resolve merge conflicts with ease and confidence
A few years ago I was trying to resolve a particularly horrible merge conflict, and as a way to escape the horror, I started thinking: could there be a better way? A way that would let me really understand what I was doing, instead of trying to guess my way out of the mess? Thank God, I found such a way, and I've been using it ever since. I think it can help others as well. Would you like to give it a try?
To try it on Ubuntu (including WSL), install [uv](https://docs.astral.sh/uv/getting-started/installation/), some packages required to build [pygobject](https://pygobject.gnome.org/getting_started.html#ubuntu-getting-started), and `meld-fourdiff`:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
sudo apt update && sudo apt install -y --no-install-recommends libgirepository1.0-dev gcc libcairo2-dev pkg-config gir1.2-gtk-3.0 gir1.2-gtksource-4
uv tool install --python 3.10 meld-fourdiff
```
Make sure that `meld-fourdiff` works properly, by running `meld-fourdiff` and making sure you see a window, and add this section to your `~/.gitconfig`:
```ini
[mergetool "fourdiff"]
cmd = meld-fourdiff "$BASE" "$REMOTE" "$LOCAL" "$MERGED"
```
I want to share with you a hack I made about two years ago, which I'm using to resolve merge conflicts (and cherry-pick conflicts, and revert conflicts) in my job. Before I discovered this idea, I hated merges; I never understood what was going on. Now I resolve conflicts with confidence. It's actually really hard for me to use any other tool.
It looks like this. After running `git merge` and getting merge conflicts, I run `git mergetool` and see this (except for the arrows):

In every merge, the goal is to take the difference between two revisions (the "base revision" and the "remote revision") and apply it to the current revision (the "local revision"). From left to right we see: 1. the remote revision 2. the base revision 3. the local revision (without any changes applied) and 4. the merged revision, which is what we have in our working directory.
The first lines show a successful merge. We can see that the change between BASE and REMOTE is the same change between LOCAL and merged. Namely, the change is to remove the `name` argument and remove the `print()` line. I can visually verify that the changes along the left arrow look the same as the changes along the right arrow.
Then, we have a merge conflict. Git doesn't know how to apply the change, and leaves the job to us. The right pane shows the git merge conflict format. To find the conflicts, click the right pane, press Ctrl-F, and search for `<<<`.
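For reference, git's conflict-marker format in the merged file looks like this. The contents below are reconstructed to match the running example, so the exact lines may differ:

```text
<<<<<<< HEAD
def add(name, a, b, c):
    print(f"Hello {name}!")
    return a + b + c
=======
def add(a, b):
    return a + b
>>>>>>> origin/remove-greetings
```

Everything between `<<<<<<<` and `=======` is the local version; everything between `=======` and `>>>>>>>` is the remote version.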
To better understand what's going on, I can press Ctrl-T, and the view switches to show the difference between BASE and LOCAL. This difference is the cause of the merge conflict:

We see that the change between BASE and LOCAL is that we added another argument to the function, so it now adds 3 numbers instead of 2 (please excuse my stupid example). Now that I understand what causes the merge conflict, I can apply the change with confidence by pressing Ctrl-T again to switch back to the original view and editing the file on the right. I get this:

Again, I can visually verify that the change applied from LOCAL to MERGED looks the same as the change from BASE to REMOTE.
Perhaps it would help to show how the conflict looks with existing tools. If I run `git mergetool -t meld`, I get this:

On one side we have the arguments `name, a, b, c` and a `print`. On the other side we have only `a, b` and no `print`. Which should I take? The truth is that it's actually impossible to resolve the conflict using only LOCAL and REMOTE[^1].
## Installation
This is known to work on Ubuntu 20.04. You just need to clone my fork of `meld`, the excellent diff viewer, and configure git to use it to resolve merge conflicts.
Run this:
```bash
cd ~/
git clone -b fourdiff https://github.com/noamraph/meld.git
sudo apt install meld # to install dependencies
```
And add this to `~/.gitconfig`:
```ini
[mergetool "fourdiff"]
cmd = ~/meld/bin/meld "$REMOTE" "$BASE" "$LOCAL" "$MERGED"
[merge]
tool = fourdiff
```
If you want to test it with the example I showed above, run this:
```bash
cd /tmp
git clone https://github.com/noamraph/conflict-example.git
cd conflict-example/
git merge origin/remove-greetings
```
You should get a merge conflict. Run this and the fourdiff should appear:
```bash
git mergetool
```
## Final thoughts
I find this to be a simple and very effective concept. I hope the `meld` developers would agree to add this feature, and I hope others will add it as well.
My initial thought was to arrange the panes in this order: BASE, REMOTE, LOCAL, MERGED. This would have made both the original change and the new change apply from left to right. However, this would have made the second view, showing the difference between BASE and LOCAL, confusing.
If you're a developer of a diff viewer and you want to add this to your app, the actual implementation is quite simple: there are always three 2-way diff widgets behind the scenes: BASE-REMOTE, REMOTE-LOCAL, LOCAL-MERGED. When one REMOTE is scrolled the other REMOTE is scrolled, and the same goes with LOCAL and LOCAL. This keeps all the panes in sync. The user can switch between showing BASE-REMOTE and LOCAL-MERGED, and showing only REMOTE-LOCAL.
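The pane-syncing idea can be modelled in a few lines of Python. This is an illustrative sketch of the linking logic only (the class and pane names are made up, and a real diff viewer would map scroll positions through chunk alignment rather than copying them verbatim):

```python
class Pane:
    def __init__(self, name):
        self.name = name
        self.scroll = 0

class FourDiff:
    """Toy model: three 2-way diffs that share their middle panes."""
    def __init__(self):
        self.base = Pane("BASE")
        self.remote = Pane("REMOTE")
        self.local = Pane("LOCAL")
        self.merged = Pane("MERGED")
        # The three 2-way diffs behind the scenes: BASE-REMOTE,
        # REMOTE-LOCAL, LOCAL-MERGED. REMOTE and LOCAL are shared.
        self.diffs = [
            (self.base, self.remote),
            (self.remote, self.local),
            (self.local, self.merged),
        ]

    def scroll_pane(self, pane, position):
        """Scroll one pane and propagate through every linked diff."""
        pane.scroll = position
        seen = {pane}
        frontier = [pane]
        while frontier:
            current = frontier.pop()
            for a, b in self.diffs:
                if current in (a, b):
                    other = b if current is a else a
                    if other not in seen:
                        other.scroll = position  # real code maps via chunks
                        seen.add(other)
                        frontier.append(other)

fd = FourDiff()
fd.scroll_pane(fd.remote, 42)
# all four panes are now at position 42
assert [p.scroll for p in (fd.base, fd.remote, fd.local, fd.merged)] == [42] * 4
```

Because REMOTE and LOCAL each belong to two diffs, a scroll anywhere reaches every pane, which is exactly what keeps the four columns aligned.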
[^1]: I could in theory compare the middle column to both sides, understand how it changed relative to both sides, and figure out how to apply both changes. But the visual diff doesn't help this at all, and after editing the middle there's no way to check this, and all this wouldn't work at all if you do a revert or a cherry-pick.
| text/markdown | null | null | null | Noam Raphael <noamraph@gmail.com> | GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. 
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. 
The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. <signature of Ty Coon>, 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. | diff, merge | [
"Development Status :: 5 - Production/Stable",
"Environment :: X11 Applications :: GTK",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)",
"Programming Language :: Python",
"Programming Language ... | [] | null | null | >=3.10 | [] | [] | [] | [
"pycairo==1.29.0",
"pygobject==3.38.0"
] | [] | [] | [] | [
"homepage, https://github.com/noamraph/meld/"
] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:04:05.634124 | meld_fourdiff-0.0.5.tar.gz | 1,629,887 | 56/df/8fc013e4b166d3f86a41d87189e72b2b23bddde2a669ac0fe4da10a25390/meld_fourdiff-0.0.5.tar.gz | source | sdist | null | false | 8fab44a5b13f352a99cd7a607f0c5ab9 | f49dc00ef2711dcd8acb64f175a32635678ccc424536ad30923de74e82e50f36 | 56df8fc013e4b166d3f86a41d87189e72b2b23bddde2a669ac0fe4da10a25390 | null | [] | 224 |
2.4 | aquamvs | 1.3.1 | Multi-view stereo reconstruction of underwater surfaces with refractive modeling | 
[](https://pypi.org/project/aquamvs/)
[](https://pypi.org/project/aquamvs/)
[](https://github.com/tlancaster6/AquaMVS/actions/workflows/test.yml)
[](https://opensource.org/licenses/MIT)
# AquaMVS
Multi-view-stereo (MVS) reconstruction of underwater surfaces viewed through a flat water surface, with Snell's law refraction modeling.
## :construction: Status :construction:
**02/17/26: This project is under active and rapid development.**
The API and internal structure are subject to frequent breaking changes without notice. It is not yet recommended for
production use. A stable release is planned by the end of the month. This section will be updated accordingly once that
milestone is reached.
## What it does
AquaMVS is a companion library to [AquaCal](https://github.com/tlancaster6/AquaCal). It consumes calibration output and synchronized video from above-water cameras to produce time-series 3D surface reconstructions. The pipeline handles the unique challenge of cameras positioned in air observing underwater geometry, accounting for refraction at the air-water interface using Snell's law.
## Key Features
- **Refractive ray casting** through air-water interface (Snell's law)
- **Dual matching pathways**: LightGlue (sparse) and RoMa v2 (dense) for different accuracy/speed tradeoffs
- **Multi-view depth fusion** with geometric consistency filtering
- **Surface reconstruction** (Poisson, heightfield, Ball Pivoting Algorithm)
- **Mesh export** (PLY, OBJ, STL, GLTF) with simplification
- **Full CLI and Python API** for pipeline users and custom workflow developers
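The refractive ray casting listed above is governed by Snell's law at the flat air-water interface. As a minimal standalone sketch (not the AquaMVS API), the refracted direction of a unit 2D ray crossing a horizontal interface can be computed from the two refractive indices:

```python
import math

def refract(incident, n_air=1.0, n_water=1.333):
    """Refract a unit 2D ray (dx, dz) crossing a horizontal air-water
    interface whose normal is vertical, using Snell's law."""
    dx, dz = incident
    # Sine of the incidence angle, measured from the vertical normal.
    sin_i = abs(dx)
    # Snell's law: n_air * sin(theta_i) = n_water * sin(theta_t)
    sin_t = n_air * sin_i / n_water
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    # Keep the horizontal sign; the ray bends toward the normal.
    return (math.copysign(sin_t, dx) if dx else 0.0,
            math.copysign(cos_t, dz))

# A ray 30 degrees off vertical bends to roughly 22 degrees underwater.
ray = (math.sin(math.radians(30)), math.cos(math.radians(30)))
rx, rz = refract(ray)
angle_t = math.degrees(math.asin(rx))
```

Because rays bend toward the normal on entering water, naive (non-refractive) triangulation systematically overestimates depth, which is why the pipeline models this explicitly.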
## Quick Start
```python
from aquamvs import Pipeline
pipeline = Pipeline("config.yaml")
pipeline.run()
```
See the [full documentation](https://aquamvs.readthedocs.io/) for configuration details, API reference, and examples.
## Installation
AquaMVS requires several prerequisites (PyTorch, LightGlue, RoMa v2) to be installed first.
**See [INSTALL.md](INSTALL.md) for complete installation instructions.**
Quick summary:
```bash
# 1. Install PyTorch from pytorch.org
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
# 2. Install git-based prerequisites
pip install -r requirements-prereqs.txt
# 3. Install AquaMVS
pip install aquamvs
```
## Documentation
Full documentation is available at [https://aquamvs.readthedocs.io/](https://aquamvs.readthedocs.io/)
Topics include:
- Installation guide
- Configuration reference
- API documentation
- Usage examples
- Extension points for custom workflows
## Citation
If you use AquaMVS in your research, please cite:
```
Lancaster, T. (2026). AquaMVS: Multi-view stereo reconstruction with refractive geometry.
GitHub: https://github.com/tlancaster6/AquaMVS
Example dataset: https://github.com/tlancaster6/AquaMVS/releases/tag/v0.1.0-example-data
```
A Zenodo DOI will be added in a future release.
## License
MIT License. See [LICENSE](LICENSE) for details.
| text/markdown | Tucker Lancaster | null | null | null | MIT | multi-view-stereo, underwater, refraction, computer-vision, depth-estimation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Image Recognition",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Prog... | [] | null | null | >=3.10 | [] | [] | [] | [
"kornia>=0.7.0",
"open3d>=0.18.0",
"opencv-python>=4.6.0",
"numpy>=1.24.0",
"scipy>=1.10.0",
"pyyaml>=6.0",
"matplotlib>=3.7.0",
"aquacal>=0.1.0",
"lightglue",
"romav2",
"pydantic>=2.12.0",
"tqdm>=4.66.0",
"tabulate>=0.9.0",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"... | [] | [] | [] | [
"Repository, https://github.com/tlancaster6/AquaMVS"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:03:13.364174 | aquamvs-1.3.1.tar.gz | 136,509 | 99/ab/c4f3f489c9508f1d5525848fc7d915174f3112c309d3b04f5429a6f309a4/aquamvs-1.3.1.tar.gz | source | sdist | null | false | eeabddf7cf8d944d5b8899e448dfbfb1 | c3cf965c299a2feb73e12b9bb5982b0a1196a4ad605f6adef3de5dbc26a7c74f | 99abc4f3f489c9508f1d5525848fc7d915174f3112c309d3b04f5429a6f309a4 | null | [
"LICENSE"
] | 251 |
2.4 | robotframework-robosapiens | 2.24.6 | Fully localized Robot Framework library for automating the SAP GUI using text selectors. | # RoboSAPiens: SAP GUI Automation for Humans
Fully localized Robot Framework library for automating the SAP GUI using text locators.
Available localizations:
- English
- German
## Requirements
- Scripting must be [enabled in the Application Server](https://help.sap.com/saphelp_aii710/helpdata/en/ba/b8710932b8c64a9e8acf5b6f65e740/content.htm?no_cache=true)
- Scripting must be [enabled in the SAP GUI client](https://help.sap.com/docs/sap_gui_for_windows/63bd20104af84112973ad59590645513/7ddb7c9c4a4c43219a65eee4ca8db001.html?version=760.01&locale=en-US)
## Installation
```bash
pip install robotframework-robosapiens
```
## Usage
Consult the [Documentation](https://imbus.github.io/robotframework-robosapiens/).
| text/markdown | imbus Rheinland GmbH | null | Marduk Bolaños | marduk.bolanos@imbus.de | Apache 2 | robotframework testing test automation sap gui | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: Apache Software License",
"Operating System :: Microsoft :: Windows :: Windows 10",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python... | [
"windows"
] | https://github.com/imbus/robotframework-robosapiens | null | >=3.8.2 | [] | [] | [] | [
"robotframework>=5.0.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T13:03:00.917414 | robotframework_robosapiens-2.24.6-py3-none-any.whl | 18,452,543 | e1/0c/f7f7e9f3770a05d65e33b8a6c6a2bcdfe770f87ebb11bb93f46a94ae868b/robotframework_robosapiens-2.24.6-py3-none-any.whl | py3 | bdist_wheel | null | false | 81555fc16e510738bd74b860829b029a | c048cc13bfa9c3c1721da4bd403840488fa778aabff6a216ba9737895898c4f3 | e10cf7f7e9f3770a05d65e33b8a6c6a2bcdfe770f87ebb11bb93f46a94ae868b | null | [] | 115 |
2.4 | certReport | 3.4 | This script supports the reporting of Authenticode Certificates by reducing the effort on individuals to report. | # CertReport
This tool reduces the effort required to report Authenticode certificates. It asks for the smallest amount of input from the reporter while providing the certificate authority with most of the information they need to make a decision. When possible, it is recommended to augment the report with your own findings to help the certificate provider understand what suspicious indicators you found.
As of version 2, we have added support for the VirusTotal API, along with the additional functions needed to use it.
The default behavior of certReport is to query MalwareBazaar, which does not require an API key.
In version 3, we have added a SQLite database which stores information about the reports. This can be used for personal reference but also augments the report. See information in the database section below for more information!
## Installing
Use pip! `pip install certReport` or `pip3 install certReport`
## Usage
**Note: In version 2, it is required to provide the `--hash` (or `-#`) switch**
Here is an example:
Calling the script and passing in a SHA256 like this:<br>
`certReport --hash 89dc50024836f9ad406504a3b7445d284e97ec5dafdd8f2741f496cac84ccda9`
Will print the following information to the console:
```
---------------------------------
Greetings,
We identified a malware signed with a SSL.com EV Code Signing Intermediate CA RSA R3 certificate.
The malware sample is available on MalwareBazaar here: https://bazaar.abuse.ch/sample/89dc50024836f9ad406504a3b7445d284e97ec5dafdd8f2741f496cac84ccda9
Here are the signature details:
Name: A.P.Hernandez Consulting s.r.o.
Issuer: SSL.com EV Code Signing Intermediate CA RSA R3
Serial Number: 2941d5f8758501f9dbc4ba158058c3b5
SHA256 Thumbprint: a982917ba6de9588f0f7ed554223d292524e832c1621acae9ad11c0573df54a5
Valid From: 2024-01-25T16:51:40Z
Valid Until: 2025-01-24T16:51:40Z
The malware was tagged as exe, Pikabot and signed.
MalwareBazaar submitted the file to multiple public sandboxes, the links to the sandbox results are below:
Sandbox / Malware Family / Verdict / Analysis URL
Intezer None unknown https://analyze.intezer.com/analyses/c4915ef4-198f-4aba-81ed-81b29cd4dce6?utm_source=MalwareBazaar
Triage pikabot 10 / 10 https://tria.ge/reports/240222-pqlqkshb2w/
VMRay Pikabot malicious https://www.vmray.com/analyses/_mb/89dc50024836/report/overview.html
Please let us know if you have any questions.
------------------------
Send the above message to the certificate provider.
This report should be sent to SSL.com: https://ssl.com/revoke
```
This information is to be provided to the Certificate Issuer using the appropriate abuse report channels (such as email or website). The appropriate channel is provided at the end of the report (see above).
## Using VirusTotal
In version 2, it became possible to query VirusTotal. To use VirusTotal, first set up your API key using the appropriate method for your operating system:
```
On Linux:
echo "export VT_API_KEY=your_api_key_here" >> ~/.bashrc
source ~/.bashrc
On Windows:
setx VT_API_KEY "your_api_key"
On MacOS:
echo "export VT_API_KEY=your_api_key_here" >> ~/.zprofile
source ~/.zprofile
```
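Once the variable is exported, any child process (including certReport) can read it. As a quick sanity check, a small Python snippet can confirm the key is visible without printing the key itself (the helper function and its messages here are just for illustration; only the variable name `VT_API_KEY` comes from certReport):

```python
import os

def vt_key_status(env=os.environ):
    """Return a short status string for the VT_API_KEY variable
    without leaking the key itself."""
    key = env.get("VT_API_KEY")
    if key is None:
        return "VT_API_KEY is not set; VirusTotal queries will fail."
    return f"VT_API_KEY is set ({len(key)} characters)."

print(vt_key_status())
```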
Once the API key is configured as an environment variable the following command will generate a report:
```
certReport --hash 89dc50024836f9ad406504a3b7445d284e97ec5dafdd8f2741f496cac84ccda9 --service virustotal
```
Alternatively, the switches can be simplified:
```
certReport -# 89dc50024836f9ad406504a3b7445d284e97ec5dafdd8f2741f496cac84ccda9 -s VT
```
Both commands will return the following report:
```
---------------------------------
Greetings,
We identified a malware signed with a SSL.com EV Code Signing Intermediate CA RSA R3 certificate.
The malware sample is available on VirusTotal here: https://www.virustotal.com/gui/file/89dc50024836f9ad406504a3b7445d284e97ec5dafdd8f2741f496cac84ccda9/detection
Here are the signature details:
Name: A.P.Hernandez Consulting s.r.o.
Issuer: SSL.com EV Code Signing Intermediate CA RSA R3
Serial Number: 56 B6 29 CD 34 BC 78 F6
Thumbprint: 743AF0529BD032A0F44A83CDD4BAA97B7C2EC49A
Valid From: 2017-05-31 18:14:37
Valid Until: 2042-05-30 18:14:37
The malware was tagged as a peexe, long-sleeps, spreader, detect-debug-environment, service-scan, overlay, revoked-cert, signed and checks-user-input.
The malware was detected by 50 out of 74 antivirus engines.
The malware was classified as trojan by 30 detection engines.
The file was flagged as pikabot by 23 detection engines, zusy by 6 detection engines and gdfvt by 2 detection engines
This file was found during our investigation and had the following suspicious indicators:
- The file triggered the following high IDS rules:
- ET CNC Feodo Tracker Reported CnC Server group 1
- ET CNC Feodo Tracker Reported CnC Server group 2
Please let us know if you have any questions.
------------------------
Send the above message to the certificate provider.
This report should be sent to SSL.com: https://ssl.com/revoke
```
As stated previously, it is recommended to add additional bullet points near the end of the report. These should include findings from your own investigation, which can help provide decision support for the certificate provider.
## Pushing reports to public database.
To push reports to the Cert Graveyard, use the option `-p`. It is required to have a CertGraveyard API key set as an environment variable named "CERT_GRAVEYARD_API". The API key can be obtained from your profile page when logged into TheCertGraveyard.org
```
Please set your CertGraveyard API key by running the following:
On Linux:
echo "export CERT_GRAVEYARD_API=your_api_key_here" >> ~/.bashrc
source ~/.bashrc
On Windows:
setx CERT_GRAVEYARD_API "your_api_key"
On MacOS:
echo "export CERT_GRAVEYARD_API=your_api_key_here" >> ~/.zprofile
source ~/.zprofile
```
## Database
In version 3, a SQLite database was introduced that stores information about all certs processed with certReport. This database contains most of the details that appear in the report. When running the command, the user can use the option `-t` to supply a malware family name. If they do, that name is added to the database, and the database is checked for other instances of the same malware name; when there are matches, the report is augmented with a count of how many times that malware has been reported. For example, it could print a message like the following near the bottom of the report:
```
We have reported this same malware to SSL.com 2 times. We have reported the malware to other providers 10 times.
```
As of the current version, the database needs to be viewed or managed with a SQLite database viewer. It cannot be viewed or managed within the program.
NOTE: If the user runs the application with the same hash, the first instance of the hash will be removed from the database and replaced with the new information.
### Where is it?
The database is created in a folder in the user's home directory. The folder will be named "certReport" and the database will be named "certReport.db".
## Contributing
Please feel free to suggest changes to the script for additional certificate provider email addresses or methods of reporting. Half of the battle in reporting is finding where certificates should be submitted.
# Why Report?
Starting in 2018, the majority of certificates used to sign malware were no longer stolen but instead issued directly to impostors (this case is argued in a scholarly article here: http://users.umiacs.umd.edu/~tdumitra/papers/WEIS-2018.pdf). I call these "Impostor Certs".
In 2023, I published my research into 50 certificates used by one actor. My findings confirmed that certificates are used to sign multiple malware families: https://squiblydoo.blog/2023/05/12/certified-bad/.
In 2024, I published an article on Impostor certs, after having revoked 100 certificates used to sign the same malware, that article can be read here: https://squiblydoo.blog/2024/05/13/impostor-certs/.
The TL;DR is that multiple actors use the same certificate, so reporting a certificate raises the cost of signing for all threat actors and can impact multiple malware campaigns.
| text/markdown | null | Squiblydoo <Squiblydoo@pm.me> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.31.0",
"pytest>=6.2.4; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Squiblydoo/certReport",
"Bug Tracker, https://github.com/Squiblydoo/certReport/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T13:02:25.884601 | certreport-3.4.tar.gz | 12,756 | a3/2e/3274cf8796a7941154ccd90f4734e6aebfceac31972473c1266ced8faccc/certreport-3.4.tar.gz | source | sdist | null | false | e89cb4631aee2ab6f950bd5a546161b4 | 1fa5e5d406197b05655022891542a19f7eba5153759ab2524cd8ed402b8dc3ed | a32e3274cf8796a7941154ccd90f4734e6aebfceac31972473c1266ced8faccc | null | [] | 0 |
2.4 | sm-bluesky | 1.0.1 | Bluesky code for Diamond's surface and magnetic materials beamline. | [](https://github.com/DiamondLightSource/sm-bluesky/actions/workflows/ci.yml)
[](https://codecov.io/gh/DiamondLightSource/sm-bluesky)
[](https://pypi.org/project/sm_bluesky)
[](https://www.apache.org/licenses/LICENSE-2.0)
# sm_bluesky
This module is a collection of custom Bluesky plans and utilities specific to
the Materials and Magnetism Village (MMG) and the
Surface and Structure Interface Village (SSG)
of Diamond Light Source.
List of supported beamlines:
- i05 Angular Resolved Photoelectron Emission Spectroscopy
- i06 Nanoscience
- i07 Surface and Interface Diffraction
- i09 Surface and Interface Structural Analysis
- i10 Advanced Dichroism Experiments
- i16 Materials and Magnetism
- i17 Coherent Soft X-ray Imaging and Diffraction (CSXID) (Diamond II Flagship beamline)
- i21 Resonant Inelastic soft X-ray Scattering
- b07 Versatile Soft X-ray (VerSoX) Beamline
- p99 Mapping Test Rig
- p60
- k07 Future of b07-1 beamline
Core beamline configurations and device logic can be found in <https://github.com/DiamondLightSource/dodal>
Source | <https://github.com/DiamondLightSource/sm-bluesky>
:---: | :---:
PyPI | `pip install sm_bluesky`
Documentation | <https://diamondlightsource.github.io/sm-bluesky>
Releases | <https://github.com/DiamondLightSource/sm-bluesky/releases>
<!-- README only content. Anything below this line won't be included in index.md -->
See https://diamondlightsource.github.io/sm-bluesky for more detailed documentation.
| text/markdown | null | Raymond Fan <raymond.fan@diamond.ac.uk> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"bluesky",
"dls-dodal>=2.0.0",
"ophyd-async[sim]",
"scanspec"
] | [] | [] | [] | [
"GitHub, https://github.com/DiamondLightSource/sm-bluesky"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:01:55.580107 | sm_bluesky-1.0.1.tar.gz | 725,388 | 2d/bf/b0601cba017d843d1b2275e3a0fb17776b13ffbbc2ba2dd2aee49cfed574/sm_bluesky-1.0.1.tar.gz | source | sdist | null | false | 8401796d7a8106ed286550bfe43fbbd7 | c68d48132ac76fd55ee0401d55d48af8bc09709a59f40200beb1012bdc09c6e2 | 2dbfb0601cba017d843d1b2275e3a0fb17776b13ffbbc2ba2dd2aee49cfed574 | null | [
"LICENSE"
] | 228 |
2.4 | interposition | 0.5.0 | Protocol-agnostic interaction interposition with lifecycle hooks for record, replay, and control. | # interposition
Protocol-agnostic interaction interposition with lifecycle hooks for record, replay, and control.
## Overview
Interposition is a Python library for replaying recorded interactions. Unlike VCRpy or other HTTP-specific tools, **Interposition does not automatically hook into network libraries**.
Instead, it provides a **pure logic engine** for storage, matching, and replay. You write the adapter for your specific target (HTTP client, database driver, IoT message handler), and Interposition handles the rest.
**Key Features:**
- **Protocol-agnostic**: Works with any protocol (HTTP, gRPC, SQL, Pub/Sub, etc.)
- **Type-safe**: Full mypy strict mode support with Pydantic v2
- **Immutable**: All data structures are frozen Pydantic models
- **Serializable**: Built-in JSON/YAML serialization for cassette persistence
- **Memory-efficient**: O(1) lookup with fingerprint indexing
- **Streaming**: Generator-based response delivery
- **Multi-mode**: Supports replay, record, and auto modes
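The "O(1) lookup with fingerprint indexing" feature can be illustrated in isolation. This is a generic sketch of the technique, not interposition's internal implementation: each request is reduced to a hashable fingerprint that keys a dict, so matching is a single lookup rather than a scan of the cassette:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    protocol: str
    action: str
    target: str
    body: bytes = b""

def fingerprint(req: Request) -> tuple:
    # A deterministic, hashable summary of the matched request fields.
    return (req.protocol, req.action, req.target, req.body)

# Build the index once: fingerprint -> recorded response body.
cassette = {
    fingerprint(Request("http", "GET", "/users/42")): b'{"name": "Alice"}',
}

# Replay is a single dict lookup, O(1) in the number of interactions.
response = cassette[fingerprint(Request("http", "GET", "/users/42"))]
```

The same idea generalizes to any protocol: as long as the adapter can serialize a request into a stable tuple of fields, lookup cost stays constant regardless of cassette size.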
## Architecture
Interposition sits behind your application's data access layer. You provide the "Adapter" that captures live traffic or requests replay from the Broker.
```text
+-------------+ +------------------+ +---------------+
| Application | <--> | Your Adapter | <--> | Interposition |
+-------------+ +------------------+ +---------------+
| |
(Traps calls) (Manages)
|
[Cassette]
```
## Installation
```bash
pip install interposition
```
## Practical Integration (Pytest Recipe)
The most common use case is Interposition as a test fixture. Here is a production-ready recipe for `pytest`:
```python
import pytest
from interposition import Broker, Cassette, InteractionRequest
@pytest.fixture
def cassette_broker():
# Load cassette from a JSON file (or create one programmatically)
with open("tests/fixtures/my_cassette.json", "rb") as f:
cassette = Cassette.model_validate_json(f.read())
return Broker(cassette)
def test_user_service(cassette_broker, monkeypatch):
# 1. Create your adapter (mocking your actual client)
def mock_fetch(url):
request = InteractionRequest(
protocol="http",
action="GET",
target=url,
headers=(),
body=b"",
)
# Delegate to Interposition
chunks = list(cassette_broker.replay(request))
return chunks[0].data
# 2. Inject the adapter
monkeypatch.setattr("my_app.client.fetch", mock_fetch)
# 3. Run your test
from my_app import get_user_name
assert get_user_name(42) == "Alice"
```
## Protocol-Agnostic Examples
Interposition shines where HTTP-only tools fail.
### SQL Database Query
```python
request = InteractionRequest(
protocol="postgres",
action="SELECT",
target="users_table",
headers=(),
body=b"SELECT id, name FROM users WHERE id = 42",
)
# Replay returns: b'[(42, "Alice")]'
```
### MQTT / PubSub Message
```python
request = InteractionRequest(
protocol="mqtt",
action="subscribe",
target="sensors/temp/room1",
headers=(("qos", "1"),),
body=b"",
)
# Replay returns stream of messages: b'24.5', b'24.6', ...
```
## Usage Guide
### Manual Construction (Quick Start)
If you need to build interactions programmatically (e.g., for seeding tests):
```python
from interposition import (
Broker,
Cassette,
Interaction,
InteractionRequest,
ResponseChunk,
)
# 1. Define the Request
request = InteractionRequest(
protocol="api",
action="query",
target="users/42",
headers=(),
body=b"",
)
# 2. Define the Response
chunks = (
ResponseChunk(data=b'{"id": 42, "name": "Alice"}', sequence=0),
)
# 3. Create Interaction & Cassette
interaction = Interaction(
request=request,
fingerprint=request.fingerprint(),
response_chunks=chunks,
)
cassette = Cassette(interactions=(interaction,))
# 4. Replay
broker = Broker(cassette=cassette)
response = list(broker.replay(request))
```
### Persistence & Serialization
Interposition models are Pydantic v2 models, making serialization trivial.
```python
# Save to JSON
with open("cassette.json", "w") as f:
f.write(cassette.model_dump_json(indent=2))
# Load from JSON
with open("cassette.json") as f:
cassette = Cassette.model_validate_json(f.read())
# Generate JSON Schema
schema = Cassette.model_json_schema()
```
### Streaming Responses
For large files or streaming protocols, responses are yielded lazily:
```python
# The broker returns a generator
for chunk in broker.replay(request):
print(f"Received chunk: {len(chunk.data)} bytes")
```
### Broker Modes
The `Broker` supports three modes via the `mode` parameter:
| Mode | Behavior |
|------|----------|
| `replay` | Default. Returns recorded responses only. Raises `InteractionNotFoundError` on cache miss. |
| `record` | Always forwards to live responder and records. Ignores existing cassette entries. |
| `auto` | Returns recorded response if available; otherwise forwards to live and records. |
The `BrokerMode` type alias is available for type hints:
```python
from interposition import BrokerMode
mode: BrokerMode = "auto"
```
### Live Responder
For `record` and `auto` modes, you must provide a `live_responder` callable that forwards requests to your actual backend:
```python
from interposition import (
Broker,
Cassette,
InteractionRequest,
ResponseChunk,
)
from collections.abc import Iterable
def my_live_responder(request: InteractionRequest) -> Iterable[ResponseChunk]:
"""Forward request to actual backend and yield response chunks."""
# Your actual implementation here
response = your_http_client.request(
method=request.action,
url=request.target,
headers=dict(request.headers),
data=request.body,
)
yield ResponseChunk(data=response.content, sequence=0)
```
The `LiveResponder` type alias is available:
```python
from interposition.services import LiveResponder
```
### Record Mode
Use `record` mode to capture new interactions:
```python
# Start with empty cassette
cassette = Cassette(interactions=())
broker = Broker(
cassette=cassette,
mode="record",
live_responder=my_live_responder,
)
# All requests are forwarded and recorded
response = list(broker.replay(request))
# Save the updated cassette
with open("cassette.json", "w") as f:
f.write(broker.cassette.model_dump_json(indent=2))
```
### Auto Mode
Use `auto` mode for hybrid workflows (replay if available, record if not):
```python
# Load existing cassette (may be empty or partial)
with open("cassette.json") as f:
cassette = Cassette.model_validate_json(f.read())
broker = Broker(
cassette=cassette,
mode="auto",
live_responder=my_live_responder,
)
# Returns recorded response if exists, otherwise forwards and records
response = list(broker.replay(request))
```
### Cassette Store
For automatic cassette persistence during recording, use a `CassetteStore`. The `CassetteStore` protocol defines a simple interface for loading and saving cassettes:
```python
from interposition import CassetteStore
class MyCassetteStore:
"""Custom store implementation."""
def load(self) -> Cassette:
"""Load cassette from storage."""
...
def save(self, cassette: Cassette) -> None:
"""Save cassette to storage."""
...
```
When a `cassette_store` is provided to the `Broker`, it automatically saves the cassette after each recorded interaction.
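Because `CassetteStore` is a structural protocol, any object with matching `load`/`save` methods qualifies. A hypothetical in-memory store, handy for tests that should never touch disk:

```python
class InMemoryCassetteStore:
    """Hypothetical in-memory store satisfying the CassetteStore
    protocol (illustrative; not part of the library)."""

    def __init__(self, cassette):
        self._cassette = cassette

    def load(self):
        return self._cassette

    def save(self, cassette) -> None:
        self._cassette = cassette
```

Pass an instance as `cassette_store=` when constructing the `Broker`, exactly as with the file-based store.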
### JsonFileCassetteStore
A built-in file-based cassette store using JSON format:
```python
from pathlib import Path
from interposition import Broker, Cassette, JsonFileCassetteStore
# Create store pointing to a JSON file
store = JsonFileCassetteStore(Path("cassettes/my_test.json"))
# Load existing cassette (raises CassetteLoadError if not exists)
cassette = store.load()
# Or start with empty cassette
cassette = Cassette(interactions=())
# Create broker with automatic persistence
broker = Broker(
cassette=cassette,
mode="record",
live_responder=my_live_responder,
cassette_store=store, # Auto-saves after each recording
)
# After replay, cassette is automatically saved to file
response = list(broker.replay(request))
```
By default, `load()` raises `CassetteLoadError` if the file doesn't exist. Use `create_if_missing=True` to return an empty cassette instead — useful for record/auto workflows where the file is created on first save:
```python
store = JsonFileCassetteStore(
Path("cassettes/my_test.json"),
create_if_missing=True,
)
cassette = store.load() # Returns empty Cassette if file doesn't exist
```
You can also use `Broker.from_store()` to load the cassette and create a broker in one step:
```python
store = JsonFileCassetteStore(
Path("cassettes/my_test.json"),
create_if_missing=True,
)
broker = Broker.from_store(store, mode="auto", live_responder=my_live_responder)
```
The `JsonFileCassetteStore` creates parent directories automatically when saving.
If saving fails, the error is propagated and response streaming stops (fail-fast).
### Error Handling
All interposition exceptions inherit from `InterpositionError`, allowing you to catch all domain errors with a single handler:
```python
from interposition import InterpositionError
try:
broker.replay(request)
except InterpositionError as e:
print(f"Interposition error: {e}")
```
**InteractionNotFoundError**: Raised when no matching interaction exists (in `replay` mode) or when `auto` mode has a cache miss without a configured `live_responder`:
```python
from interposition import InteractionNotFoundError
try:
broker.replay(unknown_request)
except InteractionNotFoundError as e:
print(f"Not recorded: {e.request.target}")
```
**LiveResponderRequiredError**: Raised when `record` mode is used without a `live_responder`:
```python
from interposition import LiveResponderRequiredError
broker = Broker(cassette=cassette, mode="record") # No live_responder!
try:
broker.replay(request)
except LiveResponderRequiredError as e:
print(f"live_responder required for {e.mode} mode")
```
**InteractionValidationError**: Raised when an `Interaction` fails validation (e.g., fingerprint mismatch or invalid response chunk sequence):
```python
from interposition import Interaction, InteractionValidationError
try:
# This will fail: fingerprint doesn't match request
interaction = Interaction(
request=request,
fingerprint=wrong_fingerprint, # Mismatch!
response_chunks=chunks,
)
except InteractionValidationError as e:
print(f"Validation failed: {e}")
```
**CassetteLoadError**: Raised when `JsonFileCassetteStore.load()` fails (file not found, permission denied, corrupted JSON, etc.):
```python
from pathlib import Path
from interposition import CassetteLoadError, JsonFileCassetteStore
store = JsonFileCassetteStore(Path("cassettes/missing.json"))
try:
cassette = store.load()
except CassetteLoadError as e:
print(f"Failed to load from {e.path}: {e.__cause__}")
```
**CassetteSaveError**: Raised when `JsonFileCassetteStore.save()` fails due to I/O errors (permission denied, disk full, etc.):
```python
from pathlib import Path
from interposition import CassetteSaveError, JsonFileCassetteStore
store = JsonFileCassetteStore(Path("/readonly/cassette.json"))
try:
store.save(cassette)
except CassetteSaveError as e:
print(f"Failed to save to {e.path}: {e.__cause__}")
```
## Version
Access the package version programmatically:
```python
from interposition import __version__
# Compare as integer tuples; plain string comparison would mis-order
# versions such as "0.10.0" < "0.2.0".
if tuple(map(int, __version__.split("."))) < (0, 2, 0):
print("Auto mode is not supported")
else:
print("Auto mode is supported")
```
## Development
### Prerequisites
- Python 3.10+
- [uv](https://github.com/astral-sh/uv) (recommended)
### Setup & Testing
```bash
# Clone and install
git clone https://github.com/osoekawaitlab/interposition.git
cd interposition
uv pip install -e . --group=dev
# Run tests
nox -s tests
```
## License
MIT
| text/markdown | osoken | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: POSIX",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic<3.0,>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/osoekawaitlab/interposition",
"Repository, https://github.com/osoekawaitlab/interposition"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:01:54.221543 | interposition-0.5.0.tar.gz | 15,735 | ec/64/e90190d3f4f522a55868fb618eb94fc48e63b669ee2078ad1f93fe07f1cc/interposition-0.5.0.tar.gz | source | sdist | null | false | c6210630f3efb7be120283f3a9dcc55b | 6641333fe7146abf3dca01a37b0ff70366b60b4670c5760a329d0b4cf7386ca8 | ec64e90190d3f4f522a55868fb618eb94fc48e63b669ee2078ad1f93fe07f1cc | MIT | [
"LICENSE"
] | 231 |
2.4 | fixpi | 1.1.1 | LLM-Powered Remote OS Repair Agent — SSH + LLM diagnostic tool for any Linux device | # fixpi — Remote OS Repair Agent
SSH into any Linux device, describe the problem, let the LLM diagnose and fix it.
```bash
fixpi -p "Docker won't start, cgroup error" # one-liner
fixpi run -f /var/log/syslog -p "service crashing" # attach log
fixpi run -d rpi3 -l groq -p "display not working" # device + LLM profile
fixpi serve # REST/WS API on :7771
```
## Quick Start
```bash
cd fixPI
cp .env.example .env
# Set: RPI_HOST, RPI_USER, RPI_PASSWORD, LLM_MODEL, GROQ_API_KEY
pip install -e ".[clickmd]"
fixpi # interactive menu
```
```bash
# .env minimum:
RPI_HOST=raspberrypi.local
RPI_USER=tom
RPI_PASSWORD=secret
LLM_MODEL=groq/llama-3.3-70b-versatile
GROQ_API_KEY=gsk_...
```
## CLI Reference
```text
fixpi [OPTIONS] COMMAND [ARGS]
Global options:
-p, --prompt TEXT Problem to fix (runs immediately without menu)
-f, --file PATH Log/error file to attach as context
Commands:
run Diagnose & fix via LLM agent
diagnose Diagnose only — no changes applied
serve Start REST/WebSocket API server
config Configure LLM provider & SSH wizard
test Test LLM connection
list-models Show all supported LLM providers & models
list-devices Show configured device profiles
run / diagnose options:
-p, --prompt TEXT Problem description
-f, --file PATH Log file to include as LLM context
-d, --device NAME Device profile (~/.fixpi/devices/NAME.env)
-l, --llm NAME LLM profile (~/.fixpi/llm/NAME.env)
serve options:
--host TEXT Bind host [default: 0.0.0.0]
--port INT Bind port [default: 7771]
--reload Dev mode
```
### Examples
```bash
# Fix any problem:
fixpi -p "pydantic-core fails to install on Python 3.13"
fixpi run -p "Docker not starting" -f /var/log/syslog
# Diagnose without changes:
fixpi diagnose -p "why is systemd service crashing?"
# Multi-device:
fixpi run -d rpi3 -p "display black after reboot"
fixpi run -d rpi4-prod -l gemini -p "OOM killer active"
# Pipe log from stdin:
journalctl -u docker --since '1h ago' | fixpi run -f /dev/stdin
# REST API:
fixpi serve &
curl -X POST http://localhost:7771/run \
-d '{"prompt": "fix display", "device": "rpi3"}' \
-H 'Content-Type: application/json'
```
## Device & LLM Profiles
```bash
# Create device profiles in ~/.fixpi/devices/NAME.env
mkdir -p ~/.fixpi/devices ~/.fixpi/llm
cat > ~/.fixpi/devices/rpi3.env << 'EOF'
RPI_HOST=192.168.1.100
RPI_USER=tom
RPI_PASSWORD=secret
EOF
cat > ~/.fixpi/llm/groq.env << 'EOF'
LLM_MODEL=groq/llama-3.3-70b-versatile
GROQ_API_KEY=gsk_...
EOF
# Use:
fixpi run -d rpi3 -l groq -p "fix problem"
fixpi list-devices
```
## REST / WebSocket API
```bash
pip install 'fixpi[server]' # fastapi + uvicorn
fixpi serve # http://localhost:7771
```
| Method | Endpoint | Description |
| ------ | -------- | ----------- |
| `POST` | `/run` | Start repair job → returns `job_id` |
| `POST` | `/diagnose` | Diagnose only |
| `GET` | `/status/{id}` | Job status + result |
| `GET` | `/jobs` | List all jobs |
| `GET` | `/devices` | List device profiles |
| `GET` | `/models` | List LLM providers/models |
| `WS` | `/ws/run` | Streaming via WebSocket |
## LLM Providers
Any [litellm](https://docs.litellm.ai/docs/providers)-compatible provider:
| Provider | Model example | Env key |
| -------- | ------------ | ------- |
| **Groq** | `groq/llama-3.3-70b-versatile` | `GROQ_API_KEY` |
| **OpenRouter** | `openrouter/google/gemma-3-27b-it:free` | `OPENROUTER_API_KEY` |
| **Anthropic** | `anthropic/claude-3-5-sonnet-20241022` | `ANTHROPIC_API_KEY` |
| **OpenAI** | `gpt-4o-mini` | `OPENAI_API_KEY` |
| **Gemini** | `gemini/gemini-2.0-flash-exp` | `GEMINI_API_KEY` |
| **Ollama** | `ollama/llama3.2` | *(none — local)* |
> **Note**: `groq/groq/llama-...` is automatically normalized to `groq/llama-...`
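The duplicated-prefix normalization can be sketched in a few lines — a conceptual illustration, not fixpi's actual implementation:

```python
def normalize_model(model: str) -> str:
    """Collapse a duplicated provider prefix, e.g.
    'groq/groq/llama-...' -> 'groq/llama-...' (conceptual sketch)."""
    parts = model.split("/")
    if len(parts) >= 2 and parts[0] == parts[1]:
        parts.pop(0)  # drop the repeated provider segment
    return "/".join(parts)
```

Model strings without a duplicated prefix (including OpenRouter's multi-segment names) pass through unchanged.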
## What it fixes
Any Linux system problem accessible via SSH. Verified examples:
| Problem | Command |
| ------- | ------- |
| pydantic-core build fail (Python 3.13) | `fixpi run -p "pip install fails" -f /tmp/pip.log` |
| Docker cgroups error | `fixpi run -p "docker not starting"` |
| WaveShare DSI not detected | `fixpi run` (display mode) |
| Frontend build fail (npm/tsc) | `fixpi run -p "npm build fails" -f /tmp/build.log` |
| Systemd service crash | `fixpi run -p "service failing" -f /tmp/journal.log` |
| OOM killer active | `fixpi run -p "OOM killer killing app"` |
31 documented RPi examples: [examples/](examples/)
### Verified: RPi3 + WaveShare 7.9" DSI (kernel 6.12.x)
```ini
# /boot/firmware/config.txt — working config
dtoverlay=vc4-kms-v3d
# display_auto_detect=1 ← must be disabled
dtoverlay=vc4-kms-dsi-waveshare-panel,7_9_inch
hdmi_force_hotplug=1
```
*fixpi discovered this fix automatically by analysing kernel logs and dmesg.*
## Architecture
```text
fixPI/
├── pyproject.toml
├── Makefile
├── .env.example
├── examples/ ← 31 documented RPi problems + integration guide
│ └── integration/ ← bash/Python/REST/WS/CI examples
├── tests/
│ ├── test_agent.py
│ ├── test_cli.py
│ ├── test_diagnostics.py
│ ├── test_llm_agent.py
│ └── e2e/ ← Docker-based SSH e2e tests
└── fixpi/
├── __main__.py ← CLI entry point (clickmd)
├── cli.py ← interactive menu + _load_profiles()
├── agent.py ← LLM decision loop (generic + display mode)
├── server.py ← FastAPI REST/WS server (fixpi serve)
├── llm_agent.py ← multi-provider LLM via litellm
├── ssh_client.py ← paramiko SSH + reboot/reconnect
└── diagnostics.py ← system state collector
```
## Agent Modes
| Mode | How | Behaviour |
| ---- | --- | --------- |
| **Display** | `fixpi run` | Decision tree → LLM, targets display config |
| **Generic** | `fixpi run -p "..."` | Skips display tree, LLM gets your problem |
| **Log** | `fixpi run -f file` | Log content injected into LLM context |
| **Diagnose** | `fixpi diagnose` | Analysis only, no changes applied |
## Integration
See [examples/integration/README.md](examples/integration/README.md) for:
- Bash pipe / one-liners
- Python SDK
- REST API via curl
- WebSocket streaming (Node.js + Python)
- c2004 installation fallback
- Multi-device management
- CI/CD pipeline example
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
## Author
Created by **Tom Sapletta** - [tom@sapletta.com](mailto:tom@sapletta.com)
| text/markdown | null | Tom Sapletta <tom@sapletta.com> | null | Tom Sapletta <tom@sapletta.com> | null | raspberry-pi, ssh, llm, diagnostics, repair, display, devops, automation, rest-api | [
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"paramiko>=3.4.0",
"litellm>=1.40.0",
"python-dotenv>=1.0.0",
"click>=8.0",
"rich>=13.7.0",
"clickmd>=1.0.0; extra == \"clickmd\"",
"fastapi>=0.115.0; extra == \"server\"",
"uvicorn[standard]>=0.32.0; extra == \"server\"",
"websockets>=12.0; extra == \"server\"",
"pytest>=7.0.0; extra == \"dev\"",... | [] | [] | [] | [
"Homepage, https://github.com/zlecenia/c2004/tree/main/fixPI",
"Repository, https://github.com/zlecenia/c2004/tree/main/fixPI",
"Documentation, https://github.com/zlecenia/c2004/tree/main/fixPI#readme",
"Bug Tracker, https://github.com/zlecenia/c2004/issues",
"Changelog, https://github.com/zlecenia/c2004/tr... | twine/6.2.0 CPython/3.13.7 | 2026-02-19T13:01:27.744609 | fixpi-1.1.1.tar.gz | 38,338 | ae/8f/b9dafcda3f4733253639fdf8aaf690297a0a9ffa754e737890d3c3fdefc0/fixpi-1.1.1.tar.gz | source | sdist | null | false | 5b59c34144910664b0529037083f908f | 0b03798a4ed8ad45af12a5b848cf9840b5eb09d2469fb5194591e996ff67ddea | ae8fb9dafcda3f4733253639fdf8aaf690297a0a9ffa754e737890d3c3fdefc0 | Apache-2.0 | [
"LICENSE"
] | 238 |
2.4 | ccdfits | 1.2.0 | Utilities to work with .fits files that were taken with CCDs and Skipper CCDs | # CCDFits
This package provides utilities to work with .fits files that were taken with CCDs. It provides a FITS class to easily view and analyse images, along with useful functions to process them. It is particularly useful for Skipper CCD images, which can be calibrated by fitting gaussians to the zero- and one-electron peaks.
## Installation
### Pre-requisites
This library has been developed for Python 3.
`ccdfits` requires the following packages:
* numpy
* scipy
* astropy
* matplotlib
In addition, if you intend to use `ccdfits.processing`, you will also need to install:
* scikit-learn (for `cal2phys`)
* scikit-image (for `generateMask`)
### Install ccdfits via pip:
`python -m pip install ccdfits`
### Installing the latest public version (may differ from the one on PyPI)
`python -m pip install git+https://gitlab.com/nicolaseavalos/ccdfits.git`
## Usage
The following example shows how to load and view a .fits image. Replace `'ccd-image.fits'` with a string indicating the full or relative path to the image you are trying to load.
```python
# imports
from ccdfits import Fits
import matplotlib.pyplot as plt
plt.ion()

# set the image path
fits_path = 'ccd-image.fits'

# load and view the image
img = Fits(fits_path)
img.view()
```
Complete documentation is in progress.
## Changelog
### Version 1.1
- Added more options to event extraction
### Version 1.0
- FITS class renamed to Fits
- Some documentation inside the code
- Various changes throughout the library
### Hotfix 0.3.1
- Optimized for installation via pip
### Version 0.3.0
- Added `skp2raw_lowmem` function to `ccdfits.processing`
### Hotfix 0.2.1
- Added `subtract_baseline` option to `ccdfits.processing.raw2proc`
### Version 0.2.0
- Added `skp2raw` function to `ccdfits.processing` | text/markdown | null | Nicolás Avalos <nicolaseavalos@gmail.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"astropy",
"matplotlib",
"numpy",
"scipy",
"pandas; extra == \"pandas\"",
"scikit-image; extra == \"scikit-image\"",
"scikit-learn; extra == \"scikit-learn\""
] | [] | [] | [] | [
"GitLab, https://gitlab.com/nicolaseavalos/ccdfits"
] | Hatch/1.16.3 cpython/3.14.3 HTTPX/0.28.1 | 2026-02-19T13:01:02.924718 | ccdfits-1.2.0.tar.gz | 49,802 | 65/4e/38f0e6e0205935175210e6b631ccaeade9a37026391e4df7a10c0dd09a67/ccdfits-1.2.0.tar.gz | source | sdist | null | false | 4aba0f8980717fba01d563f88c9ac2c6 | 2445111ed6e604e058c2c8f4ffe1ffebbcffc9d1f423e70337096df604d99d87 | 654e38f0e6e0205935175210e6b631ccaeade9a37026391e4df7a10c0dd09a67 | MIT | [
"LICENSE"
] | 244 |
2.4 | octave-mcp | 1.3.0 | OCTAVE MCP Server - Lenient-to-Canonical OCTAVE pipeline with schema validation and generative holographic contracts | # OCTAVE MCP Server
[](https://opensource.org/licenses/Apache-2.0)
[](https://www.python.org/downloads/)
Production-grade MCP server implementing the **OCTAVE v6** document protocol: **Generative Holographic Contracts**.
## Table of Contents
- [For AI Agents](#for-ai-agents)
- [What It Does](#what-it-does)
- [Quick Start](#quick-start)
- [Installation](#installation)
- [MCP Tools](#mcp-tools)
- [When OCTAVE Helps](#when-octave-helps)
- [Documentation](#documentation)
- [Contributing](#contributing)
---
## For AI Agents
**Reading this section makes you OCTAVE-literate.** The ~200-token primer below is all you need to write valid OCTAVE:
```octave
===OCTAVE_LITERACY_PRIMER===
META:
TYPE::PRIMER
VERSION::"6.0.0"
TOKENS::~200
§1::ESSENCE
PURPOSE::"Write OCTAVE"
OCTAVE::"Semantic DSL for LLMs"
STRUCTURE::KEY::value,[list],indent_2
§2::SYNTAX
::→assign
→→flow
⊕→synthesis
⇌→tension
§3::ONE_SHOT
IN::"flow from A to B"
OUT::A→B
§4::VALIDATE
MUST::[valid_OCTAVE,preserve_§_names,"===END==="]
===END===
```
**Project Context** (for working on this codebase):
```octave
===AGENT_BOOTSTRAP===
QUALITY_GATES::[mypy,ruff,black,pytest]
DEV_SETUP::docs/guides/development-setup.md
SPECS::src/octave_mcp/resources/specs/
PRIMERS::src/octave_mcp/resources/primers/
IMMUTABLES::[
I1::SYNTACTIC_FIDELITY, // Preserve semantic meaning exactly
I2::DETERMINISTIC_ABSENCE, // Distinguish absent vs null vs default
I3::MIRROR_CONSTRAINT, // Reflect only what exists, create nothing
I4::TRANSFORM_AUDITABILITY, // Log every transformation with IDs
I5::SCHEMA_SOVEREIGNTY // Make validation status visible
]
===END===
```
---
## What It Does
This repository ships the **OCTAVE MCP Server** (v1.0.0)—a Model Context Protocol implementation that transforms OCTAVE documents from passive text into **Generative Holographic Contracts**.
OCTAVE (Olympian Common Text And Vocabulary Engine) is a deterministic document format and control plane for LLM systems. It keeps meaning durable when text is compressed, routed between agents, or projected into different views.
**Core Philosophy: Validation Precedes Generation**
OCTAVE v6 introduces the principle that schemas should constrain LLM output *during* generation, not just validate it afterward. The `META` block can compile to strict grammars (Regex/GBNF) for constrained generation.
> **Implementation Status (v0.6.0):** Grammar compilation is implemented and available via `debug_grammar=True`. However, the MCP tools (`octave_validate`, `octave_write`) currently perform post-hoc validation rather than enforcing grammar constraints during generation. See the [architecture spec](src/octave_mcp/resources/specs/octave-mcp-architecture.oct.md) for details.
- **Generative Constraints**: `META.CONTRACT` compiles to regex/GBNF grammar (use `debug_grammar=True` to inspect).
- **Holographic Sovereignty**: The document defines its own schema laws inline.
- **Hermetic Anchoring**: No network calls in the hot path. Standards are frozen or local.
- **Auditable Loss**: Compression tiers declared in `META` (`LOSSLESS`, `AGGRESSIVE`).
### Language, operators, and readability
- **Syntax**: Unicode-first operators (`→`, `⊕`, `⧺`, `⇌`, `∨`, `∧`, `§`) with ASCII aliases.
- **Vocabulary**: Mythological terms as semantic compression shorthands.
- **Authoring**: Humans write in the lenient view; tools normalize to canonical Unicode.
See the [protocol specs in `src/octave_mcp/resources/specs/`](src/octave_mcp/resources/specs/) for v6.0.0 rules.
## What this server provides
`octave-mcp` bundles the OCTAVE tooling as MCP tools and a CLI.
- **3 MCP tools**: `octave_validate`, `octave_write`, `octave_eject`
- **Grammar Compiler**: Compiles `META.CONTRACT` constraints to GBNF grammars (inspect via `debug_grammar=True`).
- **Hermetic Hydrator**: Resolves standards without network dependency.
## When OCTAVE Helps
Use OCTAVE when documents must survive multiple agent/tool hops, repeated compression, or auditing:
- **Self-Validating Agents**: Agents that define their own output grammar.
- **Coordination Briefs**: Decision logs that circulate between agents.
- **Compressed Context**: Reusable prompts needing stable structure (54–68% token reduction).
## Installation
**PyPI:**
```bash
pip install octave-mcp
# or
uv pip install octave-mcp
```
**From source:**
```bash
git clone https://github.com/elevanaltd/octave-mcp.git
cd octave-mcp
uv pip install -e ".[dev]"
```
## Quick Start
### CLI
```bash
# Validate and normalize (v6 auto-detection)
octave validate document.oct.md
# Write with validation (from content)
printf '===DOC===\nMETA:\n TYPE::LOG\n CONTRACT::GRAMMAR[...]\n...' | octave write output.oct.md --stdin
# Project to a view/format
octave eject document.oct.md --mode executive --format markdown
```
### MCP Setup
Add to Claude Desktop (`claude_desktop_config.json`) or Claude Code (`~/.claude.json`):
```json
{
"mcpServers": {
"octave": {
"command": "octave-mcp-server"
}
}
}
```
## MCP Tools
| Tool | Purpose |
|------|---------|
| `octave_validate` | Schema validation + repair suggestions + grammar compilation |
| `octave_write` | Unified file creation/modification with validation |
| `octave_eject` | Format projection and template generation |
### `octave_validate`
Validates OCTAVE content against a schema and returns normalized canonical output.
```python
# Parameters
content: str # OCTAVE content to validate (or use file_path)
file_path: str # Path to file (mutually exclusive with content)
schema: str # Schema name (e.g., 'META', 'SESSION_LOG')
fix: bool = False # Apply repairs (enum casefold, type coercion)
profile: str # Validation strictness: STRICT, STANDARD, LENIENT, ULTRA
diff_only: bool # Return diff instead of full canonical (saves tokens)
compact: bool # Return counts instead of full error lists
debug_grammar: bool # Include compiled regex/GBNF grammar in output
```
**Returns**: `{ status, canonical, repairs, warnings, errors, validation_status }`
### `octave_write`
Unified write operation for creating new files or modifying existing ones.
```python
# Parameters
target_path: str # File path to write
content: str # Full content for new files (mutually exclusive with changes)
changes: dict # Delta updates for existing files (tri-state: absent=no-op, DELETE=remove, value=set)
mutations: dict # META field overrides
base_hash: str # Expected SHA-256 for consistency check (CAS)
schema: str # Schema name for validation
lenient: bool # Enable lenient parsing with auto-repairs
corrections_only: bool # Dry run - return corrections without writing
```
**Returns**: `{ status, mode, canonical, repairs, warnings, errors, validation_status, file_hash }`
### `octave_eject`
Projects OCTAVE content to different formats and views.
```python
# Parameters
content: str # OCTAVE content to project (null for template generation)
schema: str # Schema name for validation or template generation
mode: str # Projection: canonical, authoring, executive, developer
format: str # Output: octave, json, yaml, markdown, gbnf
```
**Returns**: `{ output, lossy, fields_omitted, validation_status }`
### Generative Holographic Contracts (v6)
OCTAVE v6 introduces the **Holographic Contract** concept:
1. **Read META**: The parser reads the `META` block first.
2. **Compile Grammar**: Constraints (`REQ`, `ENUM`, `REGEX`) compile into GBNF grammar (available via `debug_grammar=True`).
3. **Generate/Validate**: The body can be validated against this bespoke grammar.
> **Note:** In v0.6.0, grammar compilation is implemented but tools perform post-hoc validation. Grammar-constrained generation is a roadmap feature.
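Step 2 can be illustrated with a toy constraint compiler — a conceptual sketch in which the `compile_enum` helper is hypothetical (not octave-mcp's compiler), showing how an `ENUM` constraint might become a matching pattern:

```python
import re

def compile_enum(field: str, values: list[str]) -> re.Pattern:
    """Compile an ENUM constraint into a regex matching 'FIELD::value'
    lines (conceptual sketch of constraint-to-grammar compilation)."""
    alternatives = "|".join(re.escape(v) for v in values)
    return re.compile(rf"^{re.escape(field)}::(?:{alternatives})$")

pattern = compile_enum("TYPE", ["LOG", "PRIMER"])
```

The real compiler targets GBNF as well as regex; the point here is only that a declarative constraint can be lowered to a machine-checkable pattern before any body text is generated or validated.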
## Documentation
| Doc | Content |
|-----|---------|
| [Usage Guide](docs/usage.md) | CLI, MCP, and API examples |
| [API Reference](docs/api.md) | Python API documentation |
| [MCP Configuration](docs/mcp-configuration.md) | Client setup and integration |
| [Protocol Specs](src/octave_mcp/resources/specs/) | v6.0.0 Generative Holographic Specs |
| [EBNF Grammar](docs/grammar/octave-v1.0-grammar.ebnf) | Formal v1.0.0 grammar specification |
| [Development Setup](docs/guides/development-setup.md) | Dev environment, testing, quality gates |
| [Architecture Decisions](docs/adr/) | Architecture Decision Records (ADRs) |
| [Research](docs/research/) | Benchmarks and validation studies |
### Architecture Immutables
| ID | Principle |
|----|-----------|
| **I1** | Syntactic Fidelity — normalization alters syntax, never semantics |
| **I2** | Deterministic Absence — distinguish absent vs null vs default |
| **I3** | Mirror Constraint — reflect only what's present, create nothing |
| **I4** | Transform Auditability — log every transformation with stable IDs |
| **I5** | Schema Sovereignty — validation status visible in output |
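The I2 principle (absent vs null vs default) maps onto a standard Python sentinel pattern — a generic sketch, not octave-mcp code:

```python
# Sentinel distinguishing "field not mentioned" from "field set to null".
_ABSENT = object()

def apply_change(record: dict, key: str, value=_ABSENT) -> dict:
    """Tri-state update: absent -> no-op, None -> explicit null,
    anything else -> set the value (illustrative sketch of I2)."""
    if value is _ABSENT:
        return record          # field absent: leave record untouched
    record[key] = value        # explicit null or concrete value is stored
    return record
```

A plain `value=None` default could not express "no-op", because `None` must remain a meaningful, storable value in its own right.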
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, testing, and guidelines.
```bash
# Quick dev setup
git clone https://github.com/elevanaltd/octave-mcp.git
cd octave-mcp
uv venv && source .venv/bin/activate
uv pip install -e ".[dev]"
# Run tests
pytest
# Quality checks
ruff check src tests && mypy src && black --check src tests
```
## License
Apache-2.0 — Built with [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk).
| text/markdown | null | Elevana <shaun.buswell@elevana.com> | null | Elevana <shaun.buswell@elevana.com> | Apache-2.0 | octave, mcp, model-context-protocol, ai, llm, protocol, schema-validation, compression | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Prog... | [] | null | null | >=3.11 | [] | [] | [] | [
"mcp>=1.0.0",
"click>=8.1.0",
"pydantic>=2.12.0",
"python-dotenv>=1.2.0",
"PyYAML>=6.0.3",
"starlette>=0.52.0; extra == \"http\"",
"uvicorn>=0.34.0; extra == \"http\"",
"httpx>=0.28.0; extra == \"http\"",
"sse-starlette>=2.2.0; extra == \"http\"",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-cov>=... | [] | [] | [] | [
"Homepage, https://github.com/elevanaltd/octave-mcp",
"Documentation, https://github.com/elevanaltd/octave-mcp/tree/main/docs",
"Repository, https://github.com/elevanaltd/octave-mcp.git",
"Issues, https://github.com/elevanaltd/octave-mcp/issues",
"Changelog, https://github.com/elevanaltd/octave-mcp/blob/mai... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:00:42.922333 | octave_mcp-1.3.0.tar.gz | 211,510 | 31/da/8d01c2758e968d63ee1e10ff7126301ad867f1e5c1bf6f69d9f2e17919e0/octave_mcp-1.3.0.tar.gz | source | sdist | null | false | 4e389836061cbc97360e6b2853b358e4 | 6f98da765fc8571bb0ab5feee6b286d94313ee61272299d713b81c8ef71dd0e3 | 31da8d01c2758e968d63ee1e10ff7126301ad867f1e5c1bf6f69d9f2e17919e0 | null | [
"LICENSE"
] | 273 |
2.4 | graphsense-lib | 2.9.5 | Graphsense backend lib and automation cli | # GraphSense Library
[](https://github.com/graphsense/graphsense-lib/actions) [](https://badge.fury.io/py/graphsense-lib) [](https://pypi.org/project/graphsense-lib/) [](https://pepy.tech/project/graphsense-lib)
A comprehensive Python library for the GraphSense crypto-analytics platform. It provides database access, data ingestion, maintenance tools, and analysis capabilities for cryptocurrency transactions and networks.
> **Note:** This library uses optional dependencies. Use `graphsense-lib[all]` to install all features.
## Quick Start
### Installation
```bash
# Install with all features
uv add graphsense-lib[all]
# Install from source
git clone https://github.com/graphsense/graphsense-lib.git
cd graphsense-lib
make install
```
### Serving the REST API locally
The web API requires two backend connections: a **Cassandra** cluster (blockchain data) and a **TagStore** (PostgreSQL). You can configure them via environment variables or a YAML config file.
#### Option A: Environment variables only
```bash
GS_CASSANDRA_ASYNC_NODES='["<cassandra-host>"]' \
GRAPHSENSE_TAGSTORE_READ_URL='postgresql+asyncpg://<user>:<password>@<host>:<port>/tagstore' \
uv run --extra web uvicorn graphsenselib.web.app:create_app --factory --host localhost --port 9000 --reload
```
#### Option B: YAML config file
Point `CONFIG_FILE` to a REST-specific config (see `instance/config.yaml` for a full example):
```bash
CONFIG_FILE=./instance/config.yaml make serve-web
```
Or without Make:
```bash
CONFIG_FILE=./instance/config.yaml \
uv run --extra web uvicorn graphsenselib.web.app:create_app --factory --host localhost --port 9000 --reload
```
#### Option C: `.graphsense.yaml` with a `web` key
If you already have a `.graphsense.yaml` (or `~/.graphsense.yaml`) for the CLI, you can add a `web` key containing the REST config. The app will pick it up automatically without setting `CONFIG_FILE`:
```yaml
# .graphsense.yaml
environments:
# ... your existing CLI config ...
web:
database:
nodes: ["<cassandra-host>"]
currencies:
btc:
eth:
gs-tagstore:
url: "postgresql+asyncpg://<user>:<password>@<host>:<port>/tagstore"
```
```bash
make serve-web
```
**Config resolution order:** explicit `config_file` param > `CONFIG_FILE` env var > `./instance/config.yaml` > `.graphsense.yaml` `web` key > env vars only.
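The resolution order above can be sketched as a small lookup chain. This is a hypothetical illustration of the documented precedence, not the actual loader in `graphsenselib.web`, which may differ in detail:

```python
import os

# Hypothetical sketch of the documented config resolution order.
# The real loader in graphsenselib.web may differ in detail.
def resolve_config(config_file=None):
    candidates = [
        config_file,                                # 1. explicit parameter
        os.environ.get("CONFIG_FILE"),              # 2. CONFIG_FILE env var
        "./instance/config.yaml",                   # 3. conventional instance config
        os.path.expanduser("~/.graphsense.yaml"),   # 4. `web` key of .graphsense.yaml
    ]
    for path in candidates:
        if path and os.path.exists(path):
            return path
    return None  # 5. fall back to env vars only
```

The first existing candidate wins, so an explicit `config_file` always shadows the environment variable and the conventional file locations.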
#### Optional REST settings (env vars)
| Variable | Default | Description |
|---|---|---|
| `GSREST_DISABLE_AUTH` | `false` | Disable API key authentication |
| `GSREST_ALLOWED_ORIGINS` | `*` | CORS allowed origins |
| `GSREST_LOGGING_LEVEL` | — | Logging level (DEBUG, INFO, …) |
| `GS_CASSANDRA_ASYNC_PORT` | `9042` | Cassandra port |
| `GS_CASSANDRA_ASYNC_USERNAME` | — | Cassandra username |
| `GS_CASSANDRA_ASYNC_PASSWORD` | — | Cassandra password |
### Basic Usage
#### Database Access with Configuration File
```python
from graphsenselib.db import DbFactory
# Using GraphSense config file (default: ~/.graphsense.yaml)
with DbFactory().from_config("development", "btc") as db:
highest_block = db.transformed.get_highest_block()
print(f"Highest BTC block: {highest_block}")
# Get block details
block = db.transformed.get_block(100000)
print(f"Block 100000: {block.block_hash}")
```
#### Direct Database Connection
```python
from graphsenselib.db import DbFactory
# Direct connection without config file
with DbFactory().from_name(
raw_keyspace_name="eth_raw",
transformed_keyspace_name="eth_transformed",
schema_type="account",
cassandra_nodes=["localhost"],
currency="eth"
) as db:
print(f"Highest block: {db.transformed.get_highest_block()}")
```
#### Async Database Services
The async services are used internally by the REST API and can also be used standalone. `AddressesService` depends on several other services:
```python
from graphsenselib.db.asynchronous.services import (
BlocksService, AddressesService, TagsService,
EntitiesService, RatesService,
)
# Services are initialized with their dependencies
blocks_service = BlocksService(db, rates_service, config, logger)
addresses_service = AddressesService(
db, tags_service, entities_service, blocks_service, rates_service, logger
)
address_info = await addresses_service.get_address("btc", "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa")
txs = await addresses_service.list_address_txs("btc", "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa")
```
## Command Line Interface
GraphSense-lib exposes a comprehensive CLI tool, `graphsense-cli`.
### Basic Commands
```bash
# Show help and available commands
graphsense-cli --help
# Check version
graphsense-cli version
# Show current configuration
graphsense-cli config show
# Generate config template
graphsense-cli config template > ~/.graphsense.yaml
# Show config file path
graphsense-cli config path
```
## Modules
### Database Management
Query and manage the GraphSense database state.
```bash
# Show database management options
graphsense-cli db --help
# Check database state/summary
graphsense-cli db state -e development
# Get block information
graphsense-cli db block info -e development -c btc --height 100000
# Query logs (for Ethereum-based chains)
graphsense-cli db logs -e development -c eth --from-block 1000000 --to-block 1000100
```
### Schema Operations
Create and validate database schemas.
```bash
# Show schema options
graphsense-cli schema --help
# Create database schema for a currency
graphsense-cli schema create -e dev -c btc
# Validate existing schema
graphsense-cli schema validate -e dev -c btc
# Show expected schema for currency
graphsense-cli schema show-by-currency btc
# Show schema by type (utxo/account)
graphsense-cli schema show-by-schema-type utxo
```
### Data Ingestion
Ingest raw cryptocurrency data from nodes.
```bash
# Show ingestion options
graphsense-cli ingest --help
# Ingest blocks from cryptocurrency node
graphsense-cli ingest from-node \
-e dev \
-c btc \
--start-block 0 \
--end-block 1000 \
--create-schema
# Ingest with custom batch size
graphsense-cli ingest from-node \
-e dev \
-c eth \
--start-block 1000000 \
--end-block 1001000 \
--batch-size 100
```
### Delta Updates
Update transformed keyspace from raw keyspace.
```bash
# Show delta update options
graphsense-cli delta-update --help
# Check update status
graphsense-cli delta-update status -e dev -c btc
# Perform delta update
graphsense-cli delta-update update -e dev -c btc
# Validate delta update consistency
graphsense-cli delta-update validate -e dev -c btc
# Patch exchange rates for specific blocks
graphsense-cli delta-update patch-exchange-rates \
-e dev \
-c btc \
--start-block 100000 \
--end-block 200000
```
### Exchange Rates
Fetch and ingest exchange rates from various sources.
```bash
# Show exchange rate options
graphsense-cli exchange-rates --help
# Fetch from CoinDesk
graphsense-cli exchange-rates coindesk -e dev -c btc
# Fetch from CoinMarketCap (requires API key in config)
graphsense-cli exchange-rates coinmarketcap -e dev -c btc
```
### Monitoring
Monitor GraphSense infrastructure health and state.
```bash
# Show monitoring options
graphsense-cli monitoring --help
# Get database summary
graphsense-cli monitoring get-summary -e dev
# Get summary for specific currency
graphsense-cli monitoring get-summary -e dev -c btc
# Send notifications to configured handlers
graphsense-cli monitoring notify \
--topic "database-update" \
--message "BTC ingestion completed"
```
### Event Watching (Alpha)
Watch for cryptocurrency events and generate notifications.
```bash
# Show watch options
graphsense-cli watch --help
# Watch for money flows on specific addresses
graphsense-cli watch money-flows \
-e dev \
-c btc \
--address 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa \
--threshold 1000000 # satoshis
```
### File Conversion Tools
Convert between different file formats.
```bash
# Show conversion options
graphsense-cli convert --help
```
## Configuration
GraphSense-lib uses a YAML configuration file that defines database connections and environment settings. Default locations: `./.graphsense.yaml`, `~/.graphsense.yaml`.
### Generate Configuration Template
```bash
graphsense-cli config template > ~/.graphsense.yaml
```
### Example Configuration Structure
```yaml
# Optional: default environment to use
default_environment: dev
environments:
dev:
# Cassandra cluster configuration
cassandra_nodes: ["localhost"]
port: 9042
# Optional authentication
# username: "cassandra"
# password: "cassandra"
# Currency/keyspace configurations
keyspaces:
btc:
raw_keyspace_name: "btc_raw"
transformed_keyspace_name: "btc_transformed"
schema_type: "utxo"
# Node connection for ingestion
ingest_config:
node_reference: "http://localhost:8332"
# Optional authentication for node
# username: "rpcuser"
# password: "rpcpassword"
# Keyspace setup for schema creation
keyspace_setup_config:
raw:
replication_config: "{'class': 'SimpleStrategy', 'replication_factor': 1}"
transformed:
replication_config: "{'class': 'SimpleStrategy', 'replication_factor': 1}"
eth:
raw_keyspace_name: "eth_raw"
transformed_keyspace_name: "eth_transformed"
schema_type: "account"
ingest_config:
node_reference: "http://localhost:8545"
keyspace_setup_config:
raw:
replication_config: "{'class': 'SimpleStrategy', 'replication_factor': 1}"
transformed:
replication_config: "{'class': 'SimpleStrategy', 'replication_factor': 1}"
prod:
cassandra_nodes: ["cassandra1.prod", "cassandra2.prod", "cassandra3.prod"]
username: "gs_user"
password: "secure_password"
keyspaces:
btc:
raw_keyspace_name: "btc_raw"
transformed_keyspace_name: "btc_transformed"
schema_type: "utxo"
ingest_config:
node_reference: "http://bitcoin-node.internal:8332"
keyspace_setup_config:
raw:
replication_config: "{'class': 'NetworkTopologyStrategy', 'datacenter1': 3}"
transformed:
replication_config: "{'class': 'NetworkTopologyStrategy', 'datacenter1': 3}"
# Optional: Slack notification configuration
slack_topics:
database-update:
hooks: ["https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"]
payment_flow_notifications:
hooks: ["https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"]
# Optional: API keys for external services
coingecko_api_key: ""
coinmarketcap_api_key: "YOUR_CMC_API_KEY"
# Optional: cache directory for temporary files
cache_directory: "~/.graphsense/cache"
```
## Advanced Features
### Tagpack Management
GraphSense-lib includes comprehensive tagpack management tools (formerly standalone tagpack-tool). For detailed documentation, see [Tagpack README](tagpack/docs/README.md).
```bash
# Validate tagpacks
graphsense-cli tagpack-tool tagpack validate /path/to/tagpack
# Insert tagpack into tagstore
graphsense-cli tagpack-tool insert \
--url "postgresql://user:pass@localhost/tagstore" \
/path/to/tagpack
# Show quality measures
graphsense-cli tagpack-tool quality show-measures \
--url "postgresql://user:pass@localhost/tagstore"
```
### Tagstore Operations
```bash
# Initialize tagstore database
graphsense-cli tagstore init
# Initialize with custom database URL
graphsense-cli tagstore init --db-url "postgresql://user:pass@localhost/tagstore"
# Get DDL SQL for manual setup
graphsense-cli tagstore get-create-sql
```
### Cross-chain Analysis
```python
# Using an initialized AddressesService (see above for setup)
related = await addresses_service.get_cross_chain_pubkey_related_addresses(
"1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
)
for addr in related:
print(f"Network: {addr.network}, Address: {addr.address}")
```
### Function Call Parsing
```python
from graphsenselib.utils.function_call_parser import parse_function_call
# Parse Ethereum function calls
function_signatures = {
"0xa9059cbb": [{
"name": "transfer",
"inputs": [
{"name": "to", "type": "address"},
{"name": "value", "type": "uint256"}
]
}]
}
# tx_input_bytes: raw transaction input (4-byte selector + ABI-encoded arguments)
parsed = parse_function_call(tx_input_bytes, function_signatures)
if parsed:
print(f"Function: {parsed['name']}")
print(f"Parameters: {parsed['parameters']}")
```
## Development
**Important:** Requires Python >=3.10, <3.14.
### Setup Development Environment
```bash
# Initialize development environment (installs deps + pre-commit hooks)
make dev
# Or install dev dependencies only
make install-dev
```
### Code Quality and Testing
Before committing, please format, lint, and test your code:
```bash
# Format code
make format
# Lint code
make lint
# Run fast tests
make test
# Or run all steps at once
make pre-commit
```
For comprehensive testing:
```bash
# Run complete test suite (including slow tests)
make test
```
### Release Process
This repository uses two source-of-truth versions in the root `Makefile`:
- **Library version**: `RELEASESEM` (released with `vX.Y.Z` tags)
- **OpenAPI/API version**: `WEBAPISEM` (written to `src/graphsenselib/web/version.py`)
The Python client package version is derived from the API version and should match it.
Use the root Makefile helpers:
```bash
# Show all current versions
make show-versions
# Update and validate OpenAPI contract version
make update-api-version WEBAPISEM=v2.10.0
make check-api-version WEBAPISEM=v2.10.0
# Sync client version from API version and validate
make sync-client-version WEBAPISEM=v2.10.0
make check-client-version WEBAPISEM=v2.10.0
# Generate Python client (package version = OpenAPI info.version)
make generate-python-client
# Create both release tags from Makefile versions
make tag-version
```
Tagging behavior:
- Library release tag: `vX.Y.Z` (from `RELEASESEM`)
- Client release tag: `webapi-vA.B.C` (from `WEBAPISEM`)
Typical release flow:
1. Update CHANGELOG.md with new features and fixes
2. Update relevant versions (library/API/client) based on what changed
3. Sync API/client versions if needed (`make update-api-version` + `make sync-client-version`)
4. Create and push tags:
```bash
make tag-version
git push origin --tags
```
## Troubleshooting
### OpenSSL Errors
Some components use OpenSSL hash functions that aren't available by default in OpenSSL 3.0+ (e.g., ripemd160). This can cause test suite failures. To fix this, enable legacy providers in your OpenSSL configuration. See the "fix openssl legacy mode" step in `.github/workflows/run_tests.yaml` for an example.
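For reference, the standard OpenSSL 3 mechanism for activating the legacy provider looks roughly like the fragment below. The file location and exact layout vary by distribution, so treat this as a sketch and consult the workflow step mentioned above for the project's tested fix:

```ini
# openssl.cnf — enable the legacy provider alongside the default one
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
default = default_sect
legacy = legacy_sect

[default_sect]
activate = 1

[legacy_sect]
activate = 1
```

With both providers activated, algorithms such as ripemd160 become available again to applications linked against OpenSSL 3.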
### Common Issues
1. **Connection Refused**: Verify Cassandra is running and accessible
2. **Schema Validation Errors**: Ensure database schema matches expected version
3. **Import Errors**: Install with `[all]` option for complete feature set
4. **Python Version**: Requires Python >=3.10, <3.14
### Getting Help
- Check [GitHub Issues](https://github.com/graphsense/graphsense-lib/issues)
- Review [GraphSense Documentation](https://graphsense.github.io/)
- Use `--help` with any CLI command for detailed usage information
- For tagpack-specific issues, see [Tagpack Documentation](tagpack/docs/README.md)
## License
See LICENSE file for licensing details.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run `make pre-commit` to ensure code quality
5. Submit a pull request
---
**GraphSense** - Open Source Crypto Analytics Platform
Website: https://graphsense.github.io/
| text/markdown; charset=UTF-8; variant=GFM | null | Iknaio Cryptoasset Analytics GmbH <contact@iknaio.com> | null | null | null | graphsense | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Intended Audience :: Developers",
"Topic :: Utilities"
] | [
"any"
] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"setuptools<80.9,>=80.0.0",
"filelock>=3.8.0",
"click>=8.0.3",
"pandas>=2.3.3",
"methodtools>=0.4",
"simplejson>=3.17.6",
"goodconf[yaml]>=3.0.0",
"pydantic>=2.0.0",
"pydantic-settings<2.13.0,>=2.0.0",
"requests>=2.32.5",
"parsy<3.0,>=2.0",
"rich>=12.6.0",
"cashaddress>=1.0.6",
"base58>=2.... | [] | [] | [] | [
"Homepage, https://graphsense.github.io/",
"Source, https://github.com/graphsense/graphsense-lib",
"Changelog, https://github.com/graphsense/graphsense-lib/blob/master/CHANGELOG.md",
"Tracker, https://github.com/graphsense/graphsense-lib/issues",
"Download, https://github.com/graphsense"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:00:25.054694 | graphsense_lib-2.9.5.tar.gz | 12,936,414 | 09/69/c3681a68097bf51b8e2f39880506e7fb89f8ba91c9f208953d7a068fe964/graphsense_lib-2.9.5.tar.gz | source | sdist | null | false | 9826b2ba41a75bf23a66390b9ad277b6 | a40dab1812d66675b83f3c5ef893462d07338c07d35c83604cd602917999c509 | 0969c3681a68097bf51b8e2f39880506e7fb89f8ba91c9f208953d7a068fe964 | null | [
"LICENSE"
] | 233 |
2.4 | graphsense-python | 2.9.5 | GraphSense API | # graphsense-python
GraphSense API provides programmatic access to various ledgers' addresses, entities, blocks, transactions and tags for automated and highly efficient forensics tasks.
This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
- API version: 2.9.5
- Package version: 2.9.5
- Generator version: 7.19.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen
## Requirements
Python 3.9+
## Installation & Usage
### pip install
If the Python package is hosted on a Git repository, you can install it directly with:
```sh
pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git
```
(you may need to run `pip` with root permission: `sudo pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git`)
Then import the package:
```python
import graphsense
```
### Setuptools
Install via [Setuptools](http://pypi.python.org/pypi/setuptools).
```sh
python setup.py install --user
```
(or `sudo python setup.py install` to install the package for all users)
Then import the package:
```python
import graphsense
```
### Tests
Execute `pytest` to run the tests.
## Getting Started
Please follow the [installation procedure](#installation--usage) and then run the following:
```python
import os

import graphsense
from graphsense.rest import ApiException
from pprint import pprint
# Defining the host is optional and defaults to https://api.iknaio.com
# See configuration.py for a list of all supported configuration parameters.
configuration = graphsense.Configuration(
host = "https://api.iknaio.com"
)
# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.
# Configure API key authorization: api_key
configuration.api_key['api_key'] = os.environ["API_KEY"]
# Uncomment the line below to set a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['api_key'] = 'Bearer'
# Enter a context with an instance of the API client
with graphsense.ApiClient(configuration) as api_client:
# Create an instance of the API class
api_instance = graphsense.AddressesApi(api_client)
currency = 'btc' # str | The cryptocurrency code (e.g., btc)
address = '1Archive1n2C579dMsAu3iC6tWzuQJz8dN' # str | The cryptocurrency address
include_actors = True # bool | Whether to include actor information (optional) (default to True)
try:
# Get an address
api_response = api_instance.get_address(currency, address, include_actors=include_actors)
print("The response of AddressesApi->get_address:\n")
pprint(api_response)
except ApiException as e:
print("Exception when calling AddressesApi->get_address: %s\n" % e)
```
## Documentation for API Endpoints
All URIs are relative to *https://api.iknaio.com*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*AddressesApi* | [**get_address**](docs/AddressesApi.md#get_address) | **GET** /{currency}/addresses/{address} | Get an address
*AddressesApi* | [**get_address_entity**](docs/AddressesApi.md#get_address_entity) | **GET** /{currency}/addresses/{address}/entity | Get the entity of an address
*AddressesApi* | [**get_tag_summary_by_address**](docs/AddressesApi.md#get_tag_summary_by_address) | **GET** /{currency}/addresses/{address}/tag_summary | Get attribution tag summary for a given address
*AddressesApi* | [**list_address_links**](docs/AddressesApi.md#list_address_links) | **GET** /{currency}/addresses/{address}/links | Get outgoing transactions between two addresses
*AddressesApi* | [**list_address_neighbors**](docs/AddressesApi.md#list_address_neighbors) | **GET** /{currency}/addresses/{address}/neighbors | Get an address's neighbors in the address graph
*AddressesApi* | [**list_address_txs**](docs/AddressesApi.md#list_address_txs) | **GET** /{currency}/addresses/{address}/txs | Get all transactions an address has been involved in
*AddressesApi* | [**list_related_addresses**](docs/AddressesApi.md#list_related_addresses) | **GET** /{currency}/addresses/{address}/related_addresses | Get related addresses to the input address
*AddressesApi* | [**list_tags_by_address**](docs/AddressesApi.md#list_tags_by_address) | **GET** /{currency}/addresses/{address}/tags | Get attribution tags for a given address
*BlocksApi* | [**get_block**](docs/BlocksApi.md#get_block) | **GET** /{currency}/blocks/{height} | Get a block by its height
*BlocksApi* | [**get_block_by_date**](docs/BlocksApi.md#get_block_by_date) | **GET** /{currency}/block_by_date/{date} | Get block by date
*BlocksApi* | [**list_block_txs**](docs/BlocksApi.md#list_block_txs) | **GET** /{currency}/blocks/{height}/txs | Get block transactions
*BulkApi* | [**bulk_csv**](docs/BulkApi.md#bulk_csv) | **POST** /{currency}/bulk.csv/{operation} | Get data as CSV in bulk
*BulkApi* | [**bulk_json**](docs/BulkApi.md#bulk_json) | **POST** /{currency}/bulk.json/{operation} | Get data as JSON in bulk
*EntitiesApi* | [**get_entity**](docs/EntitiesApi.md#get_entity) | **GET** /{currency}/entities/{entity} | Get an entity
*EntitiesApi* | [**list_address_tags_by_entity**](docs/EntitiesApi.md#list_address_tags_by_entity) | **GET** /{currency}/entities/{entity}/tags | Get address tags for a given entity
*EntitiesApi* | [**list_entity_addresses**](docs/EntitiesApi.md#list_entity_addresses) | **GET** /{currency}/entities/{entity}/addresses | Get an entity's addresses
*EntitiesApi* | [**list_entity_links**](docs/EntitiesApi.md#list_entity_links) | **GET** /{currency}/entities/{entity}/links | Get transactions between two entities
*EntitiesApi* | [**list_entity_neighbors**](docs/EntitiesApi.md#list_entity_neighbors) | **GET** /{currency}/entities/{entity}/neighbors | Get an entity's neighbors in the entity graph
*EntitiesApi* | [**list_entity_txs**](docs/EntitiesApi.md#list_entity_txs) | **GET** /{currency}/entities/{entity}/txs | Get all transactions an entity has been involved in
*EntitiesApi* | [**search_entity_neighbors**](docs/EntitiesApi.md#search_entity_neighbors) | **GET** /{currency}/entities/{entity}/search | Search neighbors of an entity
*GeneralApi* | [**get_statistics**](docs/GeneralApi.md#get_statistics) | **GET** /stats | Get statistics of supported currencies
*GeneralApi* | [**search**](docs/GeneralApi.md#search) | **GET** /search | Returns matching addresses, transactions and labels
*RatesApi* | [**get_exchange_rates**](docs/RatesApi.md#get_exchange_rates) | **GET** /{currency}/rates/{height} | Get exchange rates for a given block height
*TagsApi* | [**get_actor**](docs/TagsApi.md#get_actor) | **GET** /tags/actors/{actor} | Get an actor by ID
*TagsApi* | [**get_actor_tags**](docs/TagsApi.md#get_actor_tags) | **GET** /tags/actors/{actor}/tags | Get tags associated with an actor
*TagsApi* | [**list_address_tags**](docs/TagsApi.md#list_address_tags) | **GET** /tags | Get address tags by label
*TagsApi* | [**list_concepts**](docs/TagsApi.md#list_concepts) | **GET** /tags/taxonomies/{taxonomy}/concepts | List concepts for a taxonomy
*TagsApi* | [**list_taxonomies**](docs/TagsApi.md#list_taxonomies) | **GET** /tags/taxonomies | List all taxonomies
*TagsApi* | [**report_tag**](docs/TagsApi.md#report_tag) | **POST** /tags/report-tag | Report a new tag
*TokensApi* | [**list_supported_tokens**](docs/TokensApi.md#list_supported_tokens) | **GET** /{currency}/supported_tokens | Get supported tokens for a currency
*TxsApi* | [**get_spending_txs**](docs/TxsApi.md#get_spending_txs) | **GET** /{currency}/txs/{tx_hash}/spending | Get transactions that this transaction is spending from
*TxsApi* | [**get_spent_in_txs**](docs/TxsApi.md#get_spent_in_txs) | **GET** /{currency}/txs/{tx_hash}/spent_in | Get transactions that spent outputs from this transaction
*TxsApi* | [**get_tx**](docs/TxsApi.md#get_tx) | **GET** /{currency}/txs/{tx_hash} | Get a transaction by its hash
*TxsApi* | [**get_tx_conversions**](docs/TxsApi.md#get_tx_conversions) | **GET** /{currency}/txs/{tx_hash}/conversions | Get DeFi conversions for a transaction
*TxsApi* | [**get_tx_io**](docs/TxsApi.md#get_tx_io) | **GET** /{currency}/txs/{tx_hash}/{io} | Get transaction inputs or outputs
*TxsApi* | [**list_token_txs**](docs/TxsApi.md#list_token_txs) | **GET** /{currency}/token_txs/{tx_hash} | Returns all token transactions in a given transaction
*TxsApi* | [**list_tx_flows**](docs/TxsApi.md#list_tx_flows) | **GET** /{currency}/txs/{tx_hash}/flows | Get asset flows within a transaction
## Documentation For Models
- [Actor](docs/Actor.md)
- [ActorContext](docs/ActorContext.md)
- [Address](docs/Address.md)
- [AddressTag](docs/AddressTag.md)
- [AddressTags](docs/AddressTags.md)
- [AddressTx](docs/AddressTx.md)
- [AddressTxUtxo](docs/AddressTxUtxo.md)
- [AddressTxs](docs/AddressTxs.md)
- [Block](docs/Block.md)
- [BlockAtDate](docs/BlockAtDate.md)
- [Concept](docs/Concept.md)
- [CurrencyStats](docs/CurrencyStats.md)
- [Entity](docs/Entity.md)
- [EntityAddresses](docs/EntityAddresses.md)
- [ExternalConversion](docs/ExternalConversion.md)
- [HTTPValidationError](docs/HTTPValidationError.md)
- [LabelSummary](docs/LabelSummary.md)
- [LabeledItemRef](docs/LabeledItemRef.md)
- [Link](docs/Link.md)
- [LinkUtxo](docs/LinkUtxo.md)
- [Links](docs/Links.md)
- [LinksInner](docs/LinksInner.md)
- [LocationInner](docs/LocationInner.md)
- [NeighborAddress](docs/NeighborAddress.md)
- [NeighborAddresses](docs/NeighborAddresses.md)
- [NeighborEntities](docs/NeighborEntities.md)
- [NeighborEntity](docs/NeighborEntity.md)
- [Rate](docs/Rate.md)
- [Rates](docs/Rates.md)
- [RelatedAddress](docs/RelatedAddress.md)
- [RelatedAddresses](docs/RelatedAddresses.md)
- [SearchResult](docs/SearchResult.md)
- [SearchResultByCurrency](docs/SearchResultByCurrency.md)
- [SearchResultLevel1](docs/SearchResultLevel1.md)
- [SearchResultLevel2](docs/SearchResultLevel2.md)
- [SearchResultLevel3](docs/SearchResultLevel3.md)
- [SearchResultLevel4](docs/SearchResultLevel4.md)
- [SearchResultLevel5](docs/SearchResultLevel5.md)
- [SearchResultLevel6](docs/SearchResultLevel6.md)
- [Stats](docs/Stats.md)
- [Tag](docs/Tag.md)
- [TagCloudEntry](docs/TagCloudEntry.md)
- [TagSummary](docs/TagSummary.md)
- [Taxonomy](docs/Taxonomy.md)
- [TokenConfig](docs/TokenConfig.md)
- [TokenConfigs](docs/TokenConfigs.md)
- [Tx](docs/Tx.md)
- [TxAccount](docs/TxAccount.md)
- [TxRef](docs/TxRef.md)
- [TxSummary](docs/TxSummary.md)
- [TxUtxo](docs/TxUtxo.md)
- [TxValue](docs/TxValue.md)
- [UserReportedTag](docs/UserReportedTag.md)
- [UserTagReportResponse](docs/UserTagReportResponse.md)
- [ValidationError](docs/ValidationError.md)
- [Values](docs/Values.md)
<a id="documentation-for-authorization"></a>
## Documentation For Authorization
Authentication schemes defined for the API:
<a id="api_key"></a>
### api_key
- **Type**: API key
- **API key parameter name**: Authorization
- **Location**: HTTP header
## Author
contact@iknaio.com
| text/markdown | null | Iknaio Cryptoasset Analytics GmbH <contact@iknaio.com> | null | null | null | OpenAPI, OpenAPI-Generator, GraphSense API | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"urllib3<3.0.0,>=2.1.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [
"Homepage, https://graphsense.github.io/",
"Source, https://github.com/graphsense/graphsense-lib",
"Changelog, https://github.com/graphsense/graphsense-lib/blob/master/CHANGELOG.md",
"Tracker, https://github.com/graphsense/graphsense-lib/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:00:19.184969 | graphsense_python-2.9.5.tar.gz | 73,463 | 3b/fb/46bd80432d2997eab94f66f6be303cffe0dccdca83e53ade3e6730861a30/graphsense_python-2.9.5.tar.gz | source | sdist | null | false | 0359714fabaca7370f76e39bda03e568 | ae6f2f23556ca275ed6bd86dea197a6f02f922f2e03817a22082fd80d9363760 | 3bfb46bd80432d2997eab94f66f6be303cffe0dccdca83e53ade3e6730861a30 | MIT | [] | 233 |
2.4 | image3kit | 0.0.2a0 | image3kit - 3D image processing toolkit | # image3kit
**EXPERIMENTAL: not ready for public use**
`image3kit` is a Python packaging of a collection of C++ libraries for processing
and analysing 3D images, such as those obtained using X-ray micro-tomography.
## Installation
To install the version released on PyPI, run:
```bash
pip install image3kit
```
The `main` branch (experimental) can be installed with the command:
```bash
pip install git+https://github.com/DigiPorFlow/image3kit.git
```
## Licenses
See [LICENSE](LICENSE) for the license (BSD 3-clause, subject to change).
This package bundles several third-party codes; see [pkgs/](pkgs/) for details.
The initial commit was scaffolded from [pybind/scikit_build_example](https://github.com/pybind/scikit_build_example).
See [pkgs/pybind11/LICENSE](pkgs/pybind11/LICENSE) for the license covering the initial version of .github/workflows.
| text/markdown | null | Ali Q Raeini <A.Qaseminejad_Raeini@hw.ac.uk> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [
"Repository, https://github.com/DigiPorFlow/image3kit"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:00:08.020982 | image3kit-0.0.2a0.tar.gz | 354,096 | 6f/b6/485d08699f3b78b861896d10b48c51c4ed36065dd6680753f3248e81be41/image3kit-0.0.2a0.tar.gz | source | sdist | null | false | 6a26f88b731f9b55a646fe90bc34ed79 | 08f18b146096e9cd4f42c469af0165237ea4aafa14a27d50692c959bf8cf090b | 6fb6485d08699f3b78b861896d10b48c51c4ed36065dd6680753f3248e81be41 | BSD-3-Clause | [
"LICENSE",
"pkgs/pybind11/LICENSE"
] | 3,954 |
2.4 | moat-lib-priomap | 0.2.5 | A map with priority values | # MoaT-Lib-PrioMap
% start synopsis
% start main
A heap that behaves like a dict (or vice versa).
The keys are ordered by their associated value.
% end synopsis
## Features
* Dictionary-style access:
* `h[key] = priority` (insert/update)
* `prio = h[key]` (lookup)
* `del h[key]` (remove)
* Bulk initialization: `PrioMap({'a':1, 'b':2})`
* Priority operations:
* `h.popitem()` & `h.peekitem()` for root (min)
* `h.update(key, new_prio)` to change an existing key’s priority
* Introspection:
* `len(h)`, `key in h`, `h.is_empty()`
* Safe iteration:
* `.keys()`, `.values()`, `.items()`, and plain `for k, v in h:`
* Detects concurrent modifications and raises `RuntimeError`.
% end main
### Non-Features
* Storing more than the priority.
Workaround: use a `(prio, other_data)` tuple.
* Sorting by highest instead of lowest priority first.
Workaround: store the negative priority value.
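Both workarounds are standard heap idioms. A minimal sketch using the stdlib `heapq` (not PrioMap itself) illustrates the two patterns:

```python
import heapq

# Workaround 1: store extra data alongside the priority as a (prio, data)
# tuple. Tuples compare element by element, so the heap stays ordered by
# priority.
h = []
heapq.heappush(h, (2, "write report"))
heapq.heappush(h, (1, "fix bug"))
prio, task = heapq.heappop(h)
assert (prio, task) == (1, "fix bug")

# Workaround 2: negate priorities so the highest value pops first (max-heap).
m = []
for p in (5, 2, 9):
    heapq.heappush(m, -p)
assert -heapq.heappop(m) == 9
```

With PrioMap, the same idioms apply: use `h[key] = (prio, other_data)` and `h[key] = -prio` respectively.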
## Installation
```bash
pip install moat-lib-priomap
```
## Usage
### PrioMap
```python
from moat.lib.priomap import PrioMap
# Min-heap example
h = PrioMap({'a':5, 'b':2, 'c':3})
print(h.peekitem()) # ('b', 2)
# Insert
h['d'] = 1
print(h.popitem()) # ('d', 1)
# Update
h.update('a', 0)
print(h.peekitem()) # ('a', 0)
# Iterate. Does not consume the data.
for key, prio in h.items(): # keys(), values()
print(f"{key} -> {prio}")
# emits a->0, b->2, c->3 ('d' was removed by popitem above)
# Async Iteration. Does consume the data!
# Waits for more data if/when it runs out.
async for key, prio in h:
print(f"{key} -> {prio}")
```
### TimerMap
```python
from moat.lib.priomap import TimerMap
# example
h = TimerMap({'a':5, 'b':2, 'c':3})
print(h.peekitem()) # ('b', 1.995)
# Iterate
async for key in h:
print(key)
# > waits two seconds
# b
# > waits another second
# c
# > two seconds later
# a
```
## License
MIT.
| text/markdown | null | null | null | Matthias Urlichs <matthias@urlichs.de> | null | MoaT | [
"Development Status :: 4 - Beta",
"Framework :: AnyIO",
"Framework :: Trio",
"Framework :: AsyncIO",
"Programming Language :: Python :: 3",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anyio~=4.0"
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-19T12:59:07.904412 | moat_lib_priomap-0.2.5.tar.gz | 9,170 | 86/42/b286204bdc4f9fcd8d727e05ac49decb7cb186f3c3e182cdbe7d4474db07/moat_lib_priomap-0.2.5.tar.gz | source | sdist | null | false | ac327c178252e2cfb1aae5cdf7edbc1b | 01e43d0c3d59c7526d37c1689a7b309fd558ef6f1c49d724ad96cb6bf128184d | 8642b286204bdc4f9fcd8d727e05ac49decb7cb186f3c3e182cdbe7d4474db07 | null | [
"LICENSE",
"LICENSE.txt"
] | 250 |
2.4 | moat-lib-config | 0.1.3 | Configuration management for MoaT applications | # Configuration management
% start main
% start synopsis
This module provides infrastructure for loading, merging, and managing
configuration data from multiple sources. It includes:
- Multi-source configuration loading (files, environment, programmatic)
- Hierarchical configuration with automatic merging
- Context-aware configuration access
- Configuration inheritance with `$base` references
- Lazy loading of module-specific configurations
% end synopsis
% end main
## Usage
```python
from moat.lib.config import CFG
# Initial setup (once, at program startup)
CFG(name="myapp")
# loads `/etc/myapp.yaml` (and others)
# Access configuration data
print(CFG.database.host)
```
## Configuration Sources
The `CfgStore` class combines configuration from multiple sources (in order of precedence):
- Command-line arguments (via `mod` method)
- Preloaded configuration (passed to constructor)
- Environment variables (in `CfgStore.env`)
- Explicitly added config files (via `add` method)
- Default config files (from standard paths)
- Static module configurations (loaded via `with_`)
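As a rough sketch of what precedence-ordered merging means, here is a plain-dictionary illustration (this is not the library's actual implementation; the source names are hypothetical):

```python
# Illustrative only: a later (higher-precedence) source overrides an earlier
# one key by key, with nested mappings merged recursively.
def deep_merge(base: dict, override: dict) -> dict:
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

# Sources listed lowest-precedence first: defaults, then a config file,
# then environment overrides.
sources = [
    {"database": {"host": "localhost", "port": 5432}},  # defaults
    {"database": {"host": "db.internal"}},              # config file
    {"database": {"port": 6432}},                       # environment
]
merged = {}
for src in sources:
    merged = deep_merge(merged, src)
assert merged == {"database": {"host": "db.internal", "port": 6432}}
```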
| text/markdown | null | null | null | Matthias Urlichs <matthias@urlichs.de> | null | MoaT | [
"Development Status :: 4 - Beta",
"Framework :: AnyIO",
"Framework :: Trio",
"Framework :: AsyncIO",
"Programming Language :: Python :: 3",
"Intended Audience :: Developers"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"moat-util~=0.62.6",
"pydantic~=2.12",
"pydantic_variants~=0.3"
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-19T12:59:01.356878 | moat_lib_config-0.1.3.tar.gz | 10,037 | fd/fb/251e2b02405de729e1bc256d9d8db5a04160ae3e375885a645c662aec1b8/moat_lib_config-0.1.3.tar.gz | source | sdist | null | false | 139e46cf40fc3e92e87d869c3cc2c904 | cf8c2a19470f4968ad6a58b345c20c34df51aeb75d5dc4a2d7300bf8024cf51d | fdfb251e2b02405de729e1bc256d9d8db5a04160ae3e375885a645c662aec1b8 | null | [
"LICENSE.txt"
] | 304 |
2.1 | odoo-addon-hr-timesheet-sheet-autodraft | 16.0.1.0.1 | Automatically draft a Timesheet Sheet for every time entry that does not have a relevant Timesheet Sheet existing. | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=============================
HR Timesheet Sheet Auto-draft
=============================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:5e1d52f341a0e9f488ce216c14e9c3bac499792a54b7c939525507b3b164be52
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Ftimesheet-lightgray.png?logo=github
:target: https://github.com/OCA/timesheet/tree/16.0/hr_timesheet_sheet_autodraft
:alt: OCA/timesheet
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/timesheet-16-0/timesheet-16-0-hr_timesheet_sheet_autodraft
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/timesheet&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module adds an option to auto-draft Timesheet Sheets whenever a Timesheet
entry is created or modified, to ensure it is covered by a relevant Timesheet
Sheet.
**Table of contents**
.. contents::
:local:
Configuration
=============
To enable auto-drafting:
#. Go to *Timesheets > Configuration > Settings*
#. Enable **Auto-draft Timesheet Sheets** under **Timesheet Options**
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/timesheet/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/timesheet/issues/new?body=module:%20hr_timesheet_sheet_autodraft%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* CorporateHub
Contributors
~~~~~~~~~~~~
* `CorporateHub <https://corporatehub.eu/>`__
* Alexey Pelykh <alexey.pelykh@corphub.eu>
* Dhara Solanki <dhara.solanki@initos.com>
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/timesheet <https://github.com/OCA/timesheet/tree/16.0/hr_timesheet_sheet_autodraft>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | CorporateHub, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/timesheet | null | >=3.10 | [] | [] | [] | [
"odoo-addon-hr-timesheet-sheet<16.1dev,>=16.0dev",
"odoo<16.1dev,>=16.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T12:58:40.029376 | odoo_addon_hr_timesheet_sheet_autodraft-16.0.1.0.1-py3-none-any.whl | 29,242 | 92/3c/3f4af4712e1b67397894d03944c1285c01b388eb4ac22b4788bf14f5afb8/odoo_addon_hr_timesheet_sheet_autodraft-16.0.1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 0cf79339f56b90be4b4f7ef0cd018fbe | eb3fe02108e7488291d65ff0c116e22387eea54deeccfdaa59045695bf22fd26 | 923c3f4af4712e1b67397894d03944c1285c01b388eb4ac22b4788bf14f5afb8 | null | [] | 93 |
2.4 | terminaix-pro | 1.0.0 | TerminaiX-Pro AI assistant CLI | # TerminaiX
AI assistant CLI by Mohamed.
| text/markdown | Mohamed | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests",
"openai"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T12:58:11.957643 | terminaix_pro-1.0.0.tar.gz | 2,932 | 3d/d7/073e28ef2f887a3423118a7e6c15364741de575d418b6fcb2c845a407fca/terminaix_pro-1.0.0.tar.gz | source | sdist | null | false | 8035b5ac5d3e68c128fadf0b4b9fd214 | 2ad18ee06a055c1737544f72d3b64ad867160781bda9203b284929640b4c3b65 | 3dd7073e28ef2f887a3423118a7e6c15364741de575d418b6fcb2c845a407fca | null | [] | 251 |
2.4 | qat-compiler | 3.3.0 | A low-level quantum compiler and runtime which facilitates executing quantum IRs. | .. image:: https://github.com/oqc-community/qat/blob/main/qat-logo.png?raw=True
:width: 400
:alt: QAT
.. readme_text_start_label
|
**QAT** (Quantum Assembly Toolkit/Toolchain) is a low-level quantum compiler and runtime which facilitates executing quantum IRs
such as `QASM <https://openqasm.com/>`_, `OpenPulse <https://openqasm.com/language/openpulse.html>`_ and
`QIR <https://devblogs.microsoft.com/qsharp/introducing-quantum-intermediate-representation-qir/>`_ against QPU drivers.
It facilitates the execution of largely-optimised code, converted into abstract pulse-level and hardware-level instructions,
which are then transformed and delivered to an appropriate driver.
For the official QAT documentation, please see `QAT <https://oqc-community.github.io/qat>`_.
|
----------------------
Installation
----------------------
QAT can be installed from `PyPI <https://pypi.org/project/qat-compiler/>`_ via:
:code:`pip install qat-compiler`
|
----------------------
Building from Source
----------------------
We use `poetry <https://python-poetry.org/>`_ for dependency management and run on
`Python 3.10+ <https://www.python.org/downloads/>`_.
Once both of these are installed run this in the root folder to install all the dependencies that you need:
:code:`poetry install`
.. note::
If you are contributing to the project we recommend that you also run
:code:`poetry run pre-commit install`
to enable pre-commit checks.
|
----------------------
Notebooks
----------------------
We use `jupytext <https://jupytext.readthedocs.io/en/latest/>`_ to store notebooks both as 'percent'-format .py scripts (in notebooks/scripts) and as .ipynb Jupyter notebooks (in notebooks/ipynb).
For developers, the notebooks should be synced automatically by pre-commit (which you need to install, see above) and are verified automatically in the GitHub pipeline.
They are also synced automatically on save in JupyterLab. Unfortunately, VS Code does not sync them on save; they can be synced manually with
:code:`poetry run jupytext-sync`.
Notebooks within the `notebooks/ipynb` folder will be tested as part of our CI to ensure they're functional and up-to-date. To run them locally, please use :code:`poetry run pytest notebooks/ipynb --nbmake`.
----------------------
Roadmap
----------------------
We're currently working on some significant refactors; we'll be sharing more on this and our future plans soon.
|
----------------------
Contributing
----------------------
To take the first steps towards contributing to QAT, visit our
`contribution <https://github.com/oqc-community/qat/blob/main/CONTRIBUTING.rst>`_ documents, which provide details about our
process.
We also encourage new contributors to familiarise themselves with the
`code of conduct <https://github.com/oqc-community/qat/blob/main/CODE_OF_CONDUCT.rst>`_ and to adhere to these
expectations.
|
----------------------
Where to get help
----------------------
For support, please reach out in the `Discussions <https://github.com/oqc-community/qat/discussions>`_ tab of this repository or file an `issue <https://github.com/oqc-community/qat/issues>`_.
|
----------------------
Licence
----------------------
The code in this repository is licensed under the BSD 3-Clause Licence.
Please see `LICENSE <https://github.com/oqc-community/qat/blob/main/LICENSE>`_ for more information.
|
----------------------
Feedback
----------------------
Please let us know your feedback and any suggestions by reaching out in `Discussions <https://github.com/oqc-community/qat/discussions>`_.
Additionally, to report any concerns or
`code of conduct <https://github.com/oqc-community/qat/blob/main/CODE_OF_CONDUCT.rst>`_ violations please use this
`form <https://docs.google.com/forms/d/e/1FAIpQLSeyEX_txP3JDF3RQrI3R7ilPHV9JcZIyHPwLLlF6Pz7iGnocw/viewform?usp=sf_link>`_.
|
----------------------
Benchmarking
----------------------
The performance of QAT can be measured using our pre-defined benchmarks: :code:`poetry run pytest --benchmark-only`.
To compare against main, check out the main branch and run :code:`poetry run pytest benchmarks/run.py --benchmark-only --benchmark-save="<benchmark-name>"`.
Then check out the branch you are working on and run :code:`poetry run pytest benchmarks/run.py --benchmark-only --benchmark-save="<benchmark-name>" --benchmark-compare --benchmark-compare-fail=min:50%`.
If the test fails, it might indicate a performance regression: use the comparison table that is output to verify.
The performance of pull requests to main will be automatically tested.
See the `pytest-benchmark <https://pytest-benchmark.readthedocs.io/en/latest/usage.html>`_ documentation for more information on how to use it.
|
----------------------
Documentation
----------------------
Our documentation at `QAT <https://oqc-community.github.io/qat>`_ is automatically built and deployed as part of our CI pipeline. If making changes to the documentation, you can build it locally by running :code:`poetry run build-docs`, and navigating to `docs/build/`.
|
----------------------
FAQ
----------------------
Why is this in Python?
A mixture of reasons. The primary one is that v1.0 was an early prototype, and since the majority of the quantum community
knows Python it was the fastest way to build a system that the largest number of people could contribute to. The APIs will
always stick around anyway, but as time goes on the majority of its internals has been, is being, or will be moved to Rust/C++.
Where do I get started?
Our tests are a good place to start as they will show you the various ways to run QAT. Running and then stepping
through how it functions is the best way to learn.
We have what's known as an echo model and engine, which is used to test QAT's functionality when not attached to a QPU.
You'll see these used almost exclusively in the tests, but you can also use this model to see how QAT functions on
larger and more novel architectures.
High-level architectural documents are incoming and will help explain its various concepts at a glance, but
right now aren't complete.
What OS's does QAT run on?
Windows and Linux are its primary development environments. Most of its code is OS-agnostic but we can't
guarantee it won't have bugs on untried ones. Dependencies are usually where you'll have problems, not the core
QAT code itself.
If you need to make changes to get your OS running feel free to PR them to get them included.
I don't see anything related to OQC's hardware here!
Certain parts of how we run our QPU have to stay proprietary, and for our initial release we did not have time to
properly unpick them from the things we can happily release. We want to release as much as possible, and as you're
reading this we are likely busy doing just that.
Do you have your own simulator?
We have a real-time chip simulator that is used to help test potential changes and their ramifications on hardware.
It focuses on accuracy and on testing small-scale changes, so it should not be considered a general simulator. Simulations of
3 to 4 qubits are its maximum without runtime becoming prohibitive.
| text/x-rst | Hamid El Maazouz | helmaazouz@oqc.tech | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"compiler-config<0.3.0,>=0.2.0",
"frozendict<3.0.0,>=2.4.6",
"jsonpickle>=4.0.5",
"lark<2.0.0,>=1.3.1",
"logging-config<2.0.0,>=1.0.4",
"matplotlib<4.0.0,>=3.7.5",
"more-itertools<11.0.0,>=10.7.0",
"networkx>=2.5",
"numpy>=1.26.4",
"numpydantic>=1.6.7",
"openqasm3[parser]<2.0.0,>=1.0.0",
"piny... | [] | [] | [] | [
"Documentation, https://oqc-community.github.io/qat/main/index.html",
"Homepage, https://oqc.tech/",
"Repository, https://github.com/oqc-community/qat"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:57:48.583043 | qat_compiler-3.3.0.tar.gz | 402,222 | 9f/3f/f9a2f774d561a012502f33684f7c50b22ee817078d2b7d9ecf95e6e20628/qat_compiler-3.3.0.tar.gz | source | sdist | null | false | 67e433a1ca9193ff0cfd5974e818329c | 606686304b398712019ad067c4306931be114552cd90ce618dd2d71ace775eb6 | 9f3ff9a2f774d561a012502f33684f7c50b22ee817078d2b7d9ecf95e6e20628 | BSD-3-Clause | [
"LICENSE"
] | 272 |
2.4 | venomqa | 0.6.4 | Stateful API testing agent — exhaustively explores every call sequence to catch bugs pytest and Schemathesis miss. | # VenomQA
**Stateful API testing that finds sequence bugs your unit tests will never catch.**
VenomQA is an autonomous QA agent for REST APIs. You define **Actions** (API calls) and **Invariants** (rules that must always hold). VenomQA exhaustively explores every reachable *sequence* through your application's state graph — automatically, using real database rollbacks to branch between paths.
The insight that drives everything: **bugs in APIs are almost never in individual endpoints. They live in sequences.** `create → refund → refund`. `delete → create`. `update → delete → list`. Your pytest suite passes. Your users find the bug.
[](https://pypi.org/project/venomqa/)
[](https://pypi.org/project/venomqa/)
[](https://opensource.org/licenses/MIT)
[](https://pepy.tech/project/venomqa)
[](https://github.com/namanag97/venomqa/actions/workflows/test.yml)
---
## See It Find a Bug (30 Seconds)
```bash
pip install venomqa
venomqa demo
```
```
Unit Test Results: 3/3 PASS ✓
VenomQA Exploration: 8 states, 20 transitions
╭─────────────────── CRITICAL VIOLATION ───────────────────╮
│ BUG FOUND! │
│ Sequence: create_order → refund → refund │
│ Problem: Refunded $200 on a $100 order! │
╰──────────────────────────────────────────────────────────╯
```
Three unit tests pass. One double-refund bug survives. VenomQA finds it in 8 states because it tests the *sequence*, not the endpoint.
---
## The Problem: Your Tests Pass. Your Users Find the Bug.
Standard testing tools check individual endpoints or fixed, hand-written sequences. Real-world bugs hide in the *orderings* that no one thought to script.
**Common bugs that only appear in sequences:**
- **Double refund** — `refund(order)` twice both return `200`. Refunded amount exceeds order total.
- **Stale state after delete** — `delete(resource)` then `create(resource)` returns ghost data from the first.
- **Cascade delete doesn't clean up** — deleting a parent leaves orphaned children that corrupt future reads.
- **Role change doesn't invalidate session** — `demote(user)` then `admin_action(user)` succeeds when it should fail.
- **Race in create → update** — creating a resource and immediately updating it hits an uninitialized field.
- **Resource leak after failed creation** — partial create followed by retry creates duplicates.
```
PUT /orders/{id}/refund → 200 # passes in isolation
PUT /orders/{id}/refund → 200 # also passes in isolation
GET /orders/{id} → 200 # refunded_amount: 200 > total: 100 ← BUG
```
These bugs do not appear in individual endpoint tests. They do not appear in a single happy-path integration test. They appear when you exhaustively explore *every ordering* — which is exactly what VenomQA does.
---
## VenomQA vs Other API Testing Tools
| Tool | What it tests | Finds sequence bugs? | Uses real DB state? | Autonomous? |
|---|---|---|---|---|
| pytest | Individual functions | No | No (mocked) | No |
| Schemathesis | Individual endpoints (random inputs) | No | No | Partial |
| Postman / Newman | Fixed sequences you wrote by hand | No (only what you script) | No | No |
| Dredd | OpenAPI contract compliance | No | No | No |
| Hypothesis | Property-based, single-function | No | No | No |
| **VenomQA** | **Every reachable sequence** | **Yes** | **Yes** | **Yes** |
**Unlike Schemathesis**, which fuzzes individual endpoints for schema violations, VenomQA composes actions into sequences and checks behavioral invariants across the entire path.
**Unlike Postman/Newman**, you do not write the test sequences. VenomQA generates and explores them automatically using BFS/DFS over the state graph.
**Unlike Hypothesis**, VenomQA is not property-based testing of a single function. It tests multi-step API flows against rules that must hold after *every* step in *every* sequence.
**Where pytest stops, VenomQA begins.** Pytest tests the function. VenomQA tests what happens when real users call your API in real sequences.
---
## When VenomQA Catches What pytest Misses
```python
# Your pytest suite. All three tests pass.
def test_create_order():
resp = client.post("/orders", json={"amount": 100})
assert resp.status_code == 201 # ✓ passes
def test_refund_order():
order = client.post("/orders", json={"amount": 100}).json()
resp = client.post(f"/orders/{order['id']}/refund", json={"amount": 100})
assert resp.status_code == 200 # ✓ passes
def test_double_refund_rejected():
order = client.post("/orders", json={"amount": 100}).json()
client.post(f"/orders/{order['id']}/refund", json={"amount": 100})
resp = client.post(f"/orders/{order['id']}/refund", json={"amount": 100})
assert resp.status_code == 400 # ✓ passes (fresh order each time)
# But in production, the sequence that matters is:
# POST /orders → 201 (order_id = "abc123")
# POST /orders/abc123/refund → 200 (refund #1 — same order, not a fresh one)
# POST /orders/abc123/refund → 200 ← BUG: double refund on the same order!
# VenomQA explores this exact sequence automatically.
# You do not need to think of it. It finds it.
```
---
## How It Works
```
You define: VenomQA does:
┌─────────────┐ ┌─────────────────────────────────────────────┐
│ Actions │ │ │
│ (API calls)│──────────▶ │ S0 ──[create]──▶ S1 ──[update]──▶ S2 │
│ │ │ │ │ │ │
│ Invariants │ │ └──[list]──▶ S3 └──[delete]──▶ S4 │ │
│ (rules that│──────────▶ │ ✓ OK ✓ OK ✗ FAIL! │ │
│ must hold) │ │ │
└─────────────┘ │ After every step: checks ALL invariants. │
│ Between branches: rolls back the database. │
└─────────────────────────────────────────────┘
```
1. VenomQA starts at the initial state (empty database).
2. It tries every available action, checking all invariants after each one.
3. When a sequence branches (multiple next actions are possible from state S1), it **checkpoints the database**, explores one branch, **rolls back** to the checkpoint, then explores the next branch.
4. This continues BFS or DFS until every reachable sequence has been tested or `max_steps` is reached.
5. Any invariant failure is recorded with the **exact reproduction path**.
**Why database access is required:**
To explore `S1 → branch A` and then `S1 → branch B`, VenomQA must reset the database to exactly S1 before taking branch B. Without real rollback, you cannot branch — you can only run linear sequences. This is the fundamental difference from tools that mock the database.
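The branch-and-rollback idea can be illustrated with a toy in-memory "database" that is snapshotted before each branch (this is a sketch of the concept only, not VenomQA's implementation; the action names are made up):

```python
from collections import deque
from copy import deepcopy

def apply(state, action):
    # deepcopy stands in for a real DB checkpoint: each branch starts from
    # the exact state it branched off, untouched by sibling branches.
    state = deepcopy(state)
    if action == "create":
        state["orders"] = state.get("orders", 0) + 1
    elif action == "refund":
        state["refunds"] = state.get("refunds", 0) + 1
    return state

def explore(initial, actions, max_depth=2):
    # Breadth-first search over all action sequences up to max_depth.
    seen = []
    queue = deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        seen.append((tuple(path), state))
        if len(path) < max_depth:
            for a in actions:
                queue.append((apply(state, a), path + [a]))
    return seen

paths = explore({}, ["create", "refund"])
# 1 empty path + 2 depth-1 paths + 4 depth-2 paths = 7 explored sequences
assert len(paths) == 7
```

A real database cannot be `deepcopy`-ed, which is why VenomQA needs checkpoint/rollback support in the backend itself.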
---
## Quick Start
### Install
```bash
pip install venomqa
```
### Zero-Config Run (OpenAPI + Docker)
If you have `docker-compose.yml` and `openapi.yaml` in your project:
```bash
venomqa # reads your stack, spins up isolated containers, explores
venomqa --api-key YOUR_KEY # if API requires X-API-Key
venomqa --auth-token YOUR_TOKEN # if API requires Bearer token
venomqa --basic-auth user:pass # if API requires Basic auth
```
VenomQA will:
1. Parse your `openapi.yaml` → generate actions for all endpoints
2. Spin up isolated test containers (your production database is never touched)
3. Explore sequences, check for 5xx errors and schema violations
4. Report violations with exact reproduction paths
### 5-Minute Code Example
```python
from venomqa import Action, Agent, BFS, Invariant, Severity, World
from venomqa.adapters.http import HttpClient
api = HttpClient(base_url="http://localhost:8000")
# Actions: what your API can do
def create_order(api, context):
resp = api.post("/orders", json={"amount": 100})
resp.expect_status(201)
context.set("order_id", resp.expect_json_field("id")["id"])
return resp
def refund_order(api, context):
order_id = context.get("order_id")
if order_id is None:
return api.post("/orders/none/refund") # will fail cleanly — never skip
return api.post(f"/orders/{order_id}/refund", json={"amount": 100})
def get_order(api, context):
order_id = context.get("order_id")
if order_id is None:
return api.get("/orders/none")
return api.get(f"/orders/{order_id}")
# Invariants: rules that must hold after every action in every sequence
def no_over_refund(world):
resp = world.api.get("/orders")
if resp.status_code != 200:
return True # don't flag list failures here — separate invariant
return all(
o.get("refunded_amount", 0) <= o.get("total", 0)
for o in resp.json()
)
def no_server_errors(world):
return world.context.get("last_status", 200) < 500
# Wire it together
world = World(api=api, state_from_context=["order_id"])
agent = Agent(
world=world,
actions=[
Action(name="create_order", execute=create_order),
Action(name="refund_order", execute=refund_order),
Action(name="get_order", execute=get_order),
],
invariants=[
Invariant(name="no_over_refund", check=no_over_refund, severity=Severity.CRITICAL),
Invariant(name="no_server_errors", check=no_server_errors, severity=Severity.HIGH),
],
strategy=BFS(), # BFS() takes no arguments
max_steps=100,
)
result = agent.explore() # NOT .run() — that method does not exist
print(f"States visited: {result.states_visited}")
print(f"Violations found: {len(result.violations)}")
for v in result.violations:
print(f" [{v.severity}] {v.invariant_name}: {v.message}")
```
---
## Real Bugs VenomQA Has Caught
These are patterns that appear repeatedly in real APIs. VenomQA finds all of them by exploring sequences automatically.
| Bug Pattern | Sequence That Triggers It |
|---|---|
| Double refund / double cancel | `create → refund → refund` |
| Stale data after delete | `create → delete → create → list` |
| Orphaned children after parent delete | `create_parent → create_child → delete_parent → list_children` |
| Auth bypass after role change | `login_as_admin → demote → call_admin_endpoint` |
| Race in create → update | `create → update(uninitialized_field)` |
| Resource leak on failed creation | `create(bad_data) → create(good_data) → list` |
| Quota not enforced across resources | `create_a → create_b → create_c → check_quota` |
| Idempotency key reuse | `create(key=X) → create(key=X) → list` |
---
## Configuration Reference
### World
```python
from venomqa import World
from venomqa.adapters.http import HttpClient
from venomqa.adapters.postgres import PostgresAdapter
# Option A: with a real database (enables true branching)
world = World(
api=HttpClient("http://localhost:8000"),
systems={"db": PostgresAdapter("postgresql://user:pass@localhost/mydb")},
)
# Option B: context-based (no DB access required, limited branching)
world = World(
api=HttpClient("http://localhost:8000"),
state_from_context=["order_id", "user_id", "order_count"],
)
# Option C: multiple systems
world = World(
api=HttpClient("http://localhost:8000"),
systems={
"db": PostgresAdapter("postgresql://localhost/mydb"),
"cache": RedisAdapter("redis://localhost:6379"),
},
)
```
`World` requires either `systems` or `state_from_context`. A bare `World(api=api)` raises `ValueError`.
### Action
```python
from venomqa import Action
Action(
name="create_order", # unique name, used in violation paths
execute=create_order, # callable (api, context) → response
expected_status=[201], # optional: auto-checks status code
preconditions=["create_order"], # optional: actions that must have run first
)
```
Action functions receive `(api, context)` — in that order. They must return the response object. Returning `None` raises `TypeError`. Use preconditions to skip, not `return None`.
### Invariant
```python
from venomqa import Invariant, Severity
Invariant(
name="no_over_refund",
check=lambda world: ..., # (world) → bool — True means OK
severity=Severity.CRITICAL, # CRITICAL, HIGH, MEDIUM, LOW
message="Refunded amount cannot exceed order total",
)
```
Severity is the third positional argument (after `name` and `check`).
### Agent
```python
from venomqa import Agent, BFS
Agent(
world=world,
actions=[...], # list of Action — NOT a World parameter
invariants=[...], # list of Invariant — NOT a World parameter
strategy=BFS(), # BFS() or DFS() — BFS() takes no arguments
max_steps=200, # stop after this many transitions
)
result = agent.explore() # returns ExplorationResult
```
**ExplorationResult fields:**
- `result.states_visited` — number of unique states explored
- `result.transitions_taken` — number of action executions
- `result.violations` — list of Violation objects
- `result.duration_ms` — total runtime in milliseconds
- `result.truncated_by_max_steps` — True if stopped at max_steps
### Strategies
```python
BFS() # breadth-first — finds shortest violation path (recommended)
DFS() # depth-first — required when using PostgreSQL savepoints
```
### Response Helpers
```python
resp.expect_status(201) # raises if not 201
resp.expect_status(200, 201, 204) # raises if not any of these
resp.expect_success() # raises if not 2xx/3xx
data = resp.expect_json() # raises if not valid JSON
data = resp.expect_json_field("id") # raises if "id" missing, returns dict
items = resp.expect_json_list() # raises if not a JSON array
resp.status_code # returns 0 on network error (safe)
resp.headers # returns {} on network error (safe)
```
---
## Rollback Backends
VenomQA uses these mechanisms to restore database state between branches:
| System | Mechanism |
|---|---|
| PostgreSQL | `SAVEPOINT` / `ROLLBACK TO SAVEPOINT` — entire run is one uncommitted transaction |
| SQLite | Copy file / restore file |
| Redis | `DUMP` all keys → `FLUSHALL` → `RESTORE` |
| MockQueue, MockMail, MockStorage, MockTime | In-memory copy + restore |
| Custom HTTP services | Subclass `MockHTTPServer` (3-method interface) |
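The PostgreSQL mechanism (one long uncommitted transaction, with savepoints as checkpoints) can be sketched with the stdlib `sqlite3`, which accepts the same `SAVEPOINT` syntax. This illustrates the rollback semantics only; it is not VenomQA code:

```python
import sqlite3

# isolation_level=None disables implicit transaction handling, so the
# SAVEPOINT statements run exactly as written.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER)")
con.execute("INSERT INTO orders (amount) VALUES (100)")   # state S1

con.execute("SAVEPOINT branch_point")                     # checkpoint S1
con.execute("INSERT INTO orders (amount) VALUES (200)")   # explore branch A
assert con.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 2

con.execute("ROLLBACK TO SAVEPOINT branch_point")         # back to S1
assert con.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1

con.execute("INSERT INTO orders (amount) VALUES (300)")   # explore branch B
assert con.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 2
con.execute("RELEASE SAVEPOINT branch_point")
```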
---
## From an OpenAPI Spec
```bash
# Generate actions from your spec and run immediately
venomqa scaffold openapi https://api.example.com/openapi.json \
--base-url https://api.example.com \
--output actions.py
python3 actions.py
```
Or in Python:
```python
from venomqa.v1.generators.openapi_actions import generate_actions
actions = generate_actions("openapi.yaml", base_url="http://localhost:8000")
# Returns list[Action] for every endpoint in the spec
```
---
## Reporters
```python
from venomqa.reporters.console import ConsoleReporter
from venomqa.reporters.html_trace import HTMLTraceReporter
from venomqa.reporters.json_reporter import JSONReporter
from venomqa.reporters.markdown import MarkdownReporter
# Console output (default — rich colored terminal)
ConsoleReporter().report(result)
# D3 force-graph of the full state space
html = HTMLTraceReporter().report(result)
with open("trace.html", "w") as f:
    f.write(html)
# Machine-readable output
json_str = JSONReporter(indent=2).report(result)
md_str = MarkdownReporter().report(result)
```
All reporters return a string. `ConsoleReporter` also writes to stdout.
---
## Authentication
```python
from venomqa.v1.auth import BearerTokenAuth, ApiKeyAuth, MultiRoleAuth
# Bearer token
auth = BearerTokenAuth(token_fn=lambda ctx: "my-token")
# API key header
auth = ApiKeyAuth(key_fn=lambda ctx: "my-key", header="X-API-Key")
# Multiple roles (useful for testing permission boundaries)
auth = MultiRoleAuth(
roles={"admin": admin_auth, "user": user_auth},
default="user",
)
# Use in HttpClient
api = HttpClient("http://localhost:8000", auth=auth)
```
Token functions receive the current `Context` and can return dynamic tokens (e.g., from a login action stored in context). Return `None` to omit the header for that request.
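As a tiny illustration (using a plain dict in place of the real `Context` object), here is a token function that reads a token stored by a prior login action:

```python
# A plain dict stands in for VenomQA's Context object here.
def token_fn(ctx):
    # Return the token a login action stored earlier, or None
    # to omit the Authorization header for this request.
    return ctx.get("auth_token")

print(token_fn({"auth_token": "abc123"}))  # abc123
print(token_fn({}))                        # None, header omitted
```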
---
## CLI Reference
```bash
venomqa # auto-run if docker-compose + openapi detected
venomqa demo # 30-second demo with a planted double-refund bug
venomqa init # create a new VenomQA project
venomqa init --with-sample # create project with working example
venomqa doctor # system diagnostics (Docker, dependencies, auth)
# Authentication flags
venomqa --api-key KEY # sets X-API-Key header
venomqa --auth-token TOKEN # sets Authorization: Bearer TOKEN
venomqa --basic-auth user:pass # sets Authorization: Basic ...
venomqa --skip-preflight # skip Docker and auth checks
# Environment variables (alternatives to flags)
export VENOMQA_API_KEY=your-key
export VENOMQA_AUTH_TOKEN=your-token
venomqa
```
---
## Working Example: Two Real Bugs
`examples/github_stripe_qa/` contains a complete working example with two deliberately planted bugs:
```bash
cd examples/github_stripe_qa
python3 main.py
# Bug 1: GitHub open-issues endpoint leaks closed issues [CRITICAL]
# Sequence: list_open_issues → filter_closed → compare_counts
#
# Bug 2: Stripe allows refund > original charge amount [CRITICAL]
# Sequence: create_charge → refund → refund
```
Both bugs are found automatically. No bug sequence was hand-written.
---
## FAQ
**Q: How is this different from Schemathesis?**
Schemathesis tests individual endpoints by fuzzing inputs — it sends random or schema-derived values and checks that your API doesn't crash or violate the OpenAPI contract. It tests *one call at a time*. VenomQA tests *sequences* of calls and checks behavioral rules (invariants) that span multiple steps. The tools are complementary: use Schemathesis for input validation, use VenomQA for stateful sequence bugs.
**Q: How is this different from property-based testing (Hypothesis)?**
Hypothesis generates random inputs to test a single function. VenomQA generates sequences of API calls to test stateful behavior across multiple endpoints. They operate at different levels and solve different problems.
**Q: Do I need a real database?**
For full branching exploration you need database access — PostgreSQL, SQLite, or another supported backend. Without it, VenomQA can still explore using `state_from_context`, which tracks state changes in the context dictionary. This is useful for stateless APIs or quick exploration, but cannot catch bugs that depend on actual database state.
**Q: Will this break my production database?**
No, provided you point VenomQA at a test or staging database, not production. VenomQA connects to your API's database and wraps the entire exploration in a single uncommitted transaction (PostgreSQL) or uses file copies (SQLite), so nothing is ever committed.
**Q: How does it know what sequences to try?**
VenomQA performs BFS or DFS over the state graph. From any state, it tries every available action. If multiple actions are possible, it checkpoints the database and explores each branch, rolling back between them. The state is determined either by the database contents (with a systems adapter) or by context keys you specify.
**Q: What if my API requires authentication?**
Pass `auth=` to `HttpClient` using `BearerTokenAuth`, `ApiKeyAuth`, or `MultiRoleAuth`. For token-based auth where the token comes from a login action, your token function can read the token from the exploration context: `token_fn=lambda ctx: ctx.get("auth_token")`. On the CLI, use `--auth-token` or `--api-key`.
**Q: Can I use this with any API framework?**
Yes. VenomQA talks to your API over HTTP — it doesn't care whether the API is Flask, FastAPI, Django, Express, Rails, Spring, or anything else. As long as it speaks HTTP and writes to a supported database, VenomQA can test it.
**Q: Can I run this in CI?**
Yes. `agent.explore()` returns an `ExplorationResult`. Exit non-zero if `result.violations` is non-empty. See `examples/github_stripe_qa/` for a working CI-ready example.
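A minimal sketch of that CI gate, assuming `agent.explore()` has already been run elsewhere; only the exit-code logic is shown, with a plain list standing in for `result.violations`:

```python
import sys

def ci_exit_code(violations):
    # A non-zero exit fails the CI job when any invariant was violated.
    return 1 if violations else 0

# In a real pipeline: result = agent.explore()
# sys.exit(ci_exit_code(result.violations))
print(ci_exit_code([]))                  # 0, build passes
print(ci_exit_code(["double-refund"]))   # 1, build fails
```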
**Q: What's the difference between `BFS()` and `DFS()`?**
BFS (breadth-first) finds the *shortest* violation path — the minimum number of steps to reproduce a bug. DFS (depth-first) explores deeper paths first. When using PostgreSQL savepoints, use `DFS()` (savepoints require linear execution). For in-memory or SQLite backends, `BFS()` is recommended.
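The shortest-path guarantee of BFS is easy to see on a toy state graph (plain Python, not the VenomQA API): the first violating state BFS dequeues is reached by a minimum-length action sequence.

```python
from collections import deque

# Toy state graph: state -> {action: next_state}
graph = {
    "start":    {"create_charge": "charged"},
    "charged":  {"refund": "refunded"},
    "refunded": {"refund": "over_refunded"},  # the bug: a second refund is allowed
}

def bfs_shortest_violation(start, is_violation):
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if is_violation(state):
            return path  # first hit is the shortest action sequence
        for action, nxt in graph.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

print(bfs_shortest_violation("start", lambda s: s == "over_refunded"))
# ['create_charge', 'refund', 'refund']
```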
---
## Development
```bash
git clone https://github.com/namanag97/venomqa
cd venomqa
pip install -e ".[dev]"
make test # unit tests (421 tests)
make lint # ruff
make typecheck # mypy --strict
make ci # lint + typecheck + coverage
# Run specific tests
pytest tests/v1/ --ignore=tests/v1/test_postgres.py
pytest tests/v1/ -k "test_name"
```
Test markers:
- `@pytest.mark.slow` — skipped by default
- `@pytest.mark.integration` — requires live services, skipped by default
---
## Docs
Full documentation: [namanag97.github.io/venomqa](https://namanag97.github.io/venomqa)
---
MIT License — built by [Naman Agarwal](https://github.com/namanag97)
| text/markdown | null | Naman Agarwal <naman@venomqa.dev> | null | Naman Agarwal <naman@venomqa.dev> | null | ai-testing, api-automation, api-qa, api-testing, autonomous-testing, bfs-exploration, branch-exploration, bug-hunting, checkpoint, database-rollback, dredd-alternative, e2e-testing, end-to-end, exhaustive-testing, graphql-testing, http-testing, integration-testing, invariant-testing, journey-testing, model-based-testing, openapi, openapi-testing, postgresql, postman-alternative, property-based-testing, qa, qa-agent, qa-framework, redis, regression-testing, rest-api, rollback-testing, schemathesis-alternative, sequence-testing, sqlite, state-graph, state-machine-testing, stateful-api-testing, stateful-testing, swagger, test-automation, testing | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Framework :: Pytest",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language ::... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"faker>=18.0.0",
"httpx>=0.25.0",
"psycopg[binary]>=3.1.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"pyyaml>=6.0",
"rich>=13.0.0",
"watchdog>=3.0.0",
"boto3>=1.26.0; extra == \"all\"",
"docker>=6.0.0; extra == \"all\"",
"gql>=3.4.0; extra == \"all\"",
"graphql-core>=3.... | [] | [] | [] | [
"Homepage, https://namanag97.github.io/venomqa",
"Documentation, https://namanag97.github.io/venomqa",
"Repository, https://github.com/namanag97/venomqa",
"Source Code, https://github.com/namanag97/venomqa",
"Bug Tracker, https://github.com/namanag97/venomqa/issues",
"Changelog, https://github.com/namanag... | twine/6.2.0 CPython/3.13.5 | 2026-02-19T12:57:42.243587 | venomqa-0.6.4.tar.gz | 1,939,998 | 3f/f1/c34115e5fff46f5261a9dd07236599ba441ac0909086b5d636bfc17e5d83/venomqa-0.6.4.tar.gz | source | sdist | null | false | b84764d8240fcbadd1558f92cb553acb | b6813daf7801b31971a3c5041e2787387b451a90c9dc246da53185d725144130 | 3ff1c34115e5fff46f5261a9dd07236599ba441ac0909086b5d636bfc17e5d83 | MIT | [
"LICENSE"
] | 213 |
2.4 | clerrit | 0.5.0 | Supercharge your Gerrit workflow with LLM-powered code reviews and fixes | # clerrit
[](https://pypi.org/project/clerrit/)
__*clerrit*__ is a CLI tool which bridges
[Gerrit Code Review](https://www.gerritcodereview.com/) with
[Claude Code](https://claude.com/product/claude-code).
The current features are, for a given change and patchset:
* **Review a Gerrit change** using Claude
Code, identifying bugs, security issues, edge cases, style problems,
and missing error handling:
```
$ clerrit review 18263
```
This command only shows a code review report in Claude Code, helping
you write your actual review comments on Gerrit. It doesn't send
anything to Gerrit.
* **Address Gerrit code review comments** by having Claude Code fix the
issues based on reviewer feedback:
```
$ clerrit fix 2439
```
Claude Code fixes the code locally without running `git add`,
`git commit`, or similar commands. It doesn't send anything to Gerrit.
Your typical workflow after having reviewed the changes would be
something like:
```
$ git add -u
$ git commit --amend --no-edit
$ git review
```
You can also use the `--no-fetch` option to avoid creating a new
branch for the fix: just work in your current tree as is.
clerrit is meant to assist reviewers and developers,
not to replace them.
## Try it now!
* Make clerrit review the latest patchset of change 27362 using the
`review` remote of the current Git repository:
```
$ uvx clerrit review 27362 --md
```
You'll end up in Claude Code performing a code review, providing
raw Markdown comments for specific files and line numbers.
* Make clerrit address the code review of the latest patchset of
change 1189 using the `review` remote of the current
Git repository:
```
$ uvx clerrit fix 1189
```
You'll end up in Claude Code fixing the code to address the
review comments.
See `clerrit --help` to learn more.
## Examples
* Review latest patchset of change 15753:
```
$ clerrit review 15753
```
* Review patchset 3 of change 15753:
```
$ clerrit review 15753 3
```
* Review with raw Markdown output for Gerrit comments:
```
$ clerrit review 15753 --md
```
* Review using a custom remote instead of the default `review`:
```
$ clerrit review 15753 --remote=gerrit
```
* Review with extra context for Claude Code:
```
$ clerrit review 15753 --extra-prompt='Focus on memory safety.'
```
* Fix the latest patchset, addressing the review comments of the
last patchset:
```
$ clerrit fix 8472
```
* Fix a specific patchset:
```
$ clerrit fix 8472 2
```
* Fix the latest patchset, addressing the review comments of _all_
the patchsets, and don't create a new branch:
```
$ clerrit fix 8472 all --no-fetch
```
* Fix with extra context for Claude Code:
```
$ clerrit fix 8472 --extra-prompt='Do NOT take into account the comments of Jérémie.'
```
* Fix using a specific Claude Code model:
```
$ clerrit fix 8472 --model=sonnet
```
* Fix in YOLO mode:
```
$ clerrit fix 8472 all --permission-mode=acceptEdits
```
## What clerrit does
* `review` command:
1. Fetches the patchset from the Gerrit remote.
2. Creates a temporary local branch with the change.
3. Launches Claude Code with a prompt to analyze the latest commit for
bugs, security issues, edge cases, style problems, and missing error
handling.
If a `CONTRIBUTING.adoc`, `CONTRIBUTING.md`, or `CONTRIBUTING.rst`
file exists and there's no `CLAUDE.md` file, mentions it
as context.
* `fix` command:
1. Without the `--no-fetch` option:
1. Fetches the patchset from the Gerrit remote.
2. Creates a temporary local branch with the change.
2. Queries the Gerrit server via SSH to retrieve all review comments
for the patchset(s).
3. Launches Claude Code with the comments and instructions to fix the
reported issues (without staging, committing, or creating
new files, unless requested).
| text/markdown | null | Philippe Proulx <eeppeliteloop@gmail.com> | null | null | null | claude, cli, code-review, gerrit, llm | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Versi... | [] | null | null | >=3.11 | [] | [] | [] | [
"rich~=14.2",
"typer~=0.20"
] | [] | [] | [] | [
"Homepage, https://github.com/eepp/clerrit",
"Repository, https://github.com/eepp/clerrit",
"Issues, https://github.com/eepp/clerrit/issues"
] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T12:57:15.657654 | clerrit-0.5.0-py3-none-any.whl | 10,175 | 08/7f/c8ca7037cac0895e140474e16f5ac2461cfa5e702a83b51db1bd0c232f1c/clerrit-0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 8aecd8081b09c65ce46b9a9a77ab57ed | 8a373162019b2b0f25da82cde03055eebc8d3d0d70ef72006369afd6ee47d5eb | 087fc8ca7037cac0895e140474e16f5ac2461cfa5e702a83b51db1bd0c232f1c | MIT | [] | 225 |
2.4 | with-line-profiler | 0.1.0 | Context-manager based line-by-line profiler for Python functions | # lineprofiler
A statistical profiler that finds the lines taking the longest to execute. You can point it at a project folder, and the profiler traces only lines inside that folder.
The profiler can be bound using `with`.
## Features
- **Zero configuration** – just wrap code in a `with` block
- **Line-level timing** – see exactly which lines are slow
- **Auto-filtering** – only profiles code in your project (auto-detects git repo root)
- **Flexible output** – sort by time, hits, or line number; filter by threshold
## Installation
`pip install with-line-profiler`
## Workflow
```python
from lineprofiler import LineProfiler
profiler = LineProfiler(project_folder="path/to/your/project")
profiler.clear()
with profiler:
your_function()
profiler.print_global_top_stats(min_time_us=0.01, top_n=40)
```
| Method | Description |
|--------|-------------|
| `print_stats(min_time_us, top_n_lines, sort_by)` | Print per-function statistics |
| `print_global_top_stats(top_n, min_time_us, sort_by)` | Print top N lines across all functions |
| `get_stats()` | Get raw `FunctionStats` dictionary |
| `clear()` / `reset()` | Clear all collected data |
## Licence
MIT
| text/markdown | null | mathematiger <mcop.dkoehler@gmail.com> | null | null | null | debugging, performance, profiler, profiling, timing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions>=4.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/mathematiger/lineprofiler",
"Repository, https://github.com/mathematiger/lineprofiler",
"Issues, https://github.com/mathematiger/lineprofiler/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-19T12:56:00.429428 | with_line_profiler-0.1.0.tar.gz | 6,789 | ac/1a/def684d3d56b5c2f5030c0de45e964e76df5ce6ff66ad980aa5d1c001b85/with_line_profiler-0.1.0.tar.gz | source | sdist | null | false | b30ca83e3f271ca9fbcc2dbff73affd6 | 7382063a13d6a71574759e974eb9d7fc458ab83cd2c9950e5a06e9679f126852 | ac1adef684d3d56b5c2f5030c0de45e964e76df5ce6ff66ad980aa5d1c001b85 | MIT | [
"LICENSE"
] | 240 |
2.4 | pytrilogy | 0.3.179 | Declarative, typed query language that compiles to SQL. | # Trilogy
**SQL with superpowers for analytics**
[](https://trilogydata.dev/)
[](https://discord.gg/Z4QSSuqGEd)
[](https://badge.fury.io/py/pytrilogy)
The Trilogy language is an experiment in better SQL for analytics - a streamlined version that replaces tables/joins with a lightweight semantic binding layer and provides easy reuse and composability. It compiles to SQL - making it easy to debug or integrate into existing workflows - and can be run against any supported SQL backend.
It shines when used with AI agents, but is built for people first.
[pytrilogy](https://github.com/trilogy-data/pytrilogy) is the reference implementation, written in Python.
## What Trilogy Gives You
- **Speed** - write less, faster. Concise but powerful syntax
- **Efficiency** - easily reuse and compose functions and models, modeled after python
- **Easy refactoring** - change and update tables without breaking queries, with easy testing and static analysis
- **Testability** - built-in testing patterns with query fixtures
- **Straightforward** - for humans and LLMs alike
Trilogy is especially powerful for data consumption, providing a rich metadata layer that makes creating, interpreting, and visualizing queries easy and expressive.
We recommend starting with the studio to explore Trilogy. For integration, `pytrilogy` can be run locally to parse and execute trilogy model [.preql] files using the `trilogy` CLI tool, or can be run in python by importing the `trilogy` package.
## Quick Start
> [!TIP]
> **Try it now:** [Open-source studio](https://trilogydata.dev/trilogy-studio-core/) | [Interactive demo](https://trilogydata.dev/demo/) | [Documentation](https://trilogydata.dev/)
**Install**
```bash
pip install pytrilogy
```
**Save in hello.preql**
```sql
const prime <- unnest([2, 3, 5, 7, 11, 13, 17, 19, 23, 29]);
def cube_plus_one(x) -> (x * x * x + 1);
WHERE
prime_cubed_plus_one % 7 = 0
SELECT
prime,
@cube_plus_one(prime) as prime_cubed_plus_one
ORDER BY
prime asc
LIMIT 10;
```
**Run it in DuckDB**
```bash
trilogy run hello.preql duckdb
```
## Trilogy is Easy to Write
For humans *and* AI. Enjoy flexible, one-shot query generation without any DB access or security risks.
(full code in the python API section.)
```python
query = text_to_query(
executor.environment,
"number of flights by month in 2005",
Provider.OPENAI,
"gpt-5-chat-latest",
api_key,
)
# get a ready to run query
print(query)
# typical output
'''where local.dep_time.year = 2005
select
local.dep_time.month,
count(local.id2) as number_of_flights
order by
local.dep_time.month asc;'''
```
## Goals
Versus SQL, Trilogy aims to:
**Keep:**
- Correctness
- Accessibility
**Improve:**
- Simplicity
- Refactoring/maintainability
- Reusability/composability
- Expressiveness
**Maintain:**
- Acceptable performance
## Backend Support
| Backend | Status | Notes |
|---------|--------|-------|
| **BigQuery** | Core | Full support |
| **DuckDB** | Core | Full support |
| **Snowflake** | Core | Full support |
| **SQL Server** | Experimental | Limited testing |
| **Presto** | Experimental | Limited testing |
## Examples
### Hello World
Save the following code in a file named `hello.preql`
```python
# semantic model is abstract from data
type word string; # types can be used to provide expressive metadata tags that propagate through dataflow
key sentence_id int;
property sentence_id.word_one string::word; # comments after a definition
property sentence_id.word_two string::word; # are syntactic sugar for adding
property sentence_id.word_three string::word; # a description to it
# comments in other places are just comments
# define our datasource to bind the model to data
# for most work, you can import something already defined
# testing using query fixtures is a common pattern
datasource word_one(
sentence: sentence_id,
word:word_one
)
grain(sentence_id)
query '''
select 1 as sentence, 'Hello' as word
union all
select 2, 'Bonjour'
''';
datasource word_two(
sentence: sentence_id,
word:word_two
)
grain(sentence_id)
query '''
select 1 as sentence, 'World' as word
union all
select 2 as sentence, 'World'
''';
datasource word_three(
sentence: sentence_id,
word:word_three
)
grain(sentence_id)
query '''
select 1 as sentence, '!' as word
union all
select 2 as sentence, '!'
''';
def concat_with_space(x,y) -> x || ' ' || y;
# an actual select statement
# joins are automatically resolved between the 3 sources
with sentences as
select sentence_id, @concat_with_space(word_one, word_two) || word_three as text;
WHERE
sentences.sentence_id in (1,2)
SELECT
sentences.text
;
```
**Run it:**
```bash
trilogy run hello.preql duckdb
```

### Python SDK Usage
Trilogy can be run directly in python through the core SDK. Trilogy code can be defined and parsed inline or parsed out of files.
A BigQuery example, similar to the [BigQuery quickstart](https://cloud.google.com/bigquery/docs/quickstarts/query-public-dataset-console):
```python
from trilogy import Dialects, Environment
environment = Environment()
environment.parse('''
key name string;
key gender string;
key state string;
key year int;
key yearly_name_count int;
datasource usa_names(
name:name,
number:yearly_name_count,
year:year,
gender:gender,
state:state
)
address `bigquery-public-data.usa_names.usa_1910_2013`;
''')
executor = Dialects.BIGQUERY.default_executor(environment=environment)
results = executor.execute_text('''
WHERE
name = 'Elvis'
SELECT
name,
sum(yearly_name_count) -> name_count
ORDER BY
name_count desc
LIMIT 10;
''')
# multiple queries can result from one text batch
for row in results:
# get results for first query
answers = row.fetchall()
for x in answers:
print(x)
```
### LLM Usage
Connect to your favorite provider and generate queries with confidence and high accuracy.
```python
from trilogy import Environment, Dialects
from trilogy.ai import Provider, text_to_query
import os
from pathlib import Path
executor = Dialects.DUCK_DB.default_executor(
environment=Environment(working_path=Path(__file__).parent)
)
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
raise ValueError("OPENAI_API_KEY required for gpt generation")
# load a model
executor.parse_file("flight.preql")
# create tables in the DB if needed
executor.execute_file("setup.sql")
# generate a query
query = text_to_query(
executor.environment,
"number of flights by month in 2005",
Provider.OPENAI,
"gpt-5-chat-latest",
api_key,
)
# print the generated trilogy query
print(query)
# run it
results = executor.execute_text(query)[-1].fetchall()
assert len(results) == 12
for row in results:
# all monthly flights are between 5000 and 7000
assert row[1] > 5000 and row[1] < 7000, row
```
### CLI Usage
Trilogy can be run through a CLI tool, also named 'trilogy'.
**Basic syntax:**
```bash
trilogy run <cmd or path to trilogy file> <dialect>
```
**With backend options:**
```bash
trilogy run "key x int; datasource test_source(i:x) grain(x) address test; select x;" duckdb --path <path/to/database>
```
**Format code:**
```bash
trilogy fmt <path to trilogy file>
```
#### Backend Configuration
**BigQuery:**
- Uses application default authentication (TODO: support arbitrary credential paths)
- In Python, you can pass a custom client
**DuckDB:**
- `--path` - Optional database file path
**Postgres:**
- `--host` - Database host
- `--port` - Database port
- `--username` - Username
- `--password` - Password
- `--database` - Database name
**Snowflake:**
- `--account` - Snowflake account
- `--username` - Username
- `--password` - Password
## Config Files
The CLI can pick up default configuration from a config file in the toml format.
Detection is recursive from the parent directories of the current working directory, including the current working directory itself.
This can be used to set
- default engine and arguments
- parallelism for execute for the CLI
- any startup commands to run whenever creating an executor.
```toml
# Trilogy Configuration File
# Learn more at: https://github.com/trilogy-data/pytrilogy
[engine]
# Default dialect for execution
dialect = "duck_db"
# Parallelism level for directory execution
# parallelism = 2
# Startup scripts to run before execution
[setup]
# startup_trilogy = []
sql = ['setup/setup_dev.sql']
```
## More Resources
- [Interactive demo](https://trilogydata.dev/demo/)
- [Public model repository](https://github.com/trilogydata/trilogy-public-models) - Great place for modeling examples
- [Full documentation](https://trilogydata.dev/)
## Python API Integration
### Root Imports
Are stable and should be sufficient for executing code from Trilogy as text.
```python
from trilogy import Executor, Dialect
```
### Authoring Imports
Are also stable, and should be used for cases that programmatically generate Trilogy statements without text inputs
or need to process or transform parsed code in more complicated ways.
```python
from trilogy.authoring import Concept, Function, ...
```
### Other Imports
Are likely to be unstable. Open an issue if you need to take dependencies on other modules outside those two paths.
## MCP/Server
Trilogy is straightforward to run as a server/MCP server; the former to generate SQL on demand and integrate into other tools, and MCP
for full interactive query loops.
This makes it easy to integrate Trilogy into existing tools or workflows.
You can see examples of both use cases in the trilogy-studio codebase [here](https://github.com/trilogy-data/trilogy-studio-core)
and install and run an MCP server directly with that codebase.
If you're interested in a more fleshed out standalone server or MCP server, please open an issue and we'll prioritize it!
## Trilogy Syntax Reference
Not exhaustive - see [documentation](https://trilogydata.dev/) for more details.
### Import
```sql
import [path] as [alias];
```
### Concepts
**Types:**
`string | int | float | bool | date | datetime | time | numeric(scale, precision) | timestamp | interval | array<[type]> | map<[type], [type]> | struct<name:[type], name:[type]>`
**Key:**
```sql
key [name] [type];
```
**Property:**
```sql
property [key].[name] [type];
property x.y int;
# or multi-key
property <[key],[key]>.[name] [type];
property <x,y>.z int;
```
**Transformation:**
```sql
auto [name] <- [expression];
auto x <- y + 1;
```
### Datasource
```sql
datasource <name>(
<column_and_concept_with_same_name>,
# or a mapping from column to concept
<column>:<concept>,
<column>:<concept>,
)
grain(<concept>, <concept>)
address <table>;
datasource orders(
order_id,
order_date,
total_rev: point_of_sale_rev,
customer_id: customer.id
)
grain(order_id)
address orders;
```
### Queries
**Basic SELECT:**
```sql
WHERE
<concept> = <value>
SELECT
<concept>,
<concept>+1 -> <alias>,
...
HAVING
<alias> = <value2>
ORDER BY
<concept> asc|desc
;
```
**CTEs/Rowsets:**
```sql
with <alias> as
WHERE
<concept> = <value>
select
<concept>,
<concept>+1 -> <alias>,
...
select <alias>.<concept>;
```
### Data Operations
**Persist to table:**
```sql
persist <alias> as <table_name> from
<select>;
```
**Export to file:**
```sql
COPY INTO <TARGET_TYPE> '<target_path>' FROM SELECT
<concept>, ...
ORDER BY
<concept>, ...
;
```
**Show generated SQL:**
```sql
show <select>;
```
**Validate Model**
```sql
validate all
validate concepts abc,def...
validate datasources abc,def...
```
## Contributing
Clone the repository and install the dependencies from requirements.txt and requirements-test.txt.
Please open an issue first to discuss what you would like to change, and then create a PR against that issue.
## Similar Projects
Trilogy combines two aspects: a semantic layer and a query language. Examples of both are linked below:
**Semantic layers** - tools for defining a metadata layer above SQL/warehouse to enable higher level abstractions:
- [MetricFlow](https://github.com/dbt-labs/metricflow)
- [Cube](https://github.com/cube-js/cube)
- [Zillion](https://github.com/totalhack/zillion)
**Better SQL** has been a popular space. We believe Trilogy takes a different approach than the following, but all are worth checking out. Please open PRs/comment for anything missed!
- [Malloy](https://github.com/malloydata/malloy)
- [Preql](https://github.com/erezsh/Preql)
- [PRQL](https://github.com/PRQL/prql)
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | null | [] | [] | [] | [
"httpx; extra == \"ai\"",
"altair; extra == \"analysis\"",
"sqlalchemy-bigquery; extra == \"bigquery\"",
"rich; extra == \"cli\"",
"plotext; extra == \"cli\"",
"pyodbc; extra == \"odbc\"",
"psycopg2-binary; extra == \"postgres\"",
"fastapi; extra == \"serve\"",
"uvicorn; extra == \"serve\"",
"snow... | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.9 | 2026-02-19T12:55:32.184991 | pytrilogy-0.3.179.tar.gz | 327,048 | e6/cd/2f2885083d19a1cec371925cbac92eb1540fd42c0573d1a13bc29e67c7dd/pytrilogy-0.3.179.tar.gz | source | sdist | null | false | ee020c4117d704bb0bbd909242eef1ce | fc9df3489a88891ac73fa2578c1adc37ff5571b8df89363400667f8ff6d1f4ee | e6cd2f2885083d19a1cec371925cbac92eb1540fd42c0573d1a13bc29e67c7dd | null | [
"LICENSE.md"
] | 1,265 |
2.4 | context-verbose | 2.2.3 | Tool to simply display information about the state of the code during execution. |
***********************************************************
Library to improve the display of your code in the console.
***********************************************************
By adding only a few lines of code at strategic places in your program, you will get a nice console display that will let you know what stage your code is at.
fork of **context-printer**:
----------------------------
This project is a fork of the `context_printer <https://pypi.org/project/context-printer/>`_ project. The philosophy of this project is strictly the same as the original project. Nevertheless, this project offers the following improvements:
* Support for the ``with`` keyword (context manager).
* Formatting of exceptions for better debugging.
* Added decorator behavior.
* Possibility to implicitly name a section.
* More formatting possible (adding highlighting and flashing).
* No conflicts between thread and process (clients send text to a single server).
* Integrated timer to display the duration of each section.
Basic usage example:
--------------------
.. code:: python
from context_verbose import printer as ctp
with ctp('Main Section', color='blue'):
ctp.print('Text in main section')
for i in range(3):
with ctp(f'Subsection {i}'):
ctp.print('Text in subsection')
ctp.print('Text in subsection')
The above example will print the following:
.. figure:: https://framagit.org/robinechuca/context-verbose/-/raw/main/basic_example.jpg
Exhaustive example of usage:
----------------------------
.. code:: python
from context_verbose import printer as ctp
@ctp
def decorated_func(x):
return x**x**x
def error_func():
with ctp('Section that will fail'):
return 1/0
ctp.print('we will enter the main section')
with ctp('Main Section', color='cyan'):
ctp.print('text in main section')
try:
with ctp('Subsection 1'):
for x in [1, 8]:
decorated_func(x)
error_func()
except ZeroDivisionError:
pass
with ctp('Subsection 2', color='magenta'):
ctp.print('text in bold', bold=True)
ctp.print('underlined text', underline=True)
ctp.print('blinking text', blink=True)
ctp.print('yellow text', color='yellow')
ctp.print('text highlighted in blue', bg='blue')
ctp.print('text in several ', end='')
ctp.print('parts', print_headers=False)
ctp.print('''text in several
lines''')
with ctp(color='green'):
ctp.print('this subsection is automatically named')
ctp.print('we are out of the main section')
The above example will print the following:
.. figure:: https://framagit.org/robinechuca/context-verbose/-/raw/main/exaustive_example.jpg
See Also
--------
* `fabric-verbose <https://pypi.org/project/fabric-verbose/>`_
* `pretty-verbose <https://pypi.org/project/pretty-verbose/>`_
| text/x-rst | null | "Robin RICHARD (robinechuca)" <serveurpython.oz@gmail.com> | null | "Robin RICHARD (robinechuca)" <serveurpython.oz@gmail.com> | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.
| block, context, context-printer, debug, display, print, printer, verbose | [
"Development Status :: 6 - Mature",
"Environment :: Console",
"Intended Audience :: Customer Service",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Education :: Testing",
"Topic :: Printing",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"colorama",
"networkx"
] | [] | [] | [] | [
"Repository, https://framagit.org/robinechuca/context-verbose"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T12:55:21.714540 | context_verbose-2.2.3.tar.gz | 52,149 | ef/51/357b5b72697d333d8b56de09954a9b5e6ed32c671498a360e6f393d3f62a/context_verbose-2.2.3.tar.gz | source | sdist | null | false | 589b8fe01c103a2959a956b6d2308910 | 1e6c25d988ce6ac77459fcebe85537fcae0315d77ab2c7dbfd85232b1baedd21 | ef51357b5b72697d333d8b56de09954a9b5e6ed32c671498a360e6f393d3f62a | null | [
"LICENSE"
] | 263 |
2.4 | ariadne | 0.29.0 | Ariadne is a Python library for implementing GraphQL servers. | [](https://ariadnegraphql.org)
[](https://ariadnegraphql.org)
[](https://codecov.io/github/mirumee/ariadne)



- - - - -
# Ariadne
Ariadne is a Python library for implementing [GraphQL](http://graphql.github.io/) servers.
- **Schema-first:** Ariadne enables Python developers to use schema-first approach to the API implementation. This is the leading approach used by the GraphQL community and supported by dozens of frontend and backend developer tools, examples, and learning resources. Ariadne makes all of this immediately available to you and other members of your team.
- **Simple:** Ariadne offers a small, consistent, and easy-to-memorize API that lets developers focus on business problems, not boilerplate.
- **Open:** Ariadne was designed to be modular and open for customization. If something is missing or doesn't fit your needs, extend it or swap in your own implementation.
Documentation is available [here](https://ariadnegraphql.org).
## Ariadne ecosystem
| Repository | Description |
| ---------- | ----------- |
| [Ariadne](https://github.com/mirumee/ariadne) | Python library for implementing GraphQL servers using a schema-first approach. |
| [Ariadne codegen](https://github.com/mirumee/ariadne-codegen) | GraphQL client code generator for Python. |
| [Ariadne GraphQL modules](https://github.com/mirumee/ariadne-graphql-modules) | Ariadne package for implementing Ariadne GraphQL schemas using a modular approach. |
| [Ariadne auth](https://github.com/mirumee/ariadne-auth) | A collection of authentication and authorization utilities for Ariadne. |
| [Ariadne lambda](https://github.com/mirumee/ariadne-lambda) | Deploy Ariadne GraphQL applications as AWS Lambda functions. |
| [Ariadne GraphQL proxy](https://github.com/mirumee/ariadne-graphql-proxy) | A GraphQL proxy for Ariadne that allows you to combine multiple GraphQL APIs into a single API. |
## Features
- A simple API that is quick to learn and easy to memorize.
- Compatibility with GraphQL.js version 15.5.1.
- Queries, mutations and input types.
- Asynchronous resolvers and query execution.
- Subscriptions.
- Custom scalars, enums and schema directives.
- Unions and interfaces.
- File uploads.
- Defining schema using SDL strings.
- Loading schema from `.graphql`, `.gql`, and `.graphqls` files.
- WSGI middleware for implementing GraphQL in existing sites.
- Apollo Tracing and [OpenTracing](http://opentracing.io) extensions for API monitoring.
- Opt-in automatic resolvers mapping between `camelCase` and `snake_case`, and a `@convert_kwargs_to_snake_case` function decorator for converting `camelCase` kwargs to `snake_case`.
- Built-in simple synchronous dev server for quick GraphQL experimentation and GraphQL Playground.
- Support for [Apollo GraphQL extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo).
- GraphQL syntax validation via `gql()` helper function. Also provides colorization if Apollo GraphQL extension is installed.
- No global state or object registry, support for multiple GraphQL APIs in the same codebase with explicit type reuse.
- Support for `Apollo Federation`.
## Installation
Ariadne can be installed with pip:
```console
pip install ariadne
```
Ariadne requires Python 3.10 or higher.
## Quickstart
The following example creates an API defining a `Person` type and a single query field, `people`, that returns a list of two people. It also starts a local dev server with [GraphQL Playground](https://github.com/prisma/graphql-playground) available at `http://127.0.0.1:8000`.
Start by installing [uvicorn](http://www.uvicorn.org/), an ASGI server we will use to serve the API:
```console
pip install uvicorn
```
Then create an `example.py` file for your example application:
```python
from ariadne import ObjectType, QueryType, gql, make_executable_schema
from ariadne.asgi import GraphQL
# Define types using Schema Definition Language (https://graphql.org/learn/schema/)
# Wrapping string in gql function provides validation and better error traceback
type_defs = gql("""
type Query {
people: [Person!]!
}
type Person {
firstName: String
lastName: String
age: Int
fullName: String
}
""")
# Map resolver functions to Query fields using QueryType
query = QueryType()
# Resolvers are simple python functions
@query.field("people")
def resolve_people(*_):
return [
{"firstName": "John", "lastName": "Doe", "age": 21},
{"firstName": "Bob", "lastName": "Boberson", "age": 24},
]
# Map resolver functions to custom type fields using ObjectType
person = ObjectType("Person")
@person.field("fullName")
def resolve_person_fullname(person, *_):
return "%s %s" % (person["firstName"], person["lastName"])
# Create executable GraphQL schema
schema = make_executable_schema(type_defs, query, person)
# Create an ASGI app using the schema, running in debug mode
app = GraphQL(schema, debug=True)
```
Finally run the server:
```console
uvicorn example:app
```
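Once the server is up, you can sanity-check the API by sending a query over HTTP (a sketch; it assumes the example app above is running on uvicorn's default port 8000):

```console
curl -X POST http://127.0.0.1:8000/ \
  -H "Content-Type: application/json" \
  -d '{"query": "{ people { fullName age } }"}'
```

The response is a JSON document with a `data` key containing the resolved `people` list.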
For more guides and examples, please see the [documentation](https://ariadnegraphql.org).
## Versioning policy ##
`ariadne` follows a custom versioning scheme where the minor version increases for breaking changes, while the patch version increments for bug fixes, enhancements, and other non-breaking updates.
Since `ariadne` has not yet reached a stable API, this approach is in place until version 1.0.0. Once the API stabilizes, the project will adopt [Semantic Versioning](https://semver.org/).
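Given this scheme, a breaking change can arrive in any minor release. One option (a suggestion, not an official project recommendation) is to pin to a compatible release range until 1.0.0:

```console
pip install "ariadne~=0.29.0"
```

The `~=0.29.0` specifier allows 0.29.x bug-fix releases while blocking 0.30 and later.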
## Contributing
We welcome contributions to Ariadne! If you've found a bug or issue, feel free to use [GitHub issues](https://github.com/mirumee/ariadne/issues). If you have any questions or feedback, don't hesitate to catch us on [GitHub discussions](https://github.com/mirumee/ariadne/discussions/).
For guidance and instructions, please see [CONTRIBUTING.md](CONTRIBUTING.md).
Website and the docs have their own GitHub repository: [mirumee/ariadne-website](https://github.com/mirumee/ariadne-website)
Also make sure you follow [@AriadneGraphQL](https://twitter.com/AriadneGraphQL) on Twitter for the latest updates, news, and random musings!
**Crafted with ❤️ by [Mirumee Software](http://mirumee.com)**
ariadne@mirumee.com
| text/markdown | null | Mirumee Software <ariadne@mirumee.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"graphql-core>=3.2.0",
"starlette<1.0,>0.17",
"typing-extensions>=3.6.0",
"python-multipart>=0.0.13; extra == \"asgi-file-uploads\"",
"ipdb; extra == \"dev\"",
"opentelemetry-api; extra == \"telemetry\"",
"aiodataloader; extra == \"test\"",
"freezegun; extra == \"test\"",
"graphql-sync-dataloaders; ... | [] | [] | [] | [
"Homepage, https://ariadnegraphql.org/",
"Repository, https://github.com/mirumee/ariadne",
"Bug Tracker, https://github.com/mirumee/ariadne/issues",
"Community, https://github.com/mirumee/ariadne/discussions",
"Twitter, https://twitter.com/AriadneGraphQL"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:54:20.146281 | ariadne-0.29.0.tar.gz | 85,428 | b9/86/fe5b1b53bc68bcbc5a8a80ca649b2f9b300932d1456d3b6006494d476604/ariadne-0.29.0.tar.gz | source | sdist | null | false | dcb961df035ee96c00542f5867451a78 | c6c79f459ed747b6698aeabb5eb2f35913791fec7d488f42344fe79d5edb469d | b986fe5b1b53bc68bcbc5a8a80ca649b2f9b300932d1456d3b6006494d476604 | BSD-3-Clause | [
"LICENSE"
] | 13,835 |
2.1 | bleuio | 1.7.5 | Library for using the bleuio dongle. | ## Python library v1.7.5 for BleuIO — 2026-02-19
### Supports BleuIO v.2.7.9.51 and BleuIO Pro 1.0.5.6
> NOTE: Does not support fw version 2.2.0 or earlier of BleuIO Standard (SSD005).
### Changes
#### 1.7.5
- Bugfix
* Fixed bug where the library would wrongly throw an exception saying that it doesn't support BleuIO Pro.
#### 1.7.4
- Logging
* Changed all logging with level info to logging level debug except for firmware version.
#### 1.7.3
- Bug fix
* Fixed a bug where the library was not able to reconnect to dongle if it was pulled out/reset.
- Logging
* Changed some logging outputs that ran frequently and had logging level info to debug.
#### 1.7.2
- Command functions
* Added function for command ATAR (enable/disable auto reconnect) introduced in BleuIO Standard v. 2.7.9.29:
- atar()
* Updated functions to poll status if no parameters are given (added in BleuIO Standard v. 2.7.9.29). The following commands are affected:
* ata()
* atasps()
* atassm()
* atassn()
* atds()
* ate()
* atew()
* atsiv()
* atsra()
* atsat()
* at_frssi()
* at_show_rssi()
- Added
* Library now uses Python's logging module instead of print statements (for debug, warnings, info etc.).
* New constructor parameters:
* w_timeout — write timeout for serial port.
* exclusive_mode — optional exclusive access flag passed to pyserial.
* rx_delay — enables short non-blocking sleep in RX thread when no bytes are waiting (helps CPU usage on some platforms / busy loops).
- RX thread behavior:
* Now supports configurable rx_delay to avoid blocking reads and reduce CPU usage when no data is available.
- Thread / exit handling:
* Safer exit_handler and SIGINT handler attempt (safe if not main thread).
#### 1.7.0
- Improved responsiveness and improved throughput
- Added functions for commands introduced up to BleuIO fw version 2.7.9.11 and BleuIO Pro fw version 1.0.4.14:
- at_set_autoexec_pwd / at_enter_autoexec_pwd / at_clr_autoexec_pwd
- Added function for missing command ATEW:
- atew
- Disables terminal echo by default
- Better handling of event and scan result parsing.
#### 1.6.1
- Renamed misleading function parameter names. The parameter name 'uuid' has been changed to 'handle' in the following functions: *at_gattcread()*, *at_gattcwrite()*, *at_gattcwritewr()*, *at_gattcwritewrb()*, *at_get_service_details()*. The affected descriptions have also been updated, as well as the documentation, to reflect this.
- Fixed typo. Changed name of function *at_divicename()* to *at_devicename()*.
#### 1.6.0
- Added functions for commands introduced up to BleuIO fw version 2.7.6 and BleuIO Pro fw 1.0.1.
- Improved the auto-detect feature used when no COM port is specified. It now detects whether a found BleuIO COM port is in use and tries the next unused BleuIO COM port until no more BleuIO COM ports are detected.
#### 1.5.0
- Added functions for commands introduced up to BleuIO fw version 2.7.4.
- Added support for BleuIO Pro
- Added functions for commands exclusive to BleuIO Pro fw version 1.0.0.
#### 1.4.0
- Added functions for commands introduced in firmware 2.5.0 and up to BleuIO fw version 2.7.1.
#### 1.3.1
- Added support for SUOTA commands introduced in BleuIO fw version 2.4.0
- Fixed a bug when running on macOS where serial responses were sometimes returned in two or more parts instead of one. The library expected a single part but can now handle several.
#### 1.3.0
- Added support for commands introduced in BleuIO fw version 2.2.2 and 2.3.0
- Fixed a bug where the BLE status variables for connection weren't updated properly.
- Increased the time trying to connect to the selected Dongle COM port before aborting.
#### 1.2.0
- Supports and makes use of the new Verbose mode introduced in 2.2.1
### Instructions
- Install the library by running:
```shell
pip install bleuio
```
- In the python file import:
```python
from bleuio_lib.bleuio_funcs import BleuIO
```
- Here is an example on how to get started:
```python
# (C) 2026 Smart Sensor Devices AB
import time
from datetime import datetime
from bleuio_lib.bleuio_funcs import BleuIO
# For enabling logging
import logging
# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
# Description
# This is an example script that showcases how to get started with the BleuIO library for Python.
# It will show how to set up callback functions for scan results and events.
# How to send a command and what responses you get and how you can handle them.
# How to start and stop a scan.
# How to start and stop advertising.
# How to check the BLE Status of the dongle.
# Creating a callback function for scan results. For this example we just print out the result.
# Here you can add your code to parse the data.
def my_scan_callback(scan_input):
print("\n\nmy_scan_callback: " + str(scan_input))
# Creating a callback function for events. For this example we add a timestamp and just print out the event.
# Here you can add your code to parse the data.
def my_evt_callback(evt_input):
cbTime = datetime.now()
currentTime = cbTime.strftime("%H:%M:%S")
print("\n\n[" + str(currentTime) + "] my_evt_callback: " + str(evt_input))
# Start
# Initiates the dongle. If port param is left as 'auto' it will auto-detect if BleuIO dongle is connected.
# port (str): Serial port name or "auto" for automatic detection
# baud (int): Baud rate for serial communication (default: 115200)
# timeout (float): Serial read timeout in seconds (default: 0.01)
# w_timeout (float): Serial write timeout in seconds (default: 0.01)
# exclusive_mode (bool): Set exclusive access mode (POSIX only) (default: None)
# rx_delay (float): If > 0 enables delay in seconds in rx thread if no bytes are waiting (default: 0)
# debug (bool): Enable debug logging (default: False)
# Auto-Detect dongle (no debug)
my_dongle = BleuIO()
# Auto-Detect dongle (showing debug info)
# my_dongle = BleuIO(debug=True)
# Specific COM port (Win) 'COMX'
# my_dongle = BleuIO(port='COM7')
# Specific COM port (Linux) '/dev/tty.xxxxx...'
# my_dongle = BleuIO(port='/dev/tty.123546877')
# Specific COM port (Mac) '/dev/cu.xxxx...'
# my_dongle = BleuIO(port='/dev/cu.123546877')
# Registers the callback functions we created earlier.
my_dongle.register_evt_cb(my_evt_callback)
my_dongle.register_scan_cb(my_scan_callback)
print("Welcome to Test BleuIO Python Library!\n\n")
# Here we send a simple AT Command. All commands will return a BleuIORESP obj.
# The object have 4 attributes:
# Cmd: Contains the command data.
# Ack: Contains the acknowledge data.
# Rsp: Contains the response data.
# End: Contains the end data.
at_example = my_dongle.at()
# The attributes are in JSON format. Here we print the different attributes.
print(at_example.Cmd)
print(at_example.Ack)
print(
at_example.Rsp
)  # Not every command has a Response message; AT, for example, doesn't, so this will return an empty list
print(at_example.End)
print("\n--\n")
# We can try with the ATI command, it has information in the Response message.
# An AT command can have several response messages so it will return a list of JSON objects
ati_example = my_dongle.ati()
print(ati_example.Cmd)
print(ati_example.Ack)
print(ati_example.Rsp)
print(ati_example.End)
print("\n--\n")
# If we only want to see if the command was successful we can do it like this:
print("Err: " + str(ati_example.Ack["err"]))
# or
print("errMsg: " + str(ati_example.Ack["errMsg"]))
# Here is an example on how to scan.
# First we need to put the dongle in Central or Dual Gap Role
my_dongle.at_dual()
# Now we start scanning
resp = my_dongle.at_gapscan()
print(resp.Cmd)
print(resp.Ack)
print(resp.Rsp)
print(resp.End)
# We can either send in a timeout as a parameter for the at_gapscan() command or stop the scan when we're done.
# Here we just set a three second sleep then stop scan.
# Notice that all the scan data will be printed by our my_scan_callback() function.
time.sleep(3)
print("stop scan")
my_dongle.stop_scan()
print("\n--\n")
# The BLEStatus class can help you keep track of if you are currently advertising for example.
# """A class used to handle BLE Statuses
# :attr isScanning: Keeps track on if dongle is currently scanning.
# :attr isConnected: Keeps track on if dongle is currently connected.
# :attr isAdvertising: Keeps track on if dongle is currently advertising.
# :attr isSPSStreamOn: Keeps track on if dongle is currently in SPS stream mode.
# :attr role: Keeps track of the dongle's current GAP Role.
# """
print("isScanning: " + str(my_dongle.status.isScanning))
print("isConnected: " + str(my_dongle.status.isConnected))
print("isAdvertising: " + str(my_dongle.status.isAdvertising))
print("isSPSStreamOn: " + str(my_dongle.status.isSPSStreamOn))
print("role: " + str(my_dongle.status.role))
print("\n--\n")
# If we start advertising and check isAdvertising we will see that it changes to True.
resp = my_dongle.at_advstart()
print(resp.Cmd)
print(resp.Ack)
print(resp.Rsp)
print(resp.End)
print("\nisAdvertising: " + str(my_dongle.status.isAdvertising))
print("\n--\n")
# Here we stop the advertising.
resp = my_dongle.at_advstop()
print(resp.Cmd)
print(resp.Ack)
print(resp.Rsp)
print(resp.End)
print("\nisAdvertising: " + str(my_dongle.status.isAdvertising))
```
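One more detail worth a small example before the function reference: the `at_advdatai()` command takes its iBeacon advertising data as a single concatenated string in the format (UUID)(MAJOR)(MINOR)(TX). A small standalone helper (hypothetical; not part of the bleuio library) can build that string using only the standard library:

```python
import struct
import uuid


def build_ibeacon_advdata(beacon_uuid: str, major: int, minor: int, tx_power: int) -> str:
    """Build the advdata string expected by at_advdatai():
    (UUID)(MAJOR)(MINOR)(TX)."""
    u = uuid.UUID(beacon_uuid)  # validates and normalizes the UUID
    # major/minor are big-endian unsigned 16-bit values, TX power a signed byte
    payload = struct.pack(">HHb", major, minor, tx_power)
    return str(u) + payload.hex()


# Reproduces the example from the at_advdatai() docstring:
print(build_ibeacon_advdata("5f2dd896-b886-4549-ae01-e41acd7a354a", 0x0203, 0x0104, 0))
# -> 5f2dd896-b886-4549-ae01-e41acd7a354a0203010400
```

The result can then be passed straight to `my_dongle.at_advdatai(...)`.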
## Functions
```python
class BleuIO(object):
def __init__(self, port='auto', baud=57600, timeout=1, debug=False):
"""
Initiates the dongle. If port param is left as 'auto' it will auto-detect if bleuio dongle is connected.
:param port: str
:param baud: int
:param timeout: int
:param debug: bool
"""
def register_scan_cb(self, callback):
"""Registers callback function for receiving scan results.
:param callback: Function with a data parameter. Function will be called for every scan result.
:type callback: callable
:returns: Scan results.
:rtype: str
"""
def register_evt_cb(self, callback):
"""Registers callback function for receiving events.
:param callback: Function with a data parameter. Function will be called for every event.
:type callback: callable
:returns: Event results.
:rtype: str
"""
def unregister_scan_cb(self):
"""Unregister the callback function for receiving scan results."""
def unregister_evt_cb(self):
"""Unregister the callback function for receiving events."""
def exit_bootloader(self):
"""[BleuIO Pro Only] Exits bootloader.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def stop_scan(self):
"""Stops any type of scan.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def stop_sps(self):
"""Stops SPS Stream-mode.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at(self):
"""Basic AT-Command.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def ata(self, isOn = None):
"""Shows/hides ascii values from notification/indication/read responses.
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atar(self, isOn = None):
"""Enable/disable auto reconnect.
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atb(self):
"""[BleuIO Pro Only] Starts bootloader.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advextparam(
self,
handle="",
disc_mode="",
prop="",
min_intv="",
max_intv="",
chnl_map="",
local_addr_type="",
filt_pol="",
tx_pwr="",
pri_phy="",
sec_max_evt_skip="",
sec_phy="",
sid="",
scan_req_noti="",
peer_addr_type="",
peer_addr="",
):
"""[BleuIO Pro Only] Sets advertising parameters for extended advertising. Needs to be set before starting extended advertising.
:param handle: str
:param disc_mode: str
:param prop: str
:param min_intv: str
:param max_intv: str
:param chnl_map: str
:param local_addr_type: str
:param filt_pol: str
:param tx_pwr: str
:param pri_phy: str
:param sec_max_evt_skip: str
:param sec_phy: str
:param sid: str
:param scan_req_noti: bool
:param peer_addr_type: str
:param peer_addr: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advextstart(self, handle, advdata="", scan_rsp_data=""):
"""[BleuIO Pro Only] Sets extended advertising data and/or scan response data and starts extended advertising.
:param handle: str
:param advdata: str
:param scan_rsp_data: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advextupd(self, handle, advdata="", scan_rsp_data=""):
"""[BleuIO Pro Only] Sets extended advertising data and/or scan response data when advertising.
:param handle: str
:param advdata: str
:param scan_rsp_data: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def ates(self, isOn = None):
"""[BleuIO Pro Only] Toggles showing extended scan results on/off. Off by default.
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_led(self, isOn="", toggle="", on_period="", off_period=""):
"""[BleuIO Pro Only] Controls the LED.
:param isOn: bool
:param toggle: bool
:param on_period: str
:param off_period: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_txpwr(self, air_op="", tx_pwr=""):
"""[BleuIO Pro Only] Sets the TX output power for advertising, scan and/or initiate air operation.
:param air_op: str
:param tx_pwr: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atasps(self, isOn = None):
"""Toggle between ascii (Off) and hex responses (On) received from SPS.
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atassm(self, isOn = None):
"""Turns on/off showing Manufacturing Specific ID (Company ID), if present, in scan results from AT+GAPSCAN, AT+FINDSCANDATA and AT+SCANTARGET scans. (Off per default).
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atassn(self, isOn = None):
"""Turns on/off showing device names, if present, in scan results from AT+FINDSCANDATA and AT+SCANTARGET scans. (Off per default).
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atds(self, isOn = None):
"""Turns auto discovery of services when connecting on/off.
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def ate(self, isOn = None):
"""Turns Echo on/off.
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atew(self, isOn = None):
"""Turn WRITTEN DATA echo on/off after GATTCWRITE commands. (On per default).
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def ati(self):
"""Device information query.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atr(self):
"""Trigger platform reset.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atsat(self, isOn = None):
"""Turns on/off showing address types in scan results from AT+FINDSCANDATA and AT+SCANTARGET scans. (Off per default).
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atsiv(self, isOn = None):
"""Turns showing verbose scan result index on/off. (Off per default).
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def atsra(self, isOn = None):
"""Turns showing resolved addr in scan results on/off. (Off per default).
:param isOn: True=On, False=Off, None=Read state
:type isOn: bool or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advdata(self, advdata=""):
"""Sets or queries the advertising data.
:param: Sets advertising data. If left empty it will query what advdata is set. Format: xx:xx:xx:xx:xx.. (max 31 bytes)
:type advdata: hex str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advdatai(self, advdata):
"""Sets advertising data in a way that lets it be used as an iBeacon.
Format = (UUID)(MAJOR)(MINOR)(TX)
Example: at_advdatai("5f2dd896-b886-4549-ae01-e41acd7a354a0203010400")
:param: Sets advertising data in iBeacon format. If left empty it will query what advdata is set
:type advdata: hex str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advstart(self, conn_type="", intv_min="", intv_max="", timer=""):
"""Starts advertising with default settings if no params.
With params: Starts advertising with <conn_type><intv_min><intv_max><timer>.
:param: Starts advertising with default settings.
:type conn_type: str
:type intv_min: str
:type intv_max: str
:type timer: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advstop(self):
"""Stops advertising.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_advresp(self, respData=""):
"""Sets or queries scan response data. Data must be provided as hex string.
:param: Sets scan response data. If left empty it will query what advdata is set. Format: xx:xx:xx:xx:xx.. (max 31 bytes)
:type respData: hex str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_autoexec(self, cmds=""):
"""Sets or displays up to 10 commands that will be run when the BleuIO starts up. Max command length is currently set at 255 characters.
:param: Sets commands. If left empty it will query set commands.
:type cmds: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_cancel_connect(self):
"""While in Central Mode, cancels any ongoing connection attempts.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_central(self):
"""Sets the device Bluetooth role to central role.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_clearnoti(self, handle):
"""Disables notification for selected characteristic.
:param handle: hex str format: XXXX
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_clearindi(self, handle):
"""Disables indication for selected characteristic.
:param handle: hex str format: XXXX
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_client(self):
"""Sets the device role towards the targeted connection to client. Only in dual role.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_clrautoexec(self):
"""Clear any commands in the auto execute (AUTOEXEC) list.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_clr_autoexec_pwd(self):
"""Used to clear/remove existing password (requires entering password first). BleuIO will go back to initial state were no password is set.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_clruoi(self):
"""Clear any set Unique Organization ID.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_connectbond(self, addr):
"""Scan for and initiates a connection with a selected bonded device. Works even if the peer bonded device is advertising with a Private Random Resolvable Address.
:param addr: hex str format: XX:XX:XX:XX:XX:XX
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_connparam(self, intv_min="", intv_max="", slave_latency="", sup_timeout=""):
"""Sets or displays preferred connection parameters. When run while connected will update connection parameters on the current target connection.
:param intv_min: str
:param intv_max: str
:param slave_latency: str
:param sup_timeout: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_connscanparam(self, scan_intv="", scan_win=""):
"""Set or queries the connection scan window and interval used.
:param scan_intv: str
:param scan_win: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_devicename(self, name=""):
"""Gets or sets the device name.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_dis(self):
"""Shows the DIS Service info and if the DIS info is locked in or can be changed.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_dual(self):
"""Sets the device Bluetooth role to dual role.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_enter_autoexec_pwd(self, pwd=""):
"""Used to enter autoexec password when prompted.
:param sec_lvl: hex str format: "xxxxxx..."
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_enter_passkey(self, passkey):
"""Respond to Passkey request. When faced with this message: BLE_EVT_GAP_PASSKEY_REQUEST use this command to enter
the 6-digit passkey to continue the pairing request.
:param passkey: str: six-digit number string "XXXXXX"
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_findscandata(self, scandata="", timeout=0):
"""Scans for all advertising/response data which contains the search params.
:param scandata: Hex string to filter the advertising/scan response data. Can be left blank to scan for everything. Format XXXX..
:type scandata: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_frssi(self, rssi = None):
"""Filters scan results, showing only results with <max_rssi> value or lower.
:param rssi: RSSI value. Must be negative. eg. -67 or None for Read current value
:type rssi: str, int or None
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapaddrtype(self, addr_type=""):
"""Change device Address Type or queries device Address Type.
:param addr_type: Range: 1-5. If left blank queries current Address Type.
:type addr_type: int
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapconnect(
self,
addr,
intv_min="",
intv_max="",
slave_latency="",
sup_timeout="",
):
"""Initiates a connection with a specific slave device. [<addr_type>]<address>=<intv_min>:<intv_max>:<slave_latency>:<sup_timeout>
:param addr: hex str format: [X]XX:XX:XX:XX:XX:XX
:param intv_min: str
:param intv_max: str
:param slave_latency: str
:param sup_timeout: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapdisconnect(self):
"""Disconnects from a peer Bluetooth device.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapdisconnectall(self):
"""Disconnects from all peer Bluetooth devices.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapiocap(self, io_cap=""):
"""Sets or queries what input and output capabilities the device has. Parameter is number between 0 to 4.
:param io_cap: str: number
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gappair(self, bond=False):
"""Starts a pairing (bond=False) or bonding procedure (bond=True).
:param bond: boolean
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapunpair(self, addr_to_unpair=""):
"""Unpair paired devices if no parameters else unpair specific device. This will also remove the device bond data
from BLE storage.
Usable both when device is connected and when not.
:param addr_to_unpair: hex str format: [X]XX:XX:XX:XX:XX:XX
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapscan(self, timeout=0):
"""Starts a Bluetooth device scan with or without timer set in seconds.
:param: if left empty it will scan indefinitely
:param timeout: int (time in seconds)
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gapstatus(self):
"""Reports the Bluetooth role.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gattcread(self, handle):
"""Read attribute of remote GATT server.
:param handle: hex str format: XXXX
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gattcwrite(self, handle, data):
"""Write attribute to remote GATT server in ASCII.
:param handle: hex str format: XXXX
:param data: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gattcwriteb(self, handle, data):
"""Write attribute to remote GATT server in Hex.
:param handle: hex str format: XXXX
:param data: hex str format: XXXXXXX..
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gattcwritewr(self, handle, data):
"""Write, without response, attribute to remote GATT server in ASCII.
:param handle: hex str format: XXXX
:param data: str
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_gattcwritewrb(self, handle, data):
"""Write, without response, attribute to remote GATT server in Hex.
:param handle: hex str format: XXXX
:param data: hex str format: XXXXXXX..
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_getbond(self):
"""Displays all MAC address of bonded devices.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_get_conn(self):
"""Gets a list of currently connected devices along with their mac addresses and conn_idx.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_get_mac(self):
"""Returns MAC address of the BleuIO device.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_get_services(self):
"""Discovers all services of a peripheral and their descriptors and characteristics.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_get_servicesonly(self):
"""Discovers a peripherals services.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_get_service_details(self, handle):
"""Discovers all characteristics and descriptors of a selected service.
:param handle: hex str format: XXXX
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_indi(self):
"""Show list of set indication handles.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_noti(self):
"""Show list of set notification handles.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_numcompa(self, auto_accept="2"):
"""Used for accepting a numeric comparison authentication request (no params) or enabling/disabling auto-accepting
numeric comparisons. auto_accept="0" = off, auto_accept="1" = on.
:param auto_accept: str format: "0" or "1"
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_peripheral(self):
"""Sets the device Bluetooth role to peripheral.
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_scantarget(self, addr):
"""Scan a target device. Displaying it's advertising and response data as it updates.
:param addr: hex str format: "xx:xx:xx:xx:xx:xx"
:returns : Object with 4 object properties: Cmd, Ack, Rsp and End. Each property contains a JSON object, except for Rsp which contains a list of JSON objects.
:rtype : obj BleuIORESP
"""
def at_sec_lvl(self, sec_lvl=""):
"""Sets or queries (no params) what minimum security level will be used when connected to other devices.
:param sec_lvl: str: string number between 0 a | text/markdown | Smart Sensor Devices AB | emil@smartsensordevices.com | null | null | MIT | null | [] | [] | https://www.bleuio.com/ | null | >=3.5 | [] | [] | [] | [
"pyserial"
] | [] | [] | [] | [] | twine/5.0.0 CPython/3.8.1 | 2026-02-19T12:54:05.275204 | bleuio-1.7.5-py3-none-any.whl | 29,273 | fd/97/60476bf1d0d89e6b782f73b33cb8b44f750fbb269716747fdf590e7f6ee4/bleuio-1.7.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 58556e61a4eda4468573dab821ca1bb6 | 262feb4c9c1a43da97fc8023ee7979a13bb9b9e3d5a05027929570154a05908a | fd9760476bf1d0d89e6b782f73b33cb8b44f750fbb269716747fdf590e7f6ee4 | null | [] | 190 |
2.4 | demandify | 0.0.3 | Calibrate SUMO traffic simulations against real-world congestion data using genetic algorithms | 
[](https://pypi.org/project/demandify/)
# Welcome to demandify!
**Turn real-world traffic data into agent-based SUMO traffic scenarios.**
Do you want to recreate real-world city traffic but don't have access to precious driver trip data? **demandify** solves that.
Pick a spot on the map and demandify will:
1. Fetch real-time congestion data from TomTom 🗺️
2. Build a clean SUMO network 🛣️
3. Use the Genetic Algorithm to figure out the demand pattern to match that traffic 🧬
4. Produces a ready-to-run SUMO scenario in agent-level precision that allows you to test your urban routing policies, even for your CAVs! ([wink](https://github.com/COeXISTENCE-PROJECT/URB) [wink](https://github.com/COeXISTENCE-PROJECT/RouteRL)).

## Features
- 🌍 **Real-world calibration**: Uses TomTom Traffic Flow API for live congestion data
- 📦 **Offline calibration import**: Run from bundled/offline traffic+network snapshots
- 🎯 **Seeded & reproducible**: Same seed = identical results for same congestion and bbox
- 🚗 **Car-only SUMO networks**: Automatic OSM → SUMO conversion with car filtering, clean networks
- 🧬 **Genetic algorithm**: Optimizes demand to match observed speeds, with advanced dynamics (feasible-elite parent selection, immigrants, assortative mating, adaptive mutation boost)
- 💾 **Smart caching**: Content-addressed caching for fast re-runs (traffic snapshots bucketed to 5-minute windows)
- 📊 **Beautiful reports**: HTML reports with visualizations and statistics
- ⌨️ **CLI native**: Live in the terminal? No problem.
- 🖥️ **Clean web UI**: Leaflet map, real-time progress stepper, log console
- ✅ **Data quality labeling**: Feasibility check now reports a quality score/label before calibration starts

## Quickstart
### 1. Install demandify
```bash
# Install from PyPI (Recommended)
pip install demandify
```
If you want to contribute or install from source:
```bash
git clone https://github.com/aonurakman/demandify.git
cd demandify
pip install -e .
```
### 2. Install SUMO 🚦
**demandify** requires SUMO (Simulation of Urban MObility) to power its simulations.
> [!IMPORTANT]
> demandify is developed and tested with SUMO version 1.26.0. Ensure that your SUMO version is up to date.
👉 **[Download SUMO from the official website](https://eclipse.dev/sumo/)**
Once installed, verify it's working:
```bash
demandify doctor
```
### 3. Get a TomTom API Key
1. Sign up at [https://developer.tomtom.com/](https://developer.tomtom.com/)
2. Create a new app and copy the API key
3. The free tier includes 2,500 requests/day
### 4. Run demandify
```bash
demandify
```
This starts the web server at [http://127.0.0.1:8000](http://127.0.0.1:8000)
### 5. Calibrate a scenario
1. **Choose mode** at the top:
- `Create`: live TomTom + OSM fetch
- `Import`: select existing offline dataset (bbox auto-loaded and locked)
2. **Draw a bounding box** on the map (Create mode only)
3. **Configure parameters** (defaults work well):
- Time window: 15, 30, or 60 minutes
- Seed: any integer for reproducibility
- Warmup: a few minutes to populate the network
- GA population/generations: controls quality vs speed
4. **Paste your API key** (Create mode only; one-time, stored locally)
5. **Click "Start Calibration"**
6. **Watch the progress** through 8 stages
7. **Download your scenario** with `demand.csv`, SUMO network, and report
Before calibration starts, demandify runs a preparation feasibility check and reports:
- fetched traffic segments
- matched observed edges
- total network edges
- data quality label + score + risk flags
### 6. Run Headless (Optional) 🤖
You can run the full calibration pipeline directly from the command line, ideal for automation or remote servers.
```bash
# Basic usage (defaults: window=15, pop=50, gen=20)
demandify run "2.2961,48.8469,2.3071,48.8532" --name Paris_Test_01
# Advanced usage with custom parameters
demandify run "2.2961,48.8469,2.3071,48.8532" \
--name paris_v1 \
--window 30 \
--seed 123 \
--pop 100 \
--gen 50 \
--mutation 0.5 \
--elitism 2
# With advanced GA dynamics
demandify run "2.2961,48.8469,2.3071,48.8532" \
--name paris_v2 \
--pop 100 \
--gen 100 \
--immigrant-rate 0.05 \
--magnitude-penalty 0.002 \
--stagnation-patience 15
# Fully non-interactive (automation/CI)
demandify run "2.2961,48.8469,2.3071,48.8532" \
--name paris_v3 \
--non-interactive
# Import existing offline dataset (no live TomTom/OSM fetch)
demandify run --import krakow_v1 --name krakow_remote
```
> **Note:** By default, the CLI pauses after fetching/matching data and asks for confirmation, then asks whether to run another calibration. Pass `--non-interactive` to auto-approve and exit immediately after pipeline completion.
### 7. Build Offline Dataset (Optional) 💾
If you want a reusable prep bundle (for future no-key workflows), open:
- [http://127.0.0.1:8000/dataset-builder](http://127.0.0.1:8000/dataset-builder)
This dedicated page is separate from calibration runs. It executes preparation only (traffic snapshot + OSM + SUMO network + map matching) and stores files under:
- `demandify_datasets/<dataset_name>/`
Each dataset includes `data/traffic_data_raw.csv`, `data/observed_edges.csv`, `data/map.osm`, `sumo/network.net.xml`, and `dataset_meta.json`.
`dataset_meta.json` now includes a computed data quality block (`score`, `label`, `recommendation`, and metrics) to help decide whether a dataset is strong enough for offline calibration.
Bundled snapshot previews:
| Den Haag (`den_haag_v1`) | Krakow (`krakow_v1`) | Eskisehir (`eskisehir_v1`) |
|---|---|---|
|  |  |  |
#### Parameters
| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `bbox` | String | Req* | Bounding box (`west,south,east,north`) |
| `--import` | String | None | Use an offline dataset by name (or `source:name`) |
| `--name` | String | Auto | Custom Run ID/Name |
| `--non-interactive` | Flag | off | Disable prompts (auto-approve and exit when pipeline completes) |
| `--window` | Int | 15 | Simulation duration (min) |
| `--warmup` | Int | 5 | Warmup duration before scoring (min) |
| `--seed` | Int | 42 | Random seed |
| `--step-length`| Float | 1.0 | SUMO step length (seconds) |
| `--workers` | Int | Auto (CPU count) | Parallel GA workers |
| `--tile-zoom` | Int | 12 | TomTom vector flow tile zoom |
| `--pop` | Int | 50 | GA Population size |
| `--gen` | Int | 20 | GA Generations |
| `--mutation`| Float | 0.5 | Mutation rate (per individual) |
| `--crossover`| Float| 0.7 | Crossover rate |
| `--elitism` | Int | 2 | Top individuals to keep |
| `--sigma` | Int | 20 | Mutation magnitude (step size) |
| `--indpb` | Float | 0.3 | Mutation probability (per gene) |
| `--origins` | Int | 10 | Number of origin candidates |
| `--destinations` | Int | 10 | Number of destination candidates |
| `--max-ods` | Int | 50 | Max OD pairs to generate |
| `--bin-size` | Float | 5 | Time bin size in minutes |
| `--initial-population` | Int | 1000 | Target initial number of vehicles (controls sparse initialization) |
\* `bbox` is required in create mode. In import mode, use `--import` and do not pass `bbox`.
`Import` mode constraints:
- positional `bbox` is rejected
- `--tile-zoom` is rejected
- all calibration controls (seed, GA params, warmup/window, etc.) remain available
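The `bbox` argument is a plain `west,south,east,north` string. As a quick sanity check before launching a long run, it can be parsed with a few lines of Python (a hypothetical helper for illustration; demandify performs its own validation internally):

```python
def parse_bbox(bbox: str):
    """Split a 'west,south,east,north' string into floats and sanity-check it."""
    west, south, east, north = (float(v) for v in bbox.split(","))
    if not (west < east and south < north):
        raise ValueError("bbox must satisfy west < east and south < north")
    return west, south, east, north

print(parse_bbox("2.2961,48.8469,2.3071,48.8532"))
```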
#### Advanced GA Dynamics
These parameters control diversity mechanisms and adaptive behavior in the genetic algorithm, addressing local optima stagnation and trip count explosion.
| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--immigrant-rate` | Float | 0.03 | Fraction of random individuals injected per generation (0–1) |
| `--elite-top-pct` | Float | 0.1 | Defines feasible elite size per generation: `n=max(1, elite_top_pct * population)` |
| `--magnitude-penalty` | Float | 0.001 | Weight for magnitude in feasible-elite parent ranking (`weight*magnitude + E-rank term`) |
| `--stagnation-patience` | Int | 20 | Generations without improvement before mutation boost activates |
| `--stagnation-boost` | Float | 1.5 | Multiplier for mutation sigma and rate during stagnation |
| `--checkpoint-interval` | Int | 10 | Save best-individual checkpoint artifacts every N generations |
| `--assortative-mating` | Flag | off | Explicitly enable assortative mating |
| `--no-assortative-mating` | Flag | off | Disable assortative mating (dissimilar parent pairing, on by default) |
| `--deterministic-crowding` | Flag | off | Explicitly enable deterministic crowding |
| `--no-deterministic-crowding` | Flag | off | Disable deterministic crowding (diversity-preserving replacement, on by default) |
All advanced dynamics are **enabled by default** with conservative values. For most use cases, the defaults work well. You can disable features via the corresponding `--no-*` flags, explicitly force-enable them with `--assortative-mating` / `--deterministic-crowding`, or set `--magnitude-penalty 0` to remove magnitude pressure inside feasible-elite ranking.
## How It Works
demandify follows a multi-stage pipeline:
1. **Validate inputs** - Check mode/parameters and feasibility
2. **Preparation**:
- `Create`: fetch traffic + OSM, build network, match edges
- `Import`: load/copy network + observed traffic files from offline dataset
3. **Initialize demand** - Select routable OD pairs (lane-permission aware) and time bins
4. **Calibrate demand** - Run GA to optimize vehicle counts
5. **Export scenario** - Generate `demand.csv`, `trips.xml`, config, and report
### Advanced GA Dynamics
The genetic algorithm includes several mechanisms to avoid common pitfalls like local optima stagnation and trip count explosion:
- **Feasible-elite parent selection (with fallback)**: Individuals are first ordered by flow-fit error `E`, then filtered by feasibility (`fail_total = routing_failures + teleports`). If enough feasible candidates exist (`>= n` from `elite_top_pct`), parent tournaments are run only on that feasible elite slice using `magnitude_penalty_weight * magnitude + E-rank term`. If not, selection temporarily falls back to full-population tournaments on `E + reliability_penalty`. This fallback auto-stops once enough feasible individuals are present.
- **Random immigrants**: A small fraction of completely random individuals is injected each generation to maintain genetic diversity and escape local optima.
- **Assortative mating**: Parents are paired by dissimilarity (by genome magnitude) for crossover, promoting exploration of the search space.
- **Deterministic crowding**: Offspring compete with similar parents for population slots, preserving niche diversity.
- **Adaptive mutation boost**: If the best fitness stagnates for K generations, mutation sigma and rate are temporarily increased by a configurable multiplier. They reset automatically when improvement resumes.
The final return policy is **feasible-first**: if any feasible individual appears during the run, demandify returns the best feasible one by `E`; otherwise it returns the best raw objective and logs a warning.
The calibration report includes plots for **genotypic diversity** (mean pairwise L2 distance) and **phenotypic diversity** (σ of fitness values) across generations, along with markers indicating when mutation boost was active.
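The adaptive mutation boost above can be sketched in a few lines (illustrative only; the `--stagnation-patience` and `--stagnation-boost` flags control the same idea, but this is not demandify's internal code):

```python
def boost_mutation(sigma: float, rate: float, stalled_gens: int,
                   patience: int = 20, boost: float = 1.5):
    """Return (sigma, rate), scaled up while the best fitness has stagnated.

    Once improvement resumes, callers pass stalled_gens=0 and the
    original values are returned, i.e. the boost resets automatically.
    """
    if stalled_gens >= patience:
        return sigma * boost, rate * boost
    return sigma, rate
```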
### Variability & Consistency
While demandify uses seeding (random seed) for all internal stochastic operations (OD selection, GA evolution), **perfect reproducibility is not guaranteed** due to the inherently chaotic nature of traffic microsimulation (SUMO) and real-time data inputs.
Seeding ensures *consistency* (runs look similar), but small timing differences in OS scheduling or dynamic routing decisions can lead to divergent outcomes. Traffic snapshots are cached in 5-minute buckets; using the same seed, bbox, and time bucket will reproduce demand.csv and SUMO randomness.
### Caching
demandify caches:
- OSM extracts (by bbox)
- SUMO networks (by bbox + conversion params)
- Traffic snapshots (by bbox + provider + style + tile zoom + 5-minute timestamp bucket)
- Map matching results (by bbox + network key + provider + timestamp bucket)
Cache location: `~/.demandify/cache/`
Clear cache: `demandify cache clear`
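The 5-minute bucketing behind the traffic-snapshot cache can be illustrated with a small content-addressed key function (a sketch of the idea only, not demandify's actual key scheme):

```python
import hashlib

def traffic_snapshot_key(bbox: str, provider: str, ts: float) -> str:
    # Bucket the timestamp to a 5-minute (300 s) window, so requests made
    # within the same window share a cache entry.
    bucket = int(ts // 300) * 300
    raw = f"{bbox}|{provider}|{bucket}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]
```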
## CLI Commands
```bash
# Start web server (default)
demandify
# Run headless calibration
demandify run "west,south,east,north"
# Run headless from bundled offline dataset
demandify run --import krakow_v1
# Check system requirements
demandify doctor
# Set TomTom API key (CLI)
demandify set-key YOUR_KEY_HERE
# Clear cache
demandify cache clear
# Show version
demandify --version
```
## Output Files
Each run creates a folder with:
- **`demand.csv`** - Travel demand with exact schema:
- `ID`, `origin link id`, `destination link id`, `departure timestep`
- **`trips.xml`** - SUMO trips file
- **`network.net.xml`** - SUMO network
- **`scenario.sumocfg`** - SUMO configuration (ready to run; ignores route errors by default)
- **`observed_edges.csv`** - Observed traffic speeds
- **`run_meta.json`** - Complete run metadata
- **`report.html`** - Calibration report with visualizations
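Since `demand.csv` is a plain CSV with the four columns listed above, it can be consumed with the standard library alone. A minimal sketch (the rows here are made up for illustration; only the header matches the documented schema):

```python
import csv
import io

# Hypothetical sample rows; the header follows the demand.csv schema above.
sample = """ID,origin link id,destination link id,departure timestep
0,e12,e87,0
1,e12,e87,30
2,e45,e87,60
"""

rows = list(csv.DictReader(io.StringIO(sample)))
departures = [int(r["departure timestep"]) for r in rows]
print(len(rows), max(departures))  # prints: 3 60
```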
Run the scenario:
```bash
cd demandify_runs/run_<timestamp>
sumo-gui -c scenario.sumocfg
```
## Configuration
### API Keys
Three ways to provide your TomTom API key:
1. **Web UI**: Paste in the form (saved to `~/.demandify/config.json`)
2. **Environment variable**: `export TOMTOM_API_KEY=your_key`
3. **`.env` file**: Copy `.env.example` to `.env` and add your key
4. **CLI**: `demandify set-key YOUR_KEY` stores it in `~/.demandify/config.json`
## Development
```bash
# Install with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Format code
black demandify/
# Lint
ruff check demandify/
```
## License
MIT
## Acknowledgments
- **SUMO**: [Eclipse SUMO](https://eclipse.dev/sumo/)
- **TomTom**: [Traffic Flow API](https://developer.tomtom.com/traffic-api)
- **OpenStreetMap**: [© OpenStreetMap contributors](https://www.openstreetmap.org/copyright)
## Citation
If you use this software in your research, please consider using the citation below.
```bibtex
@software{demandify_2026,
author = {{Ahmet Onur Akman}},
title = {{demandify}: Calibrate SUMO traffic scenarios against real-world congestion using genetic algorithms},
year = {2026},
version = {0.0.3},
publisher = {PyPI},
url = {https://pypi.org/project/demandify/},
repository = {https://github.com/aonurakman/demandify}
}
```
| text/markdown | null | Ahmet Onur Akman <ahmetonurakman@gmail.com> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: GIS",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"fastapi>=0.104.0",
"uvicorn[standard]>=0.24.0",
"jinja2>=3.1.0",
"httpx>=0.25.0",
"python-multipart>=0.0.6",
"mapbox-vector-tile>=2.1.0",
"shapely>=2.0.0",
"rtree>=1.1.0",
"protobuf>=4.24.0",
"numpy>=1.24.0",
"pandas>=2.0.0",
"lxml>=4.9.0",
"xmltodict>=0.13.0",
"deap>=1.4.0",
"matplotli... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:53:25.215622 | demandify-0.0.3.tar.gz | 6,094,415 | 40/d7/07b721b80e700963f85a617b730363931324aed53b829151a74270808065/demandify-0.0.3.tar.gz | source | sdist | null | false | 8d052407d22a08bdc1dc75e4706a727c | 0c1657c378ef77a0c1e41b0cf0c463530fb2cade4966e97ac9103e894485fd4d | 40d707b721b80e700963f85a617b730363931324aed53b829151a74270808065 | null | [
"LICENSE"
] | 251 |
2.4 | jaxn | 0.0.5 | A SAX-style JSON parser for processing incomplete JSON streams | # jaxn
A SAX-style JSON parser for processing incomplete JSON streams character-by-character.
## Overview
**jaxn** is a lightweight streaming JSON parser that processes JSON incrementally as it arrives, similar to how SAX parsers work with XML. Instead of waiting for the complete JSON document, jaxn fires callbacks as it encounters different parts of the JSON structure, making it perfect for:
- Real-time streaming applications (e.g., LLM responses, API streams)
- Processing large JSON files without loading them entirely into memory
- Displaying content as it arrives rather than waiting for complete responses
- Building responsive UIs that update progressively
## Installation
```bash
pip install jaxn
```
## Quick Start
Here's a simple example that prints field values as they're parsed:
```python
from jaxn import StreamingJSONParser, JSONParserHandler

class SimpleHandler(JSONParserHandler):
    def on_field_end(self, path, field_name, value, parsed_value=None):
        print(f"{field_name}: {value}")

handler = SimpleHandler()
parser = StreamingJSONParser(handler)

# Process JSON incrementally
json_data = '{"name": "Alice", "age": 30}'
parser.parse_incremental(json_data)
```
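Because `parse_incremental` accepts arbitrary fragments, the same document can be fed in pieces as it arrives. A small helper (stdlib only, independent of jaxn) for splitting a buffered string into fixed-size chunks:

```python
def chunks(text: str, size: int):
    """Yield fixed-size fragments of `text`, e.g. to feed a streaming parser."""
    for i in range(0, len(text), size):
        yield text[i : i + size]
```

Each fragment can then be passed to `parser.parse_incremental(chunk)` in turn, simulating how data arrives over a network.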
## Detailed Example: Streaming Markdown Renderer
This example shows how to convert a streaming JSON response into formatted markdown output in real-time. The example is based on the demo in the `demo/` directory.
### The JSON Structure
```json
{
"title": "Monitoring Data Drift in Production",
"sections": [
{
"heading": "Overview",
"content": "Monitoring data drift is crucial...",
"references": [
{
"title": "Data Drift",
"filename": "metrics/preset_data_drift.mdx"
}
]
}
]
}
```
### The Handler Implementation
```python
from pathlib import Path
from jaxn import StreamingJSONParser, JSONParserHandler
import time

class SearchResultHandler(JSONParserHandler):
    def on_field_start(self, path: str, field_name: str):
        # Print references header when we encounter a references array
        if field_name == "references":
            level = path.count("/") + 2
            print(f"\n{'#' * level} References\n")

    def on_field_end(self, path, field_name, value, parsed_value=None):
        # Print title as main heading
        if field_name == "title" and path == "":
            print(f"# {value}")
        # Print section headings
        elif field_name == "heading":
            print(f"\n\n## {value}\n")
        # Add spacing after content
        elif field_name == "content":
            print("\n")

    def on_value_chunk(self, path, field_name, chunk):
        # Stream content character by character for real-time display
        if field_name == "content":
            print(chunk, end="", flush=True)

    def on_array_item_end(self, path, field_name, item=None):
        # Print references as markdown links
        if field_name == "references":
            title = item.get("title", "")
            filename = item.get("filename", "")
            print(f"- [{title}]({filename})")

# Use the handler
handler = SearchResultHandler()
parser = StreamingJSONParser(handler)

# Simulate streaming by processing JSON in small chunks
json_message = Path('message.json').read_text(encoding='utf-8')
for i in range(0, len(json_message), 4):
    chunk = json_message[i:i+4]
    parser.parse_incremental(chunk)
    time.sleep(0.01)  # Simulate network delay
```
### Output
The above code produces formatted markdown output that appears progressively:
```markdown
# Monitoring Data Drift in Production
## Overview
Monitoring data drift is crucial to understanding the health and performance of machine learning models in production...
### References
- [Data Drift](metrics/preset_data_drift.mdx)
- [How data drift detection works](metrics/explainer_drift.mdx)
- [Overview](docs/platform/monitoring_overview.mdx)
```
## API Reference
### JSONParserHandler
Base handler class for JSON parsing events. Subclass this and override the methods you need.
#### Methods
**`on_field_start(path: str, field_name: str) -> None`**
Called when starting to read a field value.
- `path`: Path to current location (e.g., "/sections/references")
- `field_name`: Name of the field being read
**`on_field_end(path: str, field_name: str, value: str, parsed_value: Any = None) -> None`**
Called when a field value is complete.
- `path`: Path to current location
- `field_name`: Name of the field
- `value`: Complete value of the field (as string from JSON)
- `parsed_value`: Parsed value (dict for objects, list for arrays, actual value for primitives)
**`on_value_chunk(path: str, field_name: str, chunk: str) -> None`**
Called for each character as string values stream in. Perfect for displaying content in real-time.
- `path`: Path to current location
- `field_name`: Name of the field being streamed
- `chunk`: Single character chunk
**`on_array_item_start(path: str, field_name: str) -> None`**
Called when starting a new object in an array.
- `path`: Path to current location
- `field_name`: Name of the array field
**`on_array_item_end(path: str, field_name: str, item: Dict[str, Any] = None) -> None`**
Called when finishing an object in an array.
- `path`: Path to current location
- `field_name`: Name of the array field
- `item`: The complete parsed dictionary for this array item
### StreamingJSONParser
Parse JSON incrementally as it streams in, character by character.
#### Methods
**`__init__(handler: JSONParserHandler = None)`**
Initialize the parser with a handler for events.
- `handler`: JSONParserHandler instance to receive parsing events
**`parse_incremental(delta: str) -> None`**
Parse new characters added since last call. Fires callbacks as events are detected.
- `delta`: New characters to parse (string)
**`parse_from_old_new(old_text: str, new_text: str) -> None`**
Convenience method that calculates the delta between old and new text.
- `old_text`: Previously processed text
- `new_text`: New text (should start with old_text)
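The delta calculation behind `parse_from_old_new` can be sketched in plain Python (`compute_delta` is a hypothetical helper for illustration; the real method then feeds the delta to `parse_incremental`):

```python
def compute_delta(old_text: str, new_text: str) -> str:
    """Return the characters added since old_text (illustrative sketch)."""
    # new_text is assumed to extend old_text, so the delta is simply the suffix.
    if not new_text.startswith(old_text):
        raise ValueError("new_text must start with old_text")
    return new_text[len(old_text):]

# Each call with a growing buffer yields only the new characters:
print(compute_delta('{"na', '{"name": "Al'))  # prints: me": "Al
```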
## Use Cases
### 1. Real-time LLM Response Display
Display streaming responses from Large Language Models as they're generated:
```python
class LLMDisplayHandler(JSONParserHandler):
    def on_value_chunk(self, path, field_name, chunk):
        if field_name == "content":
            print(chunk, end="", flush=True)
parser = StreamingJSONParser(LLMDisplayHandler())
# Feed chunks as they arrive from the LLM API
```
### 2. Progress Tracking
Track progress through large JSON structures:
```python
class ProgressHandler(JSONParserHandler):
    def __init__(self):
        self.items_processed = 0

    def on_array_item_end(self, path, field_name, item=None):
        self.items_processed += 1
        print(f"Processed {self.items_processed} items...")
```
### 3. Selective Field Extraction
Extract only the fields you need without parsing the entire document:
```python
class FieldExtractor(JSONParserHandler):
    def __init__(self):
        self.titles = []

    def on_field_end(self, path, field_name, value, parsed_value=None):
        if field_name == "title":
            self.titles.append(value)
```
## License
WTFPL - Do What The Fuck You Want To Public License
## Internals
The parser is implemented using a state machine pattern with 10 distinct states that handle character-by-character parsing. For details on the state machine implementation, see [states.md](states.md).
## Links
- **GitHub**: https://github.com/alexeygrigorev/jaxn
- **Issues**: https://github.com/alexeygrigorev/jaxn/issues | text/markdown | null | Alexey Grigorev <alexey@datatalks.club> | null | Alexey Grigorev <alexey@datatalks.club> | WTFPL | json, streaming-parser | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/alexeygrigorev/jaxn",
"Repository, https://github.com/alexeygrigorev/jaxn",
"Issues, https://github.com/alexeygrigorev/jaxn/issues"
] | python-httpx/0.28.1 | 2026-02-19T12:53:02.288461 | jaxn-0.0.5.tar.gz | 79,575 | 86/61/580621525bd1035ecaf20b56a3f215142758d5079f7e5695905ecfc4a30e/jaxn-0.0.5.tar.gz | source | sdist | null | false | d61ccd2d4b4cbf138bcd3f29ab0a20eb | c896d88023c5f6af9b650587b370528f8aae567874031bf4769a3fe58e1749d2 | 8661580621525bd1035ecaf20b56a3f215142758d5079f7e5695905ecfc4a30e | null | [] | 238 |
2.4 | cognite-neat | 1.0.40 | Knowledge graph transformation | # kNowlEdge grAph Transformer (NEAT)
[](https://github.com/cognitedata/neat/actions/workflows/release.yaml)
[](https://cognite-neat.readthedocs-hosted.com/en/latest/?badge=latest)
[](https://github.com/cognitedata/neat)
[](https://pypi.org/project/cognite-neat/)
[](https://pypistats.org/packages/cognite-neat)
[](https://github.com/cognitedata/neat/blob/master/LICENSE)
[](https://github.com/ambv/black)
[](https://github.com/astral-sh/ruff)
[](http://mypy-lang.org)
There was no easy way to build knowledge graphs, especially data models, and onboard them to
[Cognite Data Fusion](https://www.cognite.com/en/product/cognite_data_fusion_industrial_dataops_platform), so we built NEAT!
NEAT is great for data model development, validation, and deployment. It comes with an ever-growing library of validators
that ensure your data model follows best practices and performs well. Unlike other solutions,
which require you to be a technical wizard or modeling expert, NEAT provides a guided data modeling experience.
We offer several interfaces for developing your data model; the majority of our users prefer
a combination of Jupyter Notebooks, leveraging NEAT features through the so-called [NeatSession](https://cognite-neat.readthedocs-hosted.com/en/latest/reference/NeatSession/session.html), together with [a spreadsheet data model template](https://cognite-neat.readthedocs-hosted.com/en/latest/excel_data_modeling/data_model.html).
Only data modeling? There was more before!?
True, NEAT v0.x (legacy) offered complete knowledge graph
tooling. Don't worry, though: all the legacy features are still available and will be gradually
ported to NEAT v1.x according to the [roadmap](https://cognite-neat.readthedocs-hosted.com/en/latest/roadmap.html).
## Usage
The user interface for `NEAT` is `NeatSession`, which is typically instantiated in a notebook-based environment for easier interactivity with `NEAT`
and navigation of session content. Once you have set up your notebook environment and installed neat via:
```bash
pip install cognite-neat
```
you start by creating a `CogniteClient` and instantiate a `NeatSession` object:
```python
from cognite.neat import NeatSession, get_cognite_client
client = get_cognite_client(".env")
neat = NeatSession(client)
neat.physical_data_model.read.cdf("cdf_cdm", "CogniteCore", "v1")
```
## Documentation
For more information, see the [documentation](https://cognite-neat.readthedocs-hosted.com/en/latest/)
| text/markdown | Nikola Vasiljevic, Anders Albert | Nikola Vasiljevic <nikola.vasiljevic@cognite.com>, Anders Albert <anders.albert@cognite.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Customer Service",
"Topic :: Database",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [
"cognite-sdk<8.0.0,>=7.83.0",
"httpx>=0.28.1",
"pydantic<3.0.0,>=2.0.0",
"pyyaml<7.0.0,>=6.0.1",
"urllib3<3.0.0,>=1.26.15",
"openpyxl<4.0.0,>=3.0.10",
"networkx<4.0.0,>=3.4.2",
"mixpanel<5.0.0,>=4.10.1",
"exceptiongroup<2.0.0,>=1.1.3; python_full_version < \"3.11\"",
"backports-strenum<2.0.0,>=1.2... | [] | [] | [] | [
"Documentation, https://cognite-neat.readthedocs-hosted.com/",
"Homepage, https://cognite-neat.readthedocs-hosted.com/",
"GitHub, https://github.com/cognitedata/neat",
"Changelog, https://github.com/cognitedata/neat/releases"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T12:52:57.224335 | cognite_neat-1.0.40.tar.gz | 662,937 | ff/2b/9a6326e91cb7cd5be715f49a1ac449ab06f7b611ec5ba6952f7bb91a5c13/cognite_neat-1.0.40.tar.gz | source | sdist | null | false | d42588b413cd4d0dffc1fb22866269f9 | 0cbe0c545d5b79d7cad63c4455415fdcc77db1ef00af2bc272f912fe8cc5bd94 | ff2b9a6326e91cb7cd5be715f49a1ac449ab06f7b611ec5ba6952f7bb91a5c13 | Apache-2.0 | [] | 347 |
2.4 | dstack | 0.20.11 | dstack is an open-source orchestration engine for running AI workloads on any cloud or on-premises. | <div style="text-align: center;">
<h2>
<a target="_blank" href="https://dstack.ai">
<img alt="dstack" src="https://raw.githubusercontent.com/dstackai/dstack/master/docs/assets/images/dstack-logo.svg" width="350px"/>
</a>
</h2>
[](https://github.com/dstackai/dstack/commits/)
[](https://github.com/dstackai/dstack/blob/master/LICENSE.md)
[](https://discord.gg/u8SmfwPpMd)
</div>
`dstack` is a unified control plane for GPU provisioning and orchestration that works with any GPU cloud, Kubernetes, or on-prem clusters.
It streamlines development, training, and inference, and is compatible with any hardware, open-source tools, and frameworks.
#### Accelerators
`dstack` supports `NVIDIA`, `AMD`, `Google TPU`, `Intel Gaudi`, and `Tenstorrent` accelerators out of the box.
## Latest news ✨
- [2025/12] [dstack 0.20.0: Fleet-first UX, Events, and more](https://github.com/dstackai/dstack/releases/tag/0.20.0)
- [2025/11] [dstack 0.19.38: Routers, SGLang Model Gateway integration](https://github.com/dstackai/dstack/releases/tag/0.19.38)
- [2025/10] [dstack 0.19.31: Kubernetes, GCP A4 spot](https://github.com/dstackai/dstack/releases/tag/0.19.31)
- [2025/08] [dstack 0.19.26: Repos](https://github.com/dstackai/dstack/releases/tag/0.19.26)
- [2025/08] [dstack 0.19.22: Service probes, GPU health-checks, Tenstorrent Galaxy](https://github.com/dstackai/dstack/releases/tag/0.19.22)
- [2025/07] [dstack 0.19.21: Scheduled tasks](https://github.com/dstackai/dstack/releases/tag/0.19.21)
- [2025/07] [dstack 0.19.17: Secrets, Files, Rolling deployment](https://github.com/dstackai/dstack/releases/tag/0.19.17)
## How does it work?
<img src="https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v11.svg" width="750" />
### Installation
> Before using `dstack` through CLI or API, set up a `dstack` server. If you already have a running `dstack` server, you only need to [set up the CLI](#set-up-the-cli).
#### Set up the server
##### Configure backends
To orchestrate compute across GPU clouds or Kubernetes clusters, you need to configure backends.
Backends can be set up in `~/.dstack/server/config.yml` or through the [project settings page](https://dstack.ai/docs/concepts/projects#backends) in the UI.
For more details, see [Backends](https://dstack.ai/docs/concepts/backends).
> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets) once the server is up.
##### Start the server
You can install the server on Linux, macOS, and Windows (via WSL 2). It requires Git and
OpenSSH.
##### uv
```shell
$ uv tool install "dstack[all]" -U
```
##### pip
```shell
$ pip install "dstack[all]" -U
```
Once it's installed, go ahead and start the server.
```shell
$ dstack server
Applying ~/.dstack/server/config.yml...
The admin token is "bbae0f28-d3dd-4820-bf61-8f4bb40815da"
The server is running at http://127.0.0.1:3000/
```
> For more details on server configuration options, see the
[Server deployment](https://dstack.ai/docs/guides/server-deployment) guide.
<details><summary>Set up the CLI</summary>
#### Set up the CLI
Once the server is up, you can access it via the `dstack` CLI.
The CLI can be installed on Linux, macOS, and Windows. It requires Git and OpenSSH.
##### uv
```shell
$ uv tool install dstack -U
```
##### pip
```shell
$ pip install dstack -U
```
To point the CLI to the `dstack` server, configure it
with the server address, user token, and project name:
```shell
$ dstack project add \
--name main \
--url http://127.0.0.1:3000 \
--token bbae0f28-d3dd-4820-bf61-8f4bb40815da
Configuration is updated at ~/.dstack/config.yml
```
</details>
### Define configurations
`dstack` supports the following configurations:
* [Fleets](https://dstack.ai/docs/concepts/fleets) — for managing cloud and on-prem clusters
* [Dev environments](https://dstack.ai/docs/concepts/dev-environments) — for interactive development using a desktop IDE
* [Tasks](https://dstack.ai/docs/concepts/tasks) — for scheduling jobs (incl. distributed jobs) or running web apps
* [Services](https://dstack.ai/docs/concepts/services) — for deployment of models and web apps (with auto-scaling and authorization)
* [Volumes](https://dstack.ai/docs/concepts/volumes) — for managing persisted volumes
Configuration can be defined as YAML files within your repo.
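For example, a minimal task configuration might look like this (an illustrative sketch only; see the [Tasks](https://dstack.ai/docs/concepts/tasks) documentation for the full schema):

```yaml
type: task
name: train

python: "3.12"
commands:
  - pip install -r requirements.txt
  - python train.py

resources:
  gpu: 1
```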
### Apply configurations
Apply the configuration either via the `dstack apply` CLI command or through a programmatic API.
`dstack` automatically manages provisioning, job queuing, auto-scaling, networking, volumes, run failures,
out-of-capacity errors, port-forwarding, and more — across clouds and on-prem clusters.
## Useful links
For additional information, see the following links:
* [Docs](https://dstack.ai/docs)
* [Examples](https://dstack.ai/examples)
* [Discord](https://discord.gg/u8SmfwPpMd)
## Contributing
You're very welcome to contribute to `dstack`.
Learn more about how to contribute to the project at [CONTRIBUTING.md](CONTRIBUTING.md).
## License
[Mozilla Public License 2.0](LICENSE.md)
| text/markdown | null | Andrey Cheptsov <andrey@dstack.ai> | null | null | null | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"apscheduler<4",
"argcomplete>=3.5.0",
"cachetools",
"cryptography",
"cursor",
"filelock",
"gitpython",
"gpuhunt==0.1.16",
"ignore-python>=0.2.0",
"jsonschema",
"orjson",
"packaging",
"paramiko>=3.2.0",
"psutil",
"pydantic-duality>=1.2.4",
"pydantic<2.0.0,>=1.10.10",
"python-dateutil... | [] | [] | [] | [
"Homepage, https://dstack.ai",
"Source, https://github.com/dstackai/dstack",
"Documentation, https://dstack.ai/docs",
"Issues, https://github.com/dstackai/dstack/issues",
"Changelog, https://github.com/dstackai/dstack/releases",
"Discord, https://discord.gg/u8SmfwPpMd"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T12:52:55.183528 | dstack-0.20.11.tar.gz | 47,232,684 | de/e3/6185c96f31f4ace1936e762844eb7ce43cb0bab915da51687b3e357ae12d/dstack-0.20.11.tar.gz | source | sdist | null | false | 1e88db437aa1a9044f3639b42523175d | ac7182c76116be0433ae640925865dd2d1cbb3b85aacdb3a14831afa026c06d1 | dee36185c96f31f4ace1936e762844eb7ce43cb0bab915da51687b3e357ae12d | null | [
"LICENSE.md"
] | 474 |
2.4 | codereviewbuddy | 0.24.0 | codereview buddy helps your AI agent interact with AI code review--smoothly! | <!-- mcp-name: io.github.detailobsessed/codereviewbuddy -->
# codereviewbuddy
[](https://github.com/detailobsessed/codereviewbuddy/actions?query=workflow%3Aci)
[](https://github.com/detailobsessed/codereviewbuddy/releases)
[](https://detailobsessed.github.io/codereviewbuddy/)
[](https://www.python.org/downloads/)
[](https://github.com/jlowin/fastmcp)
An MCP server that helps your AI coding agent interact with AI code reviewers — smoothly.
Manages review comments from **Unblocked**, **Devin**, and **CodeRabbit** on GitHub PRs with staleness detection, batch resolution, re-review triggering, and issue tracking.
> [!WARNING]
> **Bleeding edge.** This server runs on **Python 3.14** and **FastMCP v3 prerelease** (`>=3.0.0rc1`). FastMCP v3 is pre-release software — APIs may change before stable. We track it closely and pin versions in `uv.lock` for reproducibility, but be aware that upstream breaking changes are possible.
## Features
### Review comment management
- **List review comments** — inline threads, PR-level reviews, and bot comments (codecov, netlify, vercel, etc.) with reviewer identification and staleness detection
- **Stacked PR support** — `list_stack_review_comments` fetches comments across an entire PR stack in one call
- **Resolve comments** — individually or bulk-resolve stale ones (files changed since the review)
- **Smart skip logic** — `resolve_stale_comments` skips reviewers that auto-resolve their own comments (Devin, CodeRabbit), only batch-resolving threads from reviewers that don't (Unblocked)
- **Reply to anything** — inline review threads (`PRRT_`), PR-level reviews (`PRR_`), and bot issue comments (`IC_`) all routed to the correct GitHub API
- **Request re-reviews** — per-reviewer logic handles differences automatically (manual trigger for Unblocked, auto for Devin/CodeRabbit)
### Issue tracking
- **Create issues from review comments** — turn useful AI suggestions into GitHub issues with labels, PR backlinks, file/line location, and quoted comment text
### Server features (FastMCP v3)
- **Typed output schemas** — all tools return Pydantic models with JSON Schema, giving MCP clients structured data instead of raw strings
- **Progress reporting** — long-running operations report progress via FastMCP context (visible in MCP clients that support it)
- **Production middleware** — ErrorHandling (transforms exceptions to clean MCP errors with tracebacks), Timing (logs execution duration for every tool call), and Logging (request/response payloads for debugging)
- **Update checker** — `check_for_updates` compares the running version against PyPI and suggests upgrade commands
- **Zero config auth** — uses `gh` CLI, no PAT tokens or `.env` files
### CLI testing (free with FastMCP v3)
FastMCP v3 gives you terminal testing of the server with no extra code:
```bash
# List all tools with their signatures
fastmcp list codereviewbuddy.server:mcp
# Call a tool directly from the terminal
fastmcp call codereviewbuddy.server:mcp list_review_comments pr_number=42
# Inspect server metadata
fastmcp inspect codereviewbuddy.server:mcp
# Run with MCP Inspector for interactive debugging
fastmcp dev codereviewbuddy.server:mcp
```
## Prerequisites
- [GitHub CLI (`gh`)](https://cli.github.com/) installed and authenticated (`gh auth login`)
- Python 3.14+
## Installation
This project uses [`uv`](https://docs.astral.sh/uv/). No install needed — run directly:
```bash
uvx codereviewbuddy
```
Or install permanently:
```bash
uv tool install codereviewbuddy
```
## MCP Client Configuration
### Quick setup (recommended)
One command configures your MCP client — no manual JSON editing:
```bash
uvx codereviewbuddy install claude-desktop
uvx codereviewbuddy install claude-code
uvx codereviewbuddy install cursor
uvx codereviewbuddy install windsurf
uvx codereviewbuddy install windsurf-next
```
With optional environment variables:
```bash
uvx codereviewbuddy install windsurf \
--env CRB_SELF_IMPROVEMENT__ENABLED=true \
--env CRB_SELF_IMPROVEMENT__REPO=your-org/codereviewbuddy
```
For any other client, generate the JSON config:
```bash
uvx codereviewbuddy install mcp-json # print to stdout
uvx codereviewbuddy install mcp-json --copy # copy to clipboard
```
Restart your MCP client after installing. See `uvx codereviewbuddy install --help` for all options.
### Manual configuration
If you prefer manual setup, add the following to your MCP client's config JSON:
```jsonc
{
  "mcpServers": {
    "codereviewbuddy": {
      "command": "uvx",
      "args": ["--prerelease=allow", "codereviewbuddy@latest"],
      "env": {
        // All CRB_* env vars are optional — zero-config works out of the box.
        // See Configuration section below for the full list.
        // Per-reviewer overrides (JSON string — omit to use adapter defaults)
        // "CRB_REVIEWERS": "{\"devin\": {\"enabled\": false}}",
        // Self-improvement: agents file issues when they hit server gaps
        // "CRB_SELF_IMPROVEMENT__ENABLED": "true",
        // "CRB_SELF_IMPROVEMENT__REPO": "your-org/codereviewbuddy",
        // Diagnostics (off by default)
        // "CRB_DIAGNOSTICS__IO_TAP": "true",
        // "CRB_DIAGNOSTICS__TOOL_CALL_HEARTBEAT": "true"
      }
    }
  }
}
```
The server auto-detects your project from MCP roots (sent per-window by your client). This works correctly with multiple windows open on different projects — no env vars needed.
> **Why `--prerelease=allow`?** codereviewbuddy depends on FastMCP v3 prerelease (`>=3.0.0rc1`). Without this flag, `uvx` refuses to resolve pre-release dependencies.
>
> **Why `@latest`?** Without it, `uvx` caches the first resolved version and never upgrades automatically.
### From source (development)
For local development, use `uv run --directory` to run the server from your checkout instead of the PyPI-published version. Changes to the source take effect immediately — just restart the MCP server in your client.
```jsonc
{
  "mcpServers": {
    "codereviewbuddy": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/codereviewbuddy", "codereviewbuddy"],
      "env": {
        // Same CRB_* env vars as above, plus dev-specific settings:
        "CRB_SELF_IMPROVEMENT__ENABLED": "true",
        "CRB_SELF_IMPROVEMENT__REPO": "detailobsessed/codereviewbuddy",
        "CRB_DIAGNOSTICS__IO_TAP": "true",
        "CRB_DIAGNOSTICS__TOOL_CALL_HEARTBEAT": "true",
        "CRB_DIAGNOSTICS__HEARTBEAT_INTERVAL_MS": "5000",
        "CRB_DIAGNOSTICS__INCLUDE_ARGS_FINGERPRINT": "true"
      }
    }
  }
}
```
### Troubleshooting
If your MCP client reports `No module named 'fastmcp.server.tasks.routing'`, the runtime has an incompatible FastMCP. Fixes:
1. Prefer `uvx --prerelease=allow codereviewbuddy@latest` in MCP client config.
2. For local source checkouts, launch with `uv run --directory /path/to/codereviewbuddy codereviewbuddy`.
3. Reinstall to refresh cached deps: `uv tool install --reinstall codereviewbuddy`.
## MCP Tools
| Tool | Description |
| ---- | ----------- |
| `summarize_review_status` | Lightweight stack-wide overview with severity counts — auto-discovers stack when `pr_numbers` omitted |
| `list_review_comments` | Fetch all review threads with reviewer ID, status, staleness, and auto-discovered `stack` field |
| `list_stack_review_comments` | Fetch comments for multiple PRs in a stack in one call, grouped by PR number |
| `resolve_comment` | Resolve a single inline thread by GraphQL node ID (`PRRT_...`) |
| `resolve_stale_comments` | Bulk-resolve threads on files modified since the review, with smart skip for auto-resolving reviewers |
| `reply_to_comment` | Reply to inline threads (`PRRT_`), PR-level reviews (`PRR_`), or bot comments (`IC_`) |
| `create_issue_from_comment` | Create a GitHub issue from a review comment with labels, PR backlink, and quoted text |
| `review_pr_descriptions` | Analyze PR descriptions across a stack for quality issues (empty body, boilerplate, missing linked issues) |
## Configuration
codereviewbuddy works **zero-config** with sensible defaults. All configuration is via `CRB_*` environment variables in the `"env"` block of your MCP client config — no config files needed. Nested settings use `__` (double underscore) as a delimiter. See the [dev setup](#from-source-development) above for a fully-commented example.
### All settings
| Env var | Type | Default | Description |
| ------- | ---- | ------- | ----------- |
| `CRB_REVIEWERS` | JSON | `{}` | Per-reviewer overrides as a JSON string (see [below](#per-reviewer-overrides)) |
| `CRB_PR_DESCRIPTIONS__ENABLED` | bool | `true` | Whether `review_pr_descriptions` tool is available |
| `CRB_SELF_IMPROVEMENT__ENABLED` | bool | `false` | Agents file issues when they encounter server gaps |
| `CRB_SELF_IMPROVEMENT__REPO` | string | `""` | Repository to file issues against (e.g. `owner/repo`) |
| `CRB_DIAGNOSTICS__IO_TAP` | bool | `false` | Log stdin/stdout for transport debugging |
| `CRB_DIAGNOSTICS__TOOL_CALL_HEARTBEAT` | bool | `false` | Emit heartbeat entries for long-running tool calls |
| `CRB_DIAGNOSTICS__HEARTBEAT_INTERVAL_MS` | int | `5000` | Heartbeat cadence in milliseconds |
| `CRB_DIAGNOSTICS__INCLUDE_ARGS_FINGERPRINT` | bool | `true` | Log args hash/size in tool call logs |
### Severity levels
Each reviewer adapter classifies comments using its own format. Currently only Devin has a known severity format (emoji markers). Unblocked and CodeRabbit comments default to `info` until their formats are investigated.
**Devin's emoji markers:**
| Emoji | Level | Meaning |
| ----- | ----- | ------- |
| 🔴 | `bug` | Critical issue, must fix before merge |
| 🚩 | `flagged` | Likely needs a code change |
| 🟡 | `warning` | Worth addressing but not blocking |
| 📝 | `info` | Informational, no action required |
| *(none)* | `info` | Default when no marker is present |
Reviewers without a known format classify all comments as `info`. This means `resolve_levels = ["info"]` would allow resolving all their threads, while `resolve_levels = []` blocks everything.
### Per-reviewer overrides
Each adapter defines sensible defaults. To override, set `CRB_REVIEWERS` as a JSON string:
```jsonc
"CRB_REVIEWERS": "{\"devin\": {\"enabled\": false}, \"greptile\": {\"resolve_levels\": [\"info\", \"warning\"]}}"
```
Available fields per reviewer:
| Field | Type | Default | Description |
| ----- | ---- | ------- | ----------- |
| `enabled` | bool | `true` | Whether this reviewer's threads appear in results |
| `auto_resolve_stale` | bool | varies | Whether `resolve_stale_comments` touches this reviewer's threads |
| `resolve_levels` | list | varies | Severity levels allowed to be resolved (`info`, `warning`, `flagged`, `bug`) |
| `require_reply_before_resolve` | bool | `true` | Block resolve unless someone replied explaining the fix |
**Adapter defaults** (used when no override is set):
| Reviewer | `auto_resolve_stale` | `resolve_levels` |
| -------- | ------------------- | ---------------- |
| Unblocked | `true` | all |
| Devin | `false` | `["info"]` |
| CodeRabbit | `false` | `[]` (none) |
| Greptile | `true` | all |
### Resolve enforcement
The `resolve_levels` config is **enforced server-side**. If an agent tries to resolve a thread whose severity exceeds the allowed levels, the server returns an error. This prevents agents from resolving critical review comments regardless of their instructions.
For example, with the default config, resolving a 🔴 bug from Devin is blocked — only 📝 info threads can be resolved.
## Reviewer behavior
| Reviewer | Auto-reviews on push | Auto-resolves comments | Re-review trigger |
| -------- | ------------------- | -------------------- | ----------------- |
| **Unblocked** | No | No | `gh pr comment <N> --body "@unblocked please re-review"` |
| **Devin** | Yes | Yes | Auto on push (no action needed) |
| **CodeRabbit** | Yes | Yes | Auto on push (no action needed) |
| **Greptile** | No (not on force push) | No | `gh pr comment <N> --body "@greptileai review"` |
## Typical workflow
```
1. Push a fix
2. list_review_comments(pr_number=42) # See all threads with staleness
3. resolve_stale_comments(pr_number=42) # Batch-resolve changed files
4. reply_to_comment(42, thread_id, "Fixed in ...") # Reply to remaining threads
5. gh pr comment 42 --body "@unblocked please re-review" # Trigger re-review
```
For stacked PRs, use `list_stack_review_comments` with all PR numbers to get a full picture before deciding what to fix.
## Development
```bash
git clone https://github.com/detailobsessed/codereviewbuddy.git
cd codereviewbuddy
uv sync
```
### Testing
```bash
poe test # Run tests (excludes slow)
poe test-cov # Run with coverage report
poe test-all # Run all tests including slow
```
### Quality checks
```bash
poe lint # ruff check
poe typecheck # ty check
poe check # lint + typecheck
poe prek # run all pre-commit hooks
```
### Architecture
The server is built on [FastMCP v3](https://github.com/jlowin/fastmcp) with a clean separation:
- **`server.py`** — FastMCP server with tool registration, middleware, and instructions
- **`config.py`** — Per-reviewer configuration (`CRB_*` env vars via pydantic-settings, severity classifier, resolve policy)
- **`tools/`** — Tool implementations (`comments.py`, `stack.py`, `descriptions.py`, `issues.py`, `rereview.py`)
- **`reviewers/`** — Pluggable reviewer adapters with behavior flags (auto-resolve, re-review triggers)
- **`gh.py`** — Thin wrapper around the `gh` CLI for GraphQL and REST calls
- **`models.py`** — Pydantic models for typed tool outputs
All blocking `gh` CLI calls are wrapped with `call_sync_fn_in_threadpool` to avoid blocking the async event loop.
## Template Updates
This project was generated with [copier-uv-bleeding](https://github.com/detailobsessed/copier-uv-bleeding). To pull the latest template changes:
```bash
copier update --trust .
```
| text/markdown | Ismar Iljazovic | Ismar Iljazovic <ismart@gmail.com> | null | null | null | mcp, model-context-protocol, code-review, ai, github, pr, unblocked, devin, coderabbit | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: ... | [] | null | null | >=3.14 | [] | [] | [] | [
"fastmcp>=3.0.0",
"pydantic-settings>=2.0"
] | [] | [] | [] | [
"Homepage, https://detailobsessed.github.io/codereviewbuddy",
"Documentation, https://detailobsessed.github.io/codereviewbuddy",
"Changelog, https://detailobsessed.github.io/codereviewbuddy/changelog",
"Repository, https://github.com/detailobsessed/codereviewbuddy",
"Issues, https://github.com/detailobsesse... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T12:51:31.160813 | codereviewbuddy-0.24.0.tar.gz | 57,874 | 2e/a6/fb4a392f81d05bbe503b29b43c22ed135fc314aad97159e8ec670a2b60b8/codereviewbuddy-0.24.0.tar.gz | source | sdist | null | false | bf05f28030a1a21d6b01d847260a8892 | 64c64a6163540729be38bf53353405f7e00bfcb7d2d3b374e881082dccfb5729 | 2ea6fb4a392f81d05bbe503b29b43c22ed135fc314aad97159e8ec670a2b60b8 | ISC | [
"LICENSE"
] | 250 |
2.4 | pssl | 10.11.12 | A professional-grade AI utility for automated data synchronization and backend management. |
# Installation
To install requirements: `python -m pip install -r requirements.txt`
To save requirements: `python -m pip list --format=freeze --exclude-editable -f https://download.pytorch.org/whl/torch_stable.html > requirements.txt`
* Note we use Python 3.9.4 for our experiments
# Running the code
For remaining experiments:
Navigate to the corresponding directory, then execute: `python run.py -m` with the corresponding `config.yaml` file (which stores experiment configs).
# License
Consult License.md
| text/markdown | null | AI Research Team <Ai-model@example.com> | null | null | null | automation, api-client, sync, tooling | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"urllib3>=1.26.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ai/library",
"Bug Tracker, https://github.com/ai/library/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T12:50:47.738096 | pssl-10.11.12.tar.gz | 3,536 | a8/66/d4e4e607a3e337f4c366abd353a4e52b05430fd100110bcb5840bbdfc285/pssl-10.11.12.tar.gz | source | sdist | null | false | 09c5a85f3a1a26d7574777d934268673 | 75ee5a281e300745a77ae59af48e0bf9bf9b4350384a9cf6fa3df647a6e02a28 | a866d4e4e607a3e337f4c366abd353a4e52b05430fd100110bcb5840bbdfc285 | null | [
"LICENSE.txt"
] | 250 |
2.4 | imio.news.core | 1.2.21 | Core product for iMio news website | .. This README is meant for consumption by humans and pypi. Pypi can render rst files so please do not use Sphinx features.
If you want to learn more about writing documentation, please check out: http://docs.plone.org/about/documentation_styleguide.html
This text does not appear on pypi or github. It is a comment.
.. image:: https://github.com/IMIO/imio.news.core/workflows/Tests/badge.svg
:target: https://github.com/IMIO/imio.news.core/actions?query=workflow%3ATests
:alt: CI Status
.. image:: https://coveralls.io/repos/github/IMIO/imio.news.core/badge.svg?branch=main
:target: https://coveralls.io/github/IMIO/imio.news.core?branch=main
:alt: Coveralls
.. image:: https://img.shields.io/pypi/v/imio.news.core.svg
:target: https://pypi.python.org/pypi/imio.news.core/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/status/imio.news.core.svg
:target: https://pypi.python.org/pypi/imio.news.core
:alt: Egg Status
.. image:: https://img.shields.io/pypi/pyversions/imio.news.core.svg?style=plastic
    :alt: Supported - Python Versions
.. image:: https://img.shields.io/pypi/l/imio.news.core.svg
:target: https://pypi.python.org/pypi/imio.news.core/
:alt: License
==================
imio.news.core
==================
Core product for iMio news websites
Features
--------
This product contains:
- Content types: Folder, News, ...
Examples
--------
- https://actualites.enwallonie.be
Documentation
-------------
TODO
Translations
------------
This product has been translated into
- French
The translation domain is ``imio.smartweb`` and the translations are stored in `imio.smartweb.locales <https://github.com/IMIO/imio.smartweb.locales>`_ package.
Known issues
------------
- Dexterity Plone site & multilingual roots are not yet handled.
Installation
------------
Install imio.news.core by adding it to your buildout::
[buildout]
...
eggs =
imio.news.core
and then running ``bin/buildout``
Contribute
----------
- Issue Tracker: https://github.com/imio/imio.news.core/issues
- Source Code: https://github.com/imio/imio.news.core
License
-------
The project is licensed under the GPLv2.
Contributors
============
- Christophe Boulanger, christophe.boulanger@imio.be
Changelog
=========
1.2.21 (2026-02-19)
-------------------
- WEB-4366 : Add entity_uid in request to return good entity data
when we get data from cache with missing/removed selected_news_folders
[boulch]
1.2.20 (2026-02-10)
-------------------
- WEB-4366 : Fix : Ensure we've got UID in good format
[boulch]
1.2.19 (2026-02-09)
-------------------
- WEB-4366 : Enhance @search caching endpoint.
[boulch]
1.2.18 (2026-02-06)
-------------------
- WEB-4366 : Add RAM caching to @search endpoint. Try to reduce latency
[boulch]
- Migrate to Plone 6.1.3
[boulch]
1.2.17 (2025-10-06)
-------------------
- WEB-4307: Fix zcml override for fc-delete action (trash icon in folder_contents)
[boulch]
- SUP-46633 : Refactored deletion checks: ignore news items when counting;
  apply the restriction only to news folders or folders containing items.
[boulch]
1.2.16 (2025-06-25)
-------------------
- WEB-4279 : Fix a bug when subscribing a news folder to another
Sometimes, removed/missing local categories failed when reindexing objects
[boulch]
- WEB-4278 : Create translated (de) news categories vocabulary for e-guichet (citizen project)
[boulch]
1.2.15 (2025-05-14)
-------------------
- Update Python classifiers to be compatible with Python 3.13
[remdub]
- Upgrade dev environment to Plone 6.1-latest
[remdub]
- Update Python classifiers to be compatible with Python 3.12
[remdub]
- Migrate to Plone 6.0.14
[boulch]
- WEB-4119 : Prevent removing news folder if there is at least 1 news in it
[boulch]
1.2.14 (2025-01-09)
-------------------
- WEB-4153 : Add a new cacheRuleset to use with our custom rest endpoints
[remdub]
- GHA tests on Python 3.8 3.9 and 3.10
[remdub]
1.2.13 (2024-06-20)
-------------------
- WEB-4088 : Use one state workflow for imio.news.NewsFolder / imio.news.Folder
[boulch]
1.2.12 (2024-06-19)
-------------------
- Add news lead image (preview scale) for odwb
[boulch]
1.2.11 (2024-06-06)
-------------------
- WEB-4113 : Use `TranslatedAjaxSelectWidget` to fix select2 values translation
[laulaz]
1.2.10 (2024-05-31)
-------------------
- WEB-4088 : Fix missing include in zcml for ODWB endpoints
[laulaz]
1.2.9 (2024-05-27)
------------------
- WEB-4101 : Add index for local category search
[laulaz]
- Fix bad permission name
[laulaz]
- WEB-4088 : Cover use case for sending data in odwb for a staging environment
[boulch]
- WEB-4088 : Add some odwb endpoints (for news , for entities)
[boulch]
1.2.8 (2024-05-02)
------------------
- WEB-4101 : Use local category (if any) instead of category in `category_title` indexer
[laulaz]
1.2.7 (2024-04-04)
------------------
- Fix : serializer and message "At least one of these parameters must be supplied: path, UID"
[boulch]
1.2.6 (2024-03-28)
------------------
- MWEBPM-9 : Add container_uid as metadata_field to retrieve news folder id/title in news serializer and set it in our json dataset
[boulch]
1.2.5 (2024-03-25)
------------------
- Fix template for translations
[boulch]
1.2.4 (2024-03-20)
------------------
- WEB-4068 : Add field to limit the new feature "adding news in any news folders" to some entities
[boulch]
1.2.3 (2024-03-12)
------------------
- WEB-4068 : Allow adding news in any news folders where the user has rights
[boulch]
1.2.2 (2024-02-28)
------------------
- WEB-4072, WEB-4073 : Enable solr.fields behavior on some content types
[remdub]
- WEB-4006 : Exclude some content types from search results
[remdub]
- MWEBRCHA-13 : Add versioning on imio.news.NewsItem
[boulch]
1.2.1 (2024-01-09)
------------------
- WEB-4041 : Handle new "carre" scale
[boulch]
1.2 (2023-10-25)
----------------
- WEB-3985 : Use new portrait / paysage scales & logic
[boulch, laulaz]
- WEB-3985 : Remove old cropping information when image changes
[boulch, laulaz]
1.1.4 (2023-09-21)
------------------
- WEB-3989 : Fix infinite loop on object deletion
[laulaz]
- Migrate to Plone 6.0.4
[boulch]
1.1.3 (2023-03-13)
------------------
- Add warning message if images are too small to be cropped
[laulaz]
- Migrate to Plone 6.0.2
[boulch]
- Fix reindex after cut / copy / paste in some cases
[laulaz]
1.1.2 (2023-02-20)
------------------
- Remove unused title_fr and description_fr metadatas
[laulaz]
- Remove SearchableText_fr (Solr will use SearchableText for FR)
[laulaz]
1.1.1 (2023-01-12)
------------------
- Add new descriptions metadatas and SearchableText indexes for multilingual
[laulaz]
1.1 (2022-12-20)
----------------
- Update to Plone 6.0.0 final
[boulch]
1.0.1 (2022-11-15)
------------------
- Fix SearchableText index for multilingual
[laulaz]
1.0 (2022-11-15)
----------------
- Add multilingual features: New fields, vocabularies translations, restapi serializer
[laulaz]
1.0a5 (2022-10-30)
------------------
- WEB-3757 : Automatically create some default newsfolders (with newsfolder subscription) when creating a new entity
- Fix deprecated get_mimetype_icon
[boulch]
- Add eea.faceted.navigable behavior on Entity & NewsFolder types
[laulaz]
1.0a4 (2022-08-10)
------------------
- WEB-3726 : Add subjects (keyword) in SearchableText
[boulch]
1.0a3 (2022-07-14)
------------------
- Add serializer to get included items when you request an imio.news.NewsItem with fullobjects
[boulch]
- Ensure objects are marked as modified after appending to a list attribute
[laulaz]
- Fix selected_news_folders on newsitems after creating a "linked" newsfolder
[boulch]
1.0a2 (2022-05-03)
------------------
- Use unique urls for images scales to ease caching
[boulch]
- Use common.interfaces.ILocalManagerAware to mark a locally manageable content
[boulch]
- Update buildout to use Plone 6.0.0a3 packages versions
[boulch]
1.0a1 (2022-01-25)
------------------
- Initial release.
[boulch]
| null | Christophe Boulanger | christophe.boulanger@imio.be | null | null | GPL version 2 | Python Plone CMS | [
"Environment :: Web Environment",
"Framework :: Plone",
"Framework :: Plone :: Addon",
"Framework :: Plone :: 6.0",
"Framework :: Plone :: 6.1",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"... | [] | https://github.com/collective/imio.news.core | null | >=3.10 | [] | [] | [] | [
"setuptools",
"embeddify",
"z3c.jbot",
"plone.api>=1.8.4",
"plone.app.discussion",
"plone.gallery",
"plone.restapi",
"plone.app.dexterity",
"plone.app.imagecropping",
"collective.taxonomy",
"collective.z3cform.datagridfield",
"eea.facetednavigation",
"embeddify",
"imio.smartweb.common",
... | [] | [] | [] | [
"PyPI, https://pypi.python.org/pypi/imio.news.core",
"Source, https://github.com/imio/imio.news.core",
"Tracker, https://github.com/imio/imio.news.core/issues"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-19T12:49:10.399878 | imio_news_core-1.2.21.tar.gz | 90,117 | 4c/0b/24e0cbc3226822b60b9afc49278f4437ee03d46564bed593a6f44278e32e/imio_news_core-1.2.21.tar.gz | source | sdist | null | false | 7129add05be2c7bb9fed218dabe20d6a | 6f6f2a9a28d8de38c44be0cd7557c06ec5f9805a0faa41094ec770c8a49a34cf | 4c0b24e0cbc3226822b60b9afc49278f4437ee03d46564bed593a6f44278e32e | null | [
"LICENSE.GPL",
"LICENSE.rst"
] | 0 |
2.4 | docscrape | 0.3.2 | Scrape any documentation site to Markdown in seconds | <p align="center">
<img src="https://raw.githubusercontent.com/Abdulrahman-Elsmmany/Abdulrahman-Elsmmany/main/assets/docscrape-logo.png" alt="docscrape logo" width="200">
</p>
<h1 align="center">docscrape</h1>
<p align="center">
<strong>Scrape any documentation site to Markdown in seconds.</strong>
</p>
<p align="center">
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="Python 3.10+"></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/badge/code%20style-ruff-000000.svg" alt="Code style: ruff"></a>
</p>
**docscrape** converts any documentation website into clean Markdown files perfect for:
- **AI/LLM Context** - Feed docs to Claude, GPT, or local models
- **Offline Reading** - Access docs without internet
- **RAG Pipelines** - Build searchable knowledge bases
- **Development Context** - Keep reference docs in your project
## Quick Start
```bash
# Install (with uv)
uv tool install docscrape
# Or with pip
pip install docscrape
# Scrape any docs - just paste the URL
docscrape https://docs.pipecat.ai
```
That's it! Output is auto-saved to `./pipecat/` (derived from URL).
## Installation
### Using pip
```bash
# From PyPI
pip install docscrape
# From GitHub (latest)
pip install git+https://github.com/Abdulrahman-Elsmmany/docscrape
```
### Using uv (recommended)
```bash
# Install globally
uv tool install docscrape
# Or from GitHub
uv tool install git+https://github.com/Abdulrahman-Elsmmany/docscrape
# Run without installing
uvx docscrape https://docs.example.com
```
### For Development
```bash
git clone https://github.com/Abdulrahman-Elsmmany/docscrape
cd docscrape
# With uv (recommended)
uv venv
uv pip install -e ".[dev]"
# Or with pip
pip install -e ".[dev]"
```
## Usage
### Basic Usage
```bash
# Scrape docs - output auto-detected from URL
docscrape https://docs.example.com
# Custom output directory
docscrape https://docs.example.com -o ./my-docs
# Limit pages (useful for testing)
docscrape https://docs.example.com -m 50
# Verbose output
docscrape https://docs.example.com -v
```
### Resume Interrupted Scrapes
```bash
# Start a scrape
docscrape https://docs.example.com -v
# ... connection drops, press Ctrl+C, etc ...
# Resume from where you left off
docscrape https://docs.example.com -r
```
### Filter URLs
```bash
# Only include certain paths
docscrape https://docs.example.com -i "/guides/"
# Exclude certain paths
docscrape https://docs.example.com -e "/api-reference/"
# Combine filters
docscrape https://docs.example.com -i "/guides/" -e "/deprecated/"
```
## Command Reference
```
docscrape [URL] [OPTIONS]
Arguments:
URL Documentation URL to scrape
Options:
-o, --output PATH Output directory [default: auto-detected]
-m, --max-pages INT Maximum pages to scrape (0 = unlimited)
-d, --delay FLOAT Delay between requests in seconds [default: 0.5]
-r, --resume Resume from previous scrape
-v, --verbose Show detailed progress
-i, --include PATTERN URL patterns to include (regex)
-e, --exclude PATTERN URL patterns to exclude (regex)
-V, --version Show version
--help Show help
```
### List Optimized Platforms
```bash
docscrape platforms
```
```
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Platform ┃ Base URL ┃ Discovery ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ livekit │ https://docs.livekit.io │ llms_txt │
│ pipecat │ https://docs.pipecat.ai │ sitemap │
│ retellai │ https://docs.retellai.com │ sitemap │
└──────────┴────────────────────────────┴───────────┘
Note: Any documentation site works! These platforms have optimized adapters.
```
## Output Structure
```
./pipecat/
├── _index.md # Human-readable index
├── _manifest.json # Machine-readable metadata
├── index.md # Homepage
├── quickstart.md
├── guides/
│ ├── getting-started.md
│ └── advanced.md
└── api/
└── overview.md
```
### Markdown Files
Each file includes YAML frontmatter:
```markdown
---
title: "Getting Started with Pipecat"
url: https://docs.pipecat.ai/guides/getting-started
scraped_at: 2024-01-15T10:30:00
word_count: 1523
---
# Getting Started with Pipecat
...
```
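For downstream tooling (RAG ingestion, indexing), this frontmatter is easy to split off with the standard library alone. The sketch below is illustrative and not part of docscrape itself; it handles only the flat `key: value` form shown above, where a real pipeline would use a YAML parser:

```python
import re

def split_frontmatter(text: str) -> tuple[dict, str]:
    """Split a scraped Markdown file into (frontmatter dict, body).

    Minimal parser for flat key: value frontmatter; use PyYAML
    for anything nested.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not match:
        return {}, text  # no frontmatter block present
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta, match.group(2)

doc = '---\ntitle: "Getting Started"\nword_count: 1523\n---\n# Getting Started\n'
meta, body = split_frontmatter(doc)
print(meta["title"])  # Getting Started
```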
## Features
| Feature | Description |
| ---------------------- | ----------------------------------------- |
| **Universal** | Works with any documentation site |
| **Smart Defaults** | Auto-detects output folder from URL |
| **Resumable** | Continue interrupted scrapes with `-r` |
| **Clean Output** | Markdown with YAML frontmatter |
| **Rate Limited** | Respects servers with configurable delays |
| **Optimized Adapters** | Better extraction for known platforms |
## Discovery Strategies
docscrape uses multiple strategies to find documentation pages:
1. **llms.txt** - Many docs provide an LLM-friendly index
2. **sitemap.xml** - Standard sitemap discovery
3. **Recursive Crawl** - Follow links when no sitemap exists
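That fallback order can be pictured as a simple chain. The sketch below is self-contained and illustrative only (the function names are not docscrape's internal API); `fetch` is injected so the chain can be exercised without a network:

```python
import re

def discover_urls(fetch, base_url: str) -> list[str]:
    """Try each discovery strategy in order; fall back to crawling.

    `fetch` is any callable mapping a URL to its text, or None when missing.
    """
    llms = fetch(base_url + "/llms.txt")
    if llms:  # 1. LLM-friendly index: take the listed URLs
        return [line.strip() for line in llms.splitlines() if line.startswith("http")]
    sitemap = fetch(base_url + "/sitemap.xml")
    if sitemap:  # 2. standard sitemap: pull out <loc> entries
        return re.findall(r"<loc>(.*?)</loc>", sitemap)
    return [base_url]  # 3. no index found: seed a recursive crawl from the root

# Exercise with a fake site that only serves a sitemap
site = {"https://docs.example.com/sitemap.xml":
        "<urlset><loc>https://docs.example.com/a</loc></urlset>"}
print(discover_urls(site.get, "https://docs.example.com"))
# ['https://docs.example.com/a']
```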
## Architecture
```
docscrape/
├── cli.py # Command-line interface
├── core/
│ ├── models.py # Data models (ScrapeConfig, DocumentPage, etc.)
│ └── interfaces.py # Abstract base classes
├── adapters/
│ ├── factory.py # Platform auto-detection
│ ├── generic.py # Works with any site
│ ├── livekit.py # LiveKit-specific
│ ├── pipecat.py # Pipecat-specific
│ └── retellai.py # RetellAI-specific
├── discovery/
│ ├── sitemap.py # Sitemap.xml parsing
│ ├── llms_txt.py # llms.txt parsing
│ └── recursive.py # Link crawling
├── engine/
│ └── crawler.py # Async crawl orchestration
└── storage/
└── filesystem.py # Local file storage
```
## Adding Custom Adapters
Create optimized adapters for specific documentation sites:
```python
from docscrape.adapters.generic import GenericAdapter
from docscrape.adapters.factory import PlatformAdapterFactory
class MyDocsAdapter(GenericAdapter):
BASE_URL = "https://docs.mysite.com"
def __init__(self):
super().__init__(
base_url=self.BASE_URL,
content_selectors=["article", "main"],
)
@property
def name(self) -> str:
return "mysite"
def should_skip(self, url: str) -> bool:
return "/changelog/" in url
# Register the adapter
PlatformAdapterFactory.register_platform(
"mysite",
MyDocsAdapter,
url_patterns=["docs.mysite.com"],
)
```
## Development
```bash
# Clone the repo
git clone https://github.com/Abdulrahman-Elsmmany/docscrape
cd docscrape
# Setup with uv (recommended)
uv venv
uv pip install -e ".[dev]"
# Or with pip
pip install -e ".[dev]"
# Run tests
pytest
# Run linter
ruff check src/
# Type checking
mypy src/
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
---
Made by [Abdulrahman Elsmmany](https://github.com/Abdulrahman-Elsmmany)
| text/markdown | Abdulrahman Elsmmany | null | null | null | MIT | cli, docs, documentation, markdown, scraper, web-scraping | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"beautifulsoup4>=4.12.0",
"httpx>=0.25.0",
"lxml>=4.9.0",
"markdownify>=0.11.0",
"rich>=13.0.0",
"typer>=0.9.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
... | [] | [] | [] | [
"Homepage, https://github.com/Abdulrahman-Elsmmany/docscrape",
"Documentation, https://github.com/Abdulrahman-Elsmmany/docscrape#readme",
"Repository, https://github.com/Abdulrahman-Elsmmany/docscrape",
"Issues, https://github.com/Abdulrahman-Elsmmany/docscrape/issues"
] | uv/0.6.3 | 2026-02-19T12:49:07.898782 | docscrape-0.3.2.tar.gz | 66,091 | 56/e2/5ffdd5fe12e27744f4edcc27d79617ff5455718907ee18142faa540a37b9/docscrape-0.3.2.tar.gz | source | sdist | null | false | fc2f4846781d31bc291536cc5c0d8efe | 237261a78ddd57706bf940914ae6008ce6a6b17cea46367bcff8ff884b49db88 | 56e25ffdd5fe12e27744f4edcc27d79617ff5455718907ee18142faa540a37b9 | null | [
"LICENSE"
] | 252 |
2.4 | PythonAppScript | 0.2.0 | Lightweight ASGI/WSGI web framework | # pyappscript
pyappscript is open source and aims to make development easier. [Core module](pas.py)
pyappscript lets you build applications quickly and ships with Jinja templating. :)
## Creating an application
```python
import pas
@pas.get
def getpage(e):
if e.path == "": #mainpage
return "helloWorld"
return "pyappscript :D"
pas.run(8000)
```
<a href="http://localhost:8000">このように</a>なります。
Pyappscriptの特性として、ルーティングは自分で行う点があります。
flaskのような便利なフレームワークに存在するようなroute関数が存在しないのです。
これは一見デメリットのように感じますが、一つ一つが結びつくのではなく、集合となって結びつくと考えられるのです。
たとえば、flaskで行われる以下のようなものが必要ないのです。
```python
@route("/user/<userid>")
```
This brings real flexibility.
And because everything stays simple, it encourages you to think through how HTTP itself works.
## Method reference
There are four decorators, like `@pas.get`, that handle HTTP processing.
They are:
- get
- post
- put
- delete
Together they cover everything you need.
Each decorator passes a "request object" to the decorated handler.
The request object exposes the method, path, IP address, user agent (ua), host, cookies, headers,
query parameters (data), form data (post), and item (the session).
Sessions are created automatically.
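Since routing is manual, a handler typically dispatches on the request object's `path` itself. Below is a self-contained sketch of that pattern using a stand-in request class (not pas itself), including a hypothetical `user/<id>` route of the kind Flask would declare with a decorator:

```python
class FakeRequest:
    """Stand-in for the request object pas passes to a handler."""
    def __init__(self, path):
        self.path = path

def getpage(e):
    # Manual routing: inspect the path and dispatch yourself
    if e.path == "":  # main page
        return "helloWorld"
    parts = e.path.split("/")
    if parts[0] == "user" and len(parts) == 2:
        return f"profile of user {parts[1]}"  # replaces Flask's /user/<userid>
    return "pyappscript :D"

print(getpage(FakeRequest("user/42")))  # profile of user 42
```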
| text/markdown | null | oywe <you@example.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"jinja2>=3.0",
"python-magic>=0.4.27"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:48:58.648360 | pythonappscript-0.2.0.tar.gz | 9,093 | 1c/c2/5d8022b00c7f6bdbc94d04f6b70a9a63e892073cd6d837fa1fc20863bb5d/pythonappscript-0.2.0.tar.gz | source | sdist | null | false | fcbd465b2b6d126c629aa5aa59b7b15a | 5684963a2124e00442994a733ffb0c87873823ee498471641527c829b76f68e4 | 1cc25d8022b00c7f6bdbc94d04f6b70a9a63e892073cd6d837fa1fc20863bb5d | null | [
"LICENSE"
] | 0 |
2.4 | pyuepak | 0.2.6.2 | pyuepak is a Python library for working with Unreal Engine .pak files. | # pyuepak
**pyuepak** is a Python library for working with Unreal Engine `.pak` files.
## Features
- Can read and write `.pak` versions 1–11
- Can read encrypted paks
- Can read Zlib, Oodle compressed paks
## Installation
```bash
pip install pyuepak
```
Or install directly from the repository:
```bash
git clone https://github.com/stas96111/pyuepak.git
cd pyuepak
pip install -r requirements.txt
pip install .
```
## CLI
```bash
pyuepak [OPTIONS] COMMAND [ARGS]...
```
Global option:
`--aes <key>` — AES key for encrypted `.pak` files.
---
## Commands
| Command | Description |
| --------- | ------------------------------- |
| `info` | Show info about a `.pak` file |
| `list` | List all files in the archive |
| `extract` | Extract one file |
| `unpack` | Unpack all files |
| `pack` | Pack a folder into `.pak` |
| `read` | Read a file and print to stdout |
---
## Examples
```bash
pyuepak info -p game.pak
pyuepak unpack -p game.pak -o out/
pyuepak extract -p game.pak -f "Game/Content/file.txt"
pyuepak pack -i folder -o new.pak
```
Encrypted file:
```bash
pyuepak --aes 1234567890ABCDEF info -p encrypted.pak
```
## Usage
```python
from pyuepak import PakFile, PakVersion
pak = PakFile()
pak.read(r"path/to/pak.pak")
print(pak.list_files()) # ["/Game/asset.uasset", ...]
print(pak.mount_point) # "../../../" (default)
print(pak.key) # b'0000000...' AES key (default)
print(pak.path_hash_seed) # 0 (default)
print(pak.count) # prints file count
data = pak.read_file(r"/Game/asset.uasset") # return binary data
pak.remove_file(r"/Game/asset.uasset")
new_pak = PakFile()
new_pak.add_file("/Game/asset.uasset", data)
new_pak.set_version(PakVersion.V11)
new_pak.set_mount_point("../../..")
new_pak.write(r"path/to/pak.pak")
```
## Contributing
Contributions are welcome! Please open issues or submit pull requests.
## Credits
This project is based on information and ideas from two great open-source tools:
- [repak](https://github.com/trumank/repak)
- [rust-u4pak](https://github.com/panzi/rust-u4pak)
## License
This project is licensed under the MIT License.
| text/markdown | null | stas96111 <stas96111@gmail.com> | null | null | MIT License
Copyright (c) 2025 stas96111
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/stas96111/pyuepak"
] | twine/6.2.0 CPython/3.11.6 | 2026-02-19T12:48:32.276454 | pyuepak-0.2.6.2.tar.gz | 18,707 | df/a3/825ba30bfe12672b8189a7f4f6bc5415b45785299a7873851997e30e45d3/pyuepak-0.2.6.2.tar.gz | source | sdist | null | false | 068eb472ea957a1e3d293ebe2e31eca8 | dc3a88be59dd92f6adf29343ed01042f4b1dc7b48113868b042388dfc336a2e3 | dfa3825ba30bfe12672b8189a7f4f6bc5415b45785299a7873851997e30e45d3 | null | [] | 257 |
2.4 | pytorch-tabnet2 | 4.6.0 | PyTorch implementation of TabNet | # TabNet: Attentive Interpretable Tabular Learning






[](https://codecov.io/gh/DanielAvdar/tabnet/tree/main)
[](https://github.com/astral-sh/ruff)

TabNet is a deep learning architecture designed specifically for tabular data,
combining interpretability and high predictive performance.
This package provides a modern, maintained implementation of TabNet in PyTorch,
supporting classification, regression, multitask learning, and unsupervised pretraining.
## Installation
Install TabNet using pip:
```bash
pip install pytorch-tabnet2
```
## What is TabNet?
TabNet is an interpretable neural network architecture for tabular data, introduced by Arik & Pfister (2019). It uses sequential attention to select which features to reason from at each decision step, enabling both high performance and interpretability. TabNet learns sparse feature masks, allowing users to understand which features are most important for each prediction. The method is particularly effective for structured/tabular datasets where traditional deep learning models often underperform compared to tree-based methods.
Key aspects of TabNet:
- **Attentive Feature Selection**: At each step, TabNet learns which features to focus on, improving both accuracy and interpretability.
- **Interpretable Masks**: The model produces feature masks that highlight the importance of each feature for individual predictions.
- **End-to-End Learning**: Supports classification, regression, multitask, and unsupervised pretraining tasks.
## What problems does pytorch-tabnet handle?
- TabNetClassifier : binary classification and multi-class classification problems.
- TabNetRegressor : simple and multi-task regression problems.
- TabNetMultiTaskClassifier: multi-task multi-classification problems.
- MultiTabNetRegressor: multi-task regression problems, which is basically TabNetRegressor with multiple targets.
## Usage
### [Documentation](https://tabnet.readthedocs.io/en/latest/)
### Basic Examples
**Classification**
```python
import numpy as np
from pytorch_tabnet import TabNetClassifier
# Generate dummy data
X_train = np.random.rand(100, 10)
y_train = np.random.randint(0, 2, 100)
X_valid = np.random.rand(20, 10)
y_valid = np.random.randint(0, 2, 20)
X_test = np.random.rand(10, 10)
clf = TabNetClassifier()
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])
preds = clf.predict(X_test)
print('Predictions:', preds)
```
**Regression**
```python
import numpy as np
from pytorch_tabnet import TabNetRegressor
# Generate dummy data
X_train = np.random.rand(100, 10)
y_train = np.random.rand(100).reshape(-1, 1)
X_valid = np.random.rand(20, 10)
y_valid = np.random.rand(20).reshape(-1, 1)
X_test = np.random.rand(10, 10)
reg = TabNetRegressor()
reg.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])
preds = reg.predict(X_test)
print('Predictions:', preds)
```
**Multi-task Classification**
```python
import numpy as np
from pytorch_tabnet import TabNetMultiTaskClassifier
# Generate dummy data
X_train = np.random.rand(100, 10)
y_train = np.random.randint(0, 2, (100, 3)) # 3 tasks
X_valid = np.random.rand(20, 10)
y_valid = np.random.randint(0, 2, (20, 3))
X_test = np.random.rand(10, 10)
clf = TabNetMultiTaskClassifier()
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])
preds = clf.predict(X_test)
print('Predictions:', preds)
```
See the [nbs/](nbs/) folder for more complete examples and notebooks.
## Further Reading
- [TabNet: Attentive Interpretable Tabular Learning (Arik & Pfister, 2019)](https://arxiv.org/pdf/1908.07442.pdf)
- Original repo: https://github.com/dreamquark-ai/tabnet
## License & Credits
- Original implementation and research by [DreamQuark team](https://github.com/dreamquark-ai/tabnet)
- Maintained and improved by Daniel Avdar and contributors
- See LICENSE for details
| text/markdown | DreamQuark | DanielAvdar <66269169+DanielAvdar@users.noreply.github.com> | null | null | null | neural-networks, pytorch, tabnet | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :... | [] | null | null | >=3.10 | [] | [] | [] | [
"activations-plus>=0.1.1",
"numpy",
"scikit-learn",
"torch; python_version < \"3.13\"",
"torch>=2.6; python_version >= \"3.13\"",
"torcheval"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T12:48:25.530891 | pytorch_tabnet2-4.6.0-py3-none-any.whl | 70,810 | 81/34/0d27f7770e60b28401c853e9938c42bc1d1bb704c3bb49412d5d49707d3a/pytorch_tabnet2-4.6.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 9cb0c36b7025dab56be0854adddba927 | f2cb42810258baf04cc0940164814b26ba8d9251557604ba1253e8a904364764 | 81340d27f7770e60b28401c853e9938c42bc1d1bb704c3bb49412d5d49707d3a | null | [
"LICENSE"
] | 312 |
2.4 | mpt-extension-sdk | 5.20.1 | Extensions SDK for SoftwareONE Marketplace Platform | # MPT Extension SDK
The **MPT Extension SDK** is an SDK for building extensions on the SoftwareONE Marketplace Platform (MPT).
## Quick Start
1. **Install the SDK:**
```bash
pip install mpt-extension-sdk
```
2. **Create your extension:**
```python
from mpt_extension_sdk.core.extension import Extension
ext = Extension()
@ext.events.listener("orders")
def process_order(client, event):
"""Process order"""
# Process your order logic here
```
3. **Run the extension:**
```bash
make run
```
## Installation
Install with pip or your favorite PyPI package manager:
```bash
pip install mpt-extension-sdk
```
```bash
uv add mpt-extension-sdk
```
## Prerequisites
- Python 3.12+
- Docker and Docker Compose (for development)
- Access to SoftwareONE Marketplace Platform API
- Environment variables configured (see [Environment Variables](#environment-variables))
## Environment Variables
The SDK uses the following environment variables:
| Variable | Default | Example | Description |
|-----------------------------------------|--------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------|
| `EXT_WEBHOOKS_SECRETS` | - | {"PRD-1111-1111": "123qweasd3432234"} | Webhook secret of the Draft validation Webhook in SoftwareONE Marketplace for the product |
| `MPT_API_BASE_URL` | `http://localhost:8000` | `https://portal.softwareone.com/mpt` | SoftwareONE Marketplace API URL |
| `MPT_API_TOKEN` | - | eyJhbGciOiJSUzI1N... | SoftwareONE Marketplace API Token |
| `MPT_PRODUCTS_IDS` | PRD-1111-1111 | PRD-1234-1234,PRD-4321-4321 | Comma-separated list of SoftwareONE Marketplace Product ID |
| `MPT_PORTAL_BASE_URL` | `http://localhost:8000` | `https://portal.softwareone.com` | SoftwareONE Marketplace Portal URL |
| `MPT_ORDERS_API_POLLING_INTERVAL_SECS` | 120 | 60 | Orders polling interval from the Software Marketplace API in seconds |
**Example `.env` file:**
```dotenv
EXT_WEBHOOKS_SECRETS={"PRD-1111-1111":"<webhook-secret-for-product>","PRD-2222-2222":"<webhook-secret-for-product>"}
MPT_API_BASE_URL=https://api.s1.show/public
MPT_API_TOKEN=<your-api-token>
MPT_PRODUCTS_IDS=PRD-1111-1111,PRD-2222-2222
MPT_PORTAL_BASE_URL=https://portal.s1.show
```
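As an illustration (plain stdlib code, not part of the SDK), the JSON-valued `EXT_WEBHOOKS_SECRETS` variable can be read and parsed like this; `load_webhook_secrets` is a hypothetical helper, not an SDK function:

```python
import json
import os

def load_webhook_secrets() -> dict[str, str]:
    """Hypothetical helper: parse the product-ID -> secret mapping
    stored as a JSON object in EXT_WEBHOOKS_SECRETS."""
    raw = os.environ.get("EXT_WEBHOOKS_SECRETS", "{}")
    return json.loads(raw)

# Example values, for illustration only.
os.environ["EXT_WEBHOOKS_SECRETS"] = (
    '{"PRD-1111-1111": "secret-one", "PRD-2222-2222": "secret-two"}'
)
secrets = load_webhook_secrets()
print(secrets["PRD-1111-1111"])  # secret-one
```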
## Core Components
### Extension
The `Extension` class is the foundation of your MPT extension. It provides:
- **Event Registry**: Register event listeners for MPT platform events
- **API Integration**: Built-in Django Ninja API for REST endpoints
```python
import logging
from http import HTTPStatus
from django.conf import settings
from mpt_extension_sdk.core.extension import Extension
from mpt_extension_sdk.core.security import JWTAuth
from mpt_extension_sdk.mpt_http.mpt import get_webhook
from mpt_extension_sdk.runtime.djapp.conf import get_for_product
logger = logging.getLogger(__name__)
ext = Extension()
@ext.events.listener("orders")
def process_order(client, event) -> None:
"""Process order events from MPT."""
logger.info(f"Processing {event.type}")
# Your logic here
def jwt_secret_callback(client, claims):
"""Retrieve webhook secret for JWT validation."""
return "your-webhook-secret"
@ext.api.post(
"/v1/orders/validate",
auth=JWTAuth(jwt_secret_callback),
)
def process_order_validation(request, order):
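```python
    # Your logic here (illustrative stub; replace with real validation)
```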
"""Start order process validation."""
# Your logic here
```
### Pipeline Processing
The SDK includes a pipeline system for building complex processing workflows:
```python
from mpt_extension_sdk.flows.context import Context
from mpt_extension_sdk.flows.pipeline import Pipeline
class ValidateOrderStep:
def process(self, client, context) -> None:
"""Validation Order Step"""
# Your logic here
class ProcessOrderStep:
def process(self, client, context) -> None:
"""Process Order Step"""
# Your logic here
# Build the pipeline
pipeline = Pipeline(
ValidateOrderStep(),
ProcessOrderStep(),
)
```
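The step protocol above is simply "objects with a `process(client, context)` method" executed in order. A plain-Python sketch of that execution model (illustrative only, using stand-in classes rather than the SDK's `Pipeline` and `Context` internals):

```python
# Stand-ins for the SDK's client and Context, for illustration only.
class FakeContext:
    def __init__(self):
        self.log = []

class ValidateOrderStep:
    def process(self, client, context):
        context.log.append("validated")

class ProcessOrderStep:
    def process(self, client, context):
        context.log.append("processed")

def run_pipeline(steps, client, context):
    # Run each step's process() in the order the steps were given.
    for step in steps:
        step.process(client, context)

ctx = FakeContext()
run_pipeline([ValidateOrderStep(), ProcessOrderStep()], client=None, context=ctx)
print(ctx.log)  # ['validated', 'processed']
```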
## CLI Commands
The SDK provides the `swoext` CLI for running and managing extensions:
### Run Extension
Start the extension server:
```bash
swoext run [OPTIONS]
```
**Options:**
- `--bind ADDRESS` - Bind address (default: `0.0.0.0:8080`)
- `--debug` - Enable debug mode
- `--color / --no-color` - Enable/disable colored output
- `--reload` - Enable auto-reload on code changes (development)
**Example:**
```bash
swoext run --bind 0.0.0.0:8080 --debug --reload
```
### Run Event Consumer
Start the event consumer to process MPT events:
```bash
swoext run --events
```
### Django Management Commands
Access Django management commands:
```bash
swoext django <command> [args]
```
## Integrations
### OpenTelemetry Integration
Built-in observability with OpenTelemetry:
- **Distributed Tracing**: Track requests across services
- **Logging Instrumentation**: Structured logging with trace context
**Configuration:**
```bash
# Enable Application Insights
USE_APPLICATIONINSIGHTS=true
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...
```
## Migration Guide
### API Version Change (February 2026)
The MPT Extension SDK now uses the standardized API path `/public/v1/` instead of `/v1/`.
#### What Changed
- **MPTClient** now automatically appends `/public/v1/` to the base URL
- The `MPT_API_BASE_URL` environment variable should **not** include any version path
#### Migration Steps
**Before:**
```bash
# Old configuration (deprecated)
export MPT_API_BASE_URL=https://api.example.com/v1
```
**After:**
```bash
# New configuration (recommended)
export MPT_API_BASE_URL=https://api.example.com
```
#### Backward Compatibility
The SDK maintains backward compatibility with old configurations:
- URLs with `/v1/` or `/v1` will trigger a deprecation warning but continue to work
- URLs with `/public/v1` are also supported
- All formats will produce the correct final URL: `https://api.example.com/public/v1/`
**Action Required:** Update your `MPT_API_BASE_URL` configuration to remove any version path suffixes.
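The backward-compatibility rules above can be sketched as a small normalisation function (an illustration of the described behaviour, not the SDK's actual implementation):

```python
def normalize_base_url(base_url: str) -> str:
    """Strip any legacy version suffix, then append the standardized
    /public/v1/ path, matching the migration rules described above."""
    url = base_url.rstrip("/")
    for suffix in ("/public/v1", "/v1"):
        if url.endswith(suffix):
            url = url[: -len(suffix)]
            break
    return url + "/public/v1/"

print(normalize_base_url("https://api.example.com/v1"))
print(normalize_base_url("https://api.example.com"))
# Both print: https://api.example.com/public/v1/
```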
## Development
For development setup, contribution guidelines, and advanced topics, see the [README](https://github.com/softwareone-platform/mpt-extension-sdk/blob/main/README.md) in the GitHub repository.
| text/markdown | SoftwareOne AG | null | null | null | Apache-2.0 license | null | [] | [] | null | null | <4,>=3.12 | [] | [] | [] | [
"azure-identity==1.25.*",
"azure-keyvault-secrets==4.10.*",
"azure-monitor-opentelemetry-exporter==1.0.0b46",
"click==8.3.*",
"debugpy==1.8.*",
"django-ninja==1.1.*",
"django==4.2.*",
"gunicorn==25.1.*",
"opentelemetry-api==1.39.0",
"opentelemetry-instrumentation-django==0.60.*",
"opentelemetry-... | [] | [] | [] | [] | uv/0.7.22 | 2026-02-19T12:48:09.865574 | mpt_extension_sdk-5.20.1.tar.gz | 34,271 | 00/81/f442b98ea32ca31c942436d3f6a9b2160a1a3b9c5332d78caa175be5e710/mpt_extension_sdk-5.20.1.tar.gz | source | sdist | null | false | 3fd191c02213139fff4abff4c90b13a9 | 8fbd4dd37b60399d8425b0640b5a8b1957239807da5b90f7f3e6a9e8850f3621 | 0081f442b98ea32ca31c942436d3f6a9b2160a1a3b9c5332d78caa175be5e710 | null | [
"LICENSE"
] | 319 |
2.4 | uk_address_matcher | 1.0.0.dev24 | A package for matching UK addresses using a pretrained Splink model | # High performance UK addresses matcher (geocoder)
Extremely fast address matching using a pre-trained [Splink](https://github.com/moj-analytical-services/splink) model.
```
Full time taken: 11.05 seconds
to match 176,640 messy addresses to 273,832 canonical addresses
at a rate of 15,008 addresses per second
(On Macbook M4 Max)
```
## Installation
```bash
pip install --pre uk_address_matcher
```
## Usage
The matcher will link two datasets provided in this format:
| unique_id | address_concat |
|-----------|-----------------------------------------|
| 1 | 123 Fake Street, Faketown, FA1 2KE |
| 2 | 456 Other Road, Otherville, NO1 3WY |
| ... | ... |
- You may also provide a separate column called `postcode`; if provided, it will override any postcode information in `address_concat`.
- If you have labelled data (you know the ground truth), you may provide a column called `ukam_label`; if provided, it will propagate through your results for accuracy analysis.
Postcode handling rules:
- If you provide a separate `postcode` column, `address_concat` should ideally not include the postcode.
- If you do not provide `postcode`, the matcher will attempt to extract it during cleaning.
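For illustration, postcode extraction along these lines can be approximated with a regular expression (the package's own cleaning logic is more thorough; this is only a sketch):

```python
import re

# Rough UK postcode pattern, for illustration only:
# outward code (area + district) followed by inward code (sector + unit).
POSTCODE_RE = re.compile(r"\b([A-Z]{1,2}\d[A-Z\d]?)\s*(\d[A-Z]{2})\b", re.I)

def extract_postcode(address: str):
    """Return the first postcode-like token, normalised with a single space."""
    m = POSTCODE_RE.search(address.upper())
    return f"{m.group(1)} {m.group(2)}" if m else None

print(extract_postcode("123 Fake Street, Faketown, FA1 2KE"))  # FA1 2KE
```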
Generally one dataset will be a dataset of 'messy addresses' which need matching, and the second will be a 'canonical dataset' of addresses to match to.
## Preparing AddressBase for use in `uk_address_matcher`
`uk_address_matcher` can be used with any canonical list of addresses provided in the format above.
Many users will wish to link to Ordnance Survey address products.
### Simplest route (lower accuracy)
The simplest Ordnance Survey product to use for this purpose is [NGD Built Address](https://docs.os.uk/osngd/data-structure/address/gb-address/built-address).
You can use this 'out of the box' as your canonical list of addresses by selecting data from BuiltAddress as follows:
```sql
select uprn as unique_id, fulladdress as address_concat
from builtaddress
where {your_filter_here}
```
Provide the resulting output to `uk_address_matcher`. You will generally improve accuracy if you filter the data down to the geographical region of interest, and restrict the addresses as much as possible to those of interest (e.g. residential only, if you're matching residential addresses).
### Full prep (higher accuracy)
Higher accuracy can be achieved by processing Ordnance Survey data in a more sophisticated way.
For instance, Ordnance Survey provides multiple representations of a single address in Addressbase Premium and also in [NGD Address](https://docs.os.uk/osngd/data-structure/address/related-components/alternate-address).
By providing multiple address representations of each canonical address to `uk_address_matcher`, you will have a better chance of higher-precision matching.
We provide recommended automated build scripts for building such a file from AddressBase Premium and the NGD datasets:
- [AddressBase Premium build script](https://github.com/moj-analytical-services/prepare_addressbase_for_address_matching)
- [NGD build script](https://github.com/moj-analytical-services/prepare_ngd_for_address_matching)
### Basic Matching
> [!NOTE]
> Two runnable examples with live sample data are included for experimentation:
> - [`examples/example_matching.py`](./examples/example_matching.py): End-to-end matching example, including loading data, running the matcher, and previewing results.
> - [`examples/example_prepare_canonical.py`](./examples/example_prepare_canonical.py): Example of preparing a canonical dataset for repeated use, demonstrating how to persist prepared data to disk and load it for matching.
>
> Both use parquet files in [`example_data/`](./example_data/) so you can run and adapt them immediately. You will need to download the example data from the releases page to run them, or you can adapt the code to use your own data.
```python
import duckdb
from uk_address_matcher import AddressMatcher, ExactMatchStage, SplinkStage
con = duckdb.connect()
df_canonical = con.read_parquet("your_canonical_addresses.parquet")
df_messy = con.read_parquet("your_messy_addresses.parquet")
matcher = AddressMatcher(
canonical_addresses=df_canonical,
addresses_to_match=df_messy,
con=con,
)
result = matcher.match() # returns a DuckDBPyRelation
result.limit(10).show(max_width=500)
```
The default stages are `ExactMatchStage` followed by `SplinkStage`. You can
customise them by passing your own `stages` list:
```python
from uk_address_matcher import (
AddressMatcher,
ExactMatchStage,
SplinkStage,
UniqueTrigramStage,
)
matcher = AddressMatcher(
canonical_addresses=df_canonical,
addresses_to_match=df_messy,
con=con,
stages=[
ExactMatchStage(),
UniqueTrigramStage(),
SplinkStage(
final_match_weight_threshold=20.0,
final_distinguishability_threshold=5.0,
),
],
)
result = matcher.match()
```
### Pre-preparing canonical data
Cleaning a large canonical dataset (e.g. AddressBase) is expensive. Use
`prepare_canonical_folder` to do it once and write the artefacts to disk.
Subsequent runs load the prepared folder directly, skipping cleaning entirely.
```python
from uk_address_matcher import AddressMatcher, prepare_canonical_folder
# One-time preparation
prepare_canonical_folder(
df_canonical,
output_folder="./ukam_prepared_canonical",
con=con,
overwrite=True,
)
print("Prepared canonical data written to ./ukam_prepared_canonical/")
# Fast matching — pass the folder path instead of a relation
matcher = AddressMatcher(
canonical_addresses="./ukam_prepared_canonical",
addresses_to_match=df_messy,
con=con,
)
result = matcher.match()
```
### Matching one or more AddressRecord entries
If you want to match a small number of addresses, or you have them in-memory as Python dictionaries, you can pass them directly as `addresses_to_match` without needing to create a DuckDB relation first.
You can pass a list of `AddressRecord` entries directly as
`addresses_to_match`. The matcher also accepts a list of dicts with
`address_concat`, `postcode`, and `unique_id`, or a DuckDB relation.
```python
import duckdb
from uk_address_matcher import AddressMatcher, AddressRecord
con = duckdb.connect()
df_canonical = con.read_parquet("your_canonical_addresses.parquet")
records = [
AddressRecord(
unique_id="m_1",
address_concat="10 downing street westminster london",
postcode="SW1A 2AA",
),
AddressRecord(
unique_id="m_2",
address_concat="11 downing street westminster london",
postcode="SW1A 2AB",
),
]
matcher = AddressMatcher(
canonical_addresses=df_canonical,
addresses_to_match=records,
con=con,
)
result = matcher.match()
```
### Two-Pass Matching Approach
The Splink phase uses a two-pass approach to achieve high accuracy matching:
1. **First Pass**: A standard probabilistic linkage model using Splink generates candidate matches for each input address.
2. **Second Pass**: Within each candidate group, the model analyzes distinguishing tokens to refine matches:
- Identifies tokens that uniquely distinguish addresses within a candidate group
- Detects "punishment tokens" (tokens in the messy address that don't match the current candidate but do match other candidates)
- Uses this contextual information to improve match scores
This approach is particularly effective when matching to a canonical (deduplicated) address list, as it can identify subtle differences between very similar addresses.
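The token logic in the second pass can be illustrated with plain set operations (a deliberate simplification of what the model actually does):

```python
def analyse_candidate(messy: str, candidate: str, other_candidates: list[str]):
    """Simplified sketch of second-pass token analysis: for one candidate,
    find tokens that distinguish it within its group, and 'punishment'
    tokens in the messy address that match OTHER candidates but not it."""
    messy_tokens = set(messy.lower().split())
    cand_tokens = set(candidate.lower().split())
    other_tokens = set()
    for other in other_candidates:
        other_tokens |= set(other.lower().split())
    distinguishing = cand_tokens - other_tokens
    punishment = (messy_tokens - cand_tokens) & other_tokens
    return distinguishing, punishment

dist, pun = analyse_candidate(
    "flat b 10 high street",
    "flat a 10 high street",
    ["flat b 10 high street"],
)
print(dist)  # {'a'}  -- token unique to this candidate
print(pun)   # {'b'}  -- messy token that matches a rival candidate
```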
## Development
The scripts and tests run more reliably if you create `.vscode/settings.json` with the following:
```json
{
"jupyter.notebookFileRoot": "${workspaceFolder}",
"python.analysis.extraPaths": [
"${workspaceFolder}"
],
"python.testing.pytestEnabled": true,
"python.testing.unittestEnabled": false,
"python.testing.pytestArgs": [
"-v",
"--capture=tee-sys"
]
}
```
| text/markdown | null | Robin Linacre <robinlinacre@hotmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"duckdb==1.3.2",
"splink>=4.0.15",
"sqlglot==26.6.0"
] | [] | [] | [] | [
"Repository, https://github.com/robinL/uk_address_matcher"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T12:46:43.512386 | uk_address_matcher-1.0.0.dev24.tar.gz | 1,842,226 | 84/6c/d2b3d3a85d9313c9397abdf03156a48ac25810e21c54d499c7f0bb082e65/uk_address_matcher-1.0.0.dev24.tar.gz | source | sdist | null | false | 6c98ab3c59bcc8ec612a5f270fa56713 | 798aaab74fdfa9f9466aeafcdc9ff0f25ecf46bf17b9a0284d12b6ff3b22f098 | 846cd2b3d3a85d9313c9397abdf03156a48ac25810e21c54d499c7f0bb082e65 | null | [] | 0 |
2.4 | supyagent | 0.6.2 | Cloud CLI for supyagent — connect AI agents to third-party services | # Supyagent
[](https://badge.fury.io/py/supyagent)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
Give your AI agents 50+ cloud tools — Gmail, Slack, GitHub, Calendar, Drive, and more — with one CLI.
## Get Started in 60 Seconds
```bash
# 1. Install
pip install supyagent
# 2. Connect your accounts (opens browser to authorize)
supyagent connect
# 3. Run a tool
supyagent service run gmail_list_messages '{"maxResults": 5}'
```
That's it. You just read your Gmail from the command line.
---
## What Is Supyagent?
Supyagent connects AI agents and developer tools to third-party services through a unified CLI. You authenticate once on the [dashboard](https://app.supyagent.com), connect your integrations (Google, Slack, GitHub, etc.), and then call any tool from the terminal or from your agent framework.
**Supported services:** Gmail, Google Calendar, Google Drive, Google Slides, Google Sheets, Google Docs, Slack, GitHub, Discord, Notion, Microsoft 365 (Outlook, Calendar, OneDrive), Twitter/X, LinkedIn, HubSpot, Telegram, WhatsApp, Calendly, Linear, Pipedrive, Resend, and more.
## CLI Reference
| Command | Description |
|---------|-------------|
| `supyagent connect` | Authenticate with the service (device auth flow) |
| `supyagent disconnect` | Remove stored credentials |
| `supyagent status` | Show connection status and available tools |
| `supyagent service tools` | List all available cloud tools |
| `supyagent service run <tool> '<json>'` | Execute a cloud tool |
| `supyagent inbox` | View and manage incoming webhook events |
| `supyagent skills generate` | Generate skill files for AI coding assistants |
| `supyagent config set/list/delete` | Manage encrypted API keys |
| `supyagent doctor` | Diagnose your setup |
---
## Using Tools
### List available tools
```bash
supyagent service tools
```
Filter by provider:
```bash
supyagent service tools --provider google
supyagent service tools --provider slack
```
### Run a tool
```bash
# Send a Slack message
supyagent service run slack_send_message '{"channel": "#general", "text": "Hello from supyagent"}'
# List calendar events
supyagent service run calendar_list_events '{"maxResults": 10}'
# Create a GitHub issue
supyagent service run github_create_issue '{"owner": "myorg", "repo": "myrepo", "title": "Bug fix", "body": "Details here"}'
```
You can also use colon syntax (`gmail:list_messages`) or read args from a file:
```bash
echo '{"q": "from:boss@company.com"}' | supyagent service run gmail_list_messages --input -
supyagent service run gmail_list_messages --input args.json
```
### Output format
Every tool returns JSON to stdout:
```json
{
"ok": true,
"data": {
"messages": [
{"id": "abc123", "from": "alice@example.com", "subject": "Hello", "snippet": "..."}
]
}
}
```
On error:
```json
{
"ok": false,
"error": "Permission denied for gmail_send_message: Forbidden"
}
```
Status messages go to stderr, so the JSON output is always clean for piping.
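Because the envelope is stable (`ok` plus either `data` or `error`), callers can branch on it uniformly. A stdlib sketch of handling a captured result string (the payload here is made up for illustration):

```python
import json

def handle_tool_output(stdout: str):
    """Parse a supyagent JSON envelope; return data on success,
    raise on error."""
    result = json.loads(stdout)
    if result["ok"]:
        return result["data"]
    raise RuntimeError(result["error"])

# Example payload, made up for illustration.
ok_output = '{"ok": true, "data": {"messages": [{"id": "abc123"}]}}'
print(handle_tool_output(ok_output))  # {'messages': [{'id': 'abc123'}]}
```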
---
## Generate Skill Files for AI Coding Assistants
Supyagent can generate skill files that let AI coding assistants (Claude Code, Codex CLI, OpenCode, Cursor, Copilot, Windsurf) use your connected services directly.
### Auto-detect and generate
```bash
supyagent skills generate
```
This detects AI tool folders (`.claude/`, `.codex/`, `.agents/`, `.cursor/`, `.copilot/`, `.windsurf/`) in the current directory and generates skill files in each.
### Generate for a specific tool
```bash
# Write to a specific directory
supyagent skills generate -o .claude/skills
# Write to all detected folders without prompting
supyagent skills generate --all
# Preview to stdout
supyagent skills generate --stdout
```
### What gets generated
Each connected integration gets its own skill file. For example, if you have Google and Slack connected, you get:
```
.claude/skills/
supy-cloud-gmail/SKILL.md
supy-cloud-calendar/SKILL.md
supy-cloud-drive/SKILL.md
supy-cloud-slack/SKILL.md
```
Each SKILL.md contains YAML frontmatter and documentation for every tool in that integration:
````markdown
---
name: supy-gmail
description: >-
  Use supyagent to interact with Gmail. Available actions: list emails
  from gmail inbox, get a specific email by its message id, send an
  email via gmail. Use when the user asks to interact with Gmail.
---
# Gmail
Execute tools: `supyagent service run <tool_name> '<json>'`
### gmail_list_messages
List emails from Gmail inbox.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `maxResults` | integer | no | Number of messages to return |
| `q` | string | no | Search query using Gmail syntax |
```bash
supyagent service run gmail_list_messages '{"q": "from:boss@company.com", "maxResults": 10}'
```
````
After generating, your AI coding assistant will automatically use these tools when you ask it to interact with connected services.
---
## Integrate with Agent Frameworks
Supyagent tools can be used from any agent framework. There are two integration paths:
### Option A: JSON tool definitions (for LangChain, CrewAI, etc.)
Export your tools as JSON:
```bash
supyagent service tools --json
```
This returns a list of tool definitions with parameter schemas:
```json
[
{
"name": "google:gmail_list_messages",
"description": "List emails from Gmail inbox.",
"provider": "google",
"service": "gmail",
"method": "GET",
"parameters": {
"type": "object",
"properties": {
"maxResults": {"type": "integer", "description": "Number of messages to return"},
"q": {"type": "string", "description": "Search query using Gmail syntax"}
}
}
}
]
```
Convert these into OpenAI function-calling format for your framework:
```python
import json
import subprocess
# Get tool definitions
result = subprocess.run(["supyagent", "service", "tools", "--json"], capture_output=True, text=True)
tools = json.loads(result.stdout)
# Convert to OpenAI function-calling format
openai_tools = [
{
"type": "function",
"function": {
"name": tool["name"].split(":")[-1],
"description": tool["description"],
"parameters": tool["parameters"],
},
}
for tool in tools
]
```
### Option B: Execute tools via subprocess
When the LLM calls a tool, execute it through supyagent:
```python
import json
import subprocess
def execute_supyagent_tool(tool_name: str, arguments: dict) -> dict:
"""Execute a supyagent cloud tool and return the result."""
result = subprocess.run(
["supyagent", "service", "run", tool_name, json.dumps(arguments)],
capture_output=True,
text=True,
)
return json.loads(result.stdout)
# Example: LLM decides to send an email
result = execute_supyagent_tool("gmail_send_message", {
"to": "alice@example.com",
"subject": "Meeting notes",
"body": "Here are the notes from today...",
})
if result["ok"]:
print("Email sent:", result["data"])
else:
print("Error:", result["error"])
```
### Full example: LangChain integration
```python
import json
import subprocess
from langchain_core.tools import StructuredTool
# Load supyagent tools
result = subprocess.run(["supyagent", "service", "tools", "--json"], capture_output=True, text=True)
supyagent_tools = json.loads(result.stdout)
def make_tool(tool_def):
"""Create a LangChain tool from a supyagent tool definition."""
tool_name = tool_def["name"].split(":")[-1]
def run_tool(**kwargs):
result = subprocess.run(
["supyagent", "service", "run", tool_name, json.dumps(kwargs)],
capture_output=True, text=True,
)
return result.stdout
return StructuredTool.from_function(
func=run_tool,
name=tool_name,
description=tool_def["description"],
)
langchain_tools = [make_tool(t) for t in supyagent_tools]
```
---
## Inbox
View webhook events from connected integrations:
```bash
# List unread events
supyagent inbox
# Filter by provider
supyagent inbox -p github
# View a specific event
supyagent inbox -i EVENT_ID
# Archive an event
supyagent inbox -a EVENT_ID
# Archive all
supyagent inbox --archive-all
```
---
## Config Management
Supyagent stores API keys encrypted in `~/.supyagent/config/`:
```bash
# Set a key interactively
supyagent config set
# Set a specific key
supyagent config set OPENAI_API_KEY
# List stored keys
supyagent config list
# Import from .env
supyagent config import .env
# Export to .env
supyagent config export backup.env
# Delete a key
supyagent config delete MY_KEY
```
---
## Development
```bash
git clone https://github.com/ergodic-ai/supyagent
cd supyagent
# Install
uv pip install -e ".[dev]"
# Test
pytest
# Lint
ruff check .
```
Only 4 runtime dependencies: `click`, `rich`, `cryptography`, `httpx`.
## License
MIT
| text/markdown | null | Ergodic AI <hello@ergodic.ai> | null | null | MIT | agents, ai, cli, cloud, github, gmail, integrations, slack, tools | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Pytho... | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0",
"cryptography>=41.0",
"httpx>=0.25.0",
"rich>=13.0",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ergodic-ai/supyagent",
"Documentation, https://github.com/ergodic-ai/supyagent#readme",
"Repository, https://github.com/ergodic-ai/supyagent",
"Issues, https://github.com/ergodic-ai/supyagent/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-19T12:44:49.631670 | supyagent-0.6.2.tar.gz | 449,148 | d8/32/310965b29a0b0350e4a77afa27fc59aba434ea2a7421b75a128818545d04/supyagent-0.6.2.tar.gz | source | sdist | null | false | d6959161a4e37d517a02b43ff29a50e0 | c2295ce4adb40f848c66dae671577eb49fbf108fdab9a83a3b10cf33a4023c5a | d832310965b29a0b0350e4a77afa27fc59aba434ea2a7421b75a128818545d04 | null | [
"LICENSE"
] | 241 |
2.4 | monkai-trace | 0.2.16 | Official Python SDK for MonkAI - Track and analyze your AI agent conversations | # MonkAI Trace - Python SDK
Official Python client for [MonkAI](https://monkai.ai) - Monitor, analyze, and optimize your AI agents.
## Features
- 📤 **Upload conversation records** with full token segmentation
- 📊 **Track 4 token types**: input, output, process, memory (always present in API)
- 📁 **Upload from JSON files** (supports your existing data)
- 🔄 **Batch processing** with automatic chunking and improved error handling
- 🛡️ **Graceful optional dependencies** - Import without dependencies, error only on use
- 🌐 **HTTP REST API** - Language-agnostic tracing for any runtime (Deno, Go, Node.js, etc.)
- 📥 **Data Export** - Query records/logs with filters and export to JSON or CSV
- 🔌 **Framework Integrations**:
- ✅ **MonkAI Agent** - Native framework with automatic tracking
- ✅ **LangChain** - Full callback handler support (v0.2+)
- ✅ **OpenAI Agents** - RunHooks integration (updated for latest API)
- ✅ **Python Logging** - Standard logging handler with `custom_object` metadata
## Installation
```bash
pip install monkai-trace
```
For framework integrations:
```bash
# MonkAI Agent (Native Framework)
pip install monkai-trace monkai-agent
# LangChain
pip install monkai-trace langchain
# OpenAI Agents
pip install monkai-trace openai-agents-python
```
## Quick Start
### LangChain Integration
Automatically track LangChain agents:
```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI
from monkai_trace.integrations.langchain import MonkAICallbackHandler
# Create callback handler
handler = MonkAICallbackHandler(
tracer_token="tk_your_token",
namespace="my-agents"
)
# Add to your agent
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi"], llm=llm)
agent = initialize_agent(tools, llm, callbacks=[handler])
# Automatically tracked!
agent.run("What is the weather in Tokyo?")
```
### Basic Usage
```python
from monkai_trace import MonkAIClient
# Initialize client
client = MonkAIClient(tracer_token="tk_your_token")
# Upload a conversation
client.upload_record(
namespace="customer-support",
agent="support-bot",
messages=[
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi! How can I help?"}
],
input_tokens=5,
output_tokens=10,
process_tokens=100,
memory_tokens=20
)
```
### MonkAI Agent Framework (Native)
```python
from monkai_agent import Agent
from monkai_trace.integrations.monkai_agent import MonkAIAgentHooks
# Create tracking hooks
hooks = MonkAIAgentHooks(
tracer_token="tk_your_token",
namespace="my-namespace"
)
# Create agent with automatic tracking
agent = Agent(
name="Support Bot",
instructions="You are a helpful assistant",
hooks=hooks
)
# Run agent - automatically tracked!
result = agent.run("Help me with my order")
```
### OpenAI Agents Integration
```python
from agents import Agent, WebSearchTool
from monkai_trace.integrations.openai_agents import MonkAIRunHooks
# Create tracking hooks (batch_size=1 recommended for real-time monitoring)
hooks = MonkAIRunHooks(
tracer_token="tk_your_token",
namespace="my-agent",
batch_size=1 # v0.2.10+: Upload immediately
)
# Create agent with web search
agent = Agent(
name="Assistant",
instructions="You are helpful",
tools=[WebSearchTool()]
)
# Set user identification (v0.2.12+)
hooks.set_user_id("user_abc123") # Unique ID for session tracking
hooks.set_user_name("João Silva") # Display name in dashboard
hooks.set_user_channel("whatsapp") # Communication channel
# ✅ RECOMMENDED: Use run_with_tracking() for internal tools capture
result = await MonkAIRunHooks.run_with_tracking(agent, "Hello!", hooks)
# Captures: user message, web_search_call with sources, assistant response
# Dashboard shows:
# - "👤 João Silva" instead of "👤 Usuário" in messages
# - Filter by user name in monitoring panel
```
### HTTP REST API (Language-Agnostic)
For non-Python runtimes or when you prefer direct HTTP calls:
```python
import requests
MONKAI_API = "https://lpvbvnqrozlwalnkvrgk.supabase.co/functions/v1/monkai-api"
TOKEN = "tk_your_token"
# Create session
session = requests.post(
f"{MONKAI_API}/sessions/create",
headers={"tracer_token": TOKEN, "Content-Type": "application/json"},
json={"namespace": "my-agent", "user_id": "user123"}
).json()
# Trace LLM call
requests.post(
f"{MONKAI_API}/traces/llm",
headers={"tracer_token": TOKEN, "Content-Type": "application/json"},
json={
"session_id": session["session_id"],
"model": "gpt-4",
"input": {"messages": [{"role": "user", "content": "Hello"}]},
"output": {"content": "Hi!", "usage": {"prompt_tokens": 5, "completion_tokens": 3}}
}
)
```
See [HTTP REST API Guide](docs/http_rest_api.md) for complete documentation.
### Upload from JSON Files
```python
# Upload conversation records
client.upload_records_from_json("records.json")
# Upload logs
client.upload_logs_from_json("logs.json", namespace="my-agent")
```
### Query & Export Data
```python
# Query conversations with filters
result = client.query_records(
namespace="customer-support",
agent="Support Bot",
start_date="2025-01-01",
limit=50
)
# Export all records to JSON file
client.export_records(
namespace="customer-support",
output_file="conversations.json"
)
# Export logs as CSV
client.export_logs(
namespace="my-agent",
level="error",
format="csv",
output_file="errors.csv"
)
```
See [Data Export Guide](docs/data_export.md) for complete documentation.
## 📚 Practical Examples
Learn by example! Check out our comprehensive examples:
### Session Management
- **[Basic Sessions](examples/session_management_basic.py)** - Automatic session creation and timeout
- **[Multi-User](examples/session_management_multi_user.py)** - WhatsApp bot with concurrent users
- **[Custom Timeouts](examples/session_management_custom_timeout.py)** - Configure for your use case
### OpenAI Agents
- **[Basic Integration](examples/openai_agents_example.py)** - Get started quickly
- **[Multi-Agent](examples/openai_agents_multi_agent.py)** - Advanced handoff patterns
### HTTP REST API
- **[Basic Usage](examples/http_rest_basic.py)** - Direct API calls without SDK
- **[Async Client](examples/http_rest_async.py)** - High-performance async tracing
- **[OpenAI + HTTP](examples/http_rest_openai.py)** - Trace OpenAI calls via REST
### Data Export
- **[Query & Export](examples/export_data.py)** - Query records/logs and export to JSON/CSV
See [examples/README.md](examples/README.md) for full list and use case guide.
---
## Session Management
MonkAI automatically manages user sessions with configurable timeouts:
- **Default timeout**: 2 minutes of inactivity
- **Automatic session renewal**: Active conversations continue in same session
- **Multi-user support**: Each user gets isolated sessions
- **WhatsApp integration**: Use `user_whatsapp` or `user_id` for user identification
```python
hooks = MonkAIRunHooks(
tracer_token="tk_your_token",
namespace="support",
    inactivity_timeout=120  # 2 minutes
)
hooks.set_user_id("customer-12345")
```
See [Session Management Guide](docs/session_management.md) for details.
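The inactivity-timeout rule can be sketched in plain Python (an illustration of the described behaviour, not the SDK's internals): reuse the current session if the user's last event was within the timeout window, otherwise start a new one.

```python
class SessionTracker:
    """Illustrative sketch: per-user session IDs with an inactivity timeout."""

    def __init__(self, inactivity_timeout: float = 120.0):
        self.timeout = inactivity_timeout
        self.last_seen: dict[str, float] = {}
        self.sessions: dict[str, int] = {}
        self.counter = 0

    def session_for(self, user_id: str, now: float) -> int:
        last = self.last_seen.get(user_id)
        if last is None or now - last > self.timeout:
            self.counter += 1
            self.sessions[user_id] = self.counter  # new session
        self.last_seen[user_id] = now
        return self.sessions[user_id]

tracker = SessionTracker(inactivity_timeout=120)
s1 = tracker.session_for("customer-12345", now=0)
s2 = tracker.session_for("customer-12345", now=60)   # within window: same session
s3 = tracker.session_for("customer-12345", now=300)  # timed out: new session
print(s1, s2, s3)  # 1 1 2
```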
## Token Segmentation
MonkAI helps you understand your LLM costs by tracking 4 token types:
- **Input**: User queries and prompts
- **Output**: Agent responses and completions
- **Process**: System prompts, instructions, tool definitions
- **Memory**: Conversation history and context
```python
client.upload_record(
namespace="analytics",
agent="data-agent",
    messages=[{"role": "user", "content": "Analyze this"}],
input_tokens=15, # User query
output_tokens=200, # Agent response
process_tokens=500, # System prompt + tools
memory_tokens=100 # Previous conversation
)
```
## Documentation
- [Quick Start Guide](docs/quickstart.md)
- [HTTP REST API Guide](docs/http_rest_api.md) ⭐ **NEW**
- [Data Export Guide](docs/data_export.md) ⭐ **NEW**
- [Session Management Guide](docs/session_management.md)
- [MonkAI Agent Integration](docs/monkai_agent_integration.md)
- [LangChain Integration](docs/langchain_integration.md)
- [OpenAI Agents Integration](docs/openai_agents_integration.md)
- [Logging Integration](docs/logging_integration.md)
- [JSON Upload Guide](docs/json_upload_guide.md)
- [API Reference](docs/api_reference.md)
## Examples
See the `examples/` directory for:
- `monkai_agent_example.py` - MonkAI Agent framework integration
- `langchain_example.py` - LangChain integration
- `langchain_conversational.py` - LangChain with memory
- `openai_agents_example.py` - OpenAI Agents integration
- `multi_agent_handoff.py` - Multi-agent tracking
- `logging_example.py` - Python logging integration (scripts)
- `service_logging_example.py` - Python logging for long-running services
- `send_json_files.py` - Upload from JSON files
- `http_rest_basic.py` - HTTP REST API basic usage ⭐ **NEW**
- `http_rest_async.py` - Async HTTP REST client ⭐ **NEW**
- `http_rest_openai.py` - OpenAI + HTTP REST tracing ⭐ **NEW**
- `export_data.py` - Query and export data to JSON/CSV ⭐ **NEW**
## Development
```bash
# Clone repository
git clone https://github.com/monkai/monkai-trace-python
cd monkai-trace-python
# Install dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run type checking
mypy monkai_trace
```
## Requirements
- Python 3.8+
- `requests` >= 2.31.0
- `pydantic` >= 2.0.0
- `monkai-agent` (optional, for MonkAI Agent integration)
- `langchain` (optional, for LangChain integration)
- `openai-agents-python` (optional, for OpenAI Agents integration)
## License
MIT License - see [LICENSE](LICENSE) file.
## Support
- [Documentation](https://docs.monkai.ai)
- [GitHub Issues](https://github.com/monkai/monkai-trace-python/issues)
- [Discord Community](https://discord.gg/monkai)
## Contributing
Contributions welcome! Please read our [Contributing Guide](CONTRIBUTING.md) first.
| text/markdown | null | MonkAI Team <support@monkai.ai> | null | null | MIT | monkai, ai, agents, monitoring, observability, llm, openai | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.31.0",
"pydantic>=2.0.0",
"openai-agents-python>=0.1.0; extra == \"openai-agents\"",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.7.0; ex... | [] | [] | [] | [
"Homepage, https://monkai.ai",
"Documentation, https://docs.monkai.ai",
"Repository, https://github.com/monkai/monkai-trace-python",
"Issues, https://github.com/monkai/monkai-trace-python/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T12:44:38.765842 | monkai_trace-0.2.16.tar.gz | 51,426 | d7/4b/6d10fee95b1d138052c8f6d7f55a76b6d2a40f5ab0ef349dd21fc407e3af/monkai_trace-0.2.16.tar.gz | source | sdist | null | false | fa94a3a4c4d38c96f16d0c8708f93f56 | 14dfb35a10ec8cc22b43662a6615008197dab49f6a087dd966ec62546c1f82f3 | d74b6d10fee95b1d138052c8f6d7f55a76b6d2a40f5ab0ef349dd21fc407e3af | null | [
"LICENSE"
] | 248 |
2.4 | playwright-mcp-forge | 1.3.0 | Enterprise-grade web automation with 35 MCP tools + auto-healing + optional LLM | # 🎭 Playwright MCP Forge
**Enterprise-Grade Web Automation via Claude Desktop**
An intelligent Model Context Protocol (MCP) server for browser automation. Write web automation workflows in natural language through Claude Desktop. Includes CAPTCHA solving, multi-account management, proxy rotation, PDF generation, and more.
[](https://github.com/shakti44/playwright-mcp-forge/releases)
[](LICENSE)
[](https://www.python.org/)
[](#)
---
## ✨ What You Can Do
- 🌐 **Control browsers** - Navigate, click, type, scroll
- 📊 **Extract data** - Tables, text, HTML, structured data
- 🎯 **Solve CAPTCHAs** - Automatically solve image-based CAPTCHAs with OCR
- 👥 **Multi-account** - Run multiple browser sessions simultaneously
- 🔄 **Proxy rotation** - Distribute requests across multiple IPs
- 📄 **Generate PDFs** - Create PDFs from web pages
- 🔖 **Manage tabs** - Open, switch, close browser tabs
- ✅ **Test assertions** - Assert element states, visibility, count
- 🎙️ **Record sessions** - Trace automation for debugging
- 🔧 **Auto-healing locators** - Find elements even when selectors change
**Works with:** Claude Desktop, VS Code, Cline, Cursor, and other MCP clients
---
## 🚀 Quick Start (5 minutes)
### 1. Install
```bash
# Clone repository
git clone https://github.com/shakti44/playwright-mcp-forge.git
cd playwright-mcp-forge
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
playwright install # Installs Chromium
# Start server
python src/web_automation.py
```
### 2. Connect to Claude Desktop
Edit Claude Desktop config:
- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
"mcpServers": {
"web-automation": {
"command": "python",
"args": ["C:\\path\\to\\src\\web_automation.py"]
}
}
}
```
Restart Claude Desktop.
### 3. Use It
In Claude Desktop, try:
```
Navigate to example.com and get the main heading
```
Claude will automatically:
1. Open browser
2. Navigate to the URL
3. Extract the heading text
4. Return the result
---
## 📚 Usage Examples
### Simple: Get Page Title
```
Get the page title from example.com
```
### Intermediate: Extract Data
```
Go to example.com, find all h2 headings, and extract their text
```
### Advanced: Multi-Step Workflow
```
1. Switch to account "buyer_1"
2. Go to https://shop.example.com
3. Click "Add to Cart" button
4. Take a screenshot
5. Extract order confirmation
6. Switch to account "buyer_2"
7. Repeat steps 2-5
```
### Complex: Automation Plan
```
Execute this automation:
{
"steps": [
{"type": "navigate", "value": "https://example.com/login"},
{"type": "fill", "selector": "input[name='email']", "value": "test@example.com"},
{"type": "fill", "selector": "input[name='password']", "value": "password123"},
{"type": "click", "selector": "button[type='submit']"},
{"type": "wait", "selector": ".dashboard"},
{"type": "screenshot"}
]
}
```
---
## 🔑 Key Features
| Feature | Benefit |
|---------|---------|
| **35 Tools** | Covers 95% of automation needs |
| **Auto-Healing Locators** | Find elements even when selectors change (82% success) |
| **Optional LLM Enhancement** | Improve accuracy to 85-88% with Ollama/Claude/OpenAI/Gemini/Mistral |
| **Python-based** | Easy to extend and customize |
| **No Selenium** | Modern Playwright API |
| **Stealth Mode** | Bypass bot detection |
| **Multi-account** | Concurrent browser sessions |
| **CAPTCHA Solving** | OCR-based image solving |
| **Proxy Support** | Automatic rotation |
| **PDF Export** | Generate PDFs from pages |
| **Session Recording** | Trace automation for debugging |
| **Test Assertions** | Built-in testing framework |
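The auto-healing idea in the table can be sketched as a ranked fallback chain; the helper below is a hypothetical illustration (not this package's actual API), with the resolver injected so it works with any driver:

```python
from typing import Callable, Optional, Sequence

def heal_locator(candidates: Sequence[str],
                 find: Callable[[str], Optional[object]]) -> tuple[str, object]:
    """Try a ranked list of selectors; return the first one that resolves.

    `find` is whatever resolver the driver provides (e.g. a query-selector
    function); it must return None when a selector matches nothing.
    """
    for selector in candidates:
        element = find(selector)
        if element is not None:
            return selector, element
    raise LookupError(f"No candidate selector matched: {list(candidates)}")

# Usage with a stubbed DOM: only the data-testid fallback still matches.
dom = {"[data-testid='submit']": "<button>"}
selector, el = heal_locator(
    ["#submit-btn", "button.primary", "[data-testid='submit']"],
    dom.get,
)
print(selector)  # [data-testid='submit']
```

Ranking stable attributes (test IDs, ARIA labels) above brittle CSS classes is what keeps such a chain working after markup changes.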
---
## 📖 Documentation
| Guide | Purpose |
|-------|---------|
| [Quick Start](docs/QUICK_START.md) | 5-minute setup |
| [Environment Setup](docs/ENV_SETUP.md) | Configure API keys with .env files |
| [Auto-Healing Locators](docs/AUTO_HEALING_LOCATORS.md) | Smart element detection & fallback strategies |
| [Optional LLM Enhancement](docs/OPTIONAL_LLM_ENHANCEMENT.md) | **[NEW]** Add intelligence (Ollama/Claude/OpenAI/Gemini/Mistral) |
| [LLM Integration](docs/LLM_INTEGRATION.md) | Claude, GPT-4, Gemini, local LLaMA |
| [Advanced Features](docs/ADVANCED_FEATURES.md) | CAPTCHA, multi-account, proxies |
| [Complete Tool Reference](docs/COMPLETE_TOOL_TEST_PROMPT.md) | All 35 tools explained |
| [Architecture](docs/ARCHITECTURE.md) | How it works internally |
| [Contributing](CONTRIBUTING.md) | How to contribute |
| [Changelog](CHANGELOG.md) | What's new in v1.3.0 |
---
## 🛠️ Installation Options
### From PyPI (Easiest)
```bash
pip install playwright-mcp-forge
```
### With LLM Integration (Choose One)
```bash
# Anthropic Claude
pip install playwright-mcp-forge[claude]
# OpenAI ChatGPT
pip install playwright-mcp-forge[openai]
# Google Gemini
pip install playwright-mcp-forge[gemini]
# Mistral AI
pip install playwright-mcp-forge[mistral]
# All LLM providers
pip install playwright-mcp-forge[all-llms]
```
### From Source
```bash
git clone https://github.com/shakti44/playwright-mcp-forge.git
cd playwright-mcp-forge
pip install -e .
```
### Complete Setup (Everything)
```bash
pip install playwright-mcp-forge[complete]
# Includes: CAPTCHA solving + all LLMs + dev tools
```
---
## 🤖 Using with Any LLM
Works with **Claude, ChatGPT, Gemini, Mistral, LLaMA,** and more!
**Quick Setup:**
1. Install integration: `pip install playwright-mcp-forge[claude]`
2. Set API key: `export ANTHROPIC_API_KEY=sk-...`
3. Use the tools from your own code or agent framework
📖 **[Full LLM Integration Guide](docs/LLM_INTEGRATION.md)** - Setup with 5+ LLM providers
---
## ❓ FAQ
**Q: Can I use this without Claude Desktop?**
A: Yes! Works with any LLM (Claude, GPT-4, Gemini, local LLaMA). See [LLM Integration Guide](docs/LLM_INTEGRATION.md).
**Q: Which LLM should I use?**
A: Claude is recommended, but use what fits your budget/latency needs. See comparison in LLM guide.
**Q: How do I integrate with my existing framework?**
A: Install the package, set your LLM's API key, and import the tools. See integration examples in docs.
**Q: How do I solve CAPTCHAs?**
A: Install the CAPTCHA extras with `pip install playwright-mcp-forge[captcha]` (the Tesseract OCR engine must also be installed on your system). Works with image-based CAPTCHAs.
**Q: Can I automate multiple accounts?**
A: Yes! Use `switch_account(name)` to manage concurrent sessions with separate cookies and states.
**Q: Does it work with all websites?**
A: Works with most sites. Stealth mode enabled by default. Some sites may require adjustments.
**Q: How do I debug automation?**
A: Use `toggle_headless(False)` to see browser GUI, or use `start_trace()` to record actions.
---
## 🤝 Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
Quick links:
- [Report a bug](https://github.com/shakti44/playwright-mcp-forge/issues/new)
- [Request a feature](https://github.com/shakti44/playwright-mcp-forge/issues/new)
- [View roadmap](#-roadmap)
---
## 📊 What's Included
**35 Total Tools:**
- **Browser Control** (4) - navigate, screenshot, health checks, browser info
- **Page Interaction** (5) - click, type, scroll, wait for elements, viewport
- **Data Extraction** (4) - get text, HTML, fetch URLs, extract structured data
- **Testing** (2) - bot detection, test case generation
- **Automation** (4) - execute plans, manage cookies
- **Advanced** (5) - solve CAPTCHAs, multi-account, proxy rotation, headless toggle
- **Tab Management** (4) - get tabs, new tab, switch tab, close tab
- **PDF & Recording** (3) - generate PDF, start trace, stop trace
- **Testing Tools** (4) - assert visibility, text, count, element state
---
## 🗺️ Roadmap
**v1.3.0** ✅ (Current)
- Tab management, PDF generation, test assertions
**v1.4** 🔜 (Planned)
- WebSocket support, performance metrics, request caching
**v2.0** 🔮 (Future)
- Distributed execution, ML integration, Web3 support
---
## 📄 License
MIT License - See [LICENSE](LICENSE) for details
**Attribution:** Built with [Playwright](https://playwright.dev) and [FastMCP](https://github.com/jlowin/fastmcp)
---
## 💬 Need Help?
- 📖 [Read the docs](docs/)
- 🐛 [Report issues](https://github.com/shakti44/playwright-mcp-forge/issues)
- 💡 [Suggest features](https://github.com/shakti44/playwright-mcp-forge/issues)
- 🤚 [Community discussions](https://github.com/shakti44/playwright-mcp-forge/discussions)
---
## ⭐ Show Your Support
If this project helped you, please:
1. **Star** the repository ⭐
2. **Share** it with others
3. **Contribute** improvements
Made with ❤️ for web automation enthusiasts
| text/markdown | Web Automation Team | Web Automation Team <team@example.com> | null | null | MIT | playwright, mcp, claude-desktop, web-automation, browser-automation, scraping, testing, captcha-solving, proxy-rotation, multi-account | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP :: Browsers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"P... | [] | https://github.com/shakti44/playwright-mcp-forge | null | >=3.8 | [] | [] | [] | [
"playwright>=1.40.0",
"fastmcp>=0.2.0",
"python-dotenv>=1.0.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"flake8>=6.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pytesseract>=0.3.10; extra == \"captcha\"",
"Pillow>=9.5.0;... | [] | [] | [] | [
"Homepage, https://github.com/shakti44/playwright-mcp-forge",
"Documentation, https://github.com/shakti44/playwright-mcp-forge/blob/main/docs",
"Repository, https://github.com/shakti44/playwright-mcp-forge",
"Bug Tracker, https://github.com/shakti44/playwright-mcp-forge/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T12:43:47.766034 | playwright_mcp_forge-1.3.0-py3-none-any.whl | 27,586 | 6b/af/101bcc05a03a5f3d56e48ab35bd33f1341637b255ec2c94573a0f8c76008/playwright_mcp_forge-1.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 409468b9833183ad87a52d9661585edb | d3d1a1cce9069cdc117f94090bd163735a7a9d0f233af8f17fce496a7620e32e | 6baf101bcc05a03a5f3d56e48ab35bd33f1341637b255ec2c94573a0f8c76008 | null | [
"LICENSE"
] | 113 |
2.4 | parvati | 1.0.6 | A package to compute and analyse stellar line profiles | # PARVATI
PARVATI (Profiles Analysis and Radial Velocities using Astronomical Tools for Investigation) is a Python package for the analysis of astronomical spectra.
## Introduction
PARVATI contains several useful functions for creating mean line profiles or extracting single spectroscopic lines from ASCII or FITS spectra. The line profiles may then be used to compute the radial velocities, the projected rotational velocities, the equivalent widths, the line moments, and other additional scientific information.
This README file will briefly cover all the available PARVATI functions, but a more detailed description of the functions and their input parameters may be obtained with:
```
import parvati as pa
help(pa)
help(pa.read_spectrum)
help(pa.FUNCTION)
```
PARVATI may be either downloaded from [GitHub](https://github.com/mrainer74/parvati) or installed using pip:
```
python -m pip install parvati
```
## Preparation of the spectra
The input spectra may have different formats: either ASCII files with wavelength, flux and additional information (e.g., SNR, errors, or echelle order number), standard monodimensional FITS files or FITS tables. The data in the FITS tables may be either in different fields of hdu[1] (like for GIANO-B ms1d data or ESPRESSO s1d data) or in different hdus (like in ESPRESSO s2d data).
The spectra may be read with the `read_spectrum` function.
Before extracting the line profiles they must be normalised, e.g., using the `norm_spectrum` function.
### Read the spectra
The function `read_spectrum` requires as input the filename of the spectrum and, depending on the other options given, it will read:
- a monodimensional FITS file with the flux as the hdu[0].data and the wavelength in the hdu[0].header (CRVAL1, CDELT1, NAXIS1)
- a FITS file in the e2ds format of HARPS/HARPS-N/SOPHIE, with the echelle orders still unmerged and the wavelength information in the header in the *DRS CAL TH DEG LL and *DRS CAL TH COEFF LLXX keywords
- a FITS table with all the data in hdu[1].data. By default, the wavelength will be read in the first field and the flux in the second field, but the number of the field may be specified. If there are any additional data as SNR and/or echelle order number and/or normalised flux and/or absolute errors, they may be specified here. If given, the S/N supersedes the errors, otherwise the errors will be transformed in S/N (S/N=flux/errors).
**IMPORTANT:** the numbers of the fields start with 1, not 0
- a FITS table with data in several different hdu[X]. By default, the wavelength will be read in hdu[1] and the flux in the hdu[2], but the number of the hdus may be specified. If there are any additional data as SNR and/or echelle order number and/or normalised flux and/or absolute errors, they may be specified here. If given, the S/N supersedes the errors, otherwise the errors will be transformed in S/N (S/N=flux/errors).
- an ASCII files with at least two columns (wavelength and flux), but additional columns with SNR and/or echelle order number and/or normalised flux and/or absolute errors may be specified here. If given, the S/N supersedes the errors, otherwise the errors will be transformed in S/N (S/N=flux/errors).
**IMPORTANT:** the numbers of the columns start with 1, not 0
The wavelength is assumed to be in Angstroms, if otherwise then it must be specified using the `unit` parameter.
> [!TIP]
> Read the GIANO-B ms1d data with the options: wavecol=2, fluxcol=3, snrcol=4, echcol=1
> Read the ESPRESSO S1D data with the options: wavecol=1, fluxcol=3, errcol=4
> Read the ESPRESSO S2D data with the options: wavecol=4, fluxcol=1, errcol=2
> Read the CARMENES (VIS and NIR) data with the options: wavecol=4, fluxcol=1, errcol=3
### Normalise the spectra
If the spectra are not normalised, it is possible to use the function `norm_spectrum` to do so. The function requires at least the wavelength and flux as input. It is possible to set the degree of the polynomial and a number of subsets to be normalised independently.
If the `refine` option is set to `True` and either echelle orders or the subsets are given, then the normalisation may be refined by ensuring a smooth variation of the polynomial coefficients along the orders.
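As a rough illustration of what polynomial continuum normalisation does, here is a minimal numpy sketch (illustrative only, not the parvati API or its actual algorithm): fit a low-order polynomial to the observed flux and divide it out, so the continuum lands near 1.

```python
import numpy as np

# Synthetic spectrum: a sloped continuum with one narrow Gaussian absorption line.
wave = np.linspace(5000.0, 5100.0, 500)                        # Angstrom
continuum = 1.0e4 * (1.0 + 0.002 * (wave - 5050.0))            # sloped continuum
line = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5050.0) / 0.5) ** 2)
flux = continuum * line

# Low-order polynomial fit as the continuum model; a narrow line barely biases it.
coeffs = np.polyfit(wave, flux, deg=2)
norm_flux = flux / np.polyval(coeffs, wave)                    # ~1 outside lines
```

A production routine would iterate the fit with sigma-clipping so that broad absorption features do not drag the continuum model down.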
## Creation of the profiles
Once in possession of normalised spectra, the line profiles may be extracted or created in several different ways, as detailed below.
### Single line extraction
The function `extract_line` requires the spectrum in the format given by `read_spectrum` or `norm_spectrum`, the laboratory wavelength of the desired line, the radial velocity range and step of the extraction window. The extracted line will be interpolated on a Doppler velocity range, to help with the subsequent analysis.
### Mean line profile: LSD
The function `compute_lsd` computes the mean line profile by performing a Least-Squares Deconvolution (LSD) of the spectrum in the format given by `read_spectrum` or `norm_spectrum` with a mask that may be either an ASCII mask (VALD stellar mask, simple 2-column file, normalised spectrum/model) or a FITS file (mask or normalised spectrum/model). Several options may be passed to optimise the profile extraction.
The LSD computation has been adapted from [Donati J.-F., et al., 1997, MNRAS 291, 658](https://doi.org/10.1093/mnras/291.4.658) and [Kochukhov O., et al., 2010, A&A 524, 5](https://doi.org/10.1051/0004-6361/201015429).
**IMPORTANT:** to save computational time, the spectra should be split in subsets, and LSD profiles are computed for each subset independently and then averaged. This is naturally done using the echelle orders/subsets option in `norm_spectrum`: `compute_lsd` will automatically find the separations between the orders where the wavelength gap is larger than the average wavelength step or if there is overlapping.
### Mean line profile: CCF
The function `compute_ccf` computes the mean line profile by performing a Cross-Correlation of the spectrum in the format given by `read_spectrum` or `norm_spectrum` with a mask that may be either an ASCII mask (VALD stellar mask, simple 2-column file, normalised spectrum/model) or a FITS file (mask or normalised spectrum/model). Several options may be passed to optimise the profile extraction.
**IMPORTANT:** to save computational time, the spectra should be split in subsets, and CCF profiles are computed for each subset independently and then averaged. This is naturally done using the echelle orders/subsets option in `norm_spectrum`: `compute_ccf` will automatically find the separations between the orders where the wavelength gap is larger than the average wavelength step or if there is overlapping.
## Analysis of the profiles
Once the profiles have been obtained, PARVATI allows you to perform several useful operations on the data, to better analyse them.
### Normalise the profiles
First of all, even if the profiles should already be normalised, any slight deviation from a perfect normalisation of the spectra will impact the normalisation of the profiles. It is better to re-normalise them using the `norm_profile` function. This function requires as input not the name of a single file, but an ASCII file with a list of names: this also allows the computation of an average mean line profile and of the standard deviation of the profiles from this average, to better see where possible line-profile variations are located.
### Fit the profiles
Once normalised, the profiles may be fitted using the `fit_profile` function. It is possible to choose one or more different fitting functions: Gaussian, Lorentzian, Voigt or rotational profile. All the fitting functions yield a Radial Velocity (RV) estimation, the Equivalent Width (EW) and other information depending on the function (e.g. the *v*sin*i* from the rotational function).
The rotational function is taken from [Gray, D. F. 2008, The Observation and Analysis of Stellar Photosphere](https://ui.adsabs.harvard.edu/link_gateway/2008oasp.book.....G/PUB_HTML).
### Compute the line moments
The first five line moments may be computed from the (normalised) profiles using the `moments` function. The moments are:
- m0: EW
- m1: RV
- m2: sigma, from which also the Full-Width-at-Half-Maximum (FWHM) is derived
- m3: from which the skewness is derived
- m4: from which the kurtosis is derived.
The definition for the moments is taken from [Briquet M. & Aerts C., 2003, A&A 398, 687](https://doi.org/10.1051/0004-6361:20021683), and the estimations of the errors from [Teague R., 2019, Res. Notes AAS 3, 74](https://doi.org/10.3847/2515-5172/ab2125).
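The moment definitions above can be checked numerically on a synthetic Gaussian absorption line; this numpy sketch (not the parvati API) recovers the injected RV and line width from the first three moments:

```python
import numpy as np

# Synthetic Gaussian absorption line on a Doppler-velocity grid.
v = np.linspace(-50.0, 50.0, 2001)                  # km/s
depth, v0, sigma = 0.4, 5.0, 8.0
flux = 1.0 - depth * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

dv = v[1] - v[0]
absorbed = 1.0 - flux                               # line-depth profile
m0 = absorbed.sum() * dv                            # equivalent width (km/s)
m1 = (v * absorbed).sum() * dv / m0                 # radial velocity
m2 = ((v - m1) ** 2 * absorbed).sum() * dv / m0     # variance; FWHM = 2.355*sqrt(m2)

print(round(m1, 2), round(np.sqrt(m2), 2))          # recovers v0 and sigma
```

The recovered `m1` and `sqrt(m2)` match the injected 5.0 km/s and 8.0 km/s, which is the sanity check one should run on any moment implementation.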
### Compute the line's bisector
The function `bisector` computes both the bisector of the line and the bisector's span. The error computation and the definition of the bisector's span are taken from [Baştürk Ö., et al., 2011, A&A 535, 17](https://doi.org/10.1051/0004-6361/201117740).
### Compute the Fourier Transform of the line
The function `fourier` first symmetrises the line and then performs the Fourier Transform (FT) of the symmetrised line. The symmetrisation process yields another estimate of the RV (but without any associated error), while the positions of the first 3 zeroes of the FT give information on the *v*sin*i* and the differential rotation *if* the rotational broadening is the dominant line-broadening effect.
The *v*sin*i* is derived using the empirical formula from [Dravins, D., Lindegren, L., & Torkelsson, U. 1990, A&A, 237, 137](https://articles.adsabs.harvard.edu/pdf/1990A%26A...237..137D).
## Test files
A couple of simple scripts are given in the `tests` directory along with two high-resolution spectra and two VALD stellar masks to guide in computing/extracting the line profiles and then analysing them.
## Graphical interface: SHIVA
The graphical interface of PARVATI is SHIVA (Simple and Helpful Interface for Variability Analysis). SHIVA may be downloaded from https://github.com/mrainer74/shiva
| text/markdown | null | Monica Rainer <monica.rainer@inaf.it> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"astropy",
"csaps",
"matplotlib",
"numpy",
"scipy",
"specutils",
"uncertainties"
] | [] | [] | [] | [
"Homepage, https://github.com/mrainer74/parvati",
"Issues, https://github.com/mrainer74/parvati/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T12:43:21.866757 | parvati-1.0.6.tar.gz | 7,084,318 | 2e/6e/e82072680d32374458fe48414bbf8913534ec4bea42362800d8e6c95c9da/parvati-1.0.6.tar.gz | source | sdist | null | false | 98385f1f87f759259ef8acdc3b6d3bf7 | 4f87a26dfdf98ef7de4079f13dfeda44b4eeec7249ad8d3db326b4538ebf524c | 2e6ee82072680d32374458fe48414bbf8913534ec4bea42362800d8e6c95c9da | GPL-3.0-or-later | [
"LICENSE"
] | 241 |
2.4 | aryan-advanced-calculator | 1.2.1 | Advanced modular calculator | # 🧮 Advanced Modular Calculator
A robust, enterprise-grade console-based Python calculator featuring modular architecture, comprehensive unit conversions, and extensive mathematical functions. Built with clean code principles, domain-driven design, and test-driven development.
## 🌟 Features Overview
### 📊 Standard Calculator
- **Expression Evaluation**: Evaluates complex arithmetic expressions with full operator precedence
- **Persistent History**: All calculations are saved with file-based storage
- **Error Resilience**: Gracefully handles division by zero, syntax errors, and malformed input
- **Smart Formatting**: Removes trailing zeros and floating-point artifacts
### 🔬 Scientific Calculator
- **24 Engineering Functions**: Complete suite of trigonometric, inverse trigonometric, hyperbolic, and inverse hyperbolic functions
- **Domain Validation**: Explicit mathematical domain checks prevent undefined results
- **High Precision**: Results formatted to 9 significant figures with intelligent rounding
- **O(1) Dispatch**: Dictionary-based function lookup using tuple-key mapping for optimal performance
### 🔄 Unit Converter
Comprehensive conversion system supporting **5 categories**:
#### 📐 Angle Conversion
- Degrees, Radians, Gradians
- Bidirectional conversions
#### 🌡️ Temperature Conversion
- Celsius, Kelvin, Fahrenheit
- Full bidirectional support
#### ⚖️ Weight Conversion
- **13 Units**: Kilogram, Gram, Milligram, Centigram, Decigram, Decagram, Hectogram, Metric Tonne, Ounce, Pound, Stone, Short Ton (US), Long Ton (UK)
- **156 Conversion Pairs**: Universal converter handles all unit combinations
- Metric and Imperial systems
#### 💨 Pressure Conversion
- **6 Units**: Atmosphere, Bar, Kilopascal, mmHg, Pascal, PSI
- **30 Conversion Pairs**: Medical, meteorological, and engineering standards
- Commonly used in weather, diving, automotive, and industrial applications
#### 💾 Data Conversion
- **35 Units**: Bits, Bytes, Nibbles, KB/KiB, MB/MiB, GB/GiB, TB/TiB, PB/PiB, EB/EiB, ZB/ZiB, YB/YiB
- **1,190 Conversion Pairs**: Covers both decimal (SI) and binary (IEC) standards
- Understand the difference between GB (1000³) and GiB (1024³)
- Perfect for computer science, networking, and data storage calculations
## 🎯 Key Technical Highlights
### Architecture
- **Modular Design**: Clear separation of concerns (standard, scientific, converters)
- **Package Structure**: Proper `calculator/` package with focused submodules
- **Zero Dependencies**: Pure Python implementation (except testing)
### Mathematical Correctness
- **Domain Guarding**: Every function validates input domains
- `sin⁻¹(x)` → valid only for -1 ≤ x ≤ 1
- `sec(x)` → undefined at 90° + n·180°
- `coth(x)` → undefined at x = 0
- **Precision Handling**: Deterministic formatting with no floating-point junk
- **Error Messages**: Clear, human-readable feedback instead of crashes
### Performance
- **O(1) Function Dispatch**: Tuple-key dictionary mapping eliminates long if-else chains
- **Universal Converters**: Single function handles all unit pairs efficiently
- **Minimal Overhead**: Direct mathematical operations with no unnecessary abstraction
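The tuple-key dispatch described above can be sketched as follows (illustrative table and names, not the package's actual mapping): a `(menu, sub-option)` tuple maps straight to a label and a callable, so lookup is O(1) instead of walking an if/elif chain.

```python
import math

# (menu, sub-option) -> (label, function); one flat dict replaces nested branches.
DISPATCH = {
    (1, 1): ("sin", lambda deg: math.sin(math.radians(deg))),
    (1, 2): ("cos", lambda deg: math.cos(math.radians(deg))),
    (2, 1): ("asin", math.asin),  # caller validates -1 <= x <= 1 first
}

def evaluate(menu: int, sub: int, x: float) -> float:
    try:
        _label, fn = DISPATCH[(menu, sub)]
    except KeyError:
        raise ValueError(f"unknown operation ({menu}, {sub})") from None
    return round(fn(x), 9)  # trim floating-point junk from the result

print(evaluate(1, 1, 30.0))  # 0.5
```

Adding a new function is then a one-line table entry rather than another branch, which is the maintainability win behind this design.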
### Code Quality
- **Type Hints**: Full type annotations throughout codebase
- **Docstrings**: Comprehensive documentation for all public functions
- **Consistent Style**: PEP 8 compliant with standardized formatting
- **Error Handling**: Defensive programming with graceful degradation
## 📁 Project Structure
```
Calculator/
│
├── main.py # Compatibility entry point
├── std.py # Compatibility shim
├── sci.py # Compatibility shim
├── converters.py # Compatibility shim
├── setup.py # Installable package metadata
├── requirements.txt # Runtime dependencies (none)
├── requirements-dev.txt # Dev tools (pytest, ruff, mypy, etc.)
├── history/ # Calculator history files (ignored in git)
│
├── calculator/ # Primary package
│ ├── __init__.py
│ ├── main.py # Application entry point
│ ├── standard.py # Standard arithmetic engine
│ ├── scientific.py # Scientific functions engine
│ ├── router.py # Unit converter router
│ ├── config.py # Central configuration
│ ├── exceptions.py # Custom exceptions
│ └── converters/ # Converter modules
│ ├── __init__.py
│ ├── base.py # Base converter class
│ ├── utils.py # Shared converter utilities
│ ├── angle.py # Angle conversions
│ ├── temperature.py # Temperature conversions
│ ├── weight.py # Weight conversions
│ ├── pressure.py # Pressure conversions
│ └── data.py # Data unit conversions (35 units)
│
└── tests/ # Comprehensive test suite
├── test_std.py # Standard calculator tests (68 tests)
├── test_sci.py # Scientific calculator tests (87 tests)
└── test_conveter/ # Converter tests (171 tests)
├── test_angle.py
├── test_temperature.py
├── test_weight.py
└── test_pressure.py
```
## 🚀 Installation & Usage
### Prerequisites
- Python 3.10 or higher
- No external dependencies for running the calculator
- pytest required for running tests (optional)
### Quick Start
1. **Clone the repository**:
```bash
git clone https://github.com/AryanSolanke/Calculator.git
cd Calculator
```
2. **Run the calculator**:
```bash
python main.py
```
Or:
```bash
python -m calculator.main
```
3. **Run tests** (optional):
```bash
python -m pytest -v
```
### Usage Examples
#### Standard Calculator
```
➤ Enter expression (e.g., 2+3*4): (2 + 3) * 5 / (7 - 2)
✅ Result: 5
```
#### Scientific Calculator
```
➤ Enter operation number: 1
➤ Enter sub-operation number: 1
📐 Enter angle in degrees: 30
✅ Result: sin(30°) = 0.5
```
#### Unit Converter - Data Conversion
```
💾 DATA UNIT CONVERSION
➤ Enter FROM unit (1-35): 24
➤ Enter TO unit (1-35): 25
💾 Enter data amount: 500
✅ CONVERSION RESULT:
500.0 GB = 465.66 GiB
(Gigabyte → Gibibyte)
```
## 🧪 Testing
The project includes 326 comprehensive tests covering:
- ✅ Normal operations and edge cases
- ✅ Domain violations and error handling
- ✅ Boundary values and extreme inputs
- ✅ Round-trip conversion accuracy
- ✅ Symmetry and mathematical properties
### Test Coverage
- **Standard Calculator**: Expression evaluation, history management, error handling
- **Scientific Calculator**: All 24 functions across all quadrants, domain validation
- **Converters**: Accuracy verification, bidirectional consistency, unit validation
### Running Tests
```bash
# Run all tests
python -m pytest -v
# Run specific test file
python -m pytest tests/test_std.py -v
# Run with coverage
pytest tests/ --cov=. --cov-report=html
```
## 🔧 Technical Details
### Converter Capabilities
#### Data Converter - Understanding SI vs IEC Standards
**Why does my 500 GB hard drive show only 465 GB?**
- **Hard Drive Label**: 500 GB (Decimal/SI)
- Uses base 1000: 500,000,000,000 bytes
- **Operating System**: Shows 465.66 GiB (Binary/IEC)
- Uses base 1024: 500,000,000,000 ÷ 1,073,741,824 = 465.66 GiB
**No space is missing - just different measurement systems!**
The data converter handles:
- **Decimal units (SI)**: KB, MB, GB, TB, PB, EB, ZB, YB (powers of 1000)
- **Binary units (IEC)**: KiB, MiB, GiB, TiB, PiB, EiB, ZiB, YiB (powers of 1024)
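The whole decimal/binary table reduces to a single pivot through bytes; a minimal sketch (hypothetical helper, not the package's converter):

```python
# Bytes-per-unit factors: decimal (SI, powers of 1000) vs binary (IEC, powers of 1024).
FACTORS = {
    "B": 1,
    "KB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4,
    "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4,
}

def convert_data(value: float, src: str, dst: str) -> float:
    """Convert via bytes as the pivot unit."""
    return value * FACTORS[src] / FACTORS[dst]

print(round(convert_data(500, "GB", "GiB"), 2))  # 465.66 -- the "missing" space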
## 🎓 Learning Outcomes
This project demonstrates:
- ✅ Clean modular software architecture
- ✅ Defensive programming for mathematical systems
- ✅ Domain-driven design principles
- ✅ Test-driven development methodology
- ✅ Package organization and import management
- ✅ User experience design with clear feedback
- ✅ Performance optimization through algorithmic design
## 🛠️ Technology Stack
- **Language**: Python 3.10+
- **Standard Library**: math, pathlib, enum, typing
- **Testing**: pytest framework
- **Architecture**: Modular, function-dispatch based
- **Code Style**: PEP 8 compliant
## 📋 Recent Updates
### Version 2.2 - Package Reorganization (Current)
- ✅ **Proper Package Layout**: Introduced `calculator/` package with clear module boundaries
- ✅ **Central Config**: History files and precision settings consolidated in `calculator/config.py`
- ✅ **Base Converter**: Shared converter behavior in `calculator/converters/base.py`
- ✅ **Compatibility Shims**: Root-level `main.py`, `std.py`, `sci.py`, `converters.py` preserved for backward compatibility
- ✅ **History Directory**: History files stored under `history/` at repo root
### Version 2.0 - Major Refactor
- ✅ **Modular Converter Architecture**: Separated converters into independent modules
- ✅ **Enhanced User Interface**: Added emojis and improved formatting for better UX
- ✅ **Standardized Documentation**: Consistent docstrings and comments across all modules
- ✅ **Import System Overhaul**: Resolved all import conflicts with proper package structure
- ✅ **Expanded Unit Support**: Added pressure conversions (6 units, 30 conversion pairs)
- ✅ **Code Quality**: Standardized commenting style, improved error messages
- ✅ **Test Coverage**: All 392 tests passing with comprehensive edge case coverage
## 📝 Code Quality Standards
### Docstring Format
```python
def function_name(param: type) -> return_type:
    """
    Brief description of function purpose.

    Detailed explanation of behavior and any important notes.

    Args:
        param: Description of parameter

    Returns:
        Description of return value

    Raises:
        ErrorType: When this error occurs
    """
```
### Error Handling Pattern
```python
try:
    # Main operation
    result = operation()
except SpecificError:
    # Handle the specific, expected error
    errmsg()
except Exception:
    # Fallback handler for anything unexpected
    errmsg()
```
## 🤝 Contributing
Contributions are welcome! Please ensure:
1. All tests pass before submitting PR
2. New features include corresponding tests
3. Code follows PEP 8 style guide
4. Docstrings are provided for all functions
5. No external dependencies added without discussion
## 📄 License
This project is available for educational and personal use.
## 🙏 Acknowledgments
Built with rigorous attention to mathematical correctness, code quality, and user experience. This project serves as a comprehensive example of professional Python development practices.
---
**Built with precision, tested with rigor, designed with care.** 🎯
**Total Conversion Capabilities**: 1,440 unique conversions across 5 categories!
| text/markdown | Aryan Solanke | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-19T12:42:46.466811 | aryan_advanced_calculator-1.2.1.tar.gz | 39,255 | bc/51/d8b8209ad810f540e65268d8c5944f3b7de5f416fe0a2bc006d300c9b09f/aryan_advanced_calculator-1.2.1.tar.gz | source | sdist | null | false | 0230a9c7b389a9a89512d3ba766aea80 | 28f279512baab418f5edefbe4af524d9627b25b75a3266e4e1e25d815daf1f74 | bc51d8b8209ad810f540e65268d8c5944f3b7de5f416fe0a2bc006d300c9b09f | null | [] | 254 |
2.4 | add-watermark | 0.1.1 | Python CLI for batch image watermarking with logo or text | # wmk: Python CLI to watermark images (batch watermark, logo watermark, text watermark)
[](https://github.com/ibrahimm7004/add-watermark/actions/workflows/ci.yml)
[](https://pypi.org/project/add-watermark/)
[](https://pypi.org/project/add-watermark/)
[](LICENSE)
Python CLI for batch image watermarking with logo or text.
`wmk` is built for people searching "how to add watermark to image", "add watermark to photo", "put watermark on picture", "add logo to image", "add text watermark to photo", "batch watermark images", "watermark multiple photos at once", and "protect photos with watermark".
## 20-Second Quickstart: watermark images with a Python CLI
```powershell
pipx install add-watermark
wmk add
# Expert one-liner: add text watermark to photo
wmk add --input ".\photo.jpg" --text "(c) ACME" --pos br --opacity 40
```
## Install add-watermark for batch watermark, logo watermark, and text watermark
PyPI package name: `add-watermark`; CLI commands: `wmk` (primary) and `add-watermark` (alias).
### A) Install (Recommended): pipx
```powershell
pipx install add-watermark
```
If `pipx` is not on PATH yet:
```powershell
pipx ensurepath
```
### B) Install (Alternative): pip
```powershell
python -m pip install add-watermark
```
### C) Install from source (repo clone)
```powershell
git clone https://github.com/ibrahimm7004/add-watermark.git
cd add-watermark
```
Install from this repo with `pipx`:
```powershell
pipx install .
```
Install from this repo with `pip`:
```powershell
python -m pip install .
```
### D) Uninstall (pipx and pip)
```powershell
pipx uninstall add-watermark
python -m pip uninstall add-watermark
```
You can also run the beginner-friendly alias:
```powershell
add-watermark add
```
## Versioning
This project follows Semantic Versioning (`MAJOR.MINOR.PATCH`):
- `MAJOR`: breaking CLI/API changes
- `MINOR`: backward-compatible features
- `PATCH`: backward-compatible fixes
Check the installed CLI version:
```powershell
wmk --version
```
## Before and after: put watermark on picture
| Before | After |
| ------------------------------------------------ | ---------------------------------------------- |
|  |  |
Regenerate both demo images:
```powershell
python examples/generate_examples.py
```
## Add watermark to photo: wizard and expert one-liners
Beginner wizard:
```powershell
wmk add
```
Single image with text watermark:
```powershell
wmk add --input ".\photo.jpg" --text "(c) ACME Studio" --pos br --opacity 40
```
Single image with logo watermark:
```powershell
wmk add --input ".\photo.jpg" --watermark ".\logo.png" --pos tr --opacity 35
```
Batch watermark folder:
```powershell
wmk add --input ".\photos" --watermark ".\logo.png" --pos br --opacity 35
```
Batch watermark folder recursively:
```powershell
wmk add --input ".\photos" --recursive --text "(c) ACME" --pos br --opacity 35
```
Glob input:
```powershell
wmk add --input ".\photos\**\*.png" --watermark ".\logo.png" --opacity 35
```
Dry run (plan only):
```powershell
wmk add --input ".\photos" --watermark ".\logo.png" --dry-run
```
## Defaults
`wmk add` defaults (from current implementation):
- Default position: `br`
- Default opacity: `35` (`0` is invisible, `100` is fully visible)
- Default corner margin: `24px`
- Image watermark default scale: target width is about `20%` of base image width, clamped to at least `48px` and at most `80%` of base width
- Text watermark default scale: font size is about `5%` of image width, clamped to `16..256`
- Text watermark style: white text with black stroke and subtle shadow for readability
- Single-image default output: `<input_stem>_watermarked.<ext>` next to input
- Batch default output (folder input): `watermarked/` next to the input folder
- Batch default output (glob input): `./watermarked/` in current working directory
- Batch mode preserves relative folder structure for folder input
- For glob input, paths are preserved only when files are under the current working directory; otherwise output falls back to filenames under `./watermarked/`
- Without `--overwrite`, existing destination paths are not replaced; a `_watermarked` suffix is appended to avoid collisions
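The scale defaults above can be read as simple clamping rules. A rough sketch of the stated behavior (function names are illustrative, not the package's actual code):

```python
def logo_target_width(base_width: int) -> int:
    """Target logo width: ~20% of base width, clamped to [48px, 80% of base]."""
    return max(48, min(round(base_width * 0.20), round(base_width * 0.80)))

def text_font_size(base_width: int) -> int:
    """Text font size: ~5% of image width, clamped to the 16..256 range."""
    return max(16, min(round(base_width * 0.05), 256))

print(logo_target_width(1920))  # 384
print(text_font_size(1920))     # 96
```

The lower clamp keeps watermarks legible on tiny thumbnails, while the upper clamp prevents a logo from overwhelming the photo on very wide images.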
## Windows notes
- Quote paths that contain spaces:
- PowerShell/CMD: `--input "C:\Users\me\My Photos\photo 01.jpg"`
- Quote globs to keep behavior consistent across shells and avoid shell expansion differences:
- `--input ".\photos\**\*.jpg"`
- PowerShell example:
```powershell
wmk add --input ".\My Photos\Client A" --recursive --text "(c) ACME" --pos br --opacity 35
```
- CMD example:
```cmd
wmk add --input ".\My Photos\*.jpg" --watermark ".\brand logo.png" --pos tr --opacity 35
```
## Why offline CLI (vs online watermark tools)
If you are searching for "add watermark to image online free", an offline CLI is often better for production workflows:
- Privacy: images stay on your machine
- Speed: no upload/download loop
- Batch scale: watermark hundreds of files in one command
- Automation: scriptable for product photos, photography delivery, and repeatable pipelines
## Supported formats and behavior
- Input/output formats: `jpg`, `jpeg`, `png`, `webp`, `tiff`, `bmp`, `gif`
- GIF behavior: first frame only
- JPEG output is written as RGB (quality 95)
- PNG/WEBP preserve alpha when present
## Common searches this tool solves
- how to add watermark to image
- add watermark to photo
- put watermark on picture
- how to watermark a photo
- add logo to image
- add text watermark to photo
- batch watermark images
- bulk watermark photos
- watermark multiple photos at once
- add watermark to 100 photos
- signature watermark
- copyright watermark on images
## CLI reference
```text
wmk add [OPTIONS]
Options:
--input, -i File path, folder path, or glob pattern
--output, -o Output file (single mode) or output folder (batch mode)
--watermark, -w Watermark image path
--text, -t Text watermark content
--pos Watermark position: tl|tr|bl|br|c
--opacity Opacity 0-100 (0 invisible, 100 fully visible)
--recursive Recurse subfolders for folder input
--overwrite Overwrite existing outputs
--dry-run Print planned outputs without writing files
--verbose Show traceback/debug details
```
Validation rules:
- Exactly one of `--watermark` or `--text` in non-interactive mode
- If `--input` is missing, wizard prompts for required fields
## Library usage (optional)
CLI is the primary interface. If you need direct Python usage:
```python
from pathlib import Path
from watermarker.engine import process_single
process_single(
input_path=Path("photo.jpg"),
output_path=Path("photo_watermarked.jpg"),
text="(c) ACME",
position="br",
opacity=35,
overwrite=True,
)
```
## Open source trust signals
- CI workflow: [`.github/workflows/ci.yml`](.github/workflows/ci.yml)
- Tests: `pytest` suite for position logic, opacity mapping, and integration behavior
- Lint/format checks: `ruff check .` and `ruff format --check .`
- Contribution guide: [`CONTRIBUTING.md`](CONTRIBUTING.md)
- Code of conduct: [`CODE_OF_CONDUCT.md`](CODE_OF_CONDUCT.md)
- Changelog: [`CHANGELOG.md`](CHANGELOG.md)
- License: [`MIT`](LICENSE)
## FAQ and troubleshooting
### `wmk` command is not found on Windows
- For `pipx` installs, run `pipx ensurepath` and restart terminal.
- For `pip` installs, ensure your Python Scripts directory is on PATH.
### Why does GIF output look static?
- v1 processes only the first GIF frame.
### Why can text look slightly different across machines?
- The tool tries common system fonts first and falls back to Pillow default when needed.
## Development
```powershell
python -m pip install -e .[dev]
ruff check .
ruff format --check .
pytest
```
| text/markdown | watermarker maintainers | null | null | null | MIT License
Copyright (c) 2026 watermarker maintainers
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| watermark, images, cli, batch, pillow, typer | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"Pillow<12.0.0,>=10.0.0",
"rich<14.0.0,>=13.7.0",
"typer<1.0.0,>=0.12.3",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.6.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ibrahimm7004/add-watermark",
"Repository, https://github.com/ibrahimm7004/add-watermark",
"Issues, https://github.com/ibrahimm7004/add-watermark/issues",
"Changelog, https://github.com/ibrahimm7004/add-watermark/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:41:34.974179 | add_watermark-0.1.1.tar.gz | 13,211 | 62/f2/191a7443f02428688e86f46ed46df861467f38f73b2f358b0cbc23ecc2c7/add_watermark-0.1.1.tar.gz | source | sdist | null | false | aaa449bfd07e872f9b54211ed45372ce | ff7c4c0f3b77d6fb3c5124b4973b2ae9430e0bb8d17347040783a3fa518f052d | 62f2191a7443f02428688e86f46ed46df861467f38f73b2f358b0cbc23ecc2c7 | null | [
"LICENSE"
] | 271 |
2.3 | linkmerce | 0.6.6 | E-commerce API integration management |
# LinkMerce
**Unified e-commerce API management platform**
---
## Table of Contents
- [LinkMerce](#linkmerce)
- [Table of Contents](#table-of-contents)
- [Introduction](#introduction)
- [Installation and Usage](#installation-and-usage)
  - [PyPI Package](#pypi-package)
- [Structure](#structure)
  - [ETL Modules](#etl-modules)
    - [Per-Task ETL Layout (`core/`)](#per-task-etl-layout-core)
- [Extension Modules](#extension-modules)
- [Airflow Workflows](#airflow-workflows)
## Introduction
LinkMerce is a Python-based platform for managing a variety of e-commerce APIs in one place.
It supports API integration, data loading, ETL, and scheduling, and consists of a PyPI package plus Airflow workflows.
---
## Installation and Usage
### PyPI Package
1. Prepare a Python environment (>=3.10)
2. Install the package:
   `pip install linkmerce`
3. See the configuration files (`src/env/`) and the examples
---
## Structure
### ETL Modules
- **Extract**
  - The `Extractor` class in `extract.py` extracts data synchronously or asynchronously from external APIs, databases, files, and other sources.
  - It handles session management, request parameters, variable management, and parsing; actual extraction is implemented via the `extract` or `extract_async` method.
  - Example: a custom Extractor that fetches product information from a REST API
- **Transform**
  - The `Transformer` class in `transform.py` and its subclasses (`JsonTransformer`, `DBTransformer`, etc.) convert extracted data into the desired shape.
  - They accept JSON, database results, and other inputs, and handle parsing, filtering, type conversion, and restructuring.
  - Example: converting an API JSON response into a standard dataset, or shaping database results into a DataFrame
- **Load**
  - The `Connection` class and related functions in `load.py` load transformed data into databases, files, or external systems.
  - They support data warehouses such as DuckDB and BigQuery, along with SQL execution, file export (csv/json/parquet), and connection management.
  - Example: saving transformed data to DuckDB, uploading to BigQuery, or exporting to CSV/JSON files
Each module is built from abstract classes and methods, so custom implementations can be provided for your environment.
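The Extractor/Transformer split described above follows a common abstract-base-class pattern. The sketch below is a self-contained illustration of that pattern with toy data, not linkmerce's actual classes:

```python
from abc import ABC, abstractmethod

class Extractor(ABC):
    @abstractmethod
    def extract(self) -> list[dict]:
        """Pull raw records from a source (API, DB, file)."""

class Transformer(ABC):
    @abstractmethod
    def transform(self, records: list[dict]) -> list[dict]:
        """Reshape raw records into the target schema."""

class ProductExtractor(Extractor):
    def extract(self) -> list[dict]:
        # Stand-in for a real API call
        return [{"id": 1, "price": "1000"}, {"id": 2, "price": "2500"}]

class PriceTransformer(Transformer):
    def transform(self, records: list[dict]) -> list[dict]:
        # Cast string prices to integers before loading
        return [{**r, "price": int(r["price"])} for r in records]

rows = PriceTransformer().transform(ProductExtractor().extract())
print(rows)  # [{'id': 1, 'price': 1000}, {'id': 2, 'price': 2500}]
```

Keeping each stage behind an abstract interface is what lets the `api/` layer compose any extract/transform pair into a pipeline without knowing their internals.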
#### Per-Task ETL Layout (`core/`)
- Each business/data task has the following file layout inside the `core/` directory:
  - `extract.py`: defines the extraction logic the task needs (API calls, DB queries, etc.)
  - `transform.py`: defines the transformation logic that cleans, refines, and filters the extracted data
  - `models.sql`: defines the SQL models (tables/views/queries) used for loading and querying the data
- These files work together as follows:
  1. **extract.py** collects the source data
  2. **transform.py** converts the collected data for the business purpose
  3. **models.sql** provides the schemas/queries used to store or further process the data
- **How the API layer uses them**
  - The `api/` module imports each task's extract, transform, and models from `core/` and composes them into a single pipeline.
  - For example, an API that fetches ranking data for a product extracts with `core/rank_shop/extract.py`, transforms with `transform.py`, and loads/queries via `models.sql`.
  - This improves code reuse and maintainability, and keeps each task's ETL logic cleanly separated.
This structure is optimized for extensibility and modularity, so new data tasks can be added with a consistent ETL pipeline design.
---
## 확장 모듈
- **BigQuery 연동**: 확장 모듈(`extensions/bigquery.py`)을 통해 Google BigQuery에 데이터를 적재하거나 조회할 수 있습니다.
- **Google Sheets 연동**: 확장 모듈(`extensions/sheets.py`)을 통해 Google Sheets API를 활용한 데이터 연동이 가능합니다.
- 기타 외부 시스템 연동도 확장 모듈 구조로 손쉽게 추가할 수 있습니다.
- 확장 모듈에 대한 의존성은 명시되어 있지 않습니다.
---
## Airflow Workflows
- Airflow DAGs and related scripts/configuration for e-commerce data ETL and scheduling
- Main components:
  - DAGs: `airflow/dags/` (e.g. naver_brand_price, naver_brand_sales_first, naver_product_catalog)
  - Configuration: `airflow/config/airflow.cfg`, `docker-compose.yaml`
  - Launch scripts: `exec.sh`, `init.sh`
- Main features:
  - ETL for Naver SmartStore, search ads, rankings, and brand data
  - Integration with data warehouses such as BigQuery and DuckDB
  - Scheduled and trigger-based data pipelines
- Example run:
```bash
cd airflow
docker compose up airflow-init && docker compose up -d
```
- Example DAGs:
  - Load brand price data daily at midnight: `naver_brand_price`
  - Load brand sales data every morning: `naver_brand_sales_first`
  - Load hourly ad/organic ranking data: `naver_rank_ad`, `naver_rank_shop`
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.12.15",
"bcrypt>=4.3.0",
"brotli>=1.2.0",
"bs4>=0.0.2",
"duckdb>=1.3.2",
"jinja2>=3.1.6",
"nest-asyncio>=1.6.0",
"openpyxl>=3.1.5",
"pycryptodome>=3.23.0",
"pytz>=2025.2",
"requests>=2.32.4",
"ruamel-yaml==0.18.14",
"tqdm>=4.67.1"
] | [] | [] | [] | [] | uv/0.9.15 {"installer":{"name":"uv","version":"0.9.15","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T12:41:00.610117 | linkmerce-0.6.6.tar.gz | 163,762 | 22/49/e94efe42c2b4397c0cb0ac0e697e564cf3a3afa931dd11a375a7eeb93794/linkmerce-0.6.6.tar.gz | source | sdist | null | false | f9097a66a2d60d2aaf2edc5b6248ae84 | c4d3be09d03d21e3b0b553505aa8084d5ee376a98e7e4b6bd197e7b47f7360fd | 2249e94efe42c2b4397c0cb0ac0e697e564cf3a3afa931dd11a375a7eeb93794 | null | [] | 247 |
2.4 | mcpgate | 0.4.0 | A stateless gateway that turns any OpenAPI spec into MCP tools on the fly. | <div align="center" markdown="1">
[![Discord][badge-chat]][chat]
<br>
<br>
| | ![Badges][label-badges] |
|:-|:-|
| ![Build][label-build] | [![Nox][badge-actions]][actions] [![semantic-release][badge-semantic-release]][semantic-release] [![PyPI][badge-pypi]][pypi] [![Read the Docs][badge-docs]][docs] |
| ![Tests][label-tests] | [![coverage][badge-coverage]][coverage] [![pre-commit][badge-pre-commit]][pre-commit] [![asv][badge-asv]][asv] |
| ![Standards][label-standards] | [![SemVer 2.0.0][badge-semver]][semver] [![Conventional Commits][badge-conventional-commits]][conventional-commits] |
| ![Code][label-code] | [![uv][badge-uv]][uv] [![Ruff][badge-ruff]][ruff] [![Nox][badge-nox]][nox] [![Checked with mypy][badge-mypy]][mypy] |
| ![Repo][label-repo] | [![GitHub issues][badge-issues]][issues] [![GitHub stars][badge-stars]][stars] [![GitHub license][badge-license]][license] [![All Contributors][badge-all-contributors]][contributors] [![Contributor Covenant][badge-code-of-conduct]][code-of-conduct] |
</div>
<!-- Badges -->
[badge-chat]: https://img.shields.io/badge/dynamic/json?color=green&label=chat&query=%24.approximate_presence_count&suffix=%20online&logo=discord&style=flat-square&url=https%3A%2F%2Fdiscord.com%2Fapi%2Fv10%2Finvites%2FYe9yJtZQuN%3Fwith_counts%3Dtrue
[chat]: https://discord.gg/Ye9yJtZQuN
<!-- Labels -->
[label-badges]: https://img.shields.io/badge/%F0%9F%94%96-badges-purple?style=for-the-badge
[label-build]: https://img.shields.io/badge/%F0%9F%94%A7-build-darkblue?style=flat-square
[label-tests]: https://img.shields.io/badge/%F0%9F%A7%AA-tests-darkblue?style=flat-square
[label-standards]: https://img.shields.io/badge/%F0%9F%93%91-standards-darkblue?style=flat-square
[label-code]: https://img.shields.io/badge/%F0%9F%92%BB-code-darkblue?style=flat-square
[label-repo]: https://img.shields.io/badge/%F0%9F%93%81-repo-darkblue?style=flat-square
<!-- Build -->
[badge-actions]: https://img.shields.io/github/actions/workflow/status/MicaelJarniac/mcpgate/ci.yml?branch=main&style=flat-square
[actions]: https://github.com/MicaelJarniac/mcpgate/actions
[badge-semantic-release]: https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079?style=flat-square
[semantic-release]: https://github.com/semantic-release/semantic-release
[badge-pypi]: https://img.shields.io/pypi/v/mcpgate?style=flat-square
[pypi]: https://pypi.org/project/mcpgate
[badge-docs]: https://img.shields.io/readthedocs/mcpgate?style=flat-square
[docs]: https://mcpgate.readthedocs.io
<!-- Tests -->
[badge-coverage]: https://img.shields.io/codecov/c/gh/MicaelJarniac/mcpgate?logo=codecov&style=flat-square
[coverage]: https://codecov.io/gh/MicaelJarniac/mcpgate
[badge-pre-commit]: https://img.shields.io/badge/pre--commit-enabled-brightgreen?style=flat-square&logo=pre-commit&logoColor=white
[pre-commit]: https://github.com/pre-commit/pre-commit
[badge-asv]: https://img.shields.io/badge/benchmarked%20by-asv-blue?style=flat-square
[asv]: https://github.com/airspeed-velocity/asv
<!-- Standards -->
[badge-semver]: https://img.shields.io/badge/SemVer-2.0.0-blue?style=flat-square&logo=semver
[semver]: https://semver.org/spec/v2.0.0.html
[badge-conventional-commits]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow?style=flat-square
[conventional-commits]: https://conventionalcommits.org
<!-- Code -->
[badge-uv]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json&style=flat-square
[uv]: https://github.com/astral-sh/uv
[badge-ruff]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json&style=flat-square
[ruff]: https://github.com/astral-sh/ruff
[badge-nox]: https://img.shields.io/badge/%F0%9F%A6%8A-Nox-D85E00.svg?style=flat-square
[nox]: https://github.com/wntrblm/nox
[badge-mypy]: https://img.shields.io/badge/mypy-checked-2A6DB2?style=flat-square
[mypy]: http://mypy-lang.org
<!-- Repo -->
[badge-issues]: https://img.shields.io/github/issues/MicaelJarniac/mcpgate?style=flat-square
[issues]: https://github.com/MicaelJarniac/mcpgate/issues
[badge-stars]: https://img.shields.io/github/stars/MicaelJarniac/mcpgate?style=flat-square
[stars]: https://github.com/MicaelJarniac/mcpgate/stargazers
[badge-license]: https://img.shields.io/github/license/MicaelJarniac/mcpgate?style=flat-square
[license]: https://github.com/MicaelJarniac/mcpgate/blob/main/LICENSE
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[badge-all-contributors]: https://img.shields.io/badge/all_contributors-0-orange.svg?style=flat-square
<!-- ALL-CONTRIBUTORS-BADGE:END -->
[contributors]: #Contributors-✨
[badge-code-of-conduct]: https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa?style=flat-square
[code-of-conduct]: CODE_OF_CONDUCT.md
<!---->
# mcpgate
Welcome to **mcpgate's** documentation!
A stateless gateway that turns any OpenAPI spec into MCP tools on the fly.
[Read the Docs][docs]
## Installation
### PyPI
[*mcpgate*][pypi] is available on PyPI:
```bash
# With uv
uv add mcpgate
# With pip
pip install mcpgate
# With Poetry
poetry add mcpgate
```
### GitHub
You can also install the latest version of the code directly from GitHub:
```bash
# With uv
uv add git+https://github.com/MicaelJarniac/mcpgate
# With pip
pip install git+https://github.com/MicaelJarniac/mcpgate
# With Poetry
poetry add git+https://github.com/MicaelJarniac/mcpgate
```
## Quick Start
Run the server directly without installing:
```bash
# With uv
uvx mcpgate
# With pipx
pipx run mcpgate
```
## Usage
For more examples, see the [full documentation][docs].
```python
from mcpgate import mcp
mcp.run(transport="http")
```
## Examples
### Nametag
Connect an MCP client to the [Nametag](https://nametag.one) API without any custom server code.
**1. Start the gateway:**
```bash
uvx --prerelease=allow mcpgate --port 8000
```
**2. Configure your MCP client** (e.g. Claude Desktop — `claude_desktop_config.json`):
```json
{
"mcpServers": {
"Nametag": {
"baseUrl": "http://localhost:8000/mcp/",
"headers": {
"X-OpenAPI-URL": "https://app.nametag.one/api/openapi.json",
"X-API-URL": "https://app.nametag.one",
"X-Cookies": "YOUR_SESSION_COOKIE"
}
}
}
}
```
To get `YOUR_SESSION_COOKIE`, open the Nametag web app in your browser, open DevTools → Application → Cookies, and copy the value of the session cookie (e.g. `__Secure-authjs.session-token=<value>`). Pass it as `X-Cookies: __Secure-authjs.session-token=<value>`.
The gateway fetches the OpenAPI spec from `X-OpenAPI-URL` once and caches it, then proxies every MCP tool call to `X-API-URL` with your session cookie attached — no backend changes required.
## Headers
mcpgate is configured per-request via HTTP headers sent by the MCP client:
| Header | Required | Description |
|--------|----------|-------------|
| `x-openapi-url` | Yes | URL of the OpenAPI JSON specification to load |
| `x-api-url` | Yes | Base URL of the target API for proxied requests |
| `x-cookies` | No | Cookie string to forward with API requests |
When both `x-openapi-url` and `x-api-url` are present, mcpgate fetches the
OpenAPI spec, generates MCP tools from it, and proxies tool calls to the target
API. When these headers are absent, the server returns no tools.
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.
More details can be found in [CONTRIBUTING](CONTRIBUTING.md).
## Contributors ✨
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
## License
[MIT](../LICENSE)
This project was created with the [MicaelJarniac/crustypy](https://github.com/MicaelJarniac/crustypy) template.
| text/markdown | null | Micael Jarniac <micael@jarniac.dev> | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Typing :: Typed"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"fastmcp>=3.0.0",
"httpx[brotli,http2,zstd]>=0.28.1",
"loguru>=0.7.3",
"typer>=0.24.0"
] | [] | [] | [] | [
"homepage, https://github.com/MicaelJarniac/mcpgate",
"source, https://github.com/MicaelJarniac/mcpgate",
"download, https://pypi.org/project/mcpgate/#files",
"changelog, https://github.com/MicaelJarniac/mcpgate/blob/main/docs/CHANGELOG.md",
"documentation, https://mcpgate.readthedocs.io",
"issues, https:... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:40:44.729421 | mcpgate-0.4.0.tar.gz | 113,846 | b8/54/c05f2b45545155ade4d0af8f8b10ee36bfbd252c2876ac8ea62b04472c7b/mcpgate-0.4.0.tar.gz | source | sdist | null | false | 82a46d26ed4ecbcba2709b709acd8542 | ef3bceff2c741cbdc99c69d1b2ce349e1375f21cc2f7e3af2af9677c94f67798 | b854c05f2b45545155ade4d0af8f8b10ee36bfbd252c2876ac8ea62b04472c7b | null | [
"LICENSE"
] | 245 |
2.4 | terminaix | 1.0.0 | TerminaiX AI assistant CLI | # TerminaiX
AI assistant CLI by Mohamed.
| text/markdown | Mohamed | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests",
"openai"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T12:40:40.019850 | terminaix-1.0.0.tar.gz | 1,198 | 18/36/3caf4638117070010f820235d3e31eed49c9d165513b87a66c1896dc3a35/terminaix-1.0.0.tar.gz | source | sdist | null | false | cbbce5b9dc8233e5d6377e05025da7ec | 2450af5bf3cb1575ae5c4d664fd10e58219d5f1df8a7e05996d4fc78a0dcb6fb | 18363caf4638117070010f820235d3e31eed49c9d165513b87a66c1896dc3a35 | null | [] | 112 |
2.4 | snmpkit | 1.2.1 | High-performance SNMP toolkit for Python, powered by Rust | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/darhebkf/snmpkit/refs/heads/main/docs/public/logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/darhebkf/snmpkit/refs/heads/main/docs/public/logo-light.svg">
<img src="https://raw.githubusercontent.com/darhebkf/snmpkit/refs/heads/main/docs/public/logo-light.svg" alt="SNMPKIT" width="280">
</picture>
</p>
<p align="center">
<em>High-performance SNMP toolkit for Python, powered by Rust</em>
</p>
---
## Features
- **AgentX subagent**: full RFC 2741 compliance
- **Fast**: Rust core for PDU encoding and OID operations
- **Type-safe**: full type hints throughout
## Installation
```bash
uv add snmpkit
```
## Quick Start
```python
from snmpkit.agent import Agent, Updater
class MyUpdater(Updater):
async def update(self):
self.set_INTEGER("1.0", 42)
self.set_OCTETSTRING("2.0", "hello")
agent = Agent(agent_id="MyAgent")
agent.register("1.3.6.1.4.1.12345", MyUpdater(), freq=10)
agent.start_sync() # or: await agent.start()
```
## Documentation
**[snmpkit.dev](https://snmpkit.dev)** - Full documentation, guides, and API reference
## Development
Requires [kyle](https://github.com/achmedius/kyle) task runner. Linux/macOS/Unix only.
```bash
# First time setup (installs Rust, uv, bun, maturin)
kyle setup
# Or if you have the tools already
kyle setup:deps # Just install project dependencies
kyle dev # Build and install in dev mode
kyle test # Run all tests (Python + Rust)
kyle format # Format all code (Python + Rust + TS)
kyle lint # Lint all code
kyle docs:dev # Start docs dev server
kyle check # Type check and lint
```
## License
Check out the [License](LICENSE) for more information!
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | snmp, agentx, network, monitoring, rust | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.14",
"Programmi... | [] | null | null | >=3.11 | [] | [] | [] | [
"uvloop>=0.21",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest-xdist>=3.5; extra == \"dev\"",
"ruff>=0.9; extra == \"dev\""
] | [] | [] | [] | [
"Changelog, https://github.com/darhebkf/snmpkit/blob/main/CHANGELOG.md",
"Documentation, https://snmpkit.dev",
"Homepage, https://snmpkit.dev",
"Issues, https://github.com/darhebkf/snmpkit/issues",
"Repository, https://github.com/darhebkf/snmpkit"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:39:43.833300 | snmpkit-1.2.1.tar.gz | 293,942 | 1e/a9/60133d3ac02150e860ed6840b0ace7e7ee85d27f6d1001cd3db57c77a5c2/snmpkit-1.2.1.tar.gz | source | sdist | null | false | 3f0708ecfcff781f4e769be46cf048d8 | e06429616f40608e158942b8ba0d5589979c7d5bb177330c5a713f8b30f44c22 | 1ea960133d3ac02150e860ed6840b0ace7e7ee85d27f6d1001cd3db57c77a5c2 | AGPL-3.0-or-later | [
"LICENSE"
] | 527 |
2.1 | driverlessai | 2.3.2.1 | Python client for H2O Driverless AI. | # H2O Driverless AI Python Client
An intuitive Python client for [H2O Driverless AI](https://h2o.ai/products/h2o-driverless-ai/).
## Getting Started
### Installation
```shell
pip install driverlessai
```
```shell
conda install -c h2oai driverlessai
```
### Usage
#### Connecting to H2O Driverless AI
```python
import driverlessai
client = driverlessai.Client(address="http://localhost:12345", username="py", password="py")
```
#### Create a dataset
```python
dataset = client.datasets.create(
data="s3://h2o-public-test-data/smalldata/iris/iris.csv",
data_source="s3",
name="iris.csv",
)
```
#### Create an experiment
```python
experiment = client.experiments.preview(
train_dataset=dataset,
target_column="C5",
task="classification",
name="iris-experiment",
)
```
For more information, see the [Driverless AI Python Client documentation](http://docs.h2o.ai/driverless-ai/pyclient/docs/html/index.html).
## Supported Driverless AI Servers
The client version number indicates the most recent Driverless AI server supported by that specific client version.
However, all client versions are backwards compatible with Driverless AI servers down to version 1.10.0.
## Support
For additional support, please contact our [support team](mailto:support@h2o.ai).
| text/markdown | null | "H2O.ai" <support@h2o.ai> | null | null |
PLEASE READ THIS H2O.AI DRIVERLESS AI EVALUATION AGREEMENT ('AGREEMENT') CAREFULLY BEFORE USING THE EVALUATION SOFTWARE OFFERED BY H2O.AI, INC. ('H2O.AI'). BY CLICKING THE 'AGREE' (OR SIMILAR) BUTTON ON AN ONLINE ORDER FORM, BY USING THE EVALUATION SOFTWARE IN ANY MANNER, OR BY SIGNING AN ORDER FORM WHICH REFERENCES THESE EVALUATION TERMS AND CONDITIONS (AS APPLICABLE) YOU OR THE ENTITY YOU REPRESENT ('LICENSEE') AGREE THAT YOU HAVE READ AND AGREE TO BE BOUND BY AND A PARTY TO THE TERMS AND CONDITIONS OF THIS AGREEMENT TO THE EXCLUSION OF ALL OTHER TERMS, YOU REPRESENT AND WARRANT THAT YOU ARE AUTHORIZED TO BIND LICENSEE. IF THE TERMS OF THIS AGREEMENT ARE CONSIDERED AN OFFER, ACCEPTANCE IS EXPRESSLY LIMITED TO SUCH TERMS.
H2O.AI DRIVERLESS AI EVALUATION AGREEMENT
WHEREAS, H2O.ai is willing to supply, within the protection of a confidential relationship, the software, services and related materials provided in connection with this Agreement (collectively, the 'Evaluation Software') to Licensee solely for internal evaluation purposes and not for any production use ('Evaluation');
NOW, THEREFORE, in consideration of the foregoing and the mutual covenants hereinafter set forth, the parties hereby agree as follows:
1. Use of Evaluation Software. Subject to the terms of this Agreement, H2O.ai hereby grants to Licensee a personal, nontransferable, nonsublicensable, nonexclusive, revocable license to access and use the Evaluation Software only in accordance with all documentation supplied by H2O.ai solely for Licensee's internal Evaluation purposes during the term of this Agreement. H2O.ai shall at all times retain all title to and ownership of the Evaluation Software and all intellectual property rights relating thereto. Licensee agrees to use the Evaluation Software only in the ordinary course of its Evaluation. Licensee shall not (and shall not allow any third party to): (a) decompile, disassemble, or otherwise reverse engineer any portion of the Evaluation Software; (b) remove, alter or obscure any product identification, copyright or other notices contained on or in the Evaluation Software; (c) disclose, provide, distribute, resell, lease, lend or allow access to the Evaluation Software to any third party; (d) use the Evaluation Software for timesharing or service bureau purposes, or otherwise for the benefit of any third party; (e) copy, modify, adapt or create a derivative work of any part of the Evaluation Software; (f) use the Evaluation Software in excess of any limitations provided by H2O.ai;(g) use the Evaluation Software to help develop any competitive product or service; (h) remove or export the Evaluation Software or any direct product thereof from the United States. H2O
2. Feedback. If Licensee proposes any modifications, derivatives, enhancements or improvements to the Evaluation Software ('Feedback'), then notwithstanding anything else, Licensee hereby grants H2O.ai a perpetual, irrevocable, royalty free, fully paid-up, sub-licensable, right and license to use, display, reproduce, distribute and otherwise fully exploit such Feedback for any purposes.
3. Warning. THE EVALUATION SOFTWARE MAY CONTAIN A ROUTINE THAT CAUSES THE EVALUATION SOFTWARE TO CEASE PROPER FUNCTIONING AFTER A CERTAIN PERIOD OF TIME. THIS MAY OCCUR BEFORE OR AFTER TERMINATION OF THE LICENSE, SO LICENSEE MUST BE PREPARED AT ALL TIMES AND MAY NOT RELY ON THE EVALUATION SOFTWARE
4. Warranty Disclaimer. The parties acknowledge that the Evaluation Software are provided 'AS IS' and may not be functional on any machine or in any environment. H2O.AI DISCLAIMS ALL WARRANTIES RELATING TO THE EVALUATION SOFTWARE, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTIES AGAINST INFRINGEMENT OF THIRD-PARTY RIGHTS, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
5. Limitation of Remedies and Damages. H2O.AI SHALL NOT BE RESPONSIBLE OR LIABLE TO THE OTHER PARTY WITH RESPECT TO ANY SUBJECT MATTER OF THIS AGREEMENT UNDER ANY CONTRACT, NEGLIGENCE, STRICT LIABILITY OR OTHER THEORY (A) FOR LOSS OR INACCURACY OF DATA OR COST OF PROCUREMENT OF SUBSTITUTE GOODS, SERVICES OR TECHNOLOGY, (B) FOR ANY SPECIAL, INDIRECT, INCIDENTAL, PUNITIVE, RELIANCE, OR CONSEQUENTIAL DAMAGES INCLUDING, BUT NOT LIMITED TO LOSS OF REVENUES AND LOSS OF PROFITS, OR (C) FOR DAMAGES OF ANY KIND WHATSOEVER ARISING OUT OF THIS AGREEMENT IN EXCESS OF $100. H2O.AI SHALL NOT BE RESPONSIBLE FOR ANY MATTER BEYOND ITS REASONABLE CONTROL.
6. Nonassignability. Although fully assignable and transferable by H2O.ai, neither the rights nor the obligations arising under this Agreement are assignable or transferable by Licensee, and any such attempted assignment or transfer shall be void and without effect.
7. Controlling Law, Attorneys' Fees and Severability. This Agreement shall be governed by and construed in accordance with the laws of the State of California without regard to the conflicts of laws provisions therein. In any action to enforce this Agreement the prevailing party will be entitled to costs and attorneys' fees. In the event that any of the provisions of this Agreement shall be held by a court or other tribunal of competent jurisdiction to be unenforceable, such provisions shall be limited or eliminated to the minimum extent necessary so that this Agreement shall otherwise remain in full force and effect and enforceable.
8. Entire Agreement. This Agreement constitutes the entire agreement between the parties pertaining to the subject matter hereof. Any modifications of this Agreement must be in writing and signed by both parties.
9. Term; Termination. This Agreement shall become effective upon Licensee's first access to or use of the Evaluation Software ('Start Date'). This Agreement may be terminated by either party for any reason or no reason upon written notice to the other party, or immediately upon notice of any breach by Licensee of the provisions of this Agreement, and in any case will terminate twenty-one (21) days from the Start Date, unless extended by H2O.ai in writing. Upon termination, the license granted hereunder will terminate and Licensee shall promptly cease accessing the Evaluation Software, and shall return any and all documents, notes and other materials regarding the Evaluation Software to H2O.ai, including, without limitation, all copies and extracts of the foregoing, but the terms of this Agreement will otherwise remain in effect.
GDSVF&H\2472254.1
| h2o, driverless, ai, client, automl, machine learning, data science, artificial intelligence | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests",
"urllib3>=1.26.0",
"tabulate",
"toml",
"packaging>=21.0",
"fsspec>=2024.5.0; extra == \"optional\"",
"pandas; extra == \"optional\""
] | [] | [] | [] | [
"Homepage, https://h2o.ai/products/h2o-driverless-ai/",
"Documentation, https://docs.h2o.ai/driverless-ai/pyclient/docs/html/index.html",
"Changelog, https://docs.h2o.ai/driverless-ai/pyclient/docs/html/CHANGELOG.html"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-19T12:39:41.829814 | driverlessai-2.3.2.1-py3-none-any.whl | 2,809,083 | d1/4b/ba95ae54b866906a17a332e4c069831f997793f382a766140006410bab1f/driverlessai-2.3.2.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 6d400c8e2f63627c5c876981cf5a482f | a824cf8c2ef199f49a1c863906269c9c30d82bc1ab7f20c65ba2256046ead271 | d14bba95ae54b866906a17a332e4c069831f997793f382a766140006410bab1f | null | [] | 572 |
2.4 | module-qc-analysis-tools | 2.8.0rc0 | Module qc analysis tools | # module-qc-analysis-tools v2.8.0rc0
A general Python tool for running ITkPixV1.1 module QC test analysis. An
overview of the steps in the module QC procedure is documented in the
[Electrical specification and QC procedures for ITkPixV1.1 modules](https://gitlab.cern.ch/atlas-itk/pixel/module/itkpix-electrical-qc/)
document and in
[this spreadsheet](https://docs.google.com/spreadsheets/d/1qGzrCl4iD9362RwKlstZASbhphV_qTXPeBC-VSttfgE/edit#gid=989740987).
---
<!-- sync the following div with docs/index.md -->
<div align="center">
<!--<img src="https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-analysis-tools/-/raw/main/docs/assets/images/logo.svg" alt="mqat logo" width="500" role="img">-->
<!-- --8<-- [start:badges] -->
<!-- prettier-ignore-start -->
| | |
| --- | --- |
| CI/CD | [![CI - Test][cicd-badge]][cicd-link] |
| Docs | [![Docs - Badge][docs-badge]][docs-link] |
| Package | [![PyPI - Downloads - Total][pypi-downloads-total]][pypi-link] [![PyPI - Downloads - Per Month][pypi-downloads-dm]][pypi-link] [![PyPI - Version][pypi-version]][pypi-link] [![PyPI platforms][pypi-platforms]][pypi-link] |
| Meta | [![GitLab - Issue][gitlab-issues-badge]][gitlab-issues-link] [![License - MIT][license-badge]][license-link] |
[cicd-badge]: https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-analysis-tools/badges/main/pipeline.svg
[cicd-link]: https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-analysis-tools/-/commits/main
[docs-badge]: https://img.shields.io/badge/documentation-mkdocs-brightgreen?style=for-the-badge&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABNCAYAAAAW92IAAAAAIGNIUk0AAHomAACAhAAA+gAAAIDoAAB1MAAA6mAAADqYAAAXcJy6UTwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAACxMAAAsTAQCanBgAAAAHdElNRQfnAhsVAB+tqG4KAAANnklEQVR42u2cf5BcVZXHP+f168lkQn6QAJJsFgP4AyGajIuKISlqdREEtEopyyrZslbUtVjFcg1QpbuliEWpZQK1u7ilUuW6K1JSurWliL9/gBncFSWTEFiEkCWYYCJOmMTM5Md097v7x/fcfq9nejqvkw7DujlVr6a733v33nPuued8z7nnjlGSVgytJ5n6cz9wKvAnwAuBM4ClwOn++8nAPGAOMBuYBaRAZZpuGkAdOAwcAsaBPwCjwO+BXcBO4Dd+7fTfDxUbCcHYtObDpfhKywqgwHw/8Grg9cD5wFnO7ElAH2Bl22xDFb9mueDaUQAmgDFgBNgG/Ar4MfAAcMgslO6wtAAAQjAzCzcA1zvDM0HmApoFLAJeClwGfBj4bAj2SbNQWgJJ2QcBzEIFzf5MMd+JTgJe7WMsTV0JwCmbaU57ObajEcAfFZ0QwEwPYKbphABmegAzTd0KwDg2oHO8qevxdSeAQB0hr/JQ67mjAGwLGmNp6gIJGghh3YSg56XAK1AcMK+7tnpCdRQn/BbYDHwfuMeMYFZeCUoPOtDUrWeBOyDcCbaI1kDoT1EgdAqwAJgLDCDYWvX+UqR51qaLjDwgmkBB0QFgP7AX2EMeED3l19P+exMEhfJIuLwAbIrWW4Yisd8Dm3IughlWJcfr/YXPfQVBJORLMPOrDtQKzMeo8DBwOIRQs24inV4KwKmCgo/taGbaCMpitDaBZq5nZGYtqjiJBoBlwOMuyOMigAT4OxTnfwfYiNRwNMBBey6Mo0FQWDrbx/FC4JUoIhwF/qqb5roVQEDr+o1+7Qd+BzxtsAOtzV1oWYwiIzUOHERqXENrvOFtRYFF9xXzAXEJ9aNkyjxn9lRgsZktRfZmKfACHxPA3XQ5Ccdquef69aJJv8f1PNHmqk8SQhRAZD5FtqJ4Vf3qOXA7Xq4rKQz+eU0noPBMD2Cm6YQAjuKdP6qUWLdGsA58CTgNeDnPn+ToGLDFx3a8gqEmfQv4GbAcAZDlaG9gMbDQhRI3QHoVOgfyDZMxhDF2ocj0YQTIHkbxQld0tG5wLzAEDFmSELJsNsIDJ7sQ4rUAmO9CGSCPC6rI50cBBYQLauT4/4AzG3eGRlHQ86x/3p8kdjDLjg18disAC1CxgpqFLAMhvYPAM0d4nYyMBDMC1sK+EbIQQpIkUDKam8J8IMVoEMqjwW6NYGrwfuAKFPKWUvFQ+JTolYA1I8DMP4fErMl8aQ5CMB/LFRjvB9JuFt7RxAIXA58FHgUeQmvvCRQHjCCVPQBMQKiDhaMxBE3lCJhZEx7PRnFB3JB9MWbLUWLmZcAPgM9108/R2IAMreFX+AVav+PkiYtR/bV9wD6/N04e28egKLqthKlB0ABwkhnzkB05GdmUBeQ7zpU2Y+uKehULVHxQ89DM/J+hE0hwpgcw03RCAF0+//98Y0QA6MmZ5rIDPRmOWywQmhsjnwAeJN8YWYxg8HO9MdJAbve3CI98H7jbIBDKz2v5jRELUbf2AP8K3IFqdJaizOwy8gqxuDESY4CYHosVYnFjpAiGJ2+MxLggxgR7ve/dKAEbN0Z2+O+N5mCtPBzoej0PDq0v02YfrRsjMQgqboy0C4Ymb4wcav0bah25CzC8Zm1X/PTEoK0cWv+cW8YQYFOXzJ6gNnTME9duSQyv7v3MTNfPyqF1hbhalCUVNq/6U
Kl2jwkIrdywnlo9m9JWCTtx1MyH0DrmyHwIweqHJzSQrFG67bTYuCFr1M0MVtMEYA3KE9yJUmY9p0q1QqPWuMqMy1E4Ply4fZqZfSyd1bcLWIcMZzkBoG2tM4GtAbbHRMR0s1gUTkHx3gK83du7h6JL6hE1ao1+4N3An6McRFEAg8A1yEXeSRdgLQG+DnwX+CdgIDKVVJrsLQHOYzrMYE1BQqtr6zXFnEGxv0hV8iRS16Wyp/tLC4svZ40Aiu+/iDItFxwnxo6GwhG+l6YUuBrl+H+ehWx/Yi02Zi5KNS1BJTDHjZb/9Baq1dCh/qE7mt4QB4ZXX9cigO/6pdoOa3m5Tp5makxpuCTi7GRPcjemSXTmtZTM6mUzxB2oAphh9SjeOJ7h1WtJkQW/ArjLkmSjv/Ri4FzygxAAr0Gp7xR4IsAWS46geiEweP8t8VsCXIK06pvA4UkIsg+40J95CZASwk7gJ2gJlrXsWb3eIE0rFzhf5wKVQNgB/BT4EbAvZIHBofWkwHXAm53RjQi33wpcPqnhv/UL4EFTNDgy3ShWbliP5cxXkZv8hE/1E8DGAvOnAzcC7yCv9oj0PhfAJzmyzgVgQZpWbgb+BgVkk9v6EfBRS2w4ZBkpitYAZmfBSCzUgPv85T5kH/pR8dEzSKXuZ5oiqUiFUr05wEdd0H3Av9HqphYCtwFXoiX3Q3T8ZQ8KtS9Bwl6G0uKdmO9zQV4B/Bq4Hdjq712AynouRVHrVZYkD6XkFjQ0JIAGAhO3+YPfQ3t/n0Y+NkXRWaOEtVoEfAp4jzN3Cyq03CchGSGEa5z5MeDjyOuMFdq4DVgL3IA0qZMATgHeBHwN+AiqZot3P4fxRuAf0X7mTcA7W/xpDNCDGjuIcvlRQM16veYLGZ3A9BloKb0VJS5u8s4nmmMK4UzkhcRoCLcytQ5w1N9dArzrCAI34D/RUt3d4lGMBvBtpPH/gjTrTW3BzfDqtdFSWrt7kQY3TLXulSSpN7LsPLRDcxFCZzcQwh1tmFuNtOtJ4PZ4v6UPjWMC+Gef3VM6CCBDar8blMAaXnNdsR1QJdl9aDlc2euscGhk2UXAV535x4B3Al9pxxz5ztKDKLtDNmlEBS/4qF/TkaHyvP9qTtaa3N8X+j2ItvcBVvRaAIPAl4EV/v0JdJZPVIjSQshAbha0r9gIBDavag3ECkmPg6gmYDpKUNrs2TaCnkw7/O/CXgtgmV9bUWrrMmQ8hSWSCivv/QwAjXoDcvuicRwZ83Qyu4HW+uNOFCF/1msB1NEavBT4gg/4fcj/9wNYKseTVqugKtMouKqZsXLDLS0NDm5YFz/OQV6pkwAWovKdI+Ukzva/z/RaAEPoVOn/AH+PrK0BH0RYwN1YcyKHfeDnI/TZYidXblhXBBRxC7yTABahcJnQqLcIofB5PvA6//xA17n8FslOVdlRYMxd6T4XxgDKFdyA3Oo68nzB/ciwnQtc64KqvXToZgboL9b9z/F7C0oM8T3APVZJt04Zr+gqBIr+ANx1JA2o+QVwViWtAFS1S9KWDI+pnPYAH0JZolnAx4BrCM2di53A55H7uhotlVMHtFrwkx9LkB25knLh13nIZa4IjZbHZyMccSMCc98Aflys5ErCVL72Ild2DnBto97w0jj7BwRZi4wX/xZpN/ABH8DFwM2YPQ38h9//EoLb70Xo7WLgXuTSFqNT6i/3/mJyZnI/EcPtQUDocuAeqyQ/QBB+NgrmLkK26Gcotjicoq0lgF3VSqOlfDsIgNyKorNzkCqPo10hkV6Ixmw3hAZYEUyB3M41zuwaciOEt3e9C+qvkT04v3B/zLXk08BnXAC/o5XGXTv2oqX2c7Sk3tXmuTsQ5N6Oq8KNKI83BLnNKQjiviCJ/hnKEG0FfjGp4dtd0g+ABavIywyvXiu0qDa3IVC0IvZVoH0+qH8H/sKF3YfA0Q8RuKn5M99AwVKRfolsxAjw2KHxiU/1z+m7G3gDsi8p+ocL97pwm
v9wwSYbiYCxabX++8Lg0PqOGZosgaTdqpy0RTWdS6pX+tjy2mun3DeMWmiQWmcTNUnL2lJiCRP1Gmml0vb959Vef0v+n1zwGbC5A7IbHFrX8n149XV5nDKJw7plbLnw+plm9flDx7w7PNxxZlo3XQiBjYUA5Vjb74kApD55UjKeEJzqZ6xYgboAIaqnSvSRArMMxps9WGs1bGipmwXyM0IHWtxSz3LGzY5J3O5XwGjUa95ViIeYml1nGskA+YnQ16JkY2yueBrUgNTvzQfODvkzlY1PXVZ8J3HmjTyQWYTig9iihSBBBcGsOMbmaTP3XsXTZ8U2LQoaSJrPGkmKfPJZwOOVtG87sNCw1yG3AYKhvzYxvRyBiiEXxCrDDiMXuMx/24b8+C7D5qFtrFNRoLIMsMEzvvOQdMDO8r43Aa9CbqwPYZMYKg9grDHYBMkrCfxCY7al/vxvgCVB3I0gnDCCwuKlyOWlwBbDXgAsDIHdKLbYk6DkZhWBHRBOnwv8tzc2gf5zzDxveCcCFHtR4vFMlM7ei/D1oL//OMre9KMIbRUCNQ8DdUfMYy6YJShk3o5KYs8ATvMZO4Bg9EtQwPQylGJbgDJJsRR3uTNaRVjlfB/zbufrzQgN9pOX9j+SoNRQg3y/Ldbq9LsUN6O0dURas1zFYj1PVLNTxBh7yM8GZgW1NFftJMgCnI6Ckpq3V/e2x52pSpIXOz0CrESnQs7zWY9Hcy/0ZVakPpTBrvvnCZ+8Awg0bUO5zf2VxVdfMof8VPaoD2KVM/+wz9KAS3uPL4lnfKAjrmKP+kyO+edD3ta4tzPuTMwHUlMRdeZ9jXjf5/jsPdZk0HjWJyRq3GPASGiw2xJq3kcU9E5neAU6Uf4QOuc8H2WmYug919saA0atDZKaD7wNVYLVXMIBBRnH82zwX6Jk5Y7mL90b/bOR6t/l39/qQntkuhf+F0N4SOsZwIo7AAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDIzLTAyLTI3VDIwOjU4OjQ2KzAwOjAwDOG2KgAAACV0RVh0ZGF0ZTptb2RpZnkAMjAyMy0wMi0yOFQwMTo1ODo0MCswMDowMKx+Qb4AAAAodEVYdGRhdGU6dGltZXN0YW1wADIwMjMtMDItMjdUMjE6MDA6MzErMDA6MDBa3S3tAAAAAElFTkSuQmCC
[docs-link]: https://atlas-itk-pixel-mqat.docs.cern.ch
[gitlab-issues-badge]: https://img.shields.io/static/v1?label=Issues&message=File&color=blue&logo=gitlab
[gitlab-issues-link]: https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-analysis-tools/-/issues
[pypi-link]: https://pypi.org/project/module-qc-analysis-tools/
[pypi-downloads-dm]: https://img.shields.io/pypi/dm/module-qc-analysis-tools.svg?color=blue&label=Downloads&logo=pypi&logoColor=gold
[pypi-downloads-total]: https://pepy.tech/badge/module-qc-analysis-tools
[pypi-platforms]: https://img.shields.io/pypi/pyversions/module-qc-analysis-tools
[pypi-version]: https://img.shields.io/pypi/v/module-qc-analysis-tools
[license-badge]: https://img.shields.io/badge/License-MIT-blue.svg
[license-link]: https://spdx.org/licenses/MIT.html
<!-- prettier-ignore-end -->
<!-- --8<-- [end:badges] -->
</div>
| text/markdown | null | Jay Chan <jay.chan@cern.ch> | null | Giordon Stark <gstark@cern.ch> | Copyright (c) 2022 ATLAS ITk Pixel Modules
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"arrow",
"matplotlib",
"module-qc-data-tools>=1.5.0rc1",
"module-qc-database-tools>=2.10.0rc1",
"module-qc-tools>=2.7.1",
"numpy",
"pillow<11.3.0",
"pyparsing<3.3.0",
"scipy",
"typer>=0.18.0"
] | [] | [] | [] | [
"Homepage, https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-analysis-tools",
"Bug Tracker, https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-analysis-tools/-/issues",
"Source, https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-analysis-tools",
"Documentation, https://atlas-itk-pixel-mqat.doc... | twine/6.2.0 CPython/3.11.4 | 2026-02-19T12:38:30.451958 | module_qc_analysis_tools-2.8.0rc0.tar.gz | 73,693,016 | a7/bb/77cbfc5d0b5e62ceab5f35e2dad1100e2ae6ba1dc3d84641575ce093ca06/module_qc_analysis_tools-2.8.0rc0.tar.gz | source | sdist | null | false | ef79c93e6b46a74c3fa116fd07f94c14 | 2603e46900042bce3bf16d36379bf757b00831648755f448c49f2903a7ad3ef3 | a7bb77cbfc5d0b5e62ceab5f35e2dad1100e2ae6ba1dc3d84641575ce093ca06 | null | [
"LICENSE"
] | 292 |
2.4 | corrode | 0.1.2 | A Rust-like Result type for Python | # corrode
[](https://github.com/deliro/corrode/actions/workflows/ci.yml?query=branch%3Amain)
[](https://codecov.io/gh/deliro/corrode)
A Rust-like `Result` type for Python 3.11+, fully type annotated.
## Table of Contents
- [Installation](#installation)
- [Why](#why)
- [Quick start](#quick-start)
- [Exhaustive error handling](#exhaustive-error-handling)
- [Adopting corrode in an existing codebase](#adopting-corrode-in-an-existing-codebase)
- [API reference](#api-reference)
- [Pattern matching](#pattern-matching)
- [Transforming values](#transforming-values)
- [Chaining with `and_then` / `or_else`](#chaining-with-and_then--or_else)
- [Combining results with `zip`](#combining-results-with-zip)
- [Predicates](#predicates)
- [Inspecting](#inspecting)
- [Async methods](#async-methods)
- [`do` notation](#do-notation)
- [`@as_result` / `@as_async_result`](#as_result--as_async_result)
- [Escape hatches](#escape-hatches)
- [Iterator utilities](#iterator-utilities)
- [`collect`](#collect)
- [`map_collect`](#map_collect)
- [`partition`](#partition)
- [`filter_ok`](#filter_ok)
- [`filter_err`](#filter_err)
- [`try_reduce`](#try_reduce)
- [Async iterator utilities](#async-iterator-utilities)
- [`collect`](#collect)
- [`map_collect`](#map_collect)
- [`partition`](#partition)
- [`filter_ok_unordered`](#filter_ok_unordered)
- [`filter_err_unordered`](#filter_err_unordered)
- [`filter_ok`](#filter_ok-1)
- [`filter_err`](#filter_err-1)
- [`try_reduce`](#try_reduce-1)
- [License](#license)
## Installation
```sh
uv add corrode
```
or with pip / poetry:
```sh
pip install corrode
poetry add corrode
```
## Why
Exceptions are implicit. Nothing in a function signature tells you it can
raise, what it raises, or whether the caller remembered to handle it.
Bugs hide until production, and `except Exception` becomes the norm:
```python
from dataclasses import dataclass
@dataclass
class User:
id: int
name: str
# Can this raise? What exceptions? The signature doesn't tell you.
def get_user(user_id: int) -> User:
if user_id <= 0:
raise ValueError(f"Invalid user ID: {user_id}")
if user_id == 13:
raise PermissionError("Access denied")
return User(id=user_id, name="Alice")
# The caller has no idea this can fail — until it does in production
user = get_user(1)
assert user.name == "Alice"
```
`Result` makes errors explicit, typed, and impossible to ignore:
```python
from dataclasses import dataclass
from corrode import Result, Ok, Err
@dataclass
class User:
id: int
name: str
@dataclass
class NotFound:
user_id: int
@dataclass
class Forbidden:
reason: str
# Now every caller sees the possible errors in the signature
def get_user(user_id: int) -> Result[User, NotFound | Forbidden]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
if user_id == 13:
return Err(Forbidden(reason="banned"))
return Ok(User(id=user_id, name="Alice"))
# Caller must handle the Result — can't accidentally ignore errors
assert get_user(1) == Ok(User(id=1, name="Alice"))
assert get_user(-1) == Err(NotFound(user_id=-1))
```
Now every caller sees the possible errors in the signature, the type checker
verifies every branch is handled, and adding a new error variant is
a compile-time breaking change — not a runtime surprise.
## Quick start
`Result[T, E]` is a union of `Ok[T] | Err[E]`. Every `Result` must be explicitly
handled — no silent `None`s, no uncaught exceptions.
```python
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
email: str
@dataclass
class NotFound:
user_id: int
@dataclass
class Forbidden:
reason: str
type GetUserError = NotFound | Forbidden
def get_user(user_id: int) -> Result[User, GetUserError]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
if user_id == 13:
return Err(Forbidden(reason="banned"))
return Ok(User(id=user_id, name="Alice", email="alice@example.com"))
# Test it works
assert get_user(1) == Ok(User(id=1, name="Alice", email="alice@example.com"))
assert get_user(-1) == Err(NotFound(user_id=-1))
```
## Exhaustive error handling
Use a nested `match` on the error value together with `assert_never` to get
a compile-time guarantee that every error variant is handled:
```python
from dataclasses import dataclass
from typing import assert_never
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
@dataclass
class NotFound:
user_id: int
@dataclass
class Forbidden:
reason: str
type GetUserError = NotFound | Forbidden
def get_user(user_id: int) -> Result[User, GetUserError]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
if user_id == 13:
return Err(Forbidden(reason="banned"))
return Ok(User(id=user_id, name="Alice"))
match get_user(42):
case Ok(user):
print(f"Welcome, {user.name}")
case Err(e):
match e:
case NotFound(user_id=uid):
print(f"User {uid} does not exist")
case Forbidden(reason=reason):
print(f"Access denied: {reason}")
case _:
assert_never(e)
```
Now add a new error variant — `mypy` immediately reports that the new case
is not handled, forcing you to update the code before it passes type checking:
```python
from dataclasses import dataclass
from typing import assert_never
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
@dataclass
class NotFound:
user_id: int
@dataclass
class Forbidden:
reason: str
@dataclass
class RateLimited:
retry_after: float
type GetUserError = NotFound | Forbidden | RateLimited
def get_user(user_id: int) -> Result[User, GetUserError]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
if user_id == 13:
return Err(Forbidden(reason="banned"))
if user_id == 100:
return Err(RateLimited(retry_after=60.0))
return Ok(User(id=user_id, name="Alice"))
# Now we must handle all three error variants
match get_user(100):
case Ok(user):
print(f"Welcome, {user.name}")
case Err(e):
match e:
case NotFound(user_id=uid):
print(f"User {uid} does not exist")
case Forbidden(reason=reason):
print(f"Access denied: {reason}")
case RateLimited(retry_after=seconds):
print(f"Rate limited, retry after {seconds}s")
case _:
assert_never(e)
```
You are forced to handle the new case before the code passes type checking.
No error silently slips through.
## Adopting corrode in an existing codebase
You don't have to rewrite everything at once. Exceptions don't disappear
overnight, and third-party libraries will always raise them. That's fine —
`corrode` is designed for gradual adoption.
### Step 1: wrap existing functions with `@as_result`
You have code that raises. Don't rewrite it yet — just wrap it:
```python
import os
from corrode import as_result, Ok, Err
# Before: raises KeyError or ValueError, and nobody knows about it
def parse_port_unsafe(key: str) -> int:
return int(os.environ[key])
# After: signature tells you exactly what can go wrong
@as_result(KeyError, ValueError)
def parse_port(key: str) -> int:
return int(os.environ[key])
# Test that it works
os.environ["TEST_PORT"] = "8080"
assert parse_port("TEST_PORT") == Ok(8080)
assert isinstance(parse_port("MISSING_KEY").err(), KeyError)
```
The function body stays the same. The only change is the decorator, and
the callers now get a `Result` instead of praying nothing blows up:
```python
import os
from corrode import as_result, Ok, Err
@as_result(KeyError, ValueError)
def parse_port(key: str) -> int:
return int(os.environ[key])
def start_server(port: int) -> None:
pass # placeholder
os.environ["PORT"] = "3000"
match parse_port("PORT"):
case Ok(port):
start_server(port)
case Err(KeyError()):
start_server(8080)
case Err(ValueError() as e):
print(f"Invalid PORT: {e}")
```
### Step 2: return `Err(exception)` explicitly
Once callers are adapted, you can drop the decorator and return errors
explicitly. The function still uses exception classes, so the callers
don't change:
```python
import os
from corrode import Ok, Err, Result
def parse_port(key: str) -> Result[int, KeyError | ValueError]:
raw = os.environ.get(key)
if raw is None:
return Err(KeyError(key))
try:
return Ok(int(raw))
except ValueError as exc:
return Err(exc)
os.environ["PORT"] = "8080"
assert parse_port("PORT") == Ok(8080)
assert isinstance(parse_port("MISSING").err(), KeyError)
```
### Step 3: replace exceptions with domain types
When you're ready, replace exception classes with dataclasses that carry
exactly the data the caller needs:
```python
import os
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class MissingKey:
key: str
@dataclass
class InvalidValue:
key: str
raw: str
type ConfigError = MissingKey | InvalidValue
def parse_port(key: str) -> Result[int, ConfigError]:
raw = os.environ.get(key)
if raw is None:
return Err(MissingKey(key=key))
try:
return Ok(int(raw))
except ValueError:
return Err(InvalidValue(key=key, raw=raw))
os.environ["PORT"] = "8080"
assert parse_port("PORT") == Ok(8080)
assert parse_port("MISSING") == Err(MissingKey(key="MISSING"))
```
Each step is a small, safe refactoring. Your callers get progressively
better types, and `mypy` catches every unhandled case.
### Exceptions inside Result-returning code
Third-party libraries raise exceptions — that's fine. A `try/except`
inside a function that returns `Result` is completely normal:
```python
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class NotFound:
url: str
@dataclass
class Unavailable:
url: str
status: int
def fetch_data(url: str) -> Result[bytes, NotFound | Unavailable]:
    # Simplified stand-in: a real version would wrap the HTTP call in try/except
if "notfound" in url:
return Err(NotFound(url=url))
if "error" in url:
return Err(Unavailable(url=url, status=500))
return Ok(b"data")
assert fetch_data("https://example.com") == Ok(b"data")
assert fetch_data("https://notfound.com") == Err(NotFound(url="https://notfound.com"))
```
You catch the exception, convert it to a typed `Err` with exactly the
data the caller needs, and the rest of your code stays in `Result`-land.
No need to wrap every library call — just handle exceptions where they
happen and return a meaningful error.
## API reference
### Pattern matching
The preferred way to handle results. `Ok` and `Err` support structural
pattern matching, and combined with `assert_never` you get compile-time
guarantees that every case is handled:
```python
from dataclasses import dataclass
from typing import assert_never
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
balance: int
@dataclass
class NotFound:
user_id: int
@dataclass
class InsufficientFunds:
have: int
need: int
type PaymentError = NotFound | InsufficientFunds
def get_user(user_id: int) -> Result[User, NotFound]:
if user_id == 42:
return Ok(User(id=42, name="Alice", balance=100))
return Err(NotFound(user_id=user_id))
def charge(user: User, amount: int) -> Result[User, InsufficientFunds]:
if user.balance < amount:
return Err(InsufficientFunds(have=user.balance, need=amount))
return Ok(User(id=user.id, name=user.name, balance=user.balance - amount))
def process_payment(user_id: int, amount: int) -> Result[User, PaymentError]:
match get_user(user_id):
case Err(e):
return Err(e)
case Ok(user):
return charge(user, amount)
# Handle all cases exhaustively with nested match
match process_payment(42, 50):
case Ok(user):
print(f"{user.name} charged, new balance: {user.balance}")
case Err(e):
match e:
case NotFound(user_id=uid):
print(f"User {uid} not found")
case InsufficientFunds(have=h, need=n):
print(f"Need {n}, but only have {h}")
case _:
assert_never(e)
```
### Transforming values
Transform the success value with `map`, or the error with `map_err`.
The other variant passes through unchanged:
```python
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
@dataclass
class ApiError:
code: int
message: str
def get_user(user_id: int) -> Result[User, ApiError]:
if user_id == 42:
return Ok(User(id=42, name="Alice"))
return Err(ApiError(code=404, message="User not found"))
def get_name(user: User) -> str:
return user.name
def format_error(err: ApiError) -> str:
return f"Error {err.code}: {err.message}"
# Extract just the name from successful result
assert get_user(42).map(get_name) == Ok("Alice")
assert get_user(0).map(get_name) == Err(ApiError(code=404, message="User not found"))
# Transform error into a user-friendly message
assert get_user(0).map_err(format_error) == Err("Error 404: User not found")
assert get_user(42).map_err(format_error) == Ok(User(id=42, name="Alice"))
# Get the value or a default
assert get_user(42).map_or("Unknown", get_name) == "Alice"
assert get_user(0).map_or("Unknown", get_name) == "Unknown"
# Compute default from the error
def error_placeholder(err: ApiError) -> str:
return f"User #{err.code}"
assert get_user(0).map_or_else(error_placeholder, get_name) == "User #404"
```
Async variants: `map_async`, `map_err_async`, `map_or_async`, `map_or_else_async`.
### Chaining with `and_then` / `or_else`
Chain fallible operations. `and_then` short-circuits on error,
`or_else` provides recovery:
```python
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
email: str
@dataclass
class ValidationError:
field: str
message: str
def parse_email(email: str) -> Result[str, ValidationError]:
if "@" not in email:
return Err(ValidationError(field="email", message="Invalid email format"))
return Ok(email.lower().strip())
def parse_name(name: str) -> Result[str, ValidationError]:
if len(name) < 2:
return Err(ValidationError(field="name", message="Name too short"))
return Ok(name.strip())
def create_user(user_id: int, name: str, email: str) -> Result[User, ValidationError]:
return (
parse_name(name)
.and_then(lambda n: parse_email(email).map(lambda e: (n, e)))
.map(lambda pair: User(id=user_id, name=pair[0], email=pair[1]))
)
assert create_user(1, "Alice", "alice@example.com") == Ok(User(id=1, name="Alice", email="alice@example.com"))
assert create_user(1, "A", "alice@example.com") == Err(ValidationError(field="name", message="Name too short"))
assert create_user(1, "Alice", "invalid") == Err(ValidationError(field="email", message="Invalid email format"))
```
Use `or_else` for fallback strategies:
```python
from corrode import Ok, Err, Result
def fetch_from_cache(key: str) -> Result[str, str]:
return Err("cache miss")
def fetch_from_db(key: str) -> Result[str, str]:
if key == "user:1":
return Ok("Alice")
return Err("not found in db")
def fetch_from_api(key: str) -> Result[str, str]:
return Ok("fetched from API")
# Try cache, then DB, then API
result = (
fetch_from_cache("user:1")
.or_else(lambda _: fetch_from_db("user:1"))
.or_else(lambda _: fetch_from_api("user:1"))
)
assert result == Ok("Alice") # Found in DB
```
Async variants: `and_then_async`, `or_else_async`.
### Combining results with `zip`
Combine two to five independent `Result` values into a single `Ok` tuple.
Returns the first `Err` encountered if any result fails:
```python
from corrode import Ok, Err, Result
def parse_int(s: str) -> Result[int, str]:
return Ok(int(s)) if s.isdigit() else Err(f"not a number: {s!r}")
def parse_float(s: str) -> Result[float, str]:
try:
return Ok(float(s))
except ValueError:
return Err(f"not a float: {s!r}")
# All Ok — get a tuple
assert parse_int("3").zip(parse_float("1.5")) == Ok((3, 1.5))
# Any Err — get the first error
assert parse_int("x").zip(parse_float("1.5")) == Err("not a number: 'x'")
assert parse_int("3").zip(parse_float("y")) == Err("not a float: 'y'")
# Works with up to four extra arguments
assert Ok(1).zip(Ok(2), Ok(3), Ok(4)) == Ok((1, 2, 3, 4))
```
`Err.zip` always returns `self` without inspecting the other arguments:
```python
from corrode import Ok, Err
assert Err("already failed").zip(Ok(1), Ok(2)) == Err("already failed")
```
### Predicates
Check conditions on the contained value without unwrapping:
```python
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
is_admin: bool
def get_user(user_id: int) -> Result[User, str]:
if user_id == 1:
return Ok(User(id=1, is_admin=True))
if user_id == 2:
return Ok(User(id=2, is_admin=False))
return Err("not found")
def check_admin(user: User) -> bool:
return user.is_admin
def is_not_found(err: str) -> bool:
return "not found" in err
# Check if result is Ok AND satisfies a condition
assert get_user(1).is_ok_and(check_admin) is True
assert get_user(2).is_ok_and(check_admin) is False
assert get_user(99).is_ok_and(check_admin) is False
# Check if result is Err AND satisfies a condition
assert get_user(99).is_err_and(is_not_found) is True
assert get_user(1).is_err_and(is_not_found) is False
```
Async variants: `is_ok_and_async`, `is_err_and_async`.
### Inspecting
Perform side effects (logging, metrics) without consuming the result:
```python
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
logs: list[str] = []
def log_success(user: User) -> None:
logs.append(f"Found user: {user.name}")
def log_error(error: str) -> None:
logs.append(f"Error: {error}")
def get_user(user_id: int) -> Result[User, str]:
if user_id == 42:
return Ok(User(id=42, name="Alice"))
return Err("not found")
# Logs are written, but the result passes through unchanged
result = get_user(42).inspect(log_success).inspect_err(log_error)
assert result == Ok(User(id=42, name="Alice"))
assert logs == ["Found user: Alice"]
logs.clear()
result = get_user(0).inspect(log_success).inspect_err(log_error)
assert result == Err("not found")
assert logs == ["Error: not found"]
```
Async variants: `inspect_async`, `inspect_err_async`.
### Async methods
All transformation methods have `_async` variants for async callbacks:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
@dataclass
class User:
id: int
name: str
@dataclass
class Profile:
bio: str
async def fetch_profile(user: User) -> Profile:
# Simulate async I/O
return Profile(bio=f"Bio for {user.name}")
async def validate_user(user: User) -> Result[User, str]:
if user.id <= 0:
return Err("Invalid user ID")
return Ok(user)
async def main() -> None:
user_result: Result[User, str] = Ok(User(id=42, name="Alice"))
# Async map
profile_result = await user_result.map_async(fetch_profile)
assert profile_result == Ok(Profile(bio="Bio for Alice"))
# Async and_then
validated = await user_result.and_then_async(validate_user)
assert validated == Ok(User(id=42, name="Alice"))
asyncio.run(main())
```
Full list: `map_async`, `map_err_async`, `map_or_async`, `map_or_else_async`,
`and_then_async`, `or_else_async`, `is_ok_and_async`, `is_err_and_async`,
`inspect_async`, `inspect_err_async`.
### `do` notation
Syntactic sugar for a sequence of `and_then()` calls. If any step is `Err`,
the whole expression short-circuits:
```python
from dataclasses import dataclass
from corrode import do, Ok, Err, Result
@dataclass
class User:
id: int
name: str
@dataclass
class Subscription:
plan: str
@dataclass
class NotFound:
pass
def get_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound())
return Ok(User(id=user_id, name="Alice"))
def get_subscription(user: User) -> Result[Subscription, NotFound]:
return Ok(Subscription(plan="Pro"))
result: Result[str, NotFound] = do(
Ok(f"{user.name} has {sub.plan}")
for user in get_user(42)
for sub in get_subscription(user)
)
assert result == Ok("Alice has Pro")
```
For async code, use `do_async`:
```python
import asyncio
from dataclasses import dataclass
from corrode import do_async, Ok, Err, Result
@dataclass
class User:
id: int
name: str
@dataclass
class Profile:
bio: str
@dataclass
class FetchError:
pass
async def fetch_user(user_id: int) -> Result[User, FetchError]:
return Ok(User(id=user_id, name="Alice"))
async def fetch_profile(user_id: int) -> Result[Profile, FetchError]:
return Ok(Profile(bio="Hello!"))
async def main() -> None:
result: Result[str, FetchError] = await do_async(
Ok(f"{user.name}: {profile.bio}")
for user in await fetch_user(42)
for profile in await fetch_profile(user.id)
)
assert result == Ok("Alice: Hello!")
asyncio.run(main())
```
`do_async` accepts both sync and async generators.
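For intuition, generator-based do notation can be driven with only the standard
library. The sketch below uses hypothetical `OkBox` / `ErrBox` stand-ins (not
corrode's actual classes): iterating an `OkBox` yields its value exactly once,
and iterating an `ErrBox` aborts the whole generator expression.

```python
class _Short(Exception):
    """Raised internally to abort the generator on the first error."""
    def __init__(self, err):
        self.err = err

class OkBox:
    def __init__(self, value):
        self.value = value
    def __iter__(self):
        # `for x in OkBox(v)` binds x = v exactly once
        yield self.value

class ErrBox:
    def __init__(self, err):
        self.err = err
    def __iter__(self):
        # Iterating an error aborts the generator expression
        raise _Short(self.err)

def do_sketch(gen):
    try:
        return next(gen)           # drive the generator to its Ok result
    except _Short as exc:
        return ErrBox(exc.err)     # short-circuit: wrap the first error

ok = do_sketch(OkBox(x + y) for x in OkBox(1) for y in OkBox(2))
assert ok.value == 3
err = do_sketch(OkBox(x + y) for x in OkBox(1) for y in ErrBox("boom"))
assert err.err == "boom"
```

This is why the `for x in result` syntax composes like a chain of `and_then()`
calls: each iteration either binds a value or ends the whole expression.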
### `@as_result` / `@as_async_result`
Wraps a function so that it returns `Ok(value)` on success and `Err(exception)`
on specified exception types. Uncaught exception types propagate normally.
```python
import os
from corrode import as_result, Ok
os.environ["PORT"] = "8080"
@as_result(KeyError, ValueError)
def parse_env(key: str) -> int:
return int(os.environ[key])
result = parse_env("PORT") # Result[int, KeyError | ValueError]
assert result == Ok(8080)
```
For async functions:
```python
import asyncio
from corrode import as_async_result, Ok
class FetchError(Exception):
pass
@as_async_result(FetchError)
async def fetch(url: str) -> bytes:
return b"response data"
async def main() -> None:
result = await fetch("https://example.com")
assert result == Ok(b"response data")
asyncio.run(main())
```
At least one exception type is required — calling `@as_result()` with no
arguments raises `TypeError`.
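As a mental model, a decorator with these semantics can be built from the
standard library alone. The sketch below is illustrative, not corrode's
implementation; it returns plain tagged tuples instead of `Ok` / `Err`:

```python
import functools

def as_result_sketch(*exc_types):
    # Mirrors the documented rule: at least one exception type is required
    if not exc_types:
        raise TypeError("at least one exception type is required")

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return ("ok", fn(*args, **kwargs))
            except exc_types as exc:  # only the listed types are caught
                return ("err", exc)
        return wrapper

    return decorator

@as_result_sketch(ValueError)
def parse(s: str) -> int:
    return int(s)

assert parse("7") == ("ok", 7)
tag, caught = parse("x")
assert tag == "err" and isinstance(caught, ValueError)
```

Because the `except` clause names only the listed types, any other exception
propagates out of the wrapper unchanged, matching the behavior described above.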
### Escape hatches
For interop with code that doesn't use `Result`, or when you're absolutely
certain about the variant, these methods provide direct access. Prefer
pattern matching and combinators in most cases.
**Extracting values:**
```python
from corrode import Ok, Err, Result
result_ok: Result[int, str] = Ok(42)
result_err: Result[int, str] = Err("oops")
# .ok() and .err() return Optional
assert result_ok.ok() == 42
assert result_ok.err() is None
assert result_err.ok() is None
assert result_err.err() == "oops"
# Direct property access (use when you know the variant)
assert Ok(42).ok_value == 42
assert Err("oops").err_value == "oops"
```
**Unwrapping (raises on wrong variant):**
```python
from corrode import Ok, Err, UnwrapError
# Get value or raise UnwrapError
assert Ok(42).unwrap() == 42
assert Ok(42).expect("should have user") == 42
# Err("oops").unwrap() # raises UnwrapError
# Get value or use default
assert Ok(42).unwrap_or(0) == 42
assert Err("oops").unwrap_or(0) == 0
# Get value or compute from error
def error_len(e: str) -> int:
return len(e)
assert Err("oops").unwrap_or_else(error_len) == 4
# Get value or raise custom exception
assert Ok(42).unwrap_or_raise(ValueError) == 42
# Err("oops").unwrap_or_raise(ValueError) # raises ValueError("oops")
```
**Type guards (for if/else instead of match):**
```python
from corrode import Ok, Err, Result, is_ok, is_err
result: Result[int, str] = Ok(42)
if is_ok(result):
# Type checker knows result is Ok here
print(result.ok_value)
elif is_err(result):
# Type checker knows result is Err here
print(result.err_value)
```
## Iterator utilities
Functions for working with iterables of `Result` values:
```python
from corrode.iterator import collect, map_collect, partition, filter_ok, filter_err, try_reduce
```
### `collect`
Collect an iterable of `Result` values into `Ok[list]`. Returns the first
`Err` encountered, short-circuiting the iteration:
```python
from corrode import Ok, Err, Result
from corrode.iterator import collect
results: list[Result[int, str]] = [Ok(1), Ok(2), Ok(3)]
assert collect(results) == Ok([1, 2, 3])
results_with_err: list[Result[int, str]] = [Ok(1), Err("bad"), Ok(3)]
assert collect(results_with_err) == Err("bad")
```
### `map_collect`
Apply a function to each element and collect into `Ok[list]`. Returns the
first `Err` produced, short-circuiting the iteration:
```python
from corrode import Ok, Err, Result
from corrode.iterator import map_collect
def parse(s: str) -> Result[int, str]:
if s.isdigit():
return Ok(int(s))
return Err(f"not a number: {s!r}")
assert map_collect(["1", "2", "3"], parse) == Ok([1, 2, 3])
assert map_collect(["1", "x", "3"], parse) == Err("not a number: 'x'")
```
### `partition`
Split an iterable of `Result` into `(oks, errs)`. Consumes all elements
without short-circuiting:
```python
from corrode import Ok, Err, Result
from corrode.iterator import partition
results: list[Result[int, str]] = [Ok(1), Err("a"), Ok(2), Err("b")]
oks, errs = partition(results)
assert oks == [1, 2]
assert errs == ["a", "b"]
```
### `filter_ok`
Yield the value from each `Ok`, skipping `Err` values:
```python
from corrode import Ok, Err, Result
from corrode.iterator import filter_ok
results: list[Result[int, str]] = [Ok(1), Err("x"), Ok(2)]
assert list(filter_ok(results)) == [1, 2]
```
### `filter_err`
Yield the error from each `Err`, skipping `Ok` values:
```python
from corrode import Ok, Err, Result
from corrode.iterator import filter_err
results: list[Result[int, str]] = [Ok(1), Err("x"), Ok(2), Err("y")]
assert list(filter_err(results)) == ["x", "y"]
```
### `try_reduce`
Fold an iterable with a fallible function, short-circuiting on `Err`:
```python
from corrode import Ok, Err, Result
from corrode.iterator import try_reduce
def safe_add(acc: int, x: int) -> Result[int, str]:
if x < 0:
return Err(f"negative value: {x}")
return Ok(acc + x)
assert try_reduce([1, 2, 3], 0, safe_add) == Ok(6)
assert try_reduce([1, -1, 3], 0, safe_add) == Err("negative value: -1")
```
## Async iterator utilities
Functions for concurrent processing of awaitables that return `Result`:
```python
from corrode.async_iterator import (
collect,
map_collect,
partition,
filter_ok_unordered,
filter_err_unordered,
filter_ok,
filter_err,
try_reduce,
)
```
All functions accept an optional `concurrency` parameter to limit how many
tasks run at the same time. `None` (default) means unlimited.
`collect`, `map_collect`, `partition`, `filter_ok`, and `filter_err` return
results in **input order**. `filter_ok_unordered` and `filter_err_unordered`
yield in **completion order** (faster, but unordered).
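The input-order guarantee is the same one `asyncio.gather` provides, and a
concurrency limit can be pictured as a semaphore wrapped around each task.
A stdlib-only sketch of these semantics (not corrode's implementation):

```python
import asyncio

async def gather_limited(coros, concurrency=None):
    # None (the default) means unlimited, matching the description above
    sem = asyncio.Semaphore(concurrency) if concurrency else None

    async def run(coro):
        if sem is None:
            return await coro
        async with sem:  # at most `concurrency` coroutines run at once
            return await coro

    # gather returns results in input order regardless of completion order
    return await asyncio.gather(*(run(c) for c in coros))

async def double(x: int) -> int:
    await asyncio.sleep(0)
    return 2 * x

out = asyncio.run(gather_limited([double(i) for i in range(5)], concurrency=2))
assert out == [0, 2, 4, 6, 8]
```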
### `collect`
Await an iterable of coroutines or tasks concurrently, collecting results
into `Ok[list]` in input order. Returns the first `Err` encountered, cancelling remaining tasks:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
from corrode.async_iterator import collect
@dataclass
class User:
id: int
@dataclass
class NotFound:
user_id: int
async def fetch_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
return Ok(User(id=user_id))
async def main() -> None:
# Results are in input order regardless of completion order
result = await collect([fetch_user(1), fetch_user(2), fetch_user(3)])
assert result == Ok([User(id=1), User(id=2), User(id=3)])
# With concurrency limit — order still matches input
result = await collect([fetch_user(i) for i in range(1, 6)], concurrency=3)
assert result == Ok([User(id=1), User(id=2), User(id=3), User(id=4), User(id=5)])
asyncio.run(main())
```
### `map_collect`
Apply an async function to each element concurrently and collect into `Ok[list]`.
Returns the first `Err` produced, cancelling remaining tasks:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
from corrode.async_iterator import map_collect
@dataclass
class User:
id: int
@dataclass
class NotFound:
user_id: int
async def fetch_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
return Ok(User(id=user_id))
async def main() -> None:
user_ids = [1, 2, 3, 4, 5]
# Results are in input order regardless of completion order
result = await map_collect(user_ids, fetch_user)
assert result == Ok([User(id=1), User(id=2), User(id=3), User(id=4), User(id=5)])
# Limit concurrency — order still matches input
result = await map_collect(user_ids, fetch_user, concurrency=2)
assert result == Ok([User(id=1), User(id=2), User(id=3), User(id=4), User(id=5)])
asyncio.run(main())
```
### `partition`
Await an iterable of coroutines or tasks concurrently, splitting results into
`(oks, errs)` in input order. Unlike `collect`, never short-circuits — all
awaitables run to completion:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
from corrode.async_iterator import partition
@dataclass
class User:
id: int
@dataclass
class NotFound:
user_id: int
async def fetch_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
return Ok(User(id=user_id))
async def main() -> None:
# oks and errs preserve relative input order
oks, errs = await partition([
fetch_user(1),
fetch_user(-1), # will fail
fetch_user(2),
fetch_user(-2), # will fail
fetch_user(3),
])
assert oks == [User(id=1), User(id=2), User(id=3)]
assert errs == [NotFound(user_id=-1), NotFound(user_id=-2)]
# With concurrency limit — order still matches input
oks, errs = await partition(
[fetch_user(i) for i in range(-2, 5)],
concurrency=3,
)
assert oks == [User(id=1), User(id=2), User(id=3), User(id=4)]
assert errs == [NotFound(user_id=-2), NotFound(user_id=-1), NotFound(user_id=0)]
asyncio.run(main())
```
### `filter_ok_unordered`
Await coroutines or tasks concurrently, yielding `Ok` values as they complete.
`Err` values are silently skipped:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
from corrode.async_iterator import filter_ok_unordered
@dataclass
class User:
id: int
name: str
@dataclass
class NotFound:
user_id: int
async def fetch_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
return Ok(User(id=user_id, name=f"User{user_id}"))
async def main() -> None:
users = []
async for user in filter_ok_unordered([fetch_user(1), fetch_user(-1), fetch_user(2)]):
users.append(user)
assert len(users) == 2
# With concurrency limit
users = []
async for user in filter_ok_unordered(
[fetch_user(i) for i in range(-2, 5)],
concurrency=2,
):
users.append(user)
assert len(users) == 4
asyncio.run(main())
```
### `filter_err_unordered`
Await coroutines or tasks concurrently, yielding `Err` values as they complete.
`Ok` values are silently skipped:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
from corrode.async_iterator import filter_err_unordered
@dataclass
class User:
id: int
@dataclass
class NotFound:
user_id: int
async def fetch_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
return Ok(User(id=user_id))
async def main() -> None:
errors = []
async for err in filter_err_unordered([fetch_user(1), fetch_user(-1), fetch_user(2)]):
errors.append(err)
assert errors == [NotFound(user_id=-1)]
asyncio.run(main())
```
### `filter_ok`
Await coroutines or tasks concurrently, yielding `Ok` values in **input order**.
`Err` values are silently skipped. Later-completing tasks are buffered until
all earlier ones have been yielded.
Unlike `filter_ok_unordered`, `concurrency` is required and cannot be `None`
because the reorder buffer would otherwise be unbounded:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
from corrode.async_iterator import filter_ok
@dataclass
class User:
id: int
name: str
@dataclass
class NotFound:
user_id: int
async def fetch_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
return Ok(User(id=user_id, name=f"User{user_id}"))
async def main() -> None:
# Errors are skipped, successes come out in input order
users = [
user async for user in filter_ok(
[fetch_user(1), fetch_user(-1), fetch_user(2), fetch_user(3)],
concurrency=4,
)
]
assert users == [User(id=1, name="User1"), User(id=2, name="User2"), User(id=3, name="User3")]
asyncio.run(main())
```
### `filter_err`
Await coroutines or tasks concurrently, yielding `Err` values in **input order**.
`Ok` values are silently skipped. Like `filter_ok`, requires an explicit `concurrency`:
```python
import asyncio
from dataclasses import dataclass
from corrode import Ok, Err, Result
from corrode.async_iterator import filter_err
@dataclass
class User:
id: int
@dataclass
class NotFound:
user_id: int
async def fetch_user(user_id: int) -> Result[User, NotFound]:
if user_id <= 0:
return Err(NotFound(user_id=user_id))
return Ok(User(id=user_id))
async def main() -> None:
errors = [
err async for err in filter_err(
[fetch_user(1), fetch_user(-1), fetch_user(2), fetch_user(-2)],
concurrency=4,
)
]
# Errors preserve relative input order: -1 before -2
assert errors == [NotFound(user_id=-1), NotFound(user_id=-2)]
asyncio.run(main())
```
### `try_reduce`
Await each coroutine or task **sequentially**, folding results with a fallible
function. Short-circuits on the first `Err` and closes remaining coroutines.
Unlike `collect` / `partition`, tasks run one at a time because each awaited
value must be passed to the accumulator before the next task can start:
```python
import asyncio
from corrode import Ok, Err, Result
from corrode.async_iterator import try_reduce
async def fetch_price(item_id: int) -> int:
prices = {1: 100, 2: 250, 3: 75}
return prices.get(item_id, -1)
def accumulate(total: int, price: int) -> Result[int, str]:
if price < 0:
return Err(f"unknown item with price {price}")
return Ok(total + price)
async def main() -> None:
result = await try_reduce(
[fetch_price(1), fetch_price(2), fetch_price(3)],
initial=0,
f=accumulate,
)
assert result == Ok(425)
# Short-circuits on the first Err
result = await try_reduce(
[fetch_price(1), fetch_price(99), fetch_price(3)],
initial=0,
f=accumulate,
)
assert result == Err("unknown item with price -1")
asyncio.run(main())
```
## License
MIT License
| text/markdown | null | Roman Kitaev <802.11g@bk.ru> | null | null | null | enum, error-handling, result, rust | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming L... | [] | null | null | >=3.11 | [] | [] | [] | [
"typing-extensions>=4.10.0; python_version < \"3.13\""
] | [] | [] | [] | [
"Homepage, https://github.com/deliro/corrode",
"Repository, https://github.com/deliro/corrode",
"Issues, https://github.com/deliro/corrode/issues",
"Changelog, https://github.com/deliro/corrode/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T12:38:12.613529 | corrode-0.1.2.tar.gz | 100,656 | 25/78/2a17e95b6469c9d0075dccec84c5521fe3e6ad3ef718359ffcae00eeca15/corrode-0.1.2.tar.gz | source | sdist | null | false | 482ae229b27d3f4670f7a948baf6fbbc | 5e8d3b9ba41cef63f7178f6f3e9e4d5e62b03f2484b8c63d11de276ab005b47b | 25782a17e95b6469c9d0075dccec84c5521fe3e6ad3ef718359ffcae00eeca15 | MIT | [
"LICENSE"
] | 255 |
2.4 | django-email-learning | 0.1.34 | A platform for creating and delivering learning materials via email within a Django application. It provides tools for content management, user role-based administration, and scheduler integration for automated content delivery. | <p align="center">
<img src="https://raw.githubusercontent.com/AvaCodeSolutions/django-email-learning/master/assets/Django2@2x.png" alt="Django Email Learning Logo" width="300">
</p>
# Django Email Learning
A Django package for creating email-based learning platforms with IMAP integration and React frontend components.
[](https://opensource.org/licenses/BSD-3-Clause)
[](https://www.python.org/downloads/)
[](https://www.djangoproject.com/)

## ⚠️ Early Development Notice
**This project is currently in early development and is not yet ready for production use.**
## What is django-email-learning?
**django-email-learning** is an open-source Django app, currently under active development, designed to provide a complete email-based learning platform.
It is inspired by the Darsnameh email-learning service, which unfortunately shut down in July 2017. This library aims to revive that concept and make it accessible to anyone who wants to launch a similar service.
### Why an email learning platform?
An email learning platform is a type of e-learning system where course content is delivered directly to learners’ inboxes. Platform admins can create courses, lessons, and quizzes, and configure the timing rules that determine when each next lesson or quiz is sent.
The system exposes management commands and/or API endpoints that can be triggered by cron jobs or cloud schedulers to:
- Track learner progress
- Send lessons and quizzes via email
- Handle automated transitions between course steps
Additionally, the platform can issue online completion certificates that learners can verify using a QR code.
### Why use email for e-learning?
While modern e-learning platforms often rely heavily on video content and complex web interfaces, email remains a powerful and inclusive channel. Some of the reasons:
- **Low bandwidth requirement:** Email works well in regions with slow or unstable internet.
- **High accessibility:** No need to install apps or log into a portal—lessons arrive directly in the inbox.
- **Resilience to censorship:** Emails are often less likely to be blocked than certain websites or platforms under restrictive governments.
- **Simplicity:** Email is universal, familiar, and works on virtually any device.
## Documentation
Comprehensive documentation is available at [django-email-learning.readthedocs.io](https://django-email-learning.readthedocs.io), including:
- **Installation Guide**: Step-by-step setup instructions
- **Platform Management**: Creating organizations, courses, and managing learners
- **Technical Reference**: Management commands and configuration
- **Usage Examples**: Real-world implementation scenarios
## Installation
This is a Django app, so we assume you already have Django installed.
### 1. Install the Package
```bash
pip install django-email-learning
```
### 2. Add to INSTALLED_APPS
Add `django_email_learning` to your `INSTALLED_APPS` in the Django settings file:
```python
INSTALLED_APPS = [
...
'django_email_learning',
]
```
### 3. Configure Settings
Add the required configuration for the site base URL in your Django settings:
```python
DJANGO_EMAIL_LEARNING = {
"SITE_BASE_URL": "<YOUR_SITE_BASE_URL_STARTING_WITH_HTTP>"
}
```
### 4. Configure Media Files
This app uses Django's MEDIA files to save organization logos. Ensure your media settings are configured correctly. See the [MEDIA_URL setting](https://docs.djangoproject.com/en/6.0/ref/settings/#media-url) for details.
### 5. Run Migrations
```bash
python manage.py migrate
```
### 6. Add URLs
Include the app URLs in your main Django URLs configuration:
```python
from django.urls import path, include
from django_email_learning import urls as email_learning_urls
urlpatterns = [
...
path('your_preferred_path/', include(email_learning_urls, namespace='django_email_learning')),
]
```
The platform will be accessible at `your_preferred_path/platform/`.
### Access Control Notes
- **Platform Access:** You need to be logged in to access the `/platform` sub-URL, which is used for managing courses and viewing learner progress.
- **Current MVP Limitations:** In the current MVP version, you can use the superuser account. Other staff users can access the platform only if they are programmatically assigned to an organization. The UI to assign members with different roles to organizations is not yet implemented; however, the access control for those roles is in place.
## Usage
### Content Delivery
This app uses the email backend defined in Django settings to deliver course content. Assuming you have active courses and enrollments, you need to schedule a job that runs the content delivery management command periodically (e.g., using cron or a cloud scheduler).
Execute the content delivery job using:
```bash
python manage.py deliver_contents
```
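For example, an hourly cron entry might look like the following (the project
path and virtualenv location are hypothetical placeholders):

```bash
# Hypothetical crontab entry: run content delivery at the top of every hour
0 * * * * cd /srv/myproject && /srv/myproject/.venv/bin/python manage.py deliver_contents
```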
## Contributing
We welcome contributions! Please read our [Contributing Guide](https://github.com/AvaCodeSolutions/django-email-learning/blob/master/CONTRIBUTING.md) to learn about our development process, how to set up the development environment, and how to submit pull requests.
## Sponsorship
Support our open-source work and community projects by sponsoring us through [GitHub Sponsors](https://github.com/sponsors/AvaCodeSolutions) or [Open Collective](https://opencollective.com/django-email-learning). Depending on your sponsorship tier, we can feature your logo and link on the project’s README and documentation.
[](https://opencollective.com/django-email-learning)
## License
This project is licensed under the BSD 3-Clause License - see the [LICENSE](https://github.com/AvaCodeSolutions/django-email-learning/blob/master/LICENSE) file for details.
| text/markdown | Payam Najafizadeh | payam.nj@gmail.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Framework :: Django",
"Framework :: Django :: 5.0",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language ::... | [] | https://github.com/AvaCodeSolutions/django-email-learning | null | <4.0,>=3.12 | [] | [] | [] | [
"cryptography<47.0.0,>=46.0.3",
"pillow<13.0.0,>=12.0.0",
"pydantic<3.0.0,>=2.12.4",
"pyjwt<3.0.0,>=2.10.1",
"qrcode<9.0,>=8.2"
] | [] | [] | [] | [
"Repository, https://github.com/AvaCodeSolutions/django-email-learning"
] | poetry/2.2.1 CPython/3.13.3 Linux/5.15.167.4-microsoft-standard-WSL2 | 2026-02-19T12:37:45.950254 | django_email_learning-0.1.34.tar.gz | 856,640 | ff/8e/d29560eb50c8a4bd3e86de8cbcf1249cbc5f876e90480547303cb901149b/django_email_learning-0.1.34.tar.gz | source | sdist | null | false | f2cf3d4232f1f152923780cabad0bd81 | d660bd4f82fd5fa21b9dff961df47a33bee2d31d6cb4f2dfaa3bb98ae52a6cbd | ff8ed29560eb50c8a4bd3e86de8cbcf1249cbc5f876e90480547303cb901149b | null | [] | 253 |
2.1 | cvar-optimization-benchmarks | 0.1.3 | Conditional Value-at-Risk (CVaR) portfolio optimization benchmark problems in Python. | # CVaR optimization benchmark problems
This repository contains Conditional Value-at-Risk (CVaR) portfolio optimization benchmark
problems for fully general Monte Carlo distributions and derivatives portfolios.
The starting point is the [next-generation investment framework's market representation](https://youtu.be/4ESigySdGf8?si=yWYuP9te1K1RBU7j&t=46)
given by the matrix $R\in \mathbb{R}^{S\times I}$ and associated joint scenario probability
vectors $p,q\in \mathbb{R}^{S}$.
The [1_CVaROptBenchmarks notebook](https://github.com/fortitudo-tech/cvar-optimization-benchmarks/blob/main/1_CVaROptBenchmarks.ipynb)
illustrates how the benchmark problems can be solved using Fortitudo Technologies' Investment
Analysis module.
The [2_OptimizationExample notebook](https://github.com/fortitudo-tech/cvar-optimization-benchmarks/blob/main/2_OptimizationExample.ipynb)
shows how you can replicate the results using the [fortitudo.tech open-source Python package](https://github.com/fortitudo-tech/fortitudo.tech)
for the efficient frontier optimizations of long-only cash portfolios, which are the easiest problems to solve.
## Installation Instructions
It is recommended to install the code dependencies in a
[conda environment](https://conda.io/projects/conda/en/latest/user-guide/concepts/environments.html):
conda create -n cvar-optimization-benchmarks python=3.13
conda activate cvar-optimization-benchmarks
pip install cvar-optimization-benchmarks
After this, you should be able to run the code in the [2_OptimizationExample notebook](https://github.com/fortitudo-tech/cvar-optimization-benchmarks/blob/main/2_OptimizationExample.ipynb).
The code in [1_CVaROptBenchmarks notebook](https://github.com/fortitudo-tech/cvar-optimization-benchmarks/blob/main/1_CVaROptBenchmarks.ipynb)
can only be run by people who subscribe to the Investment Analysis module.
## Portfolio Construction and Risk Management book
You can read much more about the [next-generation investment framework](https://antonvorobets.substack.com/p/anton-vorobets-next-generation-investment-framework)
in the [Portfolio Construction and Risk Management book](https://antonvorobets.substack.com/p/pcrm-book),
including a thorough description of CVaR optimization problems and
[Resampled Portfolio Stacking](https://antonvorobets.substack.com/p/resampled-portfolio-stacking).
| text/markdown | Fortitudo Technologies | software@fortitudo.tech | null | null | GPL-3.0-or-later | CVaR, Efficient Frontier, Entropy Pooling, Quantitative Finance, Portfolio Optimization | [
"Intended Audience :: Education",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming L... | [] | https://fortitudo.tech | null | <3.14,>=3.9 | [] | [] | [] | [
"fortitudo-tech<2.0.0,>=1.1.11",
"notebook"
] | [] | [] | [] | [
"Repository, https://github.com/fortitudo-tech/cvar-optimization-benchmarks",
"Documentation, https://os.fortitudo.tech",
"Issues, https://github.com/fortitudo-tech/cvar-optimization-benchmarks/issues"
] | poetry/1.3.2 CPython/3.8.10 Linux/5.15.167.4-microsoft-standard-WSL2 | 2026-02-19T12:37:39.230233 | cvar_optimization_benchmarks-0.1.3.tar.gz | 14,800 | 6d/c2/b88e77cf81ee6b2644aa3fa9d3b9b7892a5dd4828cab861dc4ffdea72273/cvar_optimization_benchmarks-0.1.3.tar.gz | source | sdist | null | false | 7cf3dc19363b5f3ab8df33f644c6a40f | fa832eea7cc5ad16d2be39f44c828ed771810eec0c2d2b4dbfd5257df3ddc619 | 6dc2b88e77cf81ee6b2644aa3fa9d3b9b7892a5dd4828cab861dc4ffdea72273 | null | [] | 270 |
2.4 | qorm | 0.1.1 | Python ORM for q/kdb+ | # qorm
**A modern Python ORM for q/kdb+.**
qorm brings the declarative, model-based workflow popularised by SQLAlchemy to the world of kdb+. Define tables as Python classes, build queries with a chainable API, and let the ORM handle serialisation, type mapping, and IPC transport — in both sync and async flavours.
```python
from datetime import datetime

from qorm import Model, Engine, Session, Symbol, Float, Long, Timestamp, avg_
class Trade(Model):
__tablename__ = "trade"
sym: Symbol
price: Float
size: Long
time: Timestamp
engine = Engine(host="localhost", port=5000)
with Session(engine) as s:
s.create_table(Trade)
s.exec(Trade.insert([
Trade(sym="AAPL", price=150.25, size=100, time=datetime.now()),
Trade(sym="GOOG", price=2800.0, size=50, time=datetime.now()),
]))
result = s.exec(
Trade.select(Trade.sym, avg_price=avg_(Trade.price))
.where(Trade.price > 100)
.by(Trade.sym)
)
for row in result:
print(row.sym, row.avg_price)
```
---
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Type System](#type-system)
- [Type Aliases](#type-aliases)
- [Plain Python Types](#plain-python-types)
- [Null Handling](#null-handling)
- [Models](#models)
- [Defining a Model](#defining-a-model)
- [Model Instances](#model-instances)
- [Field Options](#field-options)
- [Keyed Models](#keyed-models)
- [Connections](#connections)
- [Synchronous Connection](#synchronous-connection)
- [Asynchronous Connection](#asynchronous-connection)
- [Engine](#engine)
- [Constructor](#constructor)
- [DSN Strings](#dsn-strings)
- [Sessions](#sessions)
- [Synchronous Session](#synchronous-session)
- [Asynchronous Session](#asynchronous-session)
- [Raw Queries](#raw-queries)
- [Query Builder](#query-builder)
- [Select](#select)
- [Where Clauses](#where-clauses)
- [Group By](#group-by)
- [Aggregates](#aggregates)
- [Limit](#limit)
- [Update](#update)
- [Delete](#delete)
- [Insert](#insert)
- [Expressions](#expressions)
- [Column References](#column-references)
- [Comparison Operators](#comparison-operators)
- [Arithmetic Operators](#arithmetic-operators)
- [Logical Operators](#logical-operators)
- [Built-in Methods](#built-in-methods)
- [Temporal Helpers](#temporal-helpers)
- [xbar (time bucketing)](#xbar-time-bucketing)
- [today / now](#today--now)
- [fby (filter by)](#fby-filter-by)
- [each / peach (adverbs)](#each--peach-adverbs)
- [Exec Query](#exec-query)
- [Pagination](#pagination)
- [Offset](#offset)
- [Paginate Helper](#paginate-helper)
- [Joins](#joins)
- [As-of Join (aj)](#as-of-join-aj)
- [Left Join (lj)](#left-join-lj)
- [Inner Join (ij)](#inner-join-ij)
- [Window Join (wj)](#window-join-wj)
- [Result Sets](#result-sets)
- [Iterating Rows](#iterating-rows)
- [Column Access](#column-access)
- [DataFrame Export](#dataframe-export)
- [Table Reflection](#table-reflection)
- [Listing Tables](#listing-tables)
- [Reflecting a Table](#reflecting-a-table)
- [Reflecting All Tables](#reflecting-all-tables)
- [Using Reflected Models](#using-reflected-models)
- [Remote Function Calls (RPC)](#remote-function-calls-rpc)
- [Ad-hoc Calls](#ad-hoc-calls)
- [QFunction Wrapper](#qfunction-wrapper)
- [Typed Decorator (q_api)](#typed-decorator-q_api)
- [Service Discovery (QNS)](#service-discovery-qns)
- [One-liner with Engine.from_service](#one-liner-with-enginefrom_service)
- [QNS Client](#qns-client)
- [Browsing Services](#browsing-services)
- [Multi-node Engines](#multi-node-engines)
- [Custom Registry CSV](#custom-registry-csv)
- [Multi-Instance Registry](#multi-instance-registry)
- [EngineRegistry](#engineregistry)
- [EngineGroup](#enginegroup)
- [Configuration Methods](#configuration-methods)
- [TLS/SSL](#tlsssl)
- [Enabling TLS](#enabling-tls)
- [TLS DSN Scheme](#tls-dsn-scheme)
- [Custom SSL Context](#custom-ssl-context)
- [Retry / Reconnection](#retry--reconnection)
- [Config Files (YAML/TOML/JSON)](#config-files-yamltoml-json)
- [Connection Pools](#connection-pools)
- [Sync Pool](#sync-pool)
- [Async Pool](#async-pool)
- [Health Checks](#health-checks)
- [Pool from Registry](#pool-from-registry)
- [Subscription / Pub-Sub](#subscription--pub-sub)
- [IPC Compression](#ipc-compression)
- [Debug / Explain Mode](#debug--explain-mode)
- [Logging](#logging)
- [Code Generation (CLI)](#code-generation-cli)
- [Generate Models](#generate-models)
- [Using Generated Models](#using-generated-models)
- [CLI Reference](#cli-reference)
- [Schema Management](#schema-management)
- [Error Handling](#error-handling)
- [Testing Your Code](#testing-your-code)
- [API Reference](#api-reference)
---
## Installation
```bash
pip install qorm
```
With optional extras:
```bash
pip install qorm[pandas] # DataFrame export
pip install qorm[toml] # TOML config files (automatic on Python 3.11+)
pip install qorm[yaml] # YAML config files
pip install qorm[dev] # pytest for development
```
**Requirements:** Python 3.10+. No runtime dependencies — qorm is pure Python.
---
## Quick Start
### 1. Define a model
```python
from qorm import Model, Symbol, Float, Long, Timestamp
class Trade(Model):
__tablename__ = "trade"
sym: Symbol
price: Float
size: Long
time: Timestamp
```
### 2. Connect and create the table
```python
from qorm import Engine, Session
engine = Engine(host="localhost", port=5000)
with Session(engine) as s:
s.create_table(Trade)
```
### 3. Insert data
```python
from datetime import datetime
trades = [
Trade(sym="AAPL", price=150.25, size=100, time=datetime.now()),
Trade(sym="GOOG", price=2800.0, size=50, time=datetime.now()),
]
with Session(engine) as s:
s.exec(Trade.insert(trades))
```
### 4. Query data
```python
from qorm import avg_
with Session(engine) as s:
result = s.exec(
Trade.select(Trade.sym, avg_price=avg_(Trade.price))
.where(Trade.price > 100)
.by(Trade.sym)
)
for row in result:
print(row.sym, row.avg_price)
```
### 5. Raw q fallback
```python
with Session(engine) as s:
result = s.raw("select count i by sym from trade")
```
---
## Type System
qorm maps every q type to a Python type alias that carries metadata via `typing.Annotated`. The model layer reads this metadata to generate correct DDL and serialise values over IPC.
### Type Aliases
Use these in model annotations. Each alias encodes both the Python representation type and the q wire type.
| qorm alias | Python type | q type | q type char | q type code |
|---------------|----------------------|-------------|-------------|-------------|
| `Boolean` | `bool` | boolean | `b` | 1 |
| `Guid` | `uuid.UUID` | guid | `g` | 2 |
| `Byte` | `int` | byte | `x` | 4 |
| `Short` | `int` | short | `h` | 5 |
| `Int` | `int` | int | `i` | 6 |
| `Long` | `int` | long | `j` | 7 |
| `Real` | `float` | real | `e` | 8 |
| `Float` | `float` | float | `f` | 9 |
| `Char` | `str` | char | `c` | 10 |
| `Symbol` | `str` | symbol | `s` | 11 |
| `Timestamp` | `datetime.datetime` | timestamp | `p` | 12 |
| `Month` | `datetime.date` | month | `m` | 13 |
| `Date` | `datetime.date` | date | `d` | 14 |
| `DateTime` | `datetime.datetime` | datetime | `z` | 15 |
| `Timespan` | `datetime.timedelta` | timespan | `n` | 16 |
| `Minute` | `datetime.time` | minute | `u` | 17 |
| `Second` | `datetime.time` | second | `v` | 18 |
| `Time` | `datetime.time` | time | `t` | 19 |
```python
from qorm import Symbol, Float, Long, Timestamp, Date, Guid
```
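The `Annotated` mechanism described above can be sketched with the standard library. The `QType` metadata class and its fields here are illustrative, not qorm's internal representation:

```python
from typing import Annotated, get_type_hints

# Hypothetical metadata carrier -- qorm's real alias metadata is richer.
class QType:
    def __init__(self, name: str, char: str, code: int):
        self.name, self.char, self.code = name, char, code

# Aliases encode both the Python type and the q wire type.
SymbolSketch = Annotated[str, QType("symbol", "s", 11)]
FloatSketch = Annotated[float, QType("float", "f", 9)]

class TradeSketch:
    sym: SymbolSketch
    price: FloatSketch

# The model layer can recover the q metadata from the annotations:
hints = get_type_hints(TradeSketch, include_extras=True)
assert hints["sym"].__metadata__[0].char == "s"
assert hints["price"].__metadata__[0].code == 9
```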
### Plain Python Types
If you use a plain Python type instead of a qorm alias, the ORM infers a default q type:
| Python type | Default q type |
|----------------------|----------------|
| `bool` | boolean |
| `int` | long |
| `float` | float |
| `str` | symbol |
| `bytes` | byte |
| `datetime.datetime` | timestamp |
| `datetime.date` | date |
| `datetime.time` | time |
| `datetime.timedelta` | timespan |
| `uuid.UUID` | guid |
```python
# These two are equivalent:
class Trade(Model):
__tablename__ = "trade"
sym: str # inferred as symbol
price: float # inferred as float
class Trade(Model):
__tablename__ = "trade"
sym: Symbol # explicit symbol
price: Float # explicit float
```
Use the explicit aliases when you need a specific q type that differs from the default (e.g. `Short` or `Int` instead of the default `Long` for `int`).
### Null Handling
q has typed nulls — a long null and a date null are different values. qorm preserves this with `QNull`:
```python
from qorm import QNull, QTypeCode, is_null
# Create a typed null
null_price = QNull(QTypeCode.FLOAT)
null_date = QNull(QTypeCode.DATE)
# They are distinguishable
null_price == null_date # False
# Check if a value is null
is_null(null_price, QTypeCode.FLOAT) # True
is_null(42, QTypeCode.LONG) # False
# QNull is falsy
if not null_price:
print("it's null")
```
When the deserialiser encounters a q null value (e.g. `0N`, `0Nd`), it returns a `QNull` with the appropriate type code rather than Python `None`. This ensures correct round-trip serialisation.
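The typed-null semantics can be illustrated with a toy class; this is a sketch of the behaviour described above, not qorm's `QNull` implementation:

```python
from enum import Enum

# Illustrative subset of type codes (qorm's QTypeCode covers all q types).
class CodeSketch(Enum):
    LONG = 7
    FLOAT = 9
    DATE = 14

class TypedNullSketch:
    def __init__(self, code: CodeSketch):
        self.code = code

    def __eq__(self, other):
        # Nulls of different q types compare unequal.
        return isinstance(other, TypedNullSketch) and self.code is other.code

    def __hash__(self):
        return hash(self.code)

    def __bool__(self):
        # Typed nulls are falsy, matching the QNull behaviour shown above.
        return False

assert TypedNullSketch(CodeSketch.FLOAT) != TypedNullSketch(CodeSketch.DATE)
assert not TypedNullSketch(CodeSketch.LONG)
```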
---
## Models
### Defining a Model
Subclass `Model` and add type-annotated fields. The `__tablename__` class variable sets the q table name.
```python
from qorm import Model, Symbol, Float, Long, Timestamp
class Trade(Model):
__tablename__ = "trade"
sym: Symbol
price: Float
size: Long
time: Timestamp
```
The `__init_subclass__` hook automatically:
- Introspects annotations to build `Field` descriptors
- Infers the q type for each field
- Registers the model in a global registry (used for result mapping)
### Model Instances
Models generate `__init__`, `__repr__`, and `__eq__` automatically:
```python
t = Trade(sym="AAPL", price=150.25, size=100, time=datetime.now())
print(t) # Trade(sym='AAPL', price=150.25, size=100, time=...)
print(t.sym) # AAPL
print(t.to_dict()) # {'sym': 'AAPL', 'price': 150.25, ...}
# Create from dict
t2 = Trade.from_dict({"sym": "GOOG", "price": 2800.0, "size": 50})
# Equality
t == t2 # False
```
Unspecified fields default to `None`:
```python
t = Trade(sym="AAPL")
print(t.price) # None
```
### Field Options
Use `field()` to set metadata on individual columns:
```python
from qorm import Model, Symbol, Float, Long, field
from qorm.protocol.constants import ATTR_SORTED
class Trade(Model):
__tablename__ = "trade"
sym: Symbol = field(attr=ATTR_SORTED) # `s# attribute
price: Float
size: Long = field(default=0) # default value
active: Long = field(nullable=False) # not nullable
```
**Parameters:**
| Parameter | Type | Default | Description |
|---------------|--------|--------------|---------------------------------------|
| `primary_key` | `bool` | `False` | Mark as key column (for keyed tables) |
| `attr` | `int` | `ATTR_NONE` | q vector attribute (`s#`, `u#`, etc.) |
| `default` | `Any` | `None` | Default value for new instances |
| `nullable` | `bool` | `True` | Whether the field accepts nulls |
**Available attributes:**
```python
from qorm.protocol.constants import ATTR_NONE, ATTR_SORTED, ATTR_UNIQUE, ATTR_PARTED, ATTR_GROUPED
# ATTR_NONE = 0 (no attribute)
# ATTR_SORTED = 1 (`s#)
# ATTR_UNIQUE = 2 (`u#)
# ATTR_PARTED = 3 (`p#)
# ATTR_GROUPED = 5 (`g#)
```
### Keyed Models
Use `KeyedModel` for keyed tables. Mark key columns with `field(primary_key=True)`:
```python
from qorm import KeyedModel, Symbol, Date, Float, Long, field
class DailyPrice(KeyedModel):
__tablename__ = "daily_price"
sym: Symbol = field(primary_key=True)
date: Date = field(primary_key=True)
close: Float
volume: Long
```
This generates the keyed table DDL:
```q
daily_price:([sym:`s$(); date:`d$()] close:`f$(); volume:`j$())
```
Utility methods:
```python
DailyPrice.key_columns() # ['sym', 'date']
DailyPrice.value_columns() # ['close', 'volume']
```
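How the key/value split falls out of per-field `primary_key` flags can be sketched as follows (a toy `FieldSketch`, not qorm's `Field` class):

```python
from dataclasses import dataclass

@dataclass
class FieldSketch:
    name: str
    primary_key: bool = False

# Fields in declaration order, as for the DailyPrice model above.
fields = [
    FieldSketch("sym", primary_key=True),
    FieldSketch("date", primary_key=True),
    FieldSketch("close"),
    FieldSketch("volume"),
]
key_cols = [f.name for f in fields if f.primary_key]
value_cols = [f.name for f in fields if not f.primary_key]
assert key_cols == ["sym", "date"]
assert value_cols == ["close", "volume"]
```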
---
## Connections
### Synchronous Connection
For direct, low-level access:
```python
from qorm import SyncConnection
conn = SyncConnection(host="localhost", port=5000)
conn.open()
result = conn.query("select from trade")
result = conn.query("select from trade where sym=`AAPL")
conn.close()
```
Or as a context manager:
```python
with SyncConnection(host="localhost", port=5000) as conn:
result = conn.query("2+3")
print(result) # 5
```
**Constructor parameters:**
| Parameter | Type | Default | Description |
|---------------|--------------------------|---------------|-------------------------------|
| `host` | `str` | `"localhost"` | kdb+ host |
| `port` | `int` | `5000` | kdb+ port |
| `username` | `str` | `""` | Authentication username |
| `password` | `str` | `""` | Authentication password |
| `timeout` | `float \| None` | `None` | Socket timeout in seconds |
| `tls_context` | `ssl.SSLContext \| None` | `None` | SSL context for TLS (use `Engine` to set this automatically) |
**Health check:**
```python
conn.ping() # True if connection is alive, False otherwise
```
### Asynchronous Connection
```python
import asyncio
from qorm import AsyncConnection
async def main():
conn = AsyncConnection(host="localhost", port=5000)
await conn.open()
result = await conn.query("select from trade")
await conn.close()
asyncio.run(main())
```
Or as an async context manager:
```python
async with AsyncConnection(host="localhost", port=5000) as conn:
result = await conn.query("2+3")
```
---
## Engine
The `Engine` is the central configuration point. It stores connection parameters and acts as a factory for connections and sessions.
### Constructor
```python
from qorm import Engine
engine = Engine(
host="localhost",
port=5000,
username="user",
password="pass",
timeout=30.0,
)
```
### DSN Strings
Parse a connection string:
```python
engine = Engine.from_dsn("kdb://user:pass@localhost:5000")
engine = Engine.from_dsn("kdb://localhost:5000") # no auth
engine = Engine.from_dsn("kdb+tls://user:pass@kdb-server:5000") # with TLS
```
**Format:** `kdb://[user:pass@]host:port` or `kdb+tls://[user:pass@]host:port`
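A DSN in this format can be decomposed with the standard library; this is a rough sketch, and qorm's actual parser may differ in details and defaults:

```python
from urllib.parse import urlsplit

def parse_kdb_dsn(dsn: str) -> dict:
    # urlsplit handles any scheme followed by "//", including "kdb+tls".
    parts = urlsplit(dsn)
    if parts.scheme not in ("kdb", "kdb+tls"):
        raise ValueError(f"unsupported scheme: {parts.scheme!r}")
    return {
        "host": parts.hostname or "localhost",
        "port": parts.port or 5000,
        "username": parts.username or "",
        "password": parts.password or "",
        "tls": parts.scheme == "kdb+tls",
    }

cfg = parse_kdb_dsn("kdb+tls://user:pass@kdb-server:5000")
assert cfg["tls"] and cfg["host"] == "kdb-server" and cfg["username"] == "user"
```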
### From QNS Service Discovery
Resolve a named kdb+ service via QNS (see [Service Discovery](#service-discovery-qns)):
```python
engine = Engine.from_service(
"EMRATESCV.SERVICE.HDB.1",
market="fx", env="prod",
username="user", password="pass",
)
```
### Creating Connections
```python
sync_conn = engine.connect() # SyncConnection
async_conn = engine.async_connect() # AsyncConnection
```
---
## Sessions
Sessions wrap a connection and provide the high-level ORM interface.
### Synchronous Session
```python
from qorm import Session
with Session(engine) as s:
# Execute ORM queries
result = s.exec(Trade.select().where(Trade.price > 100))
# Raw q expressions
result = s.raw("select from trade")
# DDL operations
s.create_table(Trade)
s.drop_table(Trade)
exists = s.table_exists(Trade)
```
### Asynchronous Session
```python
from qorm import AsyncSession
async with AsyncSession(engine) as s:
result = await s.exec(Trade.select())
result = await s.raw("select from trade")
await s.create_table(Trade)
```
### Raw Queries
When the query builder doesn't cover your use case, fall back to raw q:
```python
with Session(engine) as s:
# Simple expression
s.raw("2+3")
# Table query
s.raw("select count i by sym from trade")
# With arguments (sent as a q function call)
s.raw("{select from trade where sym=x}", "AAPL")
# System commands
s.raw("\\t select from trade")
```
---
## Query Builder
All queries compile to q functional form (`?[t;c;b;a]` for select, `![t;c;b;a]` for update/delete). You can inspect the compiled q at any time with `.compile()`.
### Select
```python
# Select all columns
Trade.select()
# Select specific columns
Trade.select(Trade.sym, Trade.price)
# Select with aliases (named columns)
Trade.select(avg_price=avg_(Trade.price))
# Combine positional and named
Trade.select(Trade.sym, avg_price=avg_(Trade.price))
```
Inspect the compiled q:
```python
query = Trade.select(Trade.sym).where(Trade.price > 100)
print(query.compile())
# ?[trade;enlist ((price>100));0b;([] sym:sym)]
```
### Where Clauses
Chain `.where()` calls — multiple conditions are ANDed:
```python
Trade.select().where(Trade.price > 100)
Trade.select().where(Trade.price > 100).where(Trade.size > 50)
Trade.select().where(Trade.price > 100, Trade.size > 50) # same thing
```
### Group By
```python
Trade.select(Trade.sym, avg_price=avg_(Trade.price)).by(Trade.sym)
# Multiple group-by columns
Trade.select(total=sum_(Trade.size)).by(Trade.sym, Trade.date)
```
### Aggregates
qorm provides these aggregate functions:
| Function | q equivalent | Description |
|-------------|--------------|-------------------------|
| `avg_(col)` | `avg` | Average |
| `sum_(col)` | `sum` | Sum |
| `min_(col)` | `min` | Minimum |
| `max_(col)` | `max` | Maximum |
| `count_()` | `count i` | Count rows |
| `count_(c)` | `count` | Count non-null in column|
| `first_(c)` | `first` | First value |
| `last_(c)` | `last` | Last value |
| `med_(col)` | `med` | Median |
| `dev_(col)` | `dev` | Standard deviation |
| `var_(col)` | `var` | Variance |
```python
from qorm import avg_, sum_, min_, max_, count_, first_, last_
Trade.select(
Trade.sym,
avg_price=avg_(Trade.price),
total_size=sum_(Trade.size),
trade_count=count_(),
high=max_(Trade.price),
low=min_(Trade.price),
).by(Trade.sym)
```
### Limit and Offset
```python
Trade.select().limit(10) # first 10 rows
Trade.select().where(Trade.price > 100).limit(5)
Trade.select().offset(100).limit(50) # skip 100, take 50
Trade.select().offset(200) # skip first 200 rows
```
### Update
```python
# Set a literal value
Trade.update().set(price=100.0)
# Set an expression
Trade.update().set(price=Trade.price * 1.1)
# With conditions
Trade.update().set(price=Trade.price * 1.1).where(Trade.sym == "AAPL")
# Multiple assignments
Trade.update().set(price=100.0, size=50)
# With group-by
Trade.update().set(price=avg_(Trade.price)).by(Trade.sym)
```
### Delete
```python
# Delete rows matching a condition
Trade.delete().where(Trade.sym == "AAPL")
# Delete specific columns from a table (rare)
Trade.delete().columns("price", "size")
```
### Insert
Insert takes a list of model instances and transposes them into column-oriented data for efficient kdb+ ingestion:
```python
from datetime import datetime
trades = [
Trade(sym="AAPL", price=150.25, size=100, time=datetime.now()),
Trade(sym="GOOG", price=2800.0, size=50, time=datetime.now()),
Trade(sym="MSFT", price=380.0, size=75, time=datetime.now()),
]
Trade.insert(trades)
# Compiles to: `trade insert ((`AAPL;`GOOG;`MSFT);150.25 2800.0 380.0;...)
```
Execute with a session:
```python
with Session(engine) as s:
s.exec(Trade.insert(trades))
```
---
## Expressions
The expression tree supports operator overloading so that Python comparisons produce q-compilable AST nodes.
### Column References
Access model columns as expression objects via class attributes:
```python
Trade.sym # Column('sym')
Trade.price # Column('price')
```
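The operator-overloading pattern this section relies on can be shown in miniature. The names below are illustrative; qorm's AST nodes and compiler differ:

```python
class ColSketch:
    def __init__(self, name: str):
        self.name = name

    def __gt__(self, other):
        # Comparisons build AST nodes instead of returning booleans.
        return BinOpSketch(">", self, other)

    def __hash__(self):
        return hash(self.name)

class BinOpSketch:
    def __init__(self, op, lhs, rhs):
        self.op, self.lhs, self.rhs = op, lhs, rhs

    def compile(self) -> str:
        lhs = self.lhs.name if isinstance(self.lhs, ColSketch) else repr(self.lhs)
        return f"({lhs}{self.op}{self.rhs!r})"

expr = ColSketch("price") > 100
assert expr.compile() == "(price>100)"
```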
### Comparison Operators
| Python | q | Example |
|-----------------|--------|----------------------------------|
| `col > val` | `>` | `Trade.price > 100` |
| `col >= val` | `>=` | `Trade.price >= 100` |
| `col < val` | `<` | `Trade.price < 200` |
| `col <= val` | `<=` | `Trade.price <= 200` |
| `col == val` | `=` | `Trade.sym == "AAPL"` |
| `col != val` | `<>` | `Trade.sym != "AAPL"` |
### Arithmetic Operators
| Python | q | Example |
|-----------------|--------|----------------------------------|
| `col + val` | `+` | `Trade.price + 10` |
| `col - val` | `-` | `Trade.price - 5` |
| `col * val` | `*` | `Trade.price * 1.1` |
| `col / val` | `%` | `Trade.price / 2` (q uses `%`) |
| `col % val` | `mod` | `Trade.size % 10` |
| `-col` | `neg` | `-Trade.price` |
### Logical Operators
| Python | q | Example |
|-----------------|--------|----------------------------------|
| `a & b` | `&` | `(Trade.price > 100) & (Trade.size > 50)` |
| `a \| b` | `\|` | `(Trade.sym == "AAPL") \| (Trade.sym == "GOOG")` |
| `~expr` | `not` | `~(Trade.price > 100)` |
### Built-in Methods
```python
# within — range check
Trade.price.within(100, 200) # price within (100; 200)
# like — pattern matching
Trade.sym.like("A*") # sym like "A*"
# in_ — membership
Trade.sym.in_(["AAPL", "GOOG"]) # sym in (`AAPL;`GOOG)
# asc / desc — sorting
Trade.price.asc()
Trade.price.desc()
```
---
## Temporal Helpers
Helpers for the kdb+ time-series operations that appear in virtually every aggregation.
### xbar (time bucketing)
`xbar` rounds values down to bucket boundaries — the standard way to bucket timestamps:
```python
from qorm import xbar_, avg_
# 5-minute bars: bucket time by 5, aggregate price
query = (
    Trade.select(Trade.sym, vwap=avg_(Trade.price))
    .by(Trade.sym, t=xbar_(5, Trade.time))
)
```
Compiles to q: `(5 xbar time)` in the by-clause.
`xbar_` works anywhere an expression is accepted — in `.by()`, `.where()`, or `.select()`:
```python
# In a where clause
Trade.select().where(xbar_(1, Trade.time) > some_time)
```
### today / now
Sentinels that compile to q's built-in date/time values:
```python
from qorm import today_, now_
# Trades from today
Trade.select().where(Trade.date == today_())
# Compiles to: ... (date=.z.d) ...
# Trades newer than the current timestamp (compare against a computed
# cutoff to express windows such as "the last hour")
Trade.select().where(Trade.time > now_())
# Compiles to: ... (time>.z.p) ...
```
| Helper | Compiles to | Description |
|------------|-------------|--------------------------|
| `today_()` | `.z.d` | Current date |
| `now_()` | `.z.p` | Current timestamp (UTC) |
---
## fby (filter by)
kdb+'s `fby` applies an aggregate per group and returns a vector aligned with the original rows, so a where clause can filter rows on group-level conditions:
```python
from qorm import fby_
# Select trades where price equals the max price for that symbol
Trade.select().where(Trade.price == fby_("max", Trade.price, Trade.sym))
# Compiles to: ... (price=(max;price) fby sym) ...
# Trades with above-average size for their symbol
Trade.select().where(Trade.size > fby_("avg", Trade.size, Trade.sym))
```
**Signature:** `fby_(agg_name, col, group_col)`
| Parameter | Description |
|-------------|-----------------------------------------------|
| `agg_name` | Aggregate function name: `"max"`, `"avg"`, `"min"`, `"sum"`, etc. |
| `col` | Column to aggregate |
| `group_col` | Column to group by |
---
## each / peach (adverbs)
kdb+ adverbs: `f each x` applies `f` to each element, `f peach x` does so in parallel across threads.
```python
from qorm import count_, avg_, each_, peach_
# Method form — chain .each() or .peach() on an aggregate
Trade.select(Trade.sym, tag_count=count_(Trade.tags).each())
# Compiles to: count tags each
Trade.select(Trade.sym, avg_prices=avg_(Trade.prices).peach())
# Compiles to: avg prices peach
# Standalone form
Trade.select(Trade.sym, tag_count=each_("count", Trade.tags))
Trade.select(Trade.sym, avg_prices=peach_("sum", Trade.prices))
```
---
## Exec Query
q's `exec` returns vectors or dicts instead of tables. Use it when you want raw column data without the table wrapper.
```python
# Single column → returns a vector (list)
prices = s.exec(Trade.exec_(Trade.price))
# Multiple columns → returns a dict
data = s.exec(Trade.exec_(Trade.sym, Trade.price))
# With filtering
syms = s.exec(Trade.exec_(Trade.sym).where(Trade.size > 100))
# With named columns and aggregates
avg_prices = s.exec(Trade.exec_(avg_price=avg_(Trade.price)).by(Trade.sym))
```
`ExecQuery` supports the same chainable API as `SelectQuery`: `.where()`, `.by()`, `.limit()`, `.compile()`, `.explain()`.
```python
query = Trade.exec_(Trade.price)
print(query.explain())
# -- ExecQuery on `trade
# ?[trade;();0b;`price]
```
---
## Pagination
### Offset
Skip the first *n* rows with `.offset()`:
```python
# Skip first 100 rows, take next 50
Trade.select().offset(100).limit(50)
# Compiles to: 50#(100_(?[trade;();0b;()]))
# Offset without limit
Trade.select().offset(200)
# Compiles to: 200_(?[trade;();0b;()])
```
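The drop/take composition shown in the compiled output above can be stated as a small string rule (illustrative; qorm composes its AST rather than strings):

```python
from typing import Optional

def compile_page(base_q: str, offset: int = 0, limit: Optional[int] = None) -> str:
    # Mirrors the offset/limit compilation shown above:
    # offset -> n_(...)  (q drop), limit -> m#(...)  (q take).
    q = base_q
    if offset:
        q = f"{offset}_({q})"
    if limit is not None:
        q = f"{limit}#({q})"
    return q

assert compile_page("?[trade;();0b;()]", offset=100, limit=50) == "50#(100_(?[trade;();0b;()]))"
assert compile_page("?[trade;();0b;()]", offset=200) == "200_(?[trade;();0b;()])"
```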
### Paginate Helper
Iterate over large result sets in pages:
```python
from qorm import paginate
for page in paginate(s, Trade.select().where(Trade.sym == "AAPL"), page_size=1000):
df = page.to_dataframe()
process(df)
# Stops automatically when a page has fewer rows than page_size or is empty
```
Async version:
```python
from qorm import async_paginate
async for page in async_paginate(s, Trade.select(), page_size=1000):
process(page)
```
---
## Joins
qorm supports four q join types: as-of (`aj`), left (`lj`), inner (`ij`), and window (`wj`). Each takes a list of join columns plus a left and a right table (window joins add window and aggregate parameters).
### As-of Join (aj)
The most common join in kdb+ — matches each left row with the most recent right row by time:
```python
from qorm import aj
class Quote(Model):
__tablename__ = "quote"
sym: Symbol
bid: Float
ask: Float
time: Timestamp
# Join trades with most recent quotes
query = aj([Trade.sym, Trade.time], Trade, Quote)
# Compiles to: aj[`sym`time;trade;quote]
with Session(engine) as s:
result = s.exec(query)
```
You can also pass column names as strings:
```python
aj(["sym", "time"], Trade, Quote)
```
### Left Join (lj)
```python
from qorm import lj
query = lj([Trade.sym], Trade, Quote)
# Compiles to: trade lj `sym xkey quote
with Session(engine) as s:
result = s.exec(query)
```
### Inner Join (ij)
```python
from qorm import ij
query = ij([Trade.sym], Trade, Quote)
# Compiles to: trade ij `sym xkey quote
```
### Window Join (wj)
Join within a time window. Useful for aggregating quotes around trade times:
```python
from qorm import wj
query = wj(
windows=(-2000000000, 0), # 2-second lookback window (nanos)
on=[Trade.sym, Trade.time],
left=Trade,
right=Quote,
aggs={"bid": "avg", "ask": "avg"}, # aggregate functions for right cols
)
```
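The window bounds in the example above are expressed in nanoseconds. A small helper can make that explicit; the expected units are an assumption based on the example, so check against your kdb+ schema:

```python
NANOS_PER_SECOND = 1_000_000_000

def lookback_window(seconds: float) -> tuple:
    """Window tuple covering the `seconds` before each left-table time."""
    return (-int(seconds * NANOS_PER_SECOND), 0)

assert lookback_window(2) == (-2_000_000_000, 0)
```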
---
## Result Sets
When a session query returns a q table, it is wrapped in a `ModelResultSet` — a lazy, column-oriented container.
### Iterating Rows
```python
result = s.exec(Trade.select())
# Iterate as model instances
for trade in result:
print(trade.sym, trade.price)
# Length
len(result) # number of rows
# Index a single row
trade = result[0]
print(trade.sym)
```
### Column Access
Access raw column vectors by name (preserves kdb+'s column-oriented layout):
```python
syms = result["sym"] # list of all sym values
prices = result["price"] # list of all price values
result.columns # ['sym', 'price', 'size', 'time']
```
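The dual row/column access described above can be sketched over a plain column dict; this toy container is illustrative, not `ModelResultSet` itself:

```python
class ColumnResultSketch:
    def __init__(self, data):
        # data maps column name -> list of values (kdb+'s columnar layout).
        self.data = data
        self.columns = list(data)

    def __len__(self):
        return len(next(iter(self.data.values()), []))

    def __getitem__(self, key):
        if isinstance(key, str):
            return self.data[key]                         # column access
        return {c: self.data[c][key] for c in self.columns}  # row access

r = ColumnResultSketch({"sym": ["AAPL", "GOOG"], "price": [150.25, 2800.0]})
assert r["sym"] == ["AAPL", "GOOG"]
assert r[1] == {"sym": "GOOG", "price": 2800.0}
assert len(r) == 2
```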
### DataFrame Export
Convert to a pandas DataFrame (requires `pip install qorm[pandas]`):
```python
df = result.to_dataframe()
print(df.head())
# sym price size time
# 0 AAPL 150.25 100 2024-06-15 10:30:00
# 1 GOOG 2800.00 50 2024-06-15 10:30:01
```
Or get the raw column dict:
```python
data = result.to_dict()
# {'sym': ['AAPL', 'GOOG'], 'price': [150.25, 2800.0], ...}
```
---
## Table Reflection
When connecting to existing kdb+ processes, you don't need to pre-define Model classes. qorm can reflect table schemas at runtime.
### Listing Tables
```python
with Session(engine) as s:
tables = s.tables()
print(tables) # ['trade', 'quote', 'order']
```
### Reflecting a Table
`reflect()` queries the kdb+ process with `meta` (for column types) and `keys` (for key columns), then builds a fully functional Model class dynamically:
```python
with Session(engine) as s:
Trade = s.reflect("trade")
# Trade is now a real Model class with the correct fields
print(Trade.__fields__) # {'sym': Field(sym, symbol), 'price': Field(price, float), ...}
```
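Dynamic model construction along the lines `reflect()` is described to use can be sketched with `type()`; the type-char map and names here are illustrative:

```python
# Partial map from q type chars (as returned by kdb+'s `meta`) to Python types.
QCHAR_TO_PY = {"s": str, "f": float, "j": int}

def build_model(table: str, meta: dict) -> type:
    # meta maps column name -> q type char.
    annotations = {col: QCHAR_TO_PY[ch] for col, ch in meta.items()}
    return type(table.capitalize(), (), {
        "__tablename__": table,
        "__annotations__": annotations,
    })

TradeReflected = build_model("trade", {"sym": "s", "price": "f", "size": "j"})
assert TradeReflected.__tablename__ == "trade"
assert TradeReflected.__annotations__["price"] is float
```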
### Keyed Table Reflection
If the table has key columns, `reflect()` automatically returns a `KeyedModel`:
```python
with Session(engine) as s:
DailyPrice = s.reflect("daily_price")
# Keyed tables become KeyedModel subclasses
from qorm import KeyedModel
print(issubclass(DailyPrice, KeyedModel)) # True
print(DailyPrice.key_columns()) # ['sym', 'date']
print(DailyPrice.value_columns()) # ['close', 'volume']
```
### Reflecting All Tables
```python
with Session(engine) as s:
models = s.reflect_all()
# {'trade': Trade, 'quote': Quote, 'order': Order}
Trade = models['trade']
Quote = models['quote']
```
### Uppercase Type Chars
kdb+ uses uppercase type characters (e.g. `C`, `J`, `F`) for vector-of-vector columns (list of string vectors, list of long vectors, etc.). qorm handles these automatically during reflection — they are mapped to `MIXED_LIST` (Python `list`):
```python
with Session(engine) as s:
# Table with a 'C' column (list of char vectors / strings)
Tagged = s.reflect("tagged")
print(Tagged.__fields__['labels'].qtype.code) # QTypeCode.MIXED_LIST
```
### Using Reflected Models
Reflected models support the full ORM API — select, where, by, aggregates, insert, update, delete, and joins:
```python
with Session(engine) as s:
Trade = s.reflect("trade")
# Query with full ORM features
result = s.exec(
Trade.select(Trade.sym, avg_price=avg_(Trade.price))
.where(Trade.price > 100)
.by(Trade.sym)
)
for row in result:
print(row.sym, row.avg_price)
df = result.to_dataframe()
```
Reflected models also support instantiation, equality, `to_dict()`, and `repr()`:
```python
t = Trade(sym="AAPL", price=150.0, size=100)
print(t) # Trade(sym='AAPL', price=150.0, size=100)
print(t.to_dict()) # {'sym': 'AAPL', 'price': 150.0, 'size': 100}
```
Async sessions support the same reflection API:
```python
async with AsyncSession(engine) as s:
tables = await s.tables()
Trade = await s.reflect("trade")
models = await s.reflect_all()
```
---
## Remote Function Calls (RPC)
Call q functions that are already deployed on a kdb+ process without writing raw q strings.
### Ad-hoc Calls
Use `session.call()` to invoke a named q function:
```python
with Session(engine) as s:
result = s.call("getTradesByDate", "2024.01.15")
vwap = s.call("calcVWAP", "AAPL", "2024.01.15")
status = s.call("getStatus") # no args
```
### QFunction Wrapper
For reusable function references:
```python
from qorm import QFunction
get_trades = QFunction("getTradesByDate")
with Session(engine) as s:
result = get_trades(s, "2024.01.15")
result = get_trades(s, "2024.01.16")
```
### Typed Decorator (q_api)
Use `q_api` to document the expected signature of a q function. The function body is never called — all calls are routed through IPC:
```python
from qorm import q_api
@q_api("getTradesByDate")
def get_trades_by_date(session, date: str): ...
@q_api("calcVWAP")
def calc_vwap(session, sym: str, date: str): ...
with Session(engine) as s:
trades = get_trades_by_date(s, "2024.01.15")
vwap = calc_vwap(s, "AAPL", "2024.01.15")
```
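A decorator with this shape can route every call through the session while leaving the Python body unexecuted. A sketch of the idea, not qorm's implementation:

```python
import functools

def q_api_sketch(qname: str):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(session, *args):
            # fn's body is never run; the call goes over IPC via the session.
            return session.call(qname, *args)
        return wrapper
    return decorate

@q_api_sketch("calcVWAP")
def calc_vwap(session, sym: str, date: str): ...

# Stand-in session for demonstration purposes.
class FakeSession:
    def call(self, name, *args):
        return (name, args)

assert calc_vwap(FakeSession(), "AAPL", "2024.01.15") == ("calcVWAP", ("AAPL", "2024.01.15"))
```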
Async sessions also support `call()`:
```python
async with AsyncSession(engine) as s:
result = await s.call("getTradesByDate", "2024.01.15")
```
---
## Service Discovery (QNS)
kdb+ environments commonly use a Q Name Service (QNS) for service discovery. Registry nodes are listed in CSV files keyed by market and environment (e.g. `fx_prod.csv`). qorm's `QNS` client connects to a registry node, queries the registry, and resolves actual service endpoints — so you never need to hardcode host:port values.
The registry query varies by market:
- **FX** (`market="fx"`) — uses `.qns.getRegistry[]` (function call that returns the full registry)
- **All other markets** — uses `.qns.registry` (direct table access)
Services follow the naming convention `DATASET.CLUSTER.DBTYPE.NODE` (e.g. `EMRATESCV.SERVICE.HDB.1`).
### One-liner with Engine.from_service
The simplest way to connect to a named service:
```python
from qorm import Engine
engine = Engine.from_service(
"EMRATESCV.SERVICE.HDB.1",
market="fx",
env="prod",
username="user",
password="pass",
)
```
This loads the registry CSV for `fx_prod`, connects to a registry node, looks up the service, and returns a configured `Engine` with the resolved host, port, and TLS settings.
### QNS Client
For more control, create a reusable `QNS` instance:
```python
from qorm import QNS
qns = QNS(market="fx", env="prod", username="user", password="pass")
# Resolve a single service to an Engine
engine = qns.engine("EMRATESCV.SERVICE.HDB.1")
```
**QNS constructor parameters:**
| Parameter | Type | Default | Description |
|------------|-------------------|---------|--------------------------------------------|
| `market` | `str` | — | Market identifier (e.g. `"fx"`) |
| `env` | `str` | — | Environment identifier (e.g. `"prod"`) |
| `username` | `str` | `""` | Credentials for registry + resolved services |
| `password` | `str` | `""` | Credentials for registry + resolved services |
| `timeout` | `float` | `10.0` | Connection timeout in seconds |
| `data_dir` | `str \| Path \| None` | `None` | Directory with CSV files (defaults to bundled `qorm.qns.data`) |
### Browsing Services
Use `lookup()` to discover services by prefix:
```python
# Find all services starting with EMR / SER / H
services = qns.lookup("EMR", "SER", "H")
for svc in services:
print(svc.fqn, svc.host, svc.port, svc.tls)
```
Each result is a `ServiceInfo` dataclass:
| Field | Type | Description |
|-----------|--------|------------------------------------------------|
| `dataset` | `str` | Dataset part of the service name |
| `cluster` | `str` | Cluster part |
| `dbtype` | `str` | Database type (HDB, RDB, etc.) |
| `node` | `str` | Node identifier |
| `host` | `str` | Resolved hostname |
| `port` | `int` | Resolved port |
| `ssl` | `str` | SSL mode string from registry (`"tls"`, `"none"`, etc.) |
| `ip` | `str` | IP address |
| `env` | `str` | Environment |
| `.tls` | `bool` | Property: `True` if `ssl` is `"tls"` (case-insensitive) |
| `.fqn` | `str` | Property: `"DATASET.CLUSTER.DBTYPE.NODE"` |
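The shape in the table above can be sketched as a dataclass; the field names come from the table, but the class itself is illustrative, not qorm's `ServiceInfo`:

```python
from dataclasses import dataclass

@dataclass
class ServiceInfoSketch:
    dataset: str
    cluster: str
    dbtype: str
    node: str
    host: str
    port: int
    ssl: str = "none"

    @property
    def tls(self) -> bool:
        # True when the registry's SSL mode string is "tls", case-insensitively.
        return self.ssl.lower() == "tls"

    @property
    def fqn(self) -> str:
        return f"{self.dataset}.{self.cluster}.{self.dbtype}.{self.node}"

svc = ServiceInfoSketch("EMRATESCV", "SERVICE", "HDB", "1", "host1", 5010, "TLS")
assert svc.fqn == "EMRATESCV.SERVICE.HDB.1"
assert svc.tls is True
```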
### Multi-node Engines
Resolve all matching services to a list of engines — useful for building failover or round-robin pools:
```python
engines = qns.engines("EMRATESCV", "SERVICE", "HDB")
# Returns [Engine(..., port=5010), Engine(..., port=5011), ...]
```
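One way to consume such a list as a round-robin pool (the `RoundRobinPool` wrapper below is a hypothetical helper, not part of qorm):

```python
import itertools

class RoundRobinPool:
    """Minimal round-robin wrapper over a list of engines."""
    def __init__(self, engines):
        if not engines:
            raise ValueError("pool requires at least one engine")
        self._cycle = itertools.cycle(engines)

    def next(self):
        # Hand out engines in order, wrapping around at the end
        return next(self._cycle)

# e.g. pool = RoundRobinPool(qns.engines("EMRATESCV", "SERVICE", "HDB"))
pool = RoundRobinPool(["engine-5010", "engine-5011"])
print(pool.next())  # engine-5010
print(pool.next())  # engine-5011
print(pool.next())  # engine-5010 again
```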
### Custom Registry CSV
Registry CSV files live in `src/qorm/qns/data/` by default (named `{market}_{env}.csv`). You can also point to a custom directory:
```python
qns = QNS(market="fx", env="prod", data_dir="/path/to/csv/dir")
```
**CSV format:**
```csv
dataset,cluster,dbtype,node,host,port,port_env,env
EMRATESCV,SERVICE,HDB,1,host1.example.com,5010,QNS_PORT,prod
EMRATESCV,SERVICE,HDB,2,host2.example.com,5011,QNS_PORT,prod
```
Required columns: `dataset`, `cluster`, `dbtype`, `node`, `host`, `port`, `port_env`, `env`.
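A quick header check against those required columns can be done with the standard `csv` module (an illustrative helper, not part of qorm):

```python
import csv
import io

REQUIRED = {"dataset", "cluster", "dbtype", "node", "host", "port", "port_env", "env"}

def check_registry_csv(text: str) -> list[dict]:
    """Return the parsed rows if the header has all required columns, else raise."""
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"registry CSV missing columns: {sorted(missing)}")
    return list(reader)

rows = check_registry_csv(
    "dataset,cluster,dbtype,node,host,port,port_env,env\n"
    "EMRATESCV,SERVICE,HDB,1,host1.example.com,5010,QNS_PORT,prod\n"
)
print(rows[0]["host"])  # host1.example.com
```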
The QNS client tries each registry node in order. If a node is unreachable, it logs a warning and fails over to the next one. If all nodes fail, a `QNSRegistryError` is raised with details for each failure.
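That failover behavior amounts to a loop of the following shape (a sketch, not the library's code; `fake_connect` and the error type are stand-ins):

```python
import logging

log = logging.getLogger("qns")

class QNSRegistryError(Exception):
    """All registry nodes failed (mirrors the error described above)."""

def resolve_with_failover(nodes, connect):
    """Try each node in order; log and collect failures; raise if all fail."""
    failures = {}
    for node in nodes:
        try:
            return connect(node)
        except OSError as exc:  # node unreachable
            log.warning("registry node %s failed: %s", node, exc)
            failures[node] = str(exc)
    raise QNSRegistryError(f"all registry nodes failed: {failures}")

def fake_connect(node):
    # stand-in for a real registry connection
    if node == "reg1.example.com":
        raise OSError("connection refused")
    return f"connected:{node}"

print(resolve_with_failover(["reg1.example.com", "reg2.example.com"], fake_connect))
# connected:reg2.example.com
```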
> **Note:** FX registry responses are typically large and arrive compressed over IPC. qorm handles this transparently (see [IPC Compression](#ipc-compression)).
---
## Multi-Instance Registry
Manage connections to multiple kdb+ processes — organized by data domain (equities, FX) and instance type (RDB, HDB, gateway).
### EngineRegistry
A named collection of engines for a single domain:
```python
from qorm import EngineRegistry
equities = EngineRegistry()
equities.register("rdb", Engine(host="eq-rdb", port=5010))
equities.register("hdb", Engine(host="eq-hdb", port=5012))
equities.register("gw", Engine(host="eq-gw", port=5000))
# The first registered engine becomes the default
with equities.session() as s: # uses default (rdb)
...
with equities.session("hdb") as s: # explicit
Trade = s.reflect("trade")
result = s.exec(Trade.select().where(Trade.price > 100))
```
Change the default:
```python
equities.set_default("gw")
equities.names # ['rdb', 'hdb', 'gw']
equities.default # 'gw'
```
### EngineGroup
A two-level registry — domains containing instances:
```python
from qorm import EngineGroup
group = EngineGroup()
group.register("equities", equities)
group.register("fx", EngineRegistry.from_config({
"rdb": {"host": "fx-rdb", "port": 5020},
"hdb": {"host": "fx-hdb", "port": 5022},
}))
with group.session("equities", "rdb") as s:
result = s.call("getSnapshot", "AAPL")
with group.session("fx", "hdb") as s:
result = s.raw("select from fxrate")
# Attribute-style access
group.equities.get("rdb") # Engine(host='eq-rdb', port=5010)
```
### Configuration Methods
Build registries from dicts, DSN strings, or environment variables:
```python
# From config dicts
equities = EngineRegistry.from_config({
"rdb": {"host": "eq-rdb", "port": 5010},
"hdb": {"host": "eq-hdb", "port": 5012},
})
# From DSN strings
equities = EngineRegistry.from_dsn({
"rdb": "kdb://eq-rdb:5010",
"hdb": "kdb://user:pass@eq-hdb:5012",
})
# From environment variables
# Reads QORM_EQ_RDB_HOST, QORM_EQ_RDB_PORT, QORM_EQ_RDB_USER, QORM_EQ_RDB_PASS
equities = EngineRegistry.from_env(names=["rdb", "hdb"], prefi | text/markdown | null | null | null | null | null | database, kdb, orm, q, timeseries | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"pandas>=1.5; extra == \"pandas\"",
"tomli>=2.0; python_version < \"3.11\" and extra == \"toml\"",
"pyyaml>=6.0; extra == \"yaml\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T12:37:36.673901 | qorm-0.1.1.tar.gz | 122,806 | e5/4f/f0d7785302ea59d18c310bb7f78222a23ee2b553ad5ef69ca0091496d70c/qorm-0.1.1.tar.gz | source | sdist | null | false | ee1f50ffccafbd028093325c8f8c205a | e3694bc5b236256d47f05a4cbdabe457947d84363719721b235dc6ea8271db70 | e54ff0d7785302ea59d18c310bb7f78222a23ee2b553ad5ef69ca0091496d70c | MIT | [] | 243 |
2.4 | molcraft | 0.4.2 | Graph Neural Networks for Molecular Machine Learning | <img src="https://github.com/akensert/molcraft/blob/main/docs/_static/molcraft-logo.png" alt="molcraft-logo" width="90%">
**Deep Learning on Molecules**: Graph Neural Networks for Molecular Machine Learning.
## Examples
### Context-Aware Graph Neural Network
Implement a context-aware graph neural network by embedding context features in the super node.
The super node is a virtual node bidirectionally linked to all atomic nodes,
allowing both efficient information propagation and inclusion of context features.
Context features may be continuous or discrete (categorical); for discrete context features, specify the expected number of categories via the `num_categories` argument of the `AddContext` layer.
```python
from molcraft import features
from molcraft import featurizers
from molcraft import layers
from molcraft import models
import keras
import pandas as pd
featurizer = featurizers.MolGraphFeaturizer(
atom_features=[
features.AtomType(),
features.NumHydrogens(),
features.Degree(),
],
bond_features=[
features.BondType(),
features.IsRotatable(),
],
super_node=True,
self_loops=True,
)
df = pd.DataFrame({
'smiles': [
'N[C@@H](C)C(=O)O', 'N[C@@H](CS)C(=O)O'
],
'label': [3.5, -1.5],
'ph': [7.2, 4.5],
'temperature': [35., 45.],
})
graph = featurizer(df)
model = models.GraphModel.from_layers(
[
layers.Input(graph.spec),
layers.NodeEmbedding(dim=128),
layers.EdgeEmbedding(dim=128),
layers.AddContext(field='ph'),
layers.AddContext(field='temperature'),
layers.GraphConv(units=128),
layers.GraphConv(units=128),
layers.GraphConv(units=128),
layers.GraphConv(units=128),
layers.Readout(mode='mean'),
keras.layers.Dense(units=1024, activation='elu'),
keras.layers.Dense(units=1024, activation='elu'),
keras.layers.Dense(1)
]
)
model.compile(
keras.optimizers.Adam(1e-4), keras.losses.MeanSquaredError()
)
model.fit(graph, epochs=30)
pred = model.predict(graph)
# Uncomment below to save and load model (including featurizer)
# featurizers.save_featurizer(featurizer, '/tmp/featurizer.json')
# models.save_model(model, '/tmp/model.keras')
# loaded_featurizer = featurizers.load_featurizer('/tmp/featurizer.json')
# loaded_model = models.load_model('/tmp/model.keras')
```
### Hybrid Model for Peptides
Implement a GNN-RNN hybrid model for peptides.
```python
from molcraft import features
from molcraft import featurizers
from molcraft import layers
from molcraft import models
import keras
import pandas as pd
featurizer = featurizers.PeptideGraphFeaturizer(
atom_features=[
features.AtomType(),
features.NumHydrogens(),
features.Degree(),
],
bond_features=[
features.BondType(),
features.IsRotatable(),
],
)
# Allow modified amino acids:
# featurizer.monomers.update({
# "C[Carbamidomethyl]": "N[C@@H](CSCC(=O)N)C(=O)O"
# })
df = pd.DataFrame({
'sequence': [
'CYIQNCPLG', 'KTTKS'
],
'label': [1.0, 0.0],
})
graph = featurizer(df)
model = models.GraphModel.from_layers(
[
layers.Input(graph.spec),
layers.NodeEmbedding(dim=128),
layers.EdgeEmbedding(dim=128),
layers.GraphConv(units=128),
layers.GraphConv(units=128),
layers.GraphConv(units=128),
layers.GraphConv(units=128),
layers.PeptideReadout(),
keras.layers.Masking(),
keras.layers.Bidirectional(
keras.layers.LSTM(units=128, return_sequences=True)
),
keras.layers.GlobalAveragePooling1D(),
keras.layers.Dense(units=1024, activation='elu'),
keras.layers.Dense(units=1024, activation='elu'),
keras.layers.Dense(1, activation='sigmoid')
]
)
model.compile(
keras.optimizers.Adam(1e-4), keras.losses.BinaryCrossentropy()
)
model.fit(graph, epochs=30)
pred = model.predict(graph)
# Uncomment below to save and load model (including featurizer)
# featurizers.save_featurizer(featurizer, '/tmp/featurizer.json')
# models.save_model(model, '/tmp/model.keras')
# loaded_featurizer = featurizers.load_featurizer('/tmp/featurizer.json')
# loaded_model = models.load_model('/tmp/model.keras')
```
## Installation
For CPU users:
```bash
pip install molcraft
```
For GPU users:
```bash
pip install molcraft[gpu]
```
| text/markdown | null | Alexander Kensert <alexander.kensert@gmail.com> | null | null | MIT License
Copyright (c) 2025 Alexander Kensert
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| python, machine-learning, deep-learning, graph-neural-networks, molecular-machine-learning, molecular-graphs, computational-chemistry, computational-biology | [
"Programming Language :: Python :: 3",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tensorflow>=2.16",
"rdkit>=2023.9.5",
"pandas>=1.0.3",
"ipython>=8.12.0",
"tensorflow[and-cuda]>=2.16; extra == \"gpu\""
] | [] | [] | [] | [
"Homepage, https://github.com/compomics/molcraft"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-19T12:37:34.514375 | molcraft-0.4.2.tar.gz | 54,492 | 87/f0/9a5d03e811c1ec8885f34e31c63b90198e3892b8c03df568a47f2fa11dcd/molcraft-0.4.2.tar.gz | source | sdist | null | false | ba8264e1a952eaecd7f6d9722daa0414 | 01276a220a70549dc591954840050649ac100fdbcb2fe195241afaf248560036 | 87f09a5d03e811c1ec8885f34e31c63b90198e3892b8c03df568a47f2fa11dcd | null | [
"LICENSE"
] | 262 |
2.4 | pheme | 21.11.3 | report-generation-service | 
# Pheme - Greenbone Static Report Generator <!-- omit in toc -->
[](https://github.com/greenbone/pheme/releases)
[](https://pypi.org/project/pheme/)
[](https://codecov.io/gh/greenbone/pheme)
[](https://github.com/greenbone/pheme/actions/workflows/ci-python.yml)
**pheme** is a service to create scan reports. It is maintained by [Greenbone AG][Greenbone AG].
[Pheme](https://en.wikipedia.org/wiki/Pheme) is the personification of fame and renown.
Or in this case personification of a service to generate reports.
## Table of Contents <!-- omit in toc -->
- [Installation](#installation)
- [Requirements](#requirements)
- [Development](#development)
- [Usage](#usage)
- [Maintainer](#maintainer)
- [Contributing](#contributing)
- [License](#license)
## Installation
### Requirements
Python 3.9 and later is supported.
Besides Python, `pheme` also needs the following system packages installed:
- libcairo2-dev
- pango1.0
## Development
**pheme** uses [poetry] for its own dependency management and build
process.
First install poetry via pip
```
python3 -m pip install --user poetry
```
Afterwards run
```
poetry install
```
in the checkout directory of **pheme** (the directory containing the
`pyproject.toml` file) to install all dependencies including the packages only
required for development.
Afterwards activate the git hooks for auto-formatting and linting via
[autohooks].
```
poetry run autohooks activate
```
Validate the activated git hooks by running
```
poetry run autohooks check
```
## Usage
In order to prepare the data structure, the XML report data needs to be posted to `pheme` with a grouping indicator (either by host or by nvt).
E.g.:
```
> curl -X POST 'http://localhost:8000/transform?grouping=nvt'\
-H 'Content-Type: application/xml'\
-H 'Accept: application/json'\
-d @test_data/longer_report.xml
"scanreport-nvt-9a233b0d-713c-4f22-9e15-f6e5090873e3"⏎
```
The returned identifier can be used to generate the actual report.
So far a report can be either in:
- application/json
- application/xml
- text/csv
E.g.
```
> curl -v 'http://localhost:8000/report/scanreport-nvt-9a233b0d-713c-4f22-9e15-f6e5090873e3' -H 'Accept: application/csv'
```
For visual reports like
- application/pdf
- text/html
the corresponding CSS and HTML templates need to be uploaded to pheme first:
```
> curl -X PUT localhost:8000/parameter\
-H 'x-api-key: SECRET_KEY_missing_using_default_not_suitable_in_production'\
--form vulnerability_report_html_css=@path_to_css_template\
--form vulnerability_report_pdf_css=@path_to_css_template\
--form vulnerability_report=@path_to_html_template
```
Afterwards the report can be fetched as usual:
```
> curl -v 'http://localhost:8000/report/scanreport-nvt-9a233b0d-713c-4f22-9e15-f6e5090873e3' -H 'Accept: application/pdf'
```
## Maintainer
This project is maintained by [Greenbone AG][Greenbone AG]
## Contributing
Your contributions are highly appreciated. Please
[create a pull request](https://github.com/greenbone/pheme/pulls)
on GitHub. Bigger changes need to be discussed with the development team via the
[issues section at GitHub](https://github.com/greenbone/pheme/issues)
first.
## License
Copyright (C) 2020-2023 [Greenbone AG][Greenbone AG]
Licensed under the [GNU Affero General Public License v3.0 or later](LICENSE).
[Greenbone AG]: https://www.greenbone.net/
[poetry]: https://python-poetry.org/
[autohooks]: https://github.com/greenbone/autohooks
| text/markdown | Greenbone AG | info@greenbone.net | null | null | AGPL-3.0-or-later | null | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9... | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"coreapi<3.0.0,>=2.3.3",
"django==4.2.28",
"djangorestframework==3.16.1",
"pyyaml<7.0.0,>=5.3.1",
"rope<1.15,>=0.17",
"sentry-sdk<3.0,>=1.1; extra == \"tracking\"",
"uritemplate<5.0.0,>=3.0.1",
"weasyprint>=62",
"xmltodict<1.1,>=0.12"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T12:37:32.585268 | pheme-21.11.3.tar.gz | 36,669 | 1c/9d/899847b3205f6ca1b1e80ffb142c86b38bbb7298ef9f795e008d949b3cab/pheme-21.11.3.tar.gz | source | sdist | null | false | 91eb74bd7eddc8db8e955e4daebd34db | 62dead67253a1795d008480cfef7a73336f0ea07222e75b8cba8704ddc0901f0 | 1c9d899847b3205f6ca1b1e80ffb142c86b38bbb7298ef9f795e008d949b3cab | null | [
"LICENSE"
] | 268 |
2.3 | undore-rbac | 1.1.8 | RBAC System for Ascender Framework from Undore! | # Undore RBAC
**RBAC made easy for Ascender Framework**
UndoreRBAC is a lightweight, configurable role-based access control (RBAC) system designed for seamless integration with the Ascender Framework.
Its goal is to **separate permission-evaluation logic and priority rules from data storage**, provide a flexible manager interface for fetching permissions/roles, and offer an easy-to-configure permission map.
---
## Core concepts
- **RBAC Map** - a YAML file that declares all available permissions and their configuration (`default`, `explicit`, `children`).
- **RBAC Manager** - the *user-implemented* bridge between your database and UndoreRBAC. You implement methods for authentication and for fetching roles/permissions.
- **RbacService** - the application-level service used by guards/middleware to perform access checks.
- **RBACGate** - an object that represents a single user’s access state; it performs comparison, inheritance, and override logic.
- **Permissions** carry a boolean `value` (True/False). This allows both granting and explicit denial of permissions.
---
## Basic behavior rules
1. **Priority** (from lower to higher; later items win in conflicts):
- children permissions
- roles (shared permissions) - roles with **higher** `priority` take precedence
- scoped permissions (permissions assigned directly to the user)
2. **Wildcard** (`*`), e.g. `users.*`, means “everything under `users.`” - but a wildcard can be **overridden** by a permission marked `explicit` in the RBAC Map.
3. **`explicit: true`** - a permission marked explicit **ignores** wildcard/override propagation. Use with caution.
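Rules 2 and 3 can be illustrated with a toy resolver (this is not the library's implementation; the `grants` dict and `explicit` set shapes are assumptions):

```python
def wildcard_grants(perm: str, grants: dict, explicit: set) -> bool:
    """Resolve `perm` from direct grants and `prefix.*` wildcards.

    Per rule 3, permissions marked explicit cannot be obtained via wildcards.
    """
    if perm in grants:        # a direct record always applies
        return grants[perm]
    if perm in explicit:      # explicit perms ignore wildcard propagation
        return False
    # Walk prefixes from most to least specific: users.view -> users.* -> *
    parts = perm.split(".")
    for i in range(len(parts) - 1, -1, -1):
        wild = ".".join(parts[:i] + ["*"]) if i else "*"
        if wild in grants:
            return grants[wild]
    return False

grants = {"users.*": True}
print(wildcard_grants("users.view", grants, set()))               # True via wildcard
print(wildcard_grants("users.delete", grants, {"users.delete"}))  # False: explicit
```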
---
## Quick start
### 1) Implement `BaseRBACManager`
`BaseRBACManager` is an abstract class. You must implement:
- `authorize(token: str, request: Request | None = None, custom_meta: dict | None = None) -> user_id`
- `fetch_user_access(user_id: Any, custom_meta: dict | None = None) -> Access`
where `Access` contains:
```py
{
"permissions": list[IRBACPermission],
"roles": list[IRBACRole],
"user": Any | None
}
```
**Note**
- Make sure `fetch_user_access` returns data in a predictable order if your logic depends on creation time or role priority.
- The library can enforce `require_sorted_permissions` in RBACConfig by default, so it’s best if the manager returns sorted data.
---
## 2) Create `rbac_map.yml`
Permissions can be declared in two styles: nested YAML or dot-notation. Example:
```yaml
users:
delete:
view:
other:
auth.login:
audit.export:
_config:
default: false
explicit: true
children:
- users.view: true
- users.delete: false
```
`_config` options:
- `default` - the default boolean value for this permission when the user has no record for it.
- `explicit` - if `true`, this permission **cannot** be obtained only via wildcard/children inheritance.
- `children` - a list of `permission:value` pairs that are applied automatically when this permission is present.
---
## 3) Initialization in Ascender Framework
In your `bootstrap.py`:
```python
from undore_rbac.interfaces.config import RBACConfig
from undore_rbac.rbac_module import RbacModule
from shared.custom_rbac_manager import CustomRBACManager
import os
appBootstrap: IBootstrap = {
"providers": [
RbacModule.for_root(
RBACConfig(
rbac_manager=CustomRBACManager(),
rbac_map_path=os.path.join(BASE_PATH, "rbac_map.yml"),
require_sorted_permissions=True
)
)
]
}
```
**Notes**
- `rbac_map_path` should point to the YAML file you prepared.
- `require_sorted_permissions=True` tells the library to expect manager-provided permission records in `created_at` order.
---
## 4) Guard - usage examples
### Simple Guard
---
> **Note:** Refer to the official `Ascender Framework` docs for `Guard` and `ParamGuard` endpoint usage examples
---
```py
class RBACGuard(Guard):
def __init__(self, *permissions: str):
self.permissions = permissions
def __post_init__(self, rbac: RbacService):
self.rbac = rbac
async def can_activate(self, request: Request, token: HTTPAuthorizationCredentials = Security(HTTPBearer())):
user_id = await self.rbac.rbac_manager.authorize(token.credentials, request=request)
await self.rbac.check_access(request.url.path, user_id, self.permissions)
return True
```
### ParamGuard (recommended to avoid duplicated DB calls)
```py
class RBACParamGuard(ParamGuard):
def __init__(self, *permissions: str):
self.permissions = permissions
def __post_init__(self, rbac: RbacService):
self.rbac = rbac
async def credentials_guard(self, request: Request, token: HTTPAuthorizationCredentials = Security(HTTPBearer())):
user_id = await self.rbac.rbac_manager.authorize(token.credentials, request=request)
if self.permissions:
gate = await self.rbac.check_access(request.url.path, user_id, self.permissions)
user = gate.user
else: # Save performance if permission check is not needed
user = ... # Your user GET logic
# Your pydantic model for creds kwarg in endpoint
return AuthCredentials(
user=user
)
```
> **Note:** `gate.user` is **not** populated automatically by the library. If you want `gate.user` available, your `fetch_user_access` implementation must return a `user` field inside the `Access` object.
---
## Detailed priority and override logic
1. Collect all permission records (scoped + shared) and roles for the user.
2. `RBACGate` calculates `children` (using the RBAC Map) for every permission.
3. Permissions are then applied in this order:
- **Children** (applied first),
- **Roles** (applied next - consider `role.priority` and assignment `created_at`),
- **Scoped permissions** assigned directly to the user (applied last - strongest).
4. When conflicting permissions have the same effective priority, the most recent record (by `created_at`, or the order provided by the manager) wins.
- If you rely on DB timestamps or insertion order, ensure `fetch_user_access` returns results in the expected order (enabling `require_sorted_permissions` raises an exception if the sorting is wrong).
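The application order above boils down to a layered dict merge, where later layers overwrite earlier ones (an illustrative sketch; the record shapes are assumptions, not the library's types):

```python
def resolve(children: dict, roles: list, scoped: dict) -> dict:
    """Merge permission layers in priority order: children < roles < scoped.

    `roles` is a list of (priority, created_at, perms) tuples; lower-priority
    roles are applied first so higher-priority ones overwrite them, and ties
    are broken by created_at (later records win).
    """
    effective = dict(children)                              # weakest layer
    for _, _, perms in sorted(roles, key=lambda r: (r[0], r[1])):
        effective.update(perms)                             # roles, by priority
    effective.update(scoped)                                # strongest layer
    return effective

result = resolve(
    children={"users.view": True, "users.delete": False},
    roles=[(1, "2024-01-01", {"users.delete": True}),
           (2, "2024-01-02", {"users.delete": False})],
    scoped={"users.view": True},
)
print(result["users.delete"])  # False: the higher-priority role wins
```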
---
## Best practices & recommendations
- **Log** concise check summaries at debug level (do not log tokens or sensitive data).
- **Avoid overusing `explicit: true`** - it can silently block wildcard inheritance causing confusing denials.
- **Cache** `Access` per-request (e.g., in `request.state`) or use ParamGuard to prevent multiple DB hits in the same request.
---
## Common pitfalls and how to avoid them
1. **Wildcard permissions not taking effect** - check if the target permission has `_config.explicit: true` in the RBAC Map.
2. **`Permissions must be sorted by created_at` exception** - ensure permissions are sorted as the exception suggests, or turn off this requirement in `RBACConfig` (not recommended).
3. **Heavy DB workload** - cache `Access` for the lifetime of the request or use ParamGuard to do a single fetch.
---
## Thank you for using UndoreRBAC.
Undore <github.com/Undore>
| text/markdown | Undore | deronuno@outlook.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"pyyaml<7.0.0,>=6.0.2",
"pyjwt<3.0.0,>=2.10.1",
"ascender<0.3.0,>=0.2.0",
"packaging<27.0,>=26.0",
"ascender-framework==2.1.0",
"pytz<2026.0,>=2025.2"
] | [] | [] | [] | [] | poetry/2.1.3 CPython/3.13.5 Windows/11 | 2026-02-19T12:36:41.386080 | undore_rbac-1.1.8.tar.gz | 15,627 | 6c/0f/f6d34c3bb779f1d05688a35730bf8b653e5ee7b14599352f0661484b3ea9/undore_rbac-1.1.8.tar.gz | source | sdist | null | false | 2516f8d0768dad174a9858fdbdd3c083 | 55c355dabccb8f022c3705b6dd715b792a60f93853777526083cf7b2336f7a3e | 6c0ff6d34c3bb779f1d05688a35730bf8b653e5ee7b14599352f0661484b3ea9 | null | [] | 260 |
2.4 | gobstopper | 0.3.8 | A simple wrapper delivering complete web framework power - like Wonka's Gobstopper, wrapping RSGI complexity into Flask-like simplicity | # Gobstopper Web Framework 🍬
> *"Like Willy Wonka's Everlasting Gobstopper - a simple wrapper that delivers a complete multi-course meal"*
A **production-ready**, high-performance async web framework built specifically for Granian's RSGI interface. Gobstopper takes the raw power of RSGI and wraps it in a simple, elegant API - giving you a full-featured web framework that's as easy to use as Flask but as fast as raw ASGI/RSGI.
**The Magic**: Just like Wonka's magical candy that contains an entire meal in a single piece, Gobstopper wraps RSGI's complexity into a simple interface while delivering everything you need: routing, templates, WebSockets, background tasks, sessions, security, and more.
## 🎯 Why Gobstopper?
**Simple Wrapper, Complex Power:**
- 🍬 **Simple API**: Flask-like simplicity wrapping RSGI's raw performance
- ⚡️ **RSGI Native**: Direct access to Granian's high-performance RSGI interface
- 🦀 **Rust-Accelerated**: Optional Rust components for routing, templates, and static files
- 🔋 **Batteries Included**: Complete framework - background tasks, WebSockets, sessions, and security
- 🎨 **Familiar Design**: Ergonomic API with modern async/await patterns
- 📦 **Layered Features**: Start simple, add complexity only when you need it
## 🏁 Benchmarks
```
🧪 Testing Gobstopper Benchmark Endpoints
==================================================
✅ Info 1.04ms application/json
✅ JSON 0.44ms application/json
✅ Plaintext 0.35ms text/plain
✅ Single Query 1.51ms application/json
✅ 5 Queries 1.64ms application/json
✅ 3 Updates 2.77ms application/json
✅ Fortunes 2.72ms text/html; charset=utf-8
✅ 10 Cached 11.91ms application/json
```
## 🚀 Features
### 🦀 **Rust-Powered Components**
- **Rust Router**: High-performance path routing with zero-copy parameter extraction
- **Rust Templates**: Blazing-fast Jinja2-compatible rendering with streaming support
- **Rust Static Files**: Ultra-fast static asset serving with intelligent caching
- **Hybrid Architecture**: Seamless fallback to Python components when Rust unavailable
### 🌐 **Core Framework**
- **RSGI Interface**: Built specifically for Granian's high-performance RSGI protocol
- **Type-Safe Validation**: Automatic request validation with msgspec Struct type hints
- **High-Performance JSON**: msgspec-powered JSON parsing and serialization (up to 10x faster)
- **Async/Await**: Full async support throughout the framework stack
- **Background Tasks**: Intelligent task system with DuckDB persistence, priorities, and retries
- **WebSocket Support**: Real-time communication with room management and broadcasting
- **Template Engine**: Jinja2 integration with async support and hot-reload
- **Middleware System**: Static files, CORS, security, and custom middleware
### 🔒 **Security & Production**
- **Security First**: CSRF protection, security headers, rate limiting, input validation
- **Production Ready**: Comprehensive error handling, logging, and monitoring
- **CLI Tools**: Project initialization, task workers, and management commands
- **Cross-Platform**: Native wheels for macOS ARM64, Linux x86_64/ARM64
### ⚡ **Developer Experience**
- **Flash Preview**: One-command mobile testing via `gobstopper run --share` (QR Code generation).
- **Mission Control**: Built-in dashboard (`/_gobstopper`) for checking system health, memory usage, and background tasks.
- **Smart Watcher**: Intelligent file watching that knows about your templates, config, and env files.
- **Error Prism**: Interactive, rich error pages that make debugging a joy.
- **Hot Reload**: Fast, reliable reloader that works with Python code and Rust templates.
## 📦 Installation
```bash
# Basic installation (core framework only)
uv add gobstopper
# With all optional features
uv add "gobstopper[all]"
# Or specific features
uv add "gobstopper[templates,tasks,cli,charts]"
# For production with session backends
uv add "gobstopper[redis,postgres]"
# Development installation
uv add "gobstopper[dev]"
```
### Optional Dependencies
Gobstopper uses optional dependencies to keep the core lightweight:
- **`templates`**: Jinja2 template engine (`jinja2>=3.1.0`)
- **`tasks`**: Background task system with DuckDB persistence (`duckdb>=0.9.0`)
- **`cli`**: Command-line tools for project generation (`click>=8.0.0`)
- **`charts`**: Data visualization support (`pyecharts>=2.0.0`)
- **`redis`**: Redis session storage backend (`redis>=5.0`)
- **`postgres`**: PostgreSQL session storage backend (`asyncpg`)
- **`dev`**: Development tools (pytest, black, ruff, mypy, httpx)
- **`all`**: All optional features except dev dependencies
**Note**: All optional features have graceful fallbacks - the framework will work without them, but specific features will be unavailable.
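Graceful fallbacks of this kind are typically built on a guarded import; a sketch of the pattern (not Gobstopper's actual code, shown here with `jinja2` as the example extra):

```python
# Guarded import: the module works without the extra, but the feature
# that depends on it fails with a helpful message instead of at import time.
try:
    import jinja2
    HAS_TEMPLATES = True
except ImportError:
    jinja2 = None
    HAS_TEMPLATES = False

def render(template_source: str, **ctx) -> str:
    if not HAS_TEMPLATES:
        raise RuntimeError(
            "Template support requires the 'templates' extra: "
            "uv add 'gobstopper[templates]'"
        )
    return jinja2.Template(template_source).render(**ctx)
```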
## 🏃 Quick Start
### Create a New Project
```bash
# Install Gobstopper with CLI tools
uv add "gobstopper[cli]"
# Create new project
uv run gobstopper init my_app
# Navigate and run
cd my_app
uv sync
uv run gobstopper run --reload
```
### Simple Example
```python
from gobstopper import Gobstopper, Request, jsonify
app = Gobstopper(__name__)
@app.get("/")
async def hello(request: Request):
return jsonify({"message": "Hello from Gobstopper!"})
@app.get("/users/<user_id>")
async def get_user(request: Request, user_id: str):
return jsonify({"user_id": user_id, "name": f"User {user_id}"})
# Run with: gobstopper run
# Or: gobstopper run --reload (with auto-reload)
# Or: gobstopper run -w 4 (with 4 workers)
```
## 📚 Examples
### 🌟 Interactive Demo (`example_app.py`)
Complete showcase of all framework features with a web UI:
```bash
uv sync --extra all
granian --interface rsgi --reload example_app:app
```
Visit http://localhost:8000 for interactive demos of:
- HTTP endpoints and routing
- Background task processing
- WebSocket communication
- Security features
- Middleware functionality
### 🧩 Blueprints Demo (`blueprints_demo`)
A blueprint-structured sample app demonstrating nested blueprints, per-blueprint static/templates, WebSockets, background tasks, middleware, and rate limiting.
Run:
```bash
uv sync --extra all
granian --interface rsgi --reload blueprints_demo.app:app
# or:
uv run granian -w 1 -h 0.0.0.0 -p 8080 -r blueprints_demo.app:app
```
Then visit http://localhost:8080/
### 📊 Data Handling (`data_example.py`)
RESTful API demonstrating data operations:
```bash
granian --interface rsgi --reload data_example:app
```
Features:
- CRUD operations with filtering and pagination
- Background data processing
- Real-time analytics
- Task monitoring
### 📈 Benchmarks (`benchmark_simple.py`)
Standard TechEmpower benchmark implementation:
```bash
granian --interface rsgi --workers 4 --threads 2 benchmark_simple:app
```
Benchmark endpoints:
- JSON serialization
- Database queries (simulated)
- Database updates (simulated)
- Plaintext response
- HTML template rendering
- Cached queries
## 🏗️ Architecture
```
src/gobstopper/
├── core/ # Main Gobstopper application class
├── http/ # Request/Response handling & routing
├── websocket/ # WebSocket support & room management
├── tasks/ # Background task system with DuckDB
├── templates/ # Jinja2 template engine
├── middleware/ # Static files, CORS, security
├── cli/ # Command-line tools
└── utils/ # Rate limiting and utilities
```
## 🔧 Key Components
### Application & Type-Safe Validation
```python
from gobstopper import Gobstopper
from msgspec import Struct
app = Gobstopper(__name__, debug=True)
app.init_templates() # Enable Jinja2 templates
# Define data models with automatic validation
class User(Struct):
name: str
email: str
age: int = 0 # Optional field with default
class UpdateUser(Struct):
name: str = None # All fields optional for updates
email: str = None
# Routes with automatic validation
@app.post("/api/users")
async def create_user(request, user: User):
# user is automatically validated and typed!
# No manual request.json() or validation needed
return {"message": f"Created user: {user.name}"}
@app.put("/api/users/<user_id>")
async def update_user(request, user_id: str, updates: UpdateUser):
# Path params + validated body automatically injected
return {"updated": user_id, "changes": updates}
# Manual JSON parsing still available
@app.post("/api/data")
async def manual_data(request):
data = await request.get_json() # msgspec powered
return {"received": data}
# Middleware
from gobstopper.middleware import CORSMiddleware
app.add_middleware(CORSMiddleware(origins=["*"]))
```
### Background Tasks
```python
import os
from gobstopper import should_run_background_workers
# Enable background tasks (required)
os.environ["WOPR_TASKS_ENABLED"] = "1"
@app.task("send_email", "notifications")
async def send_email(to: str, subject: str):
# Task implementation
return {"status": "sent"}
# Queue tasks
task_id = await app.add_background_task(
"send_email", "notifications", TaskPriority.HIGH,
to="user@example.com", subject="Welcome!"
)
# Start workers (only in main process when using multiple workers)
@app.on_startup
async def startup():
if should_run_background_workers():
await app.start_task_workers("notifications", worker_count=2)
```
### WebSocket
```python
@app.websocket("/ws/chat")
async def chat_handler(websocket):
    await websocket.accept()
    while True:
        message = await websocket.receive()
        await websocket.send_text(f"Echo: {message.data}")
```
### Templates
```python
@app.get("/")
async def index(request):
    return await app.render_template("index.html",
                                     message="Hello World!")
```
### File Uploads
```python
from gobstopper import FileStorage, secure_filename, send_from_directory

@app.post("/upload")
async def upload_file(request):
    files = await request.get_files()
    if 'document' in files:
        file: FileStorage = files['document']
        filename = secure_filename(file.filename)
        file.save(f"uploads/{filename}")
        return {"uploaded": filename}
    return {"error": "No file"}, 400

@app.get("/files/<path:filename>")
async def serve_file(request, filename: str):
    return send_from_directory("uploads", filename)
```
### Flask/Quart Convenience Features
```python
from gobstopper import abort, make_response, notification

@app.get("/users/<user_id>")
async def get_user(request, user_id: str):
    if not user_id.isdigit():
        abort(400, "Invalid user ID")
    user = find_user(user_id)  # your own lookup function
    if not user:
        abort(404, "User not found")
    return {"user": user}

@app.post("/users")
async def create_user(request):
    # Flash-style notifications
    notification(request, "User created successfully!", "success")
    # Flexible response building
    response = make_response({"id": 123}, 201, {"X-User-ID": "123"})
    return response
```
## 🛠️ CLI Tools
Gobstopper includes a comprehensive CLI for rapid development and project management:
### 🏃 Running Your Application
```bash
# Basic usage (Flask-like interface)
gobstopper run
# With auto-reload for development
gobstopper run --reload
# Production with multiple workers
gobstopper run -w 4
# Custom host and port
gobstopper run -h 0.0.0.0 -p 3000
# Specific app module
gobstopper run myapp:app
# Load from configuration file
gobstopper run --config dev # Loads dev.json or dev.toml
gobstopper run --config production # Loads production.json or production.toml
# Override config with CLI arguments
gobstopper run --config production -w 8 # Use production config but override workers
# All options
gobstopper run -w 4 -t 2 -h 0.0.0.0 -p 8080 --reload
```
**Configuration Files:**
Create `dev.json`, `production.json`, or use TOML format:
```json
{
"app": "myapp:app",
"host": "0.0.0.0",
"port": 8080,
"workers": 4,
"threads": 2,
"reload": false
}
```
```toml
# production.toml
app = "myapp:app"
host = "0.0.0.0"
port = 8080
workers = 4
threads = 2
reload = false
```
**Platform-Optimized Performance:**
- 🍎 **ARM (Apple Silicon)**: Automatically uses `--runtime-mode st` (single-threaded)
- 💻 **x86_64 (Intel/AMD)**: Automatically uses `--runtime-mode mt` (multi-threaded)
**Built-in Granian Optimizations:**
- `--log-level error`: Minimal logging overhead
- `--backlog 16384`: Large connection backlog for high throughput
- `--loop uvloop`: High-performance event loop
- `--respawn-failed-workers`: Automatic worker recovery
### 🚀 Project Generation
```bash
# Interactive project setup
gobstopper init
# Create specific project types
gobstopper init my-api --usecase data-science --structure modular
gobstopper init my-cms --usecase content-management --structure blueprints
gobstopper init dashboard --usecase real-time-dashboard --structure microservices
gobstopper init simple-app --usecase microservice --structure single
```
**Available Use Cases:**
- **`data-science`**: ML APIs with data processing, model endpoints, and analytics
- **`real-time-dashboard`**: Live dashboards with WebSocket streaming and data visualization
- **`content-management`**: Full CMS with admin interface, user management, and content APIs
- **`microservice`**: Lightweight service architecture for distributed systems
**Available Structures:**
- **`modular`**: Clean separation with modules (recommended for large projects)
- **`blueprints`**: Flask-style blueprints for organized route grouping
- **`microservices`**: Distributed service architecture with service discovery
- **`single`**: Single-file applications for simple projects and prototypes
### ⚡ Component Generation
```bash
# Generate data models with type hints
gobstopper generate model User -f name:str -f email:str -f created_at:datetime -f is_active:bool
# Generate API endpoints with automatic routing
gobstopper generate endpoint /api/users -m GET --auth
gobstopper generate endpoint /api/users -m POST --auth
# Generate background tasks with categories
gobstopper generate task process_data --category data
gobstopper generate task send_notification --category notifications
# Generate WebSocket handlers
gobstopper generate websocket /ws/live --room-based
```
### 🔧 Development Commands
```bash
# Run background task workers
gobstopper run-tasks --categories data,notifications --workers 3
# Clean up old completed tasks
gobstopper cleanup-tasks --days 7
gobstopper cleanup-tasks --months 1
# Version and system info
gobstopper version
```
### 📁 Generated Project Structure
**Modular Structure:**
```
my_app/
├── app.py # Main application
├── config.py # Configuration
├── requirements.txt # Dependencies
├── .env.example # Environment template
├── modules/ # Feature modules
│ ├── auth/ # Authentication
│ ├── api/ # API routes
│ ├── admin/ # Admin interface
│ └── public/ # Public pages
├── models/ # Data models
├── tasks/ # Background tasks
├── templates/ # Jinja2 templates
└── static/ # CSS, JS, images
```
**Blueprint Structure:**
```
my_app/
├── app.py # Main application
├── blueprints/ # Route blueprints
│ ├── auth.py # Auth routes
│ ├── api.py # API routes
│ └── admin.py # Admin routes
└── ...
```
### 🎯 Use Case Features
Each use case generates tailored code:
**Data Science:**
- Model training/inference endpoints
- Data processing pipelines
- Analytics and metrics APIs
- Jupyter notebook integration
**Real-time Dashboard:**
- WebSocket streaming endpoints
- Live data aggregation
- Chart and graph APIs
- Real-time metrics collection
**Content Management:**
- User authentication/authorization
- CRUD operations for content
- Media upload handling
- Admin dashboard interface
**Microservice:**
- Health check endpoints
- Service discovery integration
- Metrics and monitoring
- Minimal dependencies
## 🛡️ Security
- **CSRF Protection**: Built-in CSRF token generation and validation
- **Security Headers**: X-Frame-Options, CSP, HSTS, etc.
- **Rate Limiting**: Configurable rate limiting with decorators
- **Input Validation**: Request data validation and sanitization
- **Static File Security**: Path traversal protection
### JSON Limits (Size & Depth)
- Configure the maximum JSON request body size via the `GOBSTOPPER_JSON_MAX_BYTES` environment variable (in bytes). If exceeded, the request is rejected with HTTP 413 (Payload Too Large).
- Configure the maximum JSON nesting depth via `GOBSTOPPER_JSON_MAX_DEPTH`. If exceeded, the request is rejected with HTTP 400 and a clear error message.
- Limits are applied per request; you can also set `request.max_body_bytes` / `request.max_json_depth` manually in middleware if needed.
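The size and depth checks can be understood with a small sketch. This is illustrative only and not Gobstopper's parser hook; the helper names are ours:

```python
import json

def json_depth(value, depth: int = 1) -> int:
    """Return the nesting depth of a decoded JSON value (scalars count as 1)."""
    if isinstance(value, dict):
        return max((json_depth(v, depth + 1) for v in value.values()), default=depth)
    if isinstance(value, list):
        return max((json_depth(v, depth + 1) for v in value), default=depth)
    return depth

def check_limits(raw: bytes, max_bytes: int, max_depth: int) -> int:
    """Return the HTTP status a limited endpoint would respond with."""
    if len(raw) > max_bytes:
        return 413  # body too large
    if json_depth(json.loads(raw)) > max_depth:
        return 400  # nesting too deep
    return 200
```

Checking the byte length before decoding matters: the size limit protects the parser itself, while the depth limit protects downstream code from pathological nesting.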
### Secure Cookies (Production)
- When `ENV=production`, cookie attributes are enforced by default:
- `Secure=True`, `HttpOnly=True`, `SameSite=Lax` (if not set)
- To explicitly allow insecure cookies in production (not recommended), set `GOBSTOPPER_ALLOW_INSECURE_COOKIES=true`.
- Gobstopper logs a warning when it has to override insecure cookie attributes in production.
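The enforced attributes correspond to a `Set-Cookie` header like the one below, built here with the standard library for illustration (how Gobstopper renders the header internally may differ):

```python
from http.cookies import SimpleCookie

def production_cookie(name: str, value: str) -> str:
    """Render a Set-Cookie value with the attributes enforced in production."""
    cookie = SimpleCookie()
    cookie[name] = value
    cookie[name]["secure"] = True      # only sent over HTTPS
    cookie[name]["httponly"] = True    # not readable from JavaScript
    cookie[name]["samesite"] = "Lax"   # basic CSRF mitigation
    return cookie[name].OutputString()
```

For example, `production_cookie("session", "abc123")` yields a header value containing `Secure`, `HttpOnly`, and `SameSite=Lax`.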
### WebSocket Safety
- Max message size enforced via `MAX_WS_MESSAGE_BYTES` (default: 1 MiB). Oversized messages are closed with code 1009.
- Basic send backpressure with chunked writes (`WS_SEND_CHUNK_BYTES`, default: 64 KiB).
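The two safeguards amount to a size gate on inbound messages and chunked writes on outbound ones. A generic sketch of both (not Gobstopper's internals; the function names are ours):

```python
MAX_WS_MESSAGE_BYTES = 1024 * 1024  # 1 MiB default, per the docs above

def accept_message(payload: bytes) -> bool:
    # Oversized messages should be refused; close code 1009 means "Message Too Big".
    return len(payload) <= MAX_WS_MESSAGE_BYTES

def chunked_frames(payload: bytes, chunk_bytes: int = 64 * 1024):
    """Split an outgoing message into fixed-size chunks for backpressure-friendly sends."""
    for i in range(0, len(payload), chunk_bytes):
        yield payload[i:i + chunk_bytes]
```

Chunking lets the event loop interleave other work between writes instead of buffering one huge send.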
### Basic Rate Limiting
Use the built-in token-bucket limiter:
```python
from gobstopper.utils.rate_limiter import TokenBucketLimiter, rate_limit

limiter = TokenBucketLimiter(rate=5, capacity=10)  # 5 req/sec, burst 10

@app.get('/limited')
@rate_limit(limiter, key=lambda req: req.client_ip)
async def limited(request):
    return {'ok': True}
```
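To make the `rate`/`capacity` semantics concrete, here is a minimal token-bucket sketch. It is not Gobstopper's `TokenBucketLimiter` implementation, just the standard algorithm the parameters refer to:

```python
import time

class SimpleTokenBucket:
    """Allow up to `capacity` burst requests, refilling `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket created with `rate=5, capacity=10` therefore admits a burst of 10 requests, then settles to a steady 5 requests per second.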
### Session Management
Gobstopper includes a production-grade, database-backed session system with a familiar API.
- **Pluggable Backends**: Supports Redis, PostgreSQL, and in-memory storage.
- **Secure by Default**: Optional HMAC-signed session IDs and secure cookie flags.
- **Ergonomic API**: Simple `request.session` access and `response.set_cookie()` helpers.
For more details, see the [Middleware documentation](./docs/core/middleware.md#session-management).
**Note**: The default file-based session storage is not recommended for production, especially in cloud or containerized environments. Use Redis or PostgreSQL for production deployments.
## ⚡ Performance
- **RSGI Interface**: Maximum performance with Granian server
- **Async Throughout**: Non-blocking operations everywhere
- **Background Tasks**: Offload heavy work to background queues
- **Efficient Routing**: Fast path matching with parameter extraction
- **Optional Dependencies**: Load only what you need
## 🧪 Testing
```bash
# Install dev dependencies
uv sync --extra dev
# Run tests (when implemented)
uv run pytest
# Code quality
uv run black . # Format code
uv run ruff check . # Lint code
uv run mypy src/ # Type checking
```
## 📖 Documentation
### Official Documentation
Build and view the complete Sphinx documentation:
```bash
./build_docs.sh
cd sphinx-docs
python -m http.server 8080 -d build/html
# Visit http://localhost:8080
```
Or use live preview:
```bash
cd sphinx-docs
pip install -r requirements.txt
sphinx-autobuild source build/html
# Visit http://127.0.0.1:8000
```
### Additional Resources
- **Example Applications**: Fully commented examples demonstrating all features
- **Inline Documentation**: Comprehensive docstrings and type hints throughout
- **Markdown Docs**: Additional guides in the `docs/` directory
- See the [Changelog](CHANGELOG.md) for release notes
## 🛠️ Building from Source
Gobstopper includes Rust extensions for maximum performance. Build tools are provided:
```bash
# Install build dependencies
uv add --dev maturin build
# 1) Fast dev install of Rust core into your current venv (recommended while iterating)
# Defaults to features: router,templates,static
uv run python dev_install_rust.py --strip
# or explicitly:
MATURIN_FEATURES="router,templates,static" uv run python dev_install_rust.py --strip
# 2) Build wheels for the current platform (drops wheels in ./dist)
python build_wheels.py --platform local --features "router,templates,static"
# 3) Build Linux manylinux wheels for both x86_64 and aarch64 (requires Docker)
python build_wheels.py --platform linux --arch both --features "router,templates,static"
# 4) Build for all platforms
./build_linux_wheels.sh
```
To verify the Rust core is active at runtime, look for these logs on startup:
```
🚀 Found Rust extensions, using high-performance router.
🦀 Rust template engine initialized successfully
```
You can also run:
```bash
python -c "import gobstopper._core as core; print('Symbols:', [s for s in dir(core) if not s.startswith('_')][:20])"
```
## 📦 Distribution Packages
Pre-built wheels available for:
- **macOS ARM64**: Python 3.10, 3.11, 3.12, 3.13
- **Linux x86_64**: Python 3.10, 3.11, 3.12, 3.13
- **Linux ARM64**: Python 3.10, 3.11, 3.12, 3.13
- **Source Distribution**: Universal compatibility
## 🤝 Contributing
Gobstopper is built with modern Python and Rust:
- **Python 3.10+** (3.13 recommended) for latest async improvements
- **Rust** for high-performance components (optional)
- **Type hints** throughout the Python codebase
- **Modular architecture** for easy extension
- **Comprehensive error handling** and logging
- **Security-first design** with defense in depth
## 📄 License
MIT License - see LICENSE file for details.
## 🔗 Links
- **GitHub**: https://github.com/iristech-systems/Gobstopper
- **Documentation**: https://iristech-systems.github.io/Gobstopper-Docs/
- **PyPI**: https://pypi.org/project/gobstopper
---
**Gobstopper** - High-performance async web framework for modern Python web applications. 🎮
| text/markdown; charset=UTF-8; variant=GFM | Gobstopper Framework Team | null | null | null | MIT | web, framework, async, rsgi, granian, background-tasks | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programmin... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp==3.13.0",
"granian[reload]>=2.7.0",
"loguru>=0.7.2",
"msgspec",
"uvloop; sys_platform != \"win32\"",
"winloop; sys_platform == \"win32\"",
"typing-extensions==4.15.0",
"anyio==4.11.0",
"psutil>=5.8.0",
"pyecharts>=2.0.0",
"jinja2>=3.1.0; extra == \"all\"",
"duckdb>=0.9.0; extra == \"a... | [] | [] | [] | [
"Documentation, https://iristech-systems.github.io/Gobstopper-Docs/",
"Homepage, https://github.com/iristech-systems/gobstopper",
"Issues, https://github.com/iristech-systems/gobstopper/issues",
"Repository, https://github.com/iristech-systems/gobstopper"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Pop!_OS","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T12:36:05.721580 | gobstopper-0.3.8.tar.gz | 278,867 | 72/31/8d30bc32240aa53a7eab5576aa182a37c68e3301f3a2f2c4663ad241291c/gobstopper-0.3.8.tar.gz | source | sdist | null | false | 5dc2ca3b93920e011cda59147b59c716 | 47240243fed47c499a163eac1444616a97d24c99bc2a5a3fc010a62169f70d84 | 72318d30bc32240aa53a7eab5576aa182a37c68e3301f3a2f2c4663ad241291c | null | [] | 178 |