metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | soaking | 0.6.1 | soak: graph-based pipelines and tools for LLM-assisted qualitative text analysis | # soak
DAG-based pipelines for LLM-assisted qualitative text analysis.
<img src="https://raw.githubusercontent.com/benwhalley/soak/main/docs/logo-sm.png" width="100">
## Installation
```bash
pip install soaking
```
## Documentation
Full documentation, examples, and sample outputs:
**https://github.com/benwhalley/soak**
## License
AGPL v3 or later
| text/markdown | null | Ben Whalley <ben.whalley@plymouth.ac.uk> | null | null | AGPL-3.0-or-later | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"instructor>=1.10.0",
"jinja2>=3.1.6",
"lark>=1.2.2",
"matplotlib>=3.10.3",
"networkx>=3.5",
"pandas>=2.3.1",
"pdfplumber>=0.11.7",
"pydantic>=2.11.7",
"python-box>=7.3.2",
"python-decouple>=3.8",
"python-docx>=1.2.0",
"python-magic>=0.4.27",
"scikit-learn>=1.7.1",
"scipy>=1.14",
"seaborn>=0.13.2",
"tiktoken>=0.9.0",
"typer>=0.16.0",
"umap-learn",
"asyncpg>=0.30.0",
"jinja-markdown>=1.210911",
"struckdown>=0.3.17",
"nltk>=3.9.2",
"rank-bm25>=0.2.2",
"openpyxl>=3.1.0",
"xlsxwriter>=3.1.0",
"statsmodels>=0.14.0",
"krippendorff>=0.6.0",
"pyirr>=0.84.1.2",
"setuptools>=80.9.0",
"pysbd>=0.3.4",
"tqdm>=4.67.0",
"simpleeval>=1.0.3",
"mkdocs>=1.6.0",
"mkdocs-material>=9.5.0",
"pymdown-extensions>=10.11.0",
"graphviz>=0.20.0",
"pot>=0.9.6.post1",
"pyphen>=0.16.0",
"plotly>=5.18.0",
"tenacity>=8.2.0",
"hdbscan>=0.8.33",
"rpy2>=3.5.0; extra == \"calibration\"",
"transformers>=4.51.0; extra == \"local-ai\"",
"sentence-transformers>=2.5.1; extra == \"local-ai\"",
"struckdown[local]>=0.3.17; extra == \"local-ai\"",
"pygam>=0.12.0; extra == \"local-ai\"",
"scrubadub>=2.0.0; extra == \"scrub\"",
"scrubadub-spacy>=2.0.0; extra == \"scrub\"",
"spacy<3.9,>=3.8.4; extra == \"scrub\"",
"spacy-transformers; extra == \"scrub\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.9 | 2026-02-21T00:07:59.007027 | soaking-0.6.1.tar.gz | 9,358,917 | 59/c6/c99ef58008a867a0efaa60349738bf3af63ae12c1d87d217d006b60a13d3/soaking-0.6.1.tar.gz | source | sdist | null | false | 7badf7ae65dfb52f2f9b12c46db260a2 | f96d19dfe4c208a5df07c5608565264a87900c243cc96d955f6e530ad65088dd | 59c6c99ef58008a867a0efaa60349738bf3af63ae12c1d87d217d006b60a13d3 | null | [
"LICENSE"
] | 208 |
2.4 | delegate-ai | 0.2.9 | Agentic team management system | <p align="center">
<img src="branding/logo.svg" alt="delegate" height="40">
</p>
<p align="center">
<strong>An engineering manager for your AI agents.</strong><br>
<sub>Delegate plans, staffs, coordinates, and delivers — you review the results.</sub>
</p>
<p align="center">
<a href="https://pypi.org/project/delegate-ai/"><img src="https://img.shields.io/pypi/v/delegate-ai" alt="PyPI"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue" alt="MIT License"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.12+-blue" alt="Python 3.12+"></a>
</p>
---
Tools like Cursor, Claude Code, and Copilot are excellent **copilots** — they help you write code faster. But you're still the one driving: one task at a time, synchronous, hands-on.
Delegate is the layer above. It's an **engineering manager** that runs persistent teams of AI agents on your machine. Tell it what you want in plain English — Delegate breaks the work into tasks, assigns agents, manages code reviews between them, and merges the result. You review the output, not write the code.
Spin up a team per project — a backend API, a mobile app, a data pipeline — each with its own agents, repos, and context. Within each project, agents work on multiple tasks in parallel: one builds a feature while another fixes a bug and a third refactors a module. Across projects, teams run independently and simultaneously. You manage a portfolio of work, not a single cursor.
## Quickstart
> **Requires Python 3.12+.** Check with `python3 --version`.
> On macOS: `brew install python@3.13` · On Ubuntu: `sudo apt install python3.12` · Or download from [python.org](https://www.python.org/downloads/).
```bash
pip install -U delegate-ai
delegate start # needs claude code login or ANTHROPIC_API_KEY in ENV
```
That's it. Delegate spins up a team with a manager + 5 engineer agents
and opens the console in your browser. Tell Delegate what to build — it plans the work, assigns agents, and manages delivery. You review the results. Add more projects anytime with `delegate team add`.
> **Note:** Delegate currently works with **local git repositories** — agents commit directly to branches on your machine. Support for remote repositories (GitHub, GitLab), external tools (Slack, Linear), and CI/CD integrations is coming soon.
<!-- To update: drag the mp4 into a GitHub issue comment, copy the URL, paste below -->
<p align="center">
<video src="https://github.com/user-attachments/assets/5d2f5a8f-8bae-45b7-85c9-53ccb1a47fa3" width="800" autoplay loop muted playsinline>
Your browser does not support the video tag.
</video>
</p>
### How is this different from other AI coding tools?
| | Copilots (Cursor, Copilot, Claude Code) | Delegate |
|---|---|---|
| **You are** | The developer — AI assists | The executive — AI delivers |
| **Scope** | One file, one task | Many projects, many tasks in parallel |
| **Context** | Fresh each session | Persistent across weeks of work |
| **Agents** | One, disposable | Teams that coordinate and review each other |
| **Output** | Code suggestions and edits | Reviewed, tested, merge-ready branches |
| **Workflow** | You drive every step | You set direction, check in when you want |
This isn't a replacement for copilots — it's a different level of abstraction. Use Cursor to pair-program on a tricky function. Use Delegate to hand off "build the auth system" and come back to a reviewed PR.
## What happens when you send a task
```
You: "Add a /health endpoint that returns uptime and version"
```
1. **Delegate** (the manager agent) breaks it down, creates tasks, assigns to available engineers
2. **Engineer** gets an isolated git worktree with its own environment (venv, node_modules, etc.), writes the code, runs tests, submits for review
3. **Reviewer** (another agent) checks the diff, runs the test suite, approves or requests changes
4. **You** approve the merge (or set repos to auto-merge)
5. **Merge worker** rebases onto main, runs pre-merge checks, fast-forward merges
Meanwhile, you can send more tasks — Delegate will prioritize, assign, and multiplex across the team. All of this is visible in real-time in the web UI.
## Key features
**Many projects, many tasks, all at once.** Spin up a team per project — each with its own agents, repos, and accumulated context. Within each project, agents tackle multiple tasks in parallel, each in its own git worktree. Across projects, teams run independently. Your throughput scales with the number of teams, not with your attention. Zero cost when a team is idle.
**Persistent teams, not disposable agents.** Create a team once, use it across hundreds of tasks. Agents maintain memory — journals, notes, context files — so they learn your codebase, conventions, and patterns over time. Like a real team, they get better the longer they work together.
**Async by default.** You don't need to sit and watch. Send Delegate a task, close your laptop, come back later. The team keeps working — writing code, reviewing each other, running tests. Check in when you want. This is the fundamental difference from copilots, which require your continuous presence.
**Agents that coordinate, not just execute.** Engineers don't work in isolation. When one agent finishes coding, another reviews the diff and runs the test suite. Tasks flow through `todo → in_progress → in_review → in_approval → merging → done` with agents handling each transition — just like a well-run engineering team.
**Browser UI with real-time visibility.** Watch agents pick up tasks, write code, and review each other's work — live. Approve merges, browse diffs, inspect files, and run shell commands — all from the browser.
**Works with your existing setup.** Delegate reads `claude.md`, `AGENTS.md`, `.cursorrules`, and `.github/copilot-instructions.md` from your repos automatically — no migration needed.
**Real git, real branches.** Each agent works in isolated [git worktrees](https://git-scm.com/docs/git-worktree). Branches are named `delegate/<team>/T0001`. No magic file systems — you can `git log` any branch anytime.
**Isolated environments per task.** Every worktree gets its own environment — Python venvs, Node modules, Rust targets — so agents never step on each other. Delegate auto-detects your project's tooling (pyproject.toml, package.json, Cargo.toml, shell.nix, etc.) and generates `.delegate/setup.sh` and `.delegate/premerge.sh` scripts that reproduce the environment and run tests before merge. Generated scripts use a 3-layer additive install strategy — copy from the main repo, install from system cache, then install with network — all three always run but each is idempotent, so setup is fast and dependency changes are picked up automatically. These are committed to the repo — edit them if the defaults don't fit.
**Customizable workflows.** Define your own task lifecycle in Python:
```python
# Note: importing the built-in stages alongside Stage/workflow is assumed here,
# since the example below references them.
from delegate.workflow import Stage, workflow, Todo, InProgress, InReview, Done

class Deploy(Stage):
    label = "Deploying"

    def enter(self, ctx):
        ctx.run_script("./deploy.sh")

@workflow(name="with-deploy", version=1)
def my_workflow():
    return [Todo, InProgress, InReview, Deploy, Done]
```
**Mix models by role.** All agents default to Claude Sonnet. Override per agent with `--model opus` for tasks requiring stronger reasoning.
**Team charter in markdown.** Set review standards, communication norms, and team values in a markdown file — like an EM setting expectations for the team.
**Built-in shell.** Run any command from the chat with `/shell ls -la`. Output renders inline.
**Installable as an app.** Delegate's web UI is a [Progressive Web App](https://developer.mozilla.org/en-US/docs/Web/Progressive_Web_Apps) — install it from your browser for a native app experience.
## Architecture
```
~/.delegate/
├── members/ # Human identities (from git config)
│ └── nikhil.yaml
├── teams/
│ └── my-project/
│ ├── agents/ # delegate (manager) + engineer agents
│ │ ├── delegate/ # Manager agent — your delegate
│ │ ├── alice/ # Engineer agent with worktrees, logs, memory
│ │ └── bob/
│ ├── repos/ # Symlinks to your real git repos
│ ├── shared/ # Team-wide shared files
│ └── workflows/ # Registered workflow definitions
└── db.sqlite # Messages, tasks, events
```
Agents are [Claude Code](https://docs.anthropic.com/en/docs/claude-code) instances. The Delegate agent is the EM — it reads your messages, breaks down work, assigns tasks, and coordinates the team. Engineers work in git worktrees and communicate through a message bus. The daemon dispatches agent turns as async tasks, multiplexing across the whole team. All storage is local files — plaintext or sqlite.
There's no magic. You can `ls` into any agent's directory and see exactly what they're doing. Worklogs, memory journals, context files — it's all plain text.
## Sandboxing & Permissions
Delegate restricts what agents can do through six independent layers — defense-in-depth so no single bypass compromises the system:
**1. Write-path isolation (`can_use_tool` callback)**
Every agent turn runs with a programmatic guard that inspects each tool call before it executes. The `Edit` and `Write` tools are only allowed to target files inside explicitly permitted directories:
| Role | Allowed write paths |
|------|-------------------|
| Manager | Entire team directory (`~/.delegate/teams/<team>/`) |
| Engineer | Own agent directory, task worktree(s), team `shared/` folder |
Writes outside these paths are denied with an error message — the model sees the denial and can adjust.
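Conceptually, the write-path check reduces to resolving the target path and testing it against a set of allowed roots. A minimal illustrative sketch of that idea (hypothetical names — not Delegate's actual callback):

```python
from pathlib import Path

def write_allowed(target: str, allowed_roots: list[str]) -> bool:
    """Return True if `target` resolves inside one of the allowed roots."""
    resolved = Path(target).resolve()  # resolve symlinks and `..` components
    return any(
        resolved.is_relative_to(Path(root).resolve())
        for root in allowed_roots
    )
```

For an engineer agent, the allowed roots would be its own agent directory, its task worktrees, and the team `shared/` folder; resolving the path first is what defeats `../` traversal tricks.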
The same guard also enforces a **bash deny-list** — commands containing dangerous substrings are blocked before execution:
```
sqlite3, DROP TABLE, DELETE FROM, rm -rf .git
```
This prevents agents from directly manipulating the database or destroying git metadata, even if they attempt it via bash.
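The deny-list itself is just substring matching on the command line before it executes; a hypothetical sketch of the idea (not Delegate's actual implementation):

```python
# Dangerous substrings from the deny-list above.
BASH_DENYLIST = ("sqlite3", "DROP TABLE", "DELETE FROM", "rm -rf .git")

def bash_command_blocked(command: str) -> bool:
    """Block any bash command containing a dangerous substring."""
    return any(pattern in command for pattern in BASH_DENYLIST)
```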
**2. Disallowed git commands (`disallowed_tools`)**
Git commands that could change branch topology, interact with remotes, or rewrite history are hidden from agents entirely at the SDK level:
```
git rebase, git merge, git pull, git push, git fetch,
git checkout, git switch, git reset --hard, git worktree,
git branch, git remote, git filter-branch, git reflog expire
```
Agents never see these tools and cannot invoke them — branch management is handled by Delegate's merge worker instead.
**3. OS-level bash sandbox (macOS Seatbelt / Linux bubblewrap)**
All bash commands run inside an OS-level sandbox provided by Claude Code's native sandboxing. The sandbox restricts filesystem writes to:
- The team's working directory (`~/.delegate/teams/<uuid>/`) — not the entire `DELEGATE_HOME`, so `protected/` and other teams' directories are never writable from bash
- Platform temp directory (`/tmp` on Unix, `%TEMP%` on Windows)
- Each registered repo's `.git/` directory — so `git add` / `git commit` work inside worktrees without opening the repo working tree to arbitrary bash writes. All agents (including managers) get `.git/` access.
Even if the model crafts a bash command that bypasses the tool-level guards, the kernel blocks the write. Agents cannot `git` into unregistered repos (the sandbox blocks writes to their `.git/`), and they cannot write to the working tree of any repo via bash (only `.git/` is allowed).
**4. Network domain allowlist**
Agents' network access is controlled via a domain allowlist stored in `protected/network.yaml` (outside the sandbox, so agents can't tamper with it). By default, common package-manager registries and git forges are allowed (PyPI, npm, crates.io, Go proxy, RubyGems, GitHub, GitLab, Bitbucket). The sandbox proxy blocks outbound connections to anything not on the list.
```bash
delegate network show # View current allowlist
delegate network allow api.example.com # Add a domain
delegate network disallow example.com # Remove a domain
delegate network reset # Restore curated defaults
```
**5. In-process MCP tools (protected data access)**
Agents interact with the database, task system, and mailbox through in-process MCP tools that run inside the daemon (outside the agent sandbox). This means agents never need shell access to `protected/` — all operations go through validated code paths. Agent identity is baked into each tool closure, preventing impersonation: an agent cannot send messages as another agent or access data outside its team.
**6. Daemon-managed worktree lifecycle**
Git operations that modify branch topology — `git worktree add`, `git worktree remove`, branch creation, rebase, and merge — run exclusively in the **daemon process**, which is unsandboxed. Agents never run these commands directly. When a manager creates a task with `--repo`, only the DB record and branch name are saved; the daemon creates the actual worktree before dispatching any turns to the assigned worker. This clean separation means agents can write code and commit inside their worktrees but cannot create, remove, or manipulate worktrees or branches.
Together these six layers mean: the model can only write to directories Delegate explicitly allows, cannot touch your git branch topology, cannot access the database directly, cannot contact unauthorized domains, cannot escape the sandbox even through creative bash commands, and all infrastructure operations happen in a controlled daemon context.
## Configuration
### Environment
```bash
# Required — your Anthropic API key
ANTHROPIC_API_KEY=sk-ant-...
# Optional
DELEGATE_HOME=~/.delegate # Override home directory
```
### CLI commands
```bash
delegate start [--port 3548] [--env-file .env] # Start everything
delegate stop # Stop the daemon
delegate status # Check if running
delegate team add backend --agents 3 --repo /path/to/repo
delegate team list
delegate repo add myteam /path/to/another-repo --test-cmd "pytest -x"
delegate agent add myteam carol --role engineer
delegate workflow init myteam # Register default workflow
delegate workflow add myteam ./my-workflow.py # Register custom workflow
delegate network show # View network allowlist
delegate network allow api.github.com # Allow a domain
delegate network disallow example.com # Remove a domain
delegate network reset # Restore curated defaults
```
### Set Auto Approval
By default, Delegate expects you to do a final code review and give explicit
approval before merging into your local repo's main branch. If you want, you can
enable auto-approval:
```bash
delegate repo set-approval myteam my-repo auto
```
## How it works
The **daemon** is the central loop:
- Polls agent inboxes for unread messages
- Dispatches turns (one turn at a time per agent, many agents in parallel)
- Processes the merge queue
- Serves the web UI and SSE streams
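That dispatch pattern — strictly one turn at a time per agent, with different agents running concurrently — can be sketched with asyncio (illustrative only; `run_turn` and the inbox shape here are made up):

```python
import asyncio

async def run_turn(agent: str, message: str, log: list) -> None:
    # Placeholder for a real agent turn (read inbox, act, write context).
    await asyncio.sleep(0)
    log.append((agent, message))

async def dispatch(inboxes: dict) -> list:
    log: list = []

    async def drain(agent: str, messages: list) -> None:
        for message in messages:  # serial: one turn at a time per agent
            await run_turn(agent, message, log)

    # Concurrent: each agent's drain coroutine runs in parallel with the others.
    await asyncio.gather(*(drain(a, msgs) for a, msgs in inboxes.items()))
    return log
```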
**Agents** are stateless between turns. Each turn:
1. Read inbox messages
2. Execute actions (create tasks, write code, send messages, run commands)
3. Write context summary for next turn
The **workflow engine** is a Python DSL. Each task is stamped with a workflow version at creation. Stages define `enter`/`exit`/`action`/`assign` hooks. Built-in functions (`ctx.setup_worktree()`, `ctx.create_review()`, `ctx.merge_task()`, etc.) handle git operations, reviews, and merging.
## Development
```bash
git clone https://github.com/nikhilgarg28/delegate.git
cd delegate
uv sync
uv run delegate start --foreground
```
### Tests
```bash
# Python tests
uv run pytest tests/ -x -q
# Playwright E2E tests (needs npm install first)
npm install
npx playwright install
npx playwright test
```
## Roadmap
Delegate is under active development. Here's what's coming:
- ~~**Sandboxing & permissions**~~ — ✅ shipped in v0.2.5 (OS-level sandbox + write-path isolation + git command restrictions).
- ~~**Isolated environments**~~ — ✅ shipped in v0.2.7 (a script generates sensible defaults for `.delegate/setup.sh` and `.delegate/premerge.sh`; agents can edit them as needed).
- **More powerful workflows** — conditional transitions, parallel stages, human-in-the-loop checkpoints, and webhook triggers.
- **External tool integrations** — GitHub (PRs, issues), Slack (notifications, commands), Linear (task sync), and CI/CD pipelines (GitHub Actions, etc.).
- **Remote repositories** — push to and pull from remote Git hosts, not just local repos.
- **Exportable team templates** — package a team's configuration (agents, workflows, charter, repo settings) as a shareable template so others can spin up an identical setup in one command.
If any of these are particularly important to you, open an issue — it helps prioritize.
## About
Delegate is built by a solo developer as a side project — and built *with* Delegate. No VC funding, no growth targets — just a tool I wanted for myself and decided to open-source. MIT licensed, free forever.
If you find it useful, star the repo or say hi in an issue. Bug reports and contributions are welcome.
## License
[MIT](LICENSE)
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"anthropic>=0.40.0",
"claude-agent-sdk>=0.1.36",
"click>=8.1",
"fastapi>=0.128.4",
"filetype>=1.2.0",
"python-dotenv>=1.0",
"python-multipart>=0.0.20",
"pyyaml>=6.0.3",
"uvicorn[standard]>=0.40.0",
"watchfiles>=1.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T00:07:54.893512 | delegate_ai-0.2.9.tar.gz | 646,885 | cb/01/3d992e7e37d0f01e98600753087dac8d742de9bc728ed1292ca479175107/delegate_ai-0.2.9.tar.gz | source | sdist | null | false | 168117e3cebc69022804f455fc2aecc3 | 060f8afab6609bfe7f5075e6ec8fcb7e51e771ae885252b23692d45d2fa9fe1e | cb013d992e7e37d0f01e98600753087dac8d742de9bc728ed1292ca479175107 | MIT | [
"LICENSE"
] | 230 |
2.4 | breakpoint-library | 0.1.3 | Local-first decision engine for baseline vs candidate LLM output checks. | # BreakPoint Library
Prevent bad AI releases before they hit production.
You change a model.
The output looks fine.
But:
- Cost jumps +38%.
- A phone number slips into the response.
- The format breaks your downstream parser.
BreakPoint catches it before you deploy.
It runs locally.
Policy evaluation is deterministic from your saved artifacts.
It gives you one clear answer:
`ALLOW` · `WARN` · `BLOCK`
## Quick Example
```bash
breakpoint evaluate baseline.json candidate.json
```
```text
STATUS: BLOCK
Reasons:
- Cost increased by 38% (baseline: 1,000 tokens -> candidate: 1,380)
- Detected US phone number pattern
```
Ship with confidence.
## Lite First (Default)
This is all you need to get started:
```bash
breakpoint evaluate baseline.json candidate.json
```
Lite is local, deterministic, and zero-config. Out of the box:
- Cost: `WARN` at `+20%`, `BLOCK` at `+40%`
- PII: `BLOCK` on first detection (email, phone, credit card)
- Drift: `WARN` at `+35%`, `BLOCK` at `+70%`
- Empty output: always `BLOCK`
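As an illustration, the default cost thresholds map onto a decision like this (a re-statement of the documented defaults, not BreakPoint's actual code):

```python
def cost_status(baseline_tokens: int, candidate_tokens: int) -> str:
    """Map a token-count increase onto the default Lite cost policy."""
    delta = (candidate_tokens - baseline_tokens) / baseline_tokens
    if delta >= 0.40:   # BLOCK at +40%
        return "BLOCK"
    if delta >= 0.20:   # WARN at +20%
        return "WARN"
    return "ALLOW"
```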
## Full Mode (If You Need It)
Add `--mode full` when you need config-driven policies, output contract, latency, presets, or waivers. Full details: `docs/user-guide-full-mode.md`.
```bash
breakpoint evaluate baseline.json candidate.json --mode full --json --fail-on warn
```
## CI First (Recommended)
```bash
breakpoint evaluate baseline.json candidate.json --json --fail-on warn
```
Why this is the default integration path:
- Machine-readable decision payload (`schema_version`, `status`, `reason_codes`, metrics).
- Non-zero exit code on risky changes.
- Easy to wire into existing CI without additional services.
Default policy posture (out of the box, Lite):
- Cost: `WARN` at `+20%`, `BLOCK` at `+40%`
- PII: `BLOCK` on first detection
- Drift: `WARN` at `+35%`, `BLOCK` at `+70%`
### Copy-Paste GitHub Actions Gate
Use the template:
- `examples/ci/github-actions-breakpoint.yml`
Copy it to:
- `.github/workflows/breakpoint-gate.yml`
What `--fail-on warn` means:
- Any `WARN` or `BLOCK` fails the CI step.
- Exit behavior remains deterministic: `ALLOW=0`, `WARN=1`, `BLOCK=2`.
If you only want to fail on `BLOCK`, change:
- `BREAKPOINT_FAIL_ON: warn`
to:
- `BREAKPOINT_FAIL_ON: block`
## Try In 60 Seconds
```bash
pip install -e .
make demo
```
What you should see:
- Scenario A: `BLOCK` (cost spike)
- Scenario B: `BLOCK` (format/contract regression)
- Scenario C: `BLOCK` (PII + verbosity drift)
- Scenario D: `BLOCK` (small prompt change -> cost blowup)
## Four Realistic Examples
Baseline for all examples:
- `examples/install_worthy/baseline.json`
### 1) Cost regression after model swap
```bash
breakpoint evaluate examples/install_worthy/baseline.json examples/install_worthy/candidate_cost_model_swap.json
```
Expected: `BLOCK`
Why it matters: output appears equivalent, but cost increases enough to violate policy.
### 2) Structured-output behavior regression
```bash
breakpoint evaluate examples/install_worthy/baseline.json examples/install_worthy/candidate_format_regression.json
```
Expected: `BLOCK`
Why it matters: candidate drops expected structure and drifts from baseline behavior.
### 3) PII appears in candidate output
```bash
breakpoint evaluate examples/install_worthy/baseline.json examples/install_worthy/candidate_pii_verbosity.json
```
Expected: `BLOCK`
Why it matters: candidate introduces PII and adds verbosity drift.
### 4) Small prompt change -> big cost blowup
```bash
breakpoint evaluate examples/install_worthy/baseline.json examples/install_worthy/candidate_killer_tradeoff.json
```
Expected: `BLOCK`
Why it matters: output still looks workable, but detail-heavy prompt changes plus a model upgrade create large cost and latency increases with output-contract drift.
More scenario details:
- `docs/install-worthy-examples.md`
## CLI
Evaluate two JSON files:
```bash
breakpoint evaluate baseline.json candidate.json
```
Evaluate a single combined JSON file:
```bash
breakpoint evaluate payload.json
```
JSON output for CI/parsing:
```bash
breakpoint evaluate baseline.json candidate.json --json
```
Exit-code gating options:
```bash
# fail on WARN or BLOCK
breakpoint evaluate baseline.json candidate.json --fail-on warn
# fail only on BLOCK
breakpoint evaluate baseline.json candidate.json --fail-on block
```
Stable exit codes:
- `0` = `ALLOW`
- `1` = `WARN`
- `2` = `BLOCK`
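If you gate in your own script rather than via `--fail-on`, the mapping is deterministic; a small hypothetical helper built on the documented exit codes:

```python
# Documented, stable exit-code mapping.
EXIT_TO_STATUS = {0: "ALLOW", 1: "WARN", 2: "BLOCK"}

def gate_passes(exit_code: int, fail_on: str = "warn") -> bool:
    """Mirror `--fail-on` semantics: fail on WARN+BLOCK, or on BLOCK only."""
    status = EXIT_TO_STATUS[exit_code]
    if fail_on == "warn":
        return status == "ALLOW"
    return status != "BLOCK"
```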
Waivers, config, presets: see `docs/user-guide-full-mode.md`.
## Input Schema
Each input JSON is an object with at least:
- `output` (string)
Optional fields used by policies:
- `cost_usd` (number)
- `model` (string)
- `tokens_total` (number)
- `tokens_in` / `tokens_out` (number)
- `latency_ms` (number)
Combined input format:
```json
{
"baseline": { "output": "..." },
"candidate": { "output": "..." }
}
```
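A quick way to assemble that file from two runs (field names follow the schema above; the values are illustrative):

```python
import json

# Illustrative values; only `output` is required, the rest are optional policy inputs.
baseline = {"output": "hello", "tokens_total": 100, "latency_ms": 420}
candidate = {"output": "hello there", "tokens_total": 140, "latency_ms": 510}

with open("payload.json", "w") as f:
    json.dump({"baseline": baseline, "candidate": candidate}, f, indent=2)
```

Then evaluate it with `breakpoint evaluate payload.json`.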
## Python API
```python
from breakpoint import evaluate
decision = evaluate(
baseline_output="hello",
candidate_output="hello there",
metadata={"baseline_tokens": 100, "candidate_tokens": 140},
)
print(decision.status)
print(decision.reasons)
```
## Additional Docs
- `docs/user-guide.md`
- `docs/user-guide-full-mode.md` (Full mode: config, presets, environments, waivers)
- `docs/terminal-output-lite-vs-full.md` (Lite vs Full terminal output, same format)
- `docs/quickstart-10min.md`
- `docs/install-worthy-examples.md`
- `docs/baseline-lifecycle.md`
- `docs/ci-templates.md`
- `docs/value-metrics.md`
- `docs/policy-presets.md`
- `docs/release-gate-audit.md`
## Contact
Suggestions and feedback: [c.holmes.silva@gmail.com](mailto:c.holmes.silva@gmail.com) or [open an issue](https://github.com/cholmess/breakpoint-library/issues).
| text/markdown | null | Christopher Holmes <c.holmes.silva@gmail.com> | null | null | null | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pytest>=8.0.0; extra == \"dev\"",
"sentence-transformers>=2.2.2; extra == \"ml\"",
"torch>=2.0.0; extra == \"ml\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.6 | 2026-02-21T00:07:23.558516 | breakpoint_library-0.1.3.tar.gz | 39,785 | d4/ef/5a0e2c5b3c922fcbcd85a9c1ff0a9bc22c67f172ff236aa5df3cc691be38/breakpoint_library-0.1.3.tar.gz | source | sdist | null | false | a1b6efabb9397cf3801e72966735dc4c | 3eca9841be0698018d2fe748b55d19367cb57196d1cb73115bf821df916872b1 | d4ef5a0e2c5b3c922fcbcd85a9c1ff0a9bc22c67f172ff236aa5df3cc691be38 | null | [] | 141 |
2.4 | earthscope-cli | 1.1.1 | A CLI for the EarthScope API | # EarthScope CLI
A CLI for interacting with EarthScope's APIs
## Getting Started
### Requirements
To use the CLI you must have:
- Registered an account with EarthScope ([sign up now](https://earthscope.org/user/login)). See [here](https://www.earthscope.org/data/authentication) for more information
- Python >= 3.9
### Installation
Install from PyPI
```shell
pip install earthscope-cli
```
### Usage
A new `es` command is available in your terminal. Use `--help` with any command to explore commands and options.
```shell
es --help
es user --help
```
#### Login to your EarthScope account
```shell
es login
```
This will open your browser to a confirmation page with the same code shown on your command line.
If you are on a device that does not have a web browser, you can copy the displayed url in a browser on another device (personal computer, mobile device, etc...) and complete the confirmation there.
The `es login` command will save your token locally. If this token is deleted, you will need to re-authenticate (login) to retrieve your token again.
#### Get your access token
```shell
es user get-access-token
```
The `get-access-token` command will display your access token. If your access token is close to expiration or expired,
the default behavior is to automatically refresh your token.
If you want to manually refresh your token:
```shell
es user refresh-access-token
```
Never share your tokens. If you think your token has been compromised, please revoke your refresh token and re-authenticate (login):
```shell
es user revoke-refresh-token
es login
```
#### Get your user profile
```shell
es user get-profile
```
## Documentation
For detailed usage examples, authentication guides, and advanced features, see the [full documentation](https://docs.earthscope.org/projects/CLI).
## EarthScope SDK
If you would like to use EarthScope APIs from python, please use the [earthscope-sdk](https://gitlab.com/earthscope/public/earthscope-sdk/) directly.
## FAQ/troubleshooting
- **How long does my access token last?**
- Your access token lasts 24 hours. Your refresh token can be used to refresh your access token.
- **How long does my refresh token last?**
- Your refresh token will never expire - unless you are inactive (do not use it) for one year.
If it does expire, you will need to re-authenticate to get a new access and refresh token.
- **What is a refresh token and how does the CLI use it?**
- A refresh token is a special token that is used to renew your access token without you needing to log in again.
The refresh token is issued alongside your access token when you log in, and the `es user get-access-token` command will automatically
renew your access token if it is close to expiration. You can 'manually' refresh your access token by using the command `es user refresh-access-token`.
If your access token is compromised, you can revoke your refresh token using `es user revoke-refresh-token`. Once your access token expires,
it can no longer be renewed and you will need to re-login.
- **Should I hard-code my access token into my script?**
- No. We recommend you use the cli commands to retrieve your access tokens in your scripts.
This way your access token will not be compromised by anyone viewing your script.
The access token only lasts 24 hours and cannot be used afterwards unless refreshed.
| text/markdown | null | EarthScope <data-help@earthscope.org> | null | null | null | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"typer-slim<0.24.0,>=0.17.3",
"typer-di>=0.1.3",
"earthscope-sdk>=1.3.0",
"rich>=13.8.0",
"black; extra == \"dev\"",
"bumpver; extra == \"dev\"",
"build; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest; extra == \"dev\"",
"twine>=6.0.1; extra == \"dev\"",
"pip-tools; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/earthscope/public/earthscope-cli"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-21T00:04:30.904558 | earthscope_cli-1.1.1.tar.gz | 27,212 | 1c/64/3f10dd563cb88a761492bbaf2e393b2eeee08ca8c4b8320fd2445402e85a/earthscope_cli-1.1.1.tar.gz | source | sdist | null | false | a8ab1c7b813c4131cd4c24622b87ac37 | 8f3c35888165a650db270a53ea621b00a9f2a7e687e0202aca58b864d1d9bf66 | 1c643f10dd563cb88a761492bbaf2e393b2eeee08ca8c4b8320fd2445402e85a | Apache-2.0 | [
"LICENSE"
] | 243 |
2.4 | o2-sdk | 0.1.0 | Python SDK for the O2 Exchange — a fully on-chain order book DEX on the Fuel Network | <p align="center">
<img src="https://docs.o2.app/logo.svg" width="80" alt="O2 Exchange">
</p>
<h1 align="center">O2 SDK for Python</h1>
<p align="center">
<a href="https://github.com/o2-exchange/sdks/actions/workflows/ci.yml"><img src="https://github.com/o2-exchange/sdks/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://python.org"><img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="Python 3.10+"></a>
<a href="../../LICENSE"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License: Apache 2.0"></a>
</p>
<p align="center">
Official Python SDK for the <a href="https://o2.app">O2 Exchange</a> — a fully on-chain order book DEX on the Fuel Network.
</p>
---
## Installation
```bash
pip install o2-sdk
```
Or install from source:
```bash
pip install -e sdks/python
```
Requires **Python 3.10+**.
## Quick Start
Recommended first integration path on testnet:
1. Create/load owner wallet
2. Call `setup_account()` (idempotent setup + faucet mint attempt on testnet/devnet)
3. (Optional) Call `top_up_from_faucet()` for an explicit testnet/devnet top-up
4. Create session
5. Place orders
6. Read balances/orders
7. Settle balances back to your trading account: order funds are moved into the market contract during execution and should be swept back after fills or cancellations
```python
import asyncio
from o2_sdk import Network, O2Client, OrderSide
async def main():
client = O2Client(network=Network.TESTNET)
owner = client.generate_wallet()
account = await client.setup_account(owner)
await client.top_up_from_faucet(owner)
await client.create_session(owner=owner, markets=["fFUEL/fUSDC"])
order = await client.create_order("fFUEL/fUSDC", OrderSide.BUY, "0.02", "50")
print(f"order tx={order.tx_id}")
balances = await client.get_balances(account.trade_account_id)
fusdc = balances.get("fUSDC")
print(f"fUSDC balance={fusdc.trading_account_balance if fusdc else 0}")
settle = await client.settle_balance("fFUEL/fUSDC")
print(f"settle tx={settle.tx_id}")
await client.close()
asyncio.run(main())
```
`get_balances(trade_account_id)` returns an aggregated view across the trading account
and market contracts, so `settle_balance(...)` does not necessarily change the aggregate totals.
## Network Configuration
Default network configs:
| Network | REST API | WebSocket | Fuel RPC | Faucet |
|---------|----------|-----------|----------|--------|
| `Network.TESTNET` | `https://api.testnet.o2.app` | `wss://api.testnet.o2.app/v1/ws` | `https://testnet.fuel.network/v1/graphql` | `https://fuel-o2-faucet.vercel.app/api/testnet/mint-v2` |
| `Network.DEVNET` | `https://api.devnet.o2.app` | `wss://api.devnet.o2.app/v1/ws` | `https://devnet.fuel.network/v1/graphql` | `https://fuel-o2-faucet.vercel.app/api/devnet/mint-v2` |
| `Network.MAINNET` | `https://api.o2.app` | `wss://api.o2.app/v1/ws` | `https://mainnet.fuel.network/v1/graphql` | none |
API rate limits: <https://docs.o2.app/api-endpoints-reference.html#rate-limits>.
Use a custom deployment config:
```python
from o2_sdk import NetworkConfig, O2Client
client = O2Client(
custom_config=NetworkConfig(
api_base="https://my-gateway.example.com",
ws_url="wss://my-gateway.example.com/v1/ws",
fuel_rpc="https://mainnet.fuel.network/v1/graphql",
faucet_url=None,
)
)
```
> [!IMPORTANT]
> Mainnet note: there is no faucet; account setup requires an owner wallet that already has funds deposited for trading. SDK-native bridging flows are coming soon.
## Wallet Security
- `generate_wallet()` / `generate_evm_wallet()` use cryptographically secure randomness and are suitable for mainnet key generation.
- For production custody, use external signers (KMS/HSM/hardware wallets) instead of long-lived in-process private keys.
- See `docs/guides/external_signers.rst` for production signer integration.
## Wallet Types and Identifiers
Why choose each wallet type:
- **Fuel-native wallet** — best for interoperability with other apps in the Fuel ecosystem.
- **EVM wallet** — best if you want to reuse existing EVM accounts across chains and simplify bridging from EVM chains.
O2 owner identity model:
- O2 `owner_id` is always a Fuel B256 (`0x` + 64 hex chars).
- Fuel-native wallets already expose that directly as `b256_address`.
- EVM wallets expose both:
- `evm_address` (`0x` + 40 hex chars)
- `b256_address` (`0x` + 64 hex chars)
- For EVM wallets, `b256_address` is the EVM address zero-left-padded to 32 bytes:
- `owner_b256 = 0x000000000000000000000000 + evm_address[2:]`
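The padding rule above can be expressed directly. A standalone sketch of the transformation (illustrative, not the SDK's internal code):

```python
def evm_to_b256(evm_address: str) -> str:
    """Zero-left-pad a 20-byte EVM address (0x + 40 hex chars)
    to a 32-byte Fuel B256 (0x + 64 hex chars)."""
    if not (evm_address.startswith("0x") and len(evm_address) == 42):
        raise ValueError("expected a 0x-prefixed 40-char hex EVM address")
    # 12 bytes (24 hex chars) of zero padding, then the 20-byte EVM address
    return "0x" + "0" * 24 + evm_address[2:]
```

In the SDK this value is already available as the EVM wallet's `b256_address`; the sketch just makes the relationship explicit.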
Identifier usage:
| Context | Identifier |
|---------|------------|
| Owner/account/session APIs | `owner_id` = wallet `b256_address` |
| Trading account state | `trade_account_id` (contract ID) |
| Human-visible EVM identity | `evm_address` |
| Markets | pair (`"fFUEL/fUSDC"`) or `market_id` |
`owner_id` vs `trade_account_id`:
- `owner_id` is wallet identity (`b256_address`) used for ownership/auth and session setup.
- `trade_account_id` is the trading account contract ID used for balances/orders/account state.
- `setup_account(wallet)` links these by creating/fetching the trading account for that owner.
## Features
- **Trading** — Place, cancel, and manage orders with automatic price/quantity scaling
- **Dual-Mode Numeric Inputs** — Pass human values (`"0.02"`, `100.0`) or explicit raw chain integers (`ChainInt(...)`)
- **Strongly Typed** — Enums for order sides/types, dataclasses for actions and order parameters
- **Market Data** — Fetch order book depth, recent trades, OHLCV candles, and ticker data
- **WebSocket Streams** — Real-time depth, order, trade, balance, and nonce updates via `async for`
- **Wallet Support** — Fuel-native and EVM wallets with session-based signing
- **Batch Actions** — Submit up to 5 typed actions per request (cancel + settle + create in one call)
- **Error Handling** — Typed exceptions (`O2Error`, `InvalidSignature`, `RateLimitExceeded`, etc.)
## API Overview
| Method | Description |
|--------|-------------|
| `generate_wallet()` / `load_wallet(pk)` | Create or load a Fuel wallet |
| `generate_evm_wallet()` / `load_evm_wallet(pk)` | Create or load an EVM wallet |
| `setup_account(wallet)` | Idempotent account setup (create + fund + whitelist) |
| `top_up_from_faucet(owner)` | Explicit faucet top-up to the owner's trading account (testnet/devnet) |
| `create_session(owner, markets)` | Create a trading session |
| `create_order(market, side, price, qty)` | Place an order (`price/qty` accept human or `ChainInt`) |
| `cancel_order(order_id, market)` | Cancel a specific order |
| `cancel_all_orders(market)` | Cancel all open orders |
| `settle_balance(market)` | Settle filled order proceeds |
| `actions_for(market)` | Build typed market actions with fluent helpers |
| `batch_actions(actions)` | Submit typed action batch (`MarketActions` or `MarketActionGroup`) |
| `get_markets()` / `get_market(pair)` | Fetch market info |
| `get_depth(market)` / `get_trades(market)` | Order book and trade data |
| `get_balances(account)` / `get_orders(account, market)` | Account data |
| `stream_depth(market)` | Real-time order book stream |
| `stream_orders(account)` / `stream_trades(market)` | Real-time updates |
| `withdraw(owner, asset, amount)` | Withdraw funds |
See [AGENTS.md](AGENTS.md) for the complete API reference with all parameters and types.
## Guides
- [`docs/guides/identifiers.rst`](docs/guides/identifiers.rst)
- [`docs/guides/trading.rst`](docs/guides/trading.rst)
- [`docs/guides/market_data.rst`](docs/guides/market_data.rst)
- [`docs/guides/websocket_streams.rst`](docs/guides/websocket_streams.rst)
- [`docs/guides/error_handling.rst`](docs/guides/error_handling.rst)
- [`docs/guides/external_signers.rst`](docs/guides/external_signers.rst)
## Examples
| Example | Description |
|---------|-------------|
| [`quickstart.py`](examples/quickstart.py) | Connect, create a wallet, place your first order |
| [`market_maker.py`](examples/market_maker.py) | Two-sided quoting loop with cancel/replace |
| [`taker_bot.py`](examples/taker_bot.py) | Monitor depth and take liquidity |
| [`portfolio.py`](examples/portfolio.py) | Multi-market balance tracking and management |
Run an example:
```bash
python examples/quickstart.py
```
## Testing
Unit tests (no network required):
```bash
pytest tests/ -m "not integration" -v
```
Integration tests (requires `O2_PRIVATE_KEY` env var):
```bash
O2_PRIVATE_KEY=0x... pytest tests/test_integration.py -m integration -v --timeout=120
```
Integration tests reuse cached wallets in `sdks/python/.integration-wallets.json` (gitignored)
and only faucet when balances are below a conservative threshold, which improves repeat-run speed.
## AI Agent Integration
See [AGENTS.md](AGENTS.md) for an LLM-optimized reference covering all methods, types, error codes, and common patterns.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.9.0",
"coincurve>=20.0.0",
"pycryptodome>=3.20.0",
"websockets>=12.0",
"mypy>=1.19.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-timeout>=2.2; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.15.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T00:04:30.238922 | o2_sdk-0.1.0.tar.gz | 92,780 | 57/b0/a2a1aee7590bd584dee5bfc244ecb8495b00a5c7fa341a4e3b64c5089a7c/o2_sdk-0.1.0.tar.gz | source | sdist | null | false | 67f4f97dacc27987be451c01ccf50ee2 | b3d9554f36f674e44b932ef9b3a17f2177d69e64ecc02f2ec642f954f721bec6 | 57b0a2a1aee7590bd584dee5bfc244ecb8495b00a5c7fa341a4e3b64c5089a7c | Apache-2.0 | [] | 245 |
2.3 | bwssh | 0.1.2 | Bitwarden-backed SSH agent for Linux | # bwssh
Bitwarden-backed SSH agent for Linux. Store your SSH keys in Bitwarden and use
them seamlessly with any SSH client.
## Features
- **Bitwarden integration**: SSH keys stored securely in your Bitwarden vault
- **Standard SSH agent**: Works with `ssh`, `git`, and any SSH client
- **Systemd integration**: Runs as a user service, starts on login
- **Forwarding protection**: Blocks remote servers from using your keys
- **Optional polkit prompts**: Desktop authorization popups (disabled by default)
## Requirements
- Linux with systemd user services
- Python 3.12+
- Bitwarden CLI (`bw`) installed and logged in
## Installation
```bash
uv sync
```
## Bitwarden CLI
Install the Bitwarden CLI (`bw`) and log in before using bwssh. See
https://bitwarden.com/help/cli/ for installation instructions.
```bash
bw --version
bw login
```
## Quick start
```bash
uv run bwssh install --user-systemd
uv run bwssh start
uv run bwssh unlock
```
```bash
export SSH_AUTH_SOCK=${XDG_RUNTIME_DIR}/bwssh/agent.sock
ssh -T git@github.com
```
## Configuration
Config file: `~/.config/bwssh/config.toml`
### Quick Setup (Recommended)
The easiest way to configure bwssh is to use the init command:
```bash
# First, unlock Bitwarden
export BW_SESSION=$(bw unlock --raw)
# Then run init to auto-discover SSH keys
bwssh config init
```
This will find all SSH keys in your Bitwarden vault and create a config file.
### Manual Setup
If you prefer to configure manually, first find your SSH key IDs:
```bash
bw list items | jq -r '.[] | select(.sshKey != null) | "\(.id) \(.name)"'
```
Then create `~/.config/bwssh/config.toml`:
```toml
[bitwarden]
bw_path = "/full/path/to/bw" # Use 'which bw' to find this
item_ids = [
"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", # your-key-name
]
```
### Full Config Example
```toml
[daemon]
log_level = "INFO"
[bitwarden]
bw_path = "/usr/bin/bw"
item_ids = [
"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
]
[auth]
# Polkit authorization prompts (default: disabled)
require_polkit = false
# Block forwarded agent requests (recommended)
deny_forwarded_by_default = true
[ssh]
allow_ed25519 = true
allow_ecdsa = true
allow_rsa = true
```
### Environment Variables
- `BWSSH_RUNTIME_DIR`: Override socket directory
- `BWSSH_LOG_LEVEL`: Override log level
- `BW_SESSION`: Bitwarden session key (auto-detected by `bwssh unlock`)
## Security
### Default Mode
By default, bwssh allows all local signing requests without prompts. Security comes from:
- **Auto-lock on sleep**: Keys are cleared when your laptop sleeps (enabled by default)
- **Forwarded agent blocking**: Remote servers can't use your keys
- **Manual lock**: Run `bwssh lock` when stepping away
### Polkit Prompts (Optional)
For extra security, enable polkit to show desktop prompts for each signing request:
```toml
[auth]
require_polkit = true
```
This requires installing the polkit policy:
```bash
bwssh install --polkit | sudo tee /usr/share/polkit-1/actions/io.github.reidond.bwssh.policy > /dev/null
```
See `docs/` for detailed polkit setup instructions.
## CLI Commands
```bash
# Daemon control
bwssh start # Start the agent daemon
bwssh stop # Stop the agent daemon
bwssh status # Show daemon status
# Key management
bwssh unlock # Unlock vault and load keys
bwssh lock # Lock agent and clear keys
bwssh sync # Reload keys from Bitwarden
bwssh keys # List loaded SSH keys
# Configuration
bwssh config init # Auto-discover SSH keys and create config
bwssh config show # Show current configuration
# Installation
bwssh install --user-systemd # Install systemd user service
bwssh install --polkit # Print polkit policy file
```
## System Tray
bwssh includes an optional system tray icon (`bwssh tray`) that shows agent
status and provides quick lock/unlock controls. Install with the `gui` extra:
```bash
uv tool install 'bwssh[gui]'
```
### Build dependencies
PyGObject must be compiled from source, which requires system development
packages.
**Fedora / RHEL / CentOS:**
```bash
sudo dnf install gobject-introspection-devel cairo-gobject-devel python3-devel \
gtk3-devel libayatana-appindicator-gtk3
```
**Arch / Manjaro:**
```bash
sudo pacman -S gobject-introspection cairo python gtk3 libayatana-appindicator
```
**openSUSE:**
```bash
sudo zypper install gobject-introspection-devel cairo-devel python3-devel \
gtk3-devel typelib-1_0-AyatanaAppIndicator3-0_1
```
**Debian / Ubuntu:**
```bash
sudo apt install libgirepository1.0-dev libcairo2-dev python3-dev \
libgtk-3-dev libayatana-appindicator3-1 gir1.2-ayatanaappindicator3-0.1
```
Alternatively, skip building from source by using system-installed PyGObject:
```bash
sudo dnf install python3-gobject gtk3 libayatana-appindicator-gtk3 # Fedora
uv tool install --system-site-packages 'bwssh[gui]'
```
## Documentation
Full documentation lives in `docs/` and can be served locally:
```bash
cd docs
bun install
bun run dev
```
## Development
```bash
uv run ruff check .
uv run ruff format .
uv run mypy src tests
uv run pytest
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.1",
"cryptography>=46.0.4",
"dbus-fast>=4.0.0",
"textual>=7.5.0",
"pygobject<3.50,>=3.42.0; extra == \"gui\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T00:02:00.483330 | bwssh-0.1.2.tar.gz | 40,035 | 4d/3e/b07268d4dfb5a06cddaccd16d9ae233b3170607da27f04ca779dbedd34fb/bwssh-0.1.2.tar.gz | source | sdist | null | false | 208e43c6da84f626277c5da5afc822a5 | 630b3615e08b0c8ebf08a761ba0dd7a99b8626f56df66b94e65c3ea7c9bbf7cf | 4d3eb07268d4dfb5a06cddaccd16d9ae233b3170607da27f04ca779dbedd34fb | null | [] | 232 |
2.4 | python-devlog | 2.0 | No more logging in your code business logic with decorators | [](https://github.com/MeGaNeKoS/devlog/releases/latest)
[](https://github.com/MeGaNeKoS/devlog/actions/workflows/python-test.yml)
[](https://github.com/MeGaNeKoS/devlog/actions/workflows/python-publish.yml)


devlog
=====
No more logging in your code's business logic, thanks to Python decorators.
Logging is a very powerful tool for debugging and monitoring your code, but if you add logging
statements often, you will quickly find your code overcrowded with them.
Fortunately, you can avoid this by using Python decorators. This library provides easy logging for your code without
sacrificing readability or maintainability. It also provides stack traces with full local variables, value sanitization,
and async support.
**Requires Python 3.9+**
Installation
------------
```bash
pip install python-devlog
```
How to use
----------
Add the decorator to your function. Depending on when you want to log, you can use:
```python
import logging
from devlog import log_on_start, log_on_end, log_on_error
logging.basicConfig(level=logging.DEBUG)
@log_on_start
@log_on_end
def add(a, b):
return a + b
@log_on_error
def divide(a, b):
return a / b
if __name__ == '__main__':
add(1, b=2)
# INFO:__main__:Start func add with args (1,), kwargs {'b': 2}
# INFO:__main__:Successfully run func add with args (1,), kwargs {'b': 2}
divide("abc", "def")
# ERROR:__main__:Error in func divide with args ('abc', 'def'), kwargs {}
# unsupported operand type(s) for /: 'str' and 'str'.
```
### Async support
All decorators work with async functions automatically:
```python
@log_on_start
@log_on_end
@log_on_error
async def fetch_data(url):
...
```
### Value sanitization
Prevent sensitive values from appearing in logs using `Sensitive` or `sanitize_params`:
```python
from devlog import log_on_start, Sensitive
# Option 1: Wrap the value — function receives the real value, logs show "***"
@log_on_start
def login(username, password):
...
login("admin", Sensitive("hunter2"))
# INFO:__main__:Start func login with args ('admin', '***'), kwargs {}
# Option 2: Auto-redact by parameter name
@log_on_start(sanitize_params={"password", "token", "secret"})
def connect(host, token):
...
connect("example.com", "sk-abc123")
# INFO:__main__:Start func connect with args ('example.com', '***'), kwargs {}
```
`Sensitive` is a transparent proxy — the wrapped function receives the real value. Only devlog log output is redacted.
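A minimal sketch of how such a redacting wrapper might work; the class name `MaskedValue` is hypothetical and this is not devlog's actual implementation:

```python
class MaskedValue:
    """Illustrative stand-in for devlog's Sensitive: it holds the real
    value but renders as '***' wherever it is formatted into a message."""

    def __init__(self, value):
        self.value = value  # the real value, available to the callee

    def __repr__(self):
        return "'***'"  # what appears when args tuples are logged

    def __str__(self):
        return "***"


secret = MaskedValue("hunter2")
print(f"args ({secret!r},)")  # args ('***',)
print(secret.value)           # hunter2
```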
What devlog can do for you
---------------------------
### Decorators
devlog provides three decorators:
- **log_on_start**: Log when the function is called.
- **log_on_end**: Log when the function finishes successfully.
- **log_on_error**: Log when the function raises an exception.
Use variables in messages
=========================
The message given to decorators is treated as a format string which takes the function arguments as format
arguments.
```python
import logging
from devlog import log_on_start
logging.basicConfig(level=logging.DEBUG)
@log_on_start(logging.INFO, 'Start func {callable.__name__} with args {args}, kwargs {kwargs}')
def hello(name):
print("Hello, {}".format(name))
if __name__ == "__main__":
hello("World")
```
Which will print:
```INFO:__main__:Start func hello with args ('World',), kwargs {}```
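The mechanism is ordinary `str.format` applied inside a decorator. A self-contained sketch of the idea, independent of devlog's actual source:

```python
import functools
import logging


def log_on_start_sketch(level=logging.INFO,
                        message="Start func {callable.__name__} with args {args}, kwargs {kwargs}"):
    """Illustrative re-creation of the format-string behavior described above."""
    def decorator(fn):
        logger = logging.getLogger(fn.__module__)

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # The message is a plain format string; the function object and
            # the call arguments are supplied as format variables.
            logger.log(level, message.format(callable=fn, args=args, kwargs=kwargs))
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@log_on_start_sketch()
def hello(name):
    return f"Hello, {name}"
```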
### Documentation
#### Format variables
The following variables are available in the format string:
| Default variable name | Description | LogOnStart | LogOnEnd | LogOnError |
|-----------------------|---------------------------------------------------------|------------|----------|------------|
| callable | The function object | Yes | Yes | Yes |
| *args/kwargs* | The arguments, key arguments passed to the function | Yes | Yes | Yes |
| result | The return value of the function | No | Yes | No |
| error | The error object if the function is finished with error | No | No | Yes |
#### Base arguments
Available arguments in all decorators:
| Argument | Description |
|--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| logger | The logger object. If no logger is given, devlog will create one with the module name where the function is defined. Default is `logging.getLogger(callable.__module__)` |
| handler | A custom log handler object. Only available if no logger object is given. |
| args_kwargs | Set `True` to use `{args}`, `{kwargs}` format, or `False` to use function parameter names. Default `True` |
| callable_format_variable | The format variable name for the callable. Default is `callable` |
| trace_stack | Set to `True` to get the full stack trace. Default is `False` |
| capture_locals | Set to `True` to capture local variables in stack frames. Default is `False` (or `trace_stack` on log_on_error) |
| include_decorator | Set to `True` to include devlog frames in the stack trace. Default is `False` |
| sanitize_params | A set of parameter names to auto-redact in log messages. Default is `None` |
#### log_on_start
| Argument | Description |
|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| level | The level of the log message. Default is `logging.INFO` |
| message | The message to log. Can use `{args}` `{kwargs}` or function parameter names, but not both. Default is `Start func {callable.__name__} with args {args}, kwargs {kwargs}` |
#### log_on_end
| Argument | Description |
|------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| level | The level of the log message. Default is `logging.INFO` |
| message | The message to log. Can use `{args}` `{kwargs}` or function parameter names, but not both. Default is `Successfully run func {callable.__name__} with args {args}, kwargs {kwargs}` |
| result_format_variable | The format variable name for the return value. Default is `result` |
#### log_on_error
| Argument | Description |
|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| level | The level of the log message. Default is `logging.ERROR` |
| message | The message to log. Can use `{args}` `{kwargs}` or function parameter names, but not both. Default is `Error in func {callable.__name__} with args {args}, kwargs {kwargs}\n{error}` |
| on_exceptions | Exception classes to catch and log. Default catches all exceptions. |
| reraise | Whether to reraise the exception after logging. Default is `True` |
| exception_format_variable | The format variable name for the exception. Default is `error` |
### Extras
#### Custom exception hook
Override the default exception hook to write crash logs with local variable capture:
```python
import devlog
devlog.system_excepthook_overwrite() # Overwrite the default exception hook
```
This replaces `sys.excepthook` with devlog's handler, which writes detailed crash information to a file.
| Argument | Description |
|----------|---------------------------------------------------------------|
| out_file | The path to the file to write the crash log. Default is `crash.log` |
| text/markdown | null | めがねこ <neko@meganeko.dev> | null | null | null | clean code, decorators, logging | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/MeGaNeKoS/devlog"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T00:01:14.454663 | python_devlog-2.0.tar.gz | 16,783 | cb/0d/d40a860866e4ad477ce9ffd47a6019d542530d62a78f6b6a406859c91e1e/python_devlog-2.0.tar.gz | source | sdist | null | false | 38a6f61a14ffb85376135648815eb4ae | e7b080c783d3eafbca9822c444de92836d14bd3aa96a017b5b0c5b748aa5bb31 | cb0dd40a860866e4ad477ce9ffd47a6019d542530d62a78f6b6a406859c91e1e | MIT | [
"LICENSE.txt"
] | 239 |
2.4 | yt-dlp | 2026.2.20.235452.dev0 | A feature-rich command-line audio/video downloader | Official repository: <https://github.com/yt-dlp/yt-dlp>
**PS**: Some links in this document will not work since this is a copy of the README.md from GitHub
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
<div align="center">
[](#readme)
[](#installation "Installation")
[](https://pypi.org/project/yt-dlp "PyPI")
[](Maintainers.md#maintainers "Donate")
[](https://discord.gg/H5MNcFW63r "Discord")
[](supportedsites.md "Supported Sites")
[](LICENSE "License")
[](https://github.com/yt-dlp/yt-dlp/actions "CI Status")
[](https://github.com/yt-dlp/yt-dlp/commits "Commit History")
[](https://github.com/yt-dlp/yt-dlp/pulse/monthly "Last activity")
</div>
<!-- MANPAGE: END EXCLUDED SECTION -->
yt-dlp is a feature-rich command-line audio/video downloader with support for [thousands of sites](supportedsites.md). The project is a fork of [youtube-dl](https://github.com/ytdl-org/youtube-dl) based on the now inactive [youtube-dlc](https://github.com/blackjack4494/yt-dlc).
<!-- MANPAGE: MOVE "USAGE AND OPTIONS" SECTION HERE -->
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
* [INSTALLATION](#installation)
* [Detailed instructions](https://github.com/yt-dlp/yt-dlp/wiki/Installation)
* [Release Files](#release-files)
* [Update](#update)
* [Dependencies](#dependencies)
* [Compile](#compile)
* [USAGE AND OPTIONS](#usage-and-options)
* [General Options](#general-options)
* [Network Options](#network-options)
* [Geo-restriction](#geo-restriction)
* [Video Selection](#video-selection)
* [Download Options](#download-options)
* [Filesystem Options](#filesystem-options)
* [Thumbnail Options](#thumbnail-options)
* [Internet Shortcut Options](#internet-shortcut-options)
* [Verbosity and Simulation Options](#verbosity-and-simulation-options)
* [Workarounds](#workarounds)
* [Video Format Options](#video-format-options)
* [Subtitle Options](#subtitle-options)
* [Authentication Options](#authentication-options)
* [Post-processing Options](#post-processing-options)
* [SponsorBlock Options](#sponsorblock-options)
* [Extractor Options](#extractor-options)
* [Preset Aliases](#preset-aliases)
* [CONFIGURATION](#configuration)
* [Configuration file encoding](#configuration-file-encoding)
* [Authentication with netrc](#authentication-with-netrc)
* [Notes about environment variables](#notes-about-environment-variables)
* [OUTPUT TEMPLATE](#output-template)
* [Output template examples](#output-template-examples)
* [FORMAT SELECTION](#format-selection)
* [Filtering Formats](#filtering-formats)
* [Sorting Formats](#sorting-formats)
* [Format Selection examples](#format-selection-examples)
* [MODIFYING METADATA](#modifying-metadata)
* [Modifying metadata examples](#modifying-metadata-examples)
* [EXTRACTOR ARGUMENTS](#extractor-arguments)
* [PLUGINS](#plugins)
* [Installing Plugins](#installing-plugins)
* [Developing Plugins](#developing-plugins)
* [EMBEDDING YT-DLP](#embedding-yt-dlp)
* [Embedding examples](#embedding-examples)
* [CHANGES FROM YOUTUBE-DL](#changes-from-youtube-dl)
* [New features](#new-features)
* [Differences in default behavior](#differences-in-default-behavior)
* [Deprecated options](#deprecated-options)
* [CONTRIBUTING](CONTRIBUTING.md#contributing-to-yt-dlp)
* [Opening an Issue](CONTRIBUTING.md#opening-an-issue)
* [Developer Instructions](CONTRIBUTING.md#developer-instructions)
* [WIKI](https://github.com/yt-dlp/yt-dlp/wiki)
* [FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ)
<!-- MANPAGE: END EXCLUDED SECTION -->
# INSTALLATION
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
[](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.exe)
[](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp)
[](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos)
[](https://pypi.org/project/yt-dlp)
[](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
[](#release-files)
[](https://github.com/yt-dlp/yt-dlp/releases)
<!-- MANPAGE: END EXCLUDED SECTION -->
You can install yt-dlp using [the binaries](#release-files), [pip](https://pypi.org/project/yt-dlp) or a third-party package manager. See [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation) for detailed instructions
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
## RELEASE FILES
#### Recommended
File|Description
:---|:---
[yt-dlp](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp)|Platform-independent [zipimport](https://docs.python.org/3/library/zipimport.html) binary. Needs Python (recommended for **Linux/BSD**)
[yt-dlp.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.exe)|Windows (Win8+) standalone x64 binary (recommended for **Windows**)
[yt-dlp_macos](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos)|Universal MacOS (10.15+) standalone executable (recommended for **MacOS**)
#### Alternatives
File|Description
:---|:---
[yt-dlp_linux](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux)|Linux (glibc 2.17+) standalone x86_64 binary
[yt-dlp_linux.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux.zip)|Unpackaged Linux (glibc 2.17+) x86_64 executable (no auto-update)
[yt-dlp_linux_aarch64](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_aarch64)|Linux (glibc 2.17+) standalone aarch64 binary
[yt-dlp_linux_aarch64.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_aarch64.zip)|Unpackaged Linux (glibc 2.17+) aarch64 executable (no auto-update)
[yt-dlp_linux_armv7l.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_armv7l.zip)|Unpackaged Linux (glibc 2.31+) armv7l executable (no auto-update)
[yt-dlp_musllinux](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_musllinux)|Linux (musl 1.2+) standalone x86_64 binary
[yt-dlp_musllinux.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_musllinux.zip)|Unpackaged Linux (musl 1.2+) x86_64 executable (no auto-update)
[yt-dlp_musllinux_aarch64](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_musllinux_aarch64)|Linux (musl 1.2+) standalone aarch64 binary
[yt-dlp_musllinux_aarch64.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_musllinux_aarch64.zip)|Unpackaged Linux (musl 1.2+) aarch64 executable (no auto-update)
[yt-dlp_x86.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_x86.exe)|Windows (Win8+) standalone x86 (32-bit) binary
[yt-dlp_win_x86.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win_x86.zip)|Unpackaged Windows (Win8+) x86 (32-bit) executable (no auto-update)
[yt-dlp_arm64.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_arm64.exe)|Windows (Win10+) standalone ARM64 binary
[yt-dlp_win_arm64.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win_arm64.zip)|Unpackaged Windows (Win10+) ARM64 executable (no auto-update)
[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows (Win8+) x64 executable (no auto-update)
[yt-dlp_macos.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos.zip)|Unpackaged MacOS (10.15+) executable (no auto-update)
#### Misc
File|Description
:---|:---
[yt-dlp.tar.gz](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)|Source tarball
[SHA2-512SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-512SUMS)|GNU-style SHA512 sums
[SHA2-512SUMS.sig](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-512SUMS.sig)|GPG signature file for SHA512 sums
[SHA2-256SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS)|GNU-style SHA256 sums
[SHA2-256SUMS.sig](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS.sig)|GPG signature file for SHA256 sums
The public key that can be used to verify the GPG signatures is [available here](https://github.com/yt-dlp/yt-dlp/blob/master/public.key)
Example usage:
```
curl -L https://github.com/yt-dlp/yt-dlp/raw/master/public.key | gpg --import
gpg --verify SHA2-256SUMS.sig SHA2-256SUMS
gpg --verify SHA2-512SUMS.sig SHA2-512SUMS
```
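The `SHA2-256SUMS` file follows the GNU coreutils format: one `<hex digest>  <filename>` pair per line. As an illustration of that format (a hypothetical helper, not part of yt-dlp), checking a downloaded file against the sums file could be sketched in Python:

```python
import hashlib
from pathlib import Path

def verify_sha256(sums_path: str, target: str) -> bool:
    """Check `target` against a GNU-style SHA256 sums file."""
    expected = None
    for line in Path(sums_path).read_text().splitlines():
        # GNU format: "<hex digest>  <filename>" (separated by two spaces)
        digest, _, name = line.strip().partition("  ")
        if name == Path(target).name:
            expected = digest
            break
    if expected is None:
        raise KeyError(f"{Path(target).name} not listed in {sums_path}")
    actual = hashlib.sha256(Path(target).read_bytes()).hexdigest()
    return actual == expected
```

From the shell, `sha256sum -c SHA2-256SUMS` performs the equivalent check; note that checksum verification complements, but does not replace, the GPG signature verification shown above.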
#### Licensing
While yt-dlp is licensed under the [Unlicense](LICENSE), many of the release files contain code from other projects with different licenses.
Most notably, the PyInstaller-bundled executables include GPLv3+ licensed code, and as such the combined work is licensed under [GPLv3+](https://www.gnu.org/licenses/gpl-3.0.html).
The zipimport Unix executable (`yt-dlp`) contains [ISC](https://github.com/meriyah/meriyah/blob/main/LICENSE.md) licensed code from [`meriyah`](https://github.com/meriyah/meriyah) and [MIT](https://github.com/davidbonnet/astring/blob/main/LICENSE) licensed code from [`astring`](https://github.com/davidbonnet/astring).
See [THIRD_PARTY_LICENSES.txt](THIRD_PARTY_LICENSES.txt) for more details.
The git repository, the source tarball (`yt-dlp.tar.gz`), the PyPI source distribution and the PyPI built distribution (wheel) only contain code licensed under the [Unlicense](LICENSE).
<!-- MANPAGE: END EXCLUDED SECTION -->
**Note**: The manpages, shell completion (autocomplete) files etc. are available inside the [source tarball](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
## UPDATE
You can use `yt-dlp -U` to update if you are using the [release binaries](#release-files)
If you [installed with pip](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program
For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation#third-party-package-managers) or refer to their documentation
<a id="update-channels"></a>
There are currently three release channels for binaries: `stable`, `nightly` and `master`.
* `stable` is the default channel, and many of its changes have been tested by users of the `nightly` and `master` channels.
* The `nightly` channel has releases scheduled to build every day around midnight UTC, for a snapshot of the project's new patches and changes. This is the **recommended channel for regular users** of yt-dlp. The `nightly` releases are available from [yt-dlp/yt-dlp-nightly-builds](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases) or as development releases of the `yt-dlp` PyPI package (which can be installed with pip's `--pre` flag).
* The `master` channel features releases that are built after each push to the master branch, and these will have the very latest fixes and additions, but may also be more prone to regressions. They are available from [yt-dlp/yt-dlp-master-builds](https://github.com/yt-dlp/yt-dlp-master-builds/releases).
When using `--update`/`-U`, a release binary will only update to its current channel.
`--update-to CHANNEL` can be used to switch to a different channel when a newer version is available. `--update-to [CHANNEL@]TAG` can also be used to upgrade or downgrade to specific tags from a channel.
You may also use `--update-to <repository>` (`<owner>/<repository>`) to update to a channel on a completely different repository. Be careful about which repository you are updating to, though; no verification is done for binaries from other repositories.
Example usage:
* `yt-dlp --update-to master` switches to the `master` channel and updates to its latest release
* `yt-dlp --update-to stable@2023.07.06` upgrades/downgrades to `stable` channel tag `2023.07.06`
* `yt-dlp --update-to 2023.10.07` upgrades/downgrades to tag `2023.10.07` if it exists on the current channel
* `yt-dlp --update-to example/yt-dlp@2023.09.24` upgrades/downgrades to the release from the `example/yt-dlp` repository, tag `2023.09.24`
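To make the `[CHANNEL@]TAG` grammar above concrete, here is a small parser sketch. It is a hypothetical illustration, not code from yt-dlp, and it assumes the channel names listed above:

```python
KNOWN_CHANNELS = {"stable", "nightly", "master"}

def parse_update_target(spec: str, current_channel: str = "stable"):
    """Split an --update-to argument into (channel_or_repo, tag)."""
    if "@" in spec:
        # Explicit CHANNEL@TAG or owner/repo@TAG form
        channel, _, tag = spec.partition("@")
        return channel, tag
    if spec in KNOWN_CHANNELS or "/" in spec:
        # A bare channel or repository name: take its latest release
        return spec, "latest"
    # A bare tag resolves against the channel currently in use
    return current_channel, spec
```

For example, `parse_update_target("stable@2023.07.06")` yields `("stable", "2023.07.06")`, mirroring the second bullet above.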
**Important**: Any user experiencing an issue with the `stable` release should install or update to the `nightly` release before submitting a bug report:
```
# To update to nightly from stable executable/binary:
yt-dlp --update-to nightly
# To install nightly with pip:
python -m pip install -U --pre "yt-dlp[default]"
```
When running a yt-dlp version that is older than 90 days, you will see a warning message suggesting to update to the latest version.
You can suppress this warning by adding `--no-update` to your command or configuration file.
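For example, assuming the per-user configuration file location on Linux/macOS (`~/.config/yt-dlp/config`), the warning can be suppressed persistently by adding the flag there:

```
# ~/.config/yt-dlp/config
--no-update
```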
## DEPENDENCIES
Python versions 3.10+ (CPython) and 3.11+ (PyPy) are supported. Other versions and implementations may or may not work correctly.
<!-- Python 3.5+ uses VC++14 and it is already embedded in the binary created
<!x-- https://www.microsoft.com/en-us/download/details.aspx?id=26999 --x>
On Windows, [Microsoft Visual C++ 2010 SP1 Redistributable Package (x86)](https://download.microsoft.com/download/1/6/5/165255E7-1014-4D0A-B094-B6A430A6BFFC/vcredist_x86.exe) is also necessary to run yt-dlp. You probably already have this, but if the executable throws an error due to missing `MSVCR100.dll` you need to install it manually.
-->
While all the other dependencies are optional, `ffmpeg`, `ffprobe`, `yt-dlp-ejs` and a supported JavaScript runtime/engine are highly recommended
### Strongly recommended
* [**ffmpeg** and **ffprobe**](https://www.ffmpeg.org) - Required for [merging separate video and audio files](#format-selection), as well as for various [post-processing](#post-processing-options) tasks. License [depends on the build](https://www.ffmpeg.org/legal.html)
There are bugs in ffmpeg that cause various issues when used alongside yt-dlp. Since ffmpeg is such an important dependency, we provide [custom builds](https://github.com/yt-dlp/FFmpeg-Builds#ffmpeg-static-auto-builds) with patches for some of these issues at [yt-dlp/FFmpeg-Builds](https://github.com/yt-dlp/FFmpeg-Builds). See [the readme](https://github.com/yt-dlp/FFmpeg-Builds#patches-applied) for details on the specific issues solved by these builds
**Important**: What you need is the ffmpeg *binary*, **NOT** [the Python package of the same name](https://pypi.org/project/ffmpeg)
* [**yt-dlp-ejs**](https://github.com/yt-dlp/ejs) - Required for full YouTube support. Licensed under [Unlicense](https://github.com/yt-dlp/ejs/blob/main/LICENSE), bundles [MIT](https://github.com/davidbonnet/astring/blob/main/LICENSE) and [ISC](https://github.com/meriyah/meriyah/blob/main/LICENSE.md) components.
A JavaScript runtime/engine like [**deno**](https://deno.land) (recommended), [**node.js**](https://nodejs.org), [**bun**](https://bun.sh), or [**QuickJS**](https://bellard.org/quickjs/) is also required to run yt-dlp-ejs. See [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/EJS).
### Networking
* [**certifi**](https://github.com/certifi/python-certifi)\* - Provides Mozilla's root certificate bundle. Licensed under [MPLv2](https://github.com/certifi/python-certifi/blob/master/LICENSE)
* [**brotli**](https://github.com/google/brotli)\* or [**brotlicffi**](https://github.com/python-hyper/brotlicffi) - [Brotli](https://en.wikipedia.org/wiki/Brotli) content encoding support. Both licensed under MIT <sup>[1](https://github.com/google/brotli/blob/master/LICENSE) [2](https://github.com/python-hyper/brotlicffi/blob/master/LICENSE) </sup>
* [**websockets**](https://github.com/aaugustin/websockets)\* - For downloading over websocket. Licensed under [BSD-3-Clause](https://github.com/aaugustin/websockets/blob/main/LICENSE)
* [**requests**](https://github.com/psf/requests)\* - HTTP library. For HTTPS proxy and persistent connections support. Licensed under [Apache-2.0](https://github.com/psf/requests/blob/main/LICENSE)
#### Impersonation
The following provide support for impersonating browser requests. This may be required for some sites that employ TLS fingerprinting.
* [**curl_cffi**](https://github.com/lexiforest/curl_cffi) (recommended) - Python binding for [curl-impersonate](https://github.com/lexiforest/curl-impersonate). Provides impersonation targets for Chrome, Edge and Safari. Licensed under [MIT](https://github.com/lexiforest/curl_cffi/blob/main/LICENSE)
* Can be installed with the `curl-cffi` extra, e.g. `pip install "yt-dlp[default,curl-cffi]"`
* Currently included in most builds *except* `yt-dlp` (Unix zipimport binary), `yt-dlp_x86` (Windows 32-bit) and `yt-dlp_musllinux_aarch64`
### Metadata
* [**mutagen**](https://github.com/quodlibet/mutagen)\* - For `--embed-thumbnail` in certain formats. Licensed under [GPLv2+](https://github.com/quodlibet/mutagen/blob/master/COPYING)
* [**AtomicParsley**](https://github.com/wez/atomicparsley) - For `--embed-thumbnail` in `mp4`/`m4a` files when `mutagen`/`ffmpeg` cannot. Licensed under [GPLv2+](https://github.com/wez/atomicparsley/blob/master/COPYING)
* [**xattr**](https://github.com/xattr/xattr), [**pyxattr**](https://github.com/iustin/pyxattr) or [**setfattr**](http://savannah.nongnu.org/projects/attr) - For writing xattr metadata (`--xattrs`) on **Mac** and **BSD**. Licensed under [MIT](https://github.com/xattr/xattr/blob/master/LICENSE.txt), [LGPL2.1](https://github.com/iustin/pyxattr/blob/master/COPYING) and [GPLv2+](http://git.savannah.nongnu.org/cgit/attr.git/tree/doc/COPYING) respectively
### Misc
* [**pycryptodomex**](https://github.com/Legrandin/pycryptodome)\* - For decrypting AES-128 HLS streams and various other data. Licensed under [BSD-2-Clause](https://github.com/Legrandin/pycryptodome/blob/master/LICENSE.rst)
* [**phantomjs**](https://github.com/ariya/phantomjs) - Used in some extractors where JavaScript needs to be run. No longer used for YouTube. To be deprecated in the near future. Licensed under [BSD-3-Clause](https://github.com/ariya/phantomjs/blob/master/LICENSE.BSD)
* [**secretstorage**](https://github.com/mitya57/secretstorage)\* - For `--cookies-from-browser` to access the **Gnome** keyring while decrypting cookies of **Chromium**-based browsers on **Linux**. Licensed under [BSD-3-Clause](https://github.com/mitya57/secretstorage/blob/master/LICENSE)
* Any external downloader that you want to use with `--downloader`
### Deprecated
* [**rtmpdump**](http://rtmpdump.mplayerhq.hu) - For downloading `rtmp` streams. ffmpeg can be used instead with `--downloader ffmpeg`. Licensed under [GPLv2+](http://rtmpdump.mplayerhq.hu)
* [**mplayer**](http://mplayerhq.hu/design7/info.html) or [**mpv**](https://mpv.io) - For downloading `rtsp`/`mms` streams. ffmpeg can be used instead with `--downloader ffmpeg`. Licensed under [GPLv2+](https://github.com/mpv-player/mpv/blob/master/Copyright)
To use or redistribute the dependencies, you must agree to their respective licensing terms.
The standalone release binaries are built with the Python interpreter and the packages marked with **\*** included.
If you do not have the necessary dependencies for a task you are attempting, yt-dlp will warn you. All the currently available dependencies are visible at the top of the `--verbose` output
## COMPILE
### Standalone PyInstaller Builds
To build the standalone executable, you must have Python and `pyinstaller` (plus any of yt-dlp's [optional dependencies](#dependencies) if needed). The executable will be built for the same CPU architecture as the Python used.
You can run the following commands:
```
python devscripts/install_deps.py --include-extra pyinstaller
python devscripts/make_lazy_extractors.py
python -m bundle.pyinstaller
```
On some systems, you may need to use `py` or `python3` instead of `python`.
`python -m bundle.pyinstaller` accepts any arguments that can be passed to `pyinstaller`, such as `--onefile/-F` or `--onedir/-D`, which is further [documented here](https://pyinstaller.org/en/stable/usage.html#what-to-generate).
**Note**: Pyinstaller versions below 4.4 [do not support](https://github.com/pyinstaller/pyinstaller#requirements-and-tested-platforms) Python installed from the Windows store without using a virtual environment.
**Important**: Running `pyinstaller` directly **instead of** using `python -m bundle.pyinstaller` is **not** officially supported. This may or may not work correctly.
### Platform-independent Binary (UNIX)
You will need the build tools `python` (3.10+), `zip`, `make` (GNU), `pandoc`\* and `pytest`\*.
After installing these, simply run `make`.
You can also run `make yt-dlp` instead to compile only the binary without updating any of the additional files. (The build tools marked with **\*** are not needed for this)
### Related scripts
* **`devscripts/install_deps.py`** - Install dependencies for yt-dlp.
* **`devscripts/update-version.py`** - Update the version number based on the current date.
* **`devscripts/set-variant.py`** - Set the build variant of the executable.
* **`devscripts/make_changelog.py`** - Create a markdown changelog using short commit messages and update `CONTRIBUTORS` file.
* **`devscripts/make_lazy_extractors.py`** - Create lazy extractors. Running this before building the binaries (any variant) will improve their startup performance. Set the environment variable `YTDLP_NO_LAZY_EXTRACTORS` to something nonempty to forcefully disable lazy extractor loading.
Note: See their `--help` for more info.
### Forking the project
If you fork the project on GitHub, you can run your fork's [build workflow](.github/workflows/build.yml) to automatically build the selected version(s) as artifacts. Alternatively, you can run the [release workflow](.github/workflows/release.yml) or enable the [nightly workflow](.github/workflows/release-nightly.yml) to create full (pre-)releases.
# USAGE AND OPTIONS
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
yt-dlp [OPTIONS] [--] URL [URL...]
Tip: Use `CTRL`+`F` (or `Command`+`F`) to search by keywords
<!-- MANPAGE: END EXCLUDED SECTION -->
<!-- Auto generated -->
## General Options:
-h, --help Print this help text and exit
--version Print program version and exit
-U, --update Update this program to the latest version
--no-update Do not check for updates (default)
--update-to [CHANNEL]@[TAG] Upgrade/downgrade to a specific version.
CHANNEL can be a repository as well. CHANNEL
and TAG default to "nightly" and "latest"
respectively if omitted; See "UPDATE" for
details. Supported channels: stable,
nightly, master
-i, --ignore-errors Ignore download and postprocessing errors.
The download will be considered successful
even if the postprocessing fails
--no-abort-on-error Continue with next video on download errors;
e.g. to skip unavailable videos in a
playlist (default)
--abort-on-error Abort downloading of further videos if an
error occurs (Alias: --no-ignore-errors)
--list-extractors List all supported extractors and exit
--extractor-descriptions Output descriptions of all supported
extractors and exit
--use-extractors NAMES Extractor names to use separated by commas.
You can also use regexes, "all", "default"
and "end" (end URL matching); e.g. --ies
"holodex.*,end,youtube". Prefix the name
with a "-" to exclude it, e.g. --ies
default,-generic. Use --list-extractors for
a list of extractor names. (Alias: --ies)
--default-search PREFIX Use this prefix for unqualified URLs. E.g.
"gvsearch2:python" downloads two videos from
google videos for the search term "python".
Use the value "auto" to let yt-dlp guess
("auto_warning" to emit a warning when
guessing). "error" just throws an error. The
default value "fixup_error" repairs broken
URLs, but emits an error if this is not
possible instead of searching
--ignore-config Don't load any more configuration files
except those given to --config-locations.
For backward compatibility, if this option
is found inside the system configuration
file, the user configuration is not loaded.
(Alias: --no-config)
--no-config-locations Do not load any custom configuration files
(default). When given inside a configuration
file, ignore all previous --config-locations
defined in the current file
--config-locations PATH Location of the main configuration file;
either the path to the config or its
containing directory ("-" for stdin). Can be
used multiple times and inside other
configuration files
--plugin-dirs DIR Path to an additional directory to search
for plugins. This option can be used
multiple times to add multiple directories.
Use "default" to search the default plugin
directories (default)
--no-plugin-dirs Clear plugin directories to search,
including defaults and those provided by
previous --plugin-dirs
--js-runtimes RUNTIME[:PATH] Additional JavaScript runtime to enable,
with an optional location for the runtime
(either the path to the binary or its
containing directory). This option can be
used multiple times to enable multiple
runtimes. Supported runtimes are (in order
of priority, from highest to lowest): deno,
node, quickjs, bun. Only "deno" is enabled
by default. The highest priority runtime
that is both enabled and available will be
used. In order to use a lower priority
runtime when "deno" is available, --no-js-
runtimes needs to be passed before enabling
other runtimes
--no-js-runtimes Clear JavaScript runtimes to enable,
including defaults and those provided by
previous --js-runtimes
--remote-components COMPONENT Remote components to allow yt-dlp to fetch
when required. This option is currently not
needed if you are using an official
executable or have the requisite version of
the yt-dlp-ejs package installed. You can
use this option multiple times to allow
multiple components. Supported values:
ejs:npm (external JavaScript components from
npm), ejs:github (external JavaScript
components from yt-dlp-ejs GitHub). By
default, no remote components are allowed
--no-remote-components Disallow fetching of all remote components,
including any previously allowed by
--remote-components or defaults.
--flat-playlist Do not extract a playlist's URL result
entries; some entry metadata may be missing
and downloading may be bypassed
--no-flat-playlist Fully extract the videos of a playlist
(default)
--live-from-start Download livestreams from the start.
Currently experimental and only supported
for YouTube, Twitch, and TVer
--no-live-from-start Download livestreams from the current time
(default)
--wait-for-video MIN[-MAX] Wait for scheduled streams to become
available. Pass the minimum number of
seconds (or range) to wait between retries
--no-wait-for-video Do not wait for scheduled streams (default)
--mark-watched Mark videos watched (even with --simulate)
--no-mark-watched Do not mark videos watched (default)
--color [STREAM:]POLICY Whether to emit color codes in output,
optionally prefixed by the STREAM (stdout or
stderr) to apply the setting to. Can be one
of "always", "auto" (default), "never", or
"no_color" (use non color terminal
sequences). Use "auto-tty" or "no_color-tty"
to decide based on terminal support only.
Can be used multiple times
--compat-options OPTS Options that can help keep compatibility
with youtube-dl or youtube-dlc
configurations by reverting some of the
changes made in yt-dlp. See "Differences in
default behavior" for details
--alias ALIASES OPTIONS Create aliases for an option string. Unless
an alias starts with a dash "-", it is
prefixed with "--". Arguments are parsed
according to the Python string formatting
mini-language. E.g. --alias get-audio,-X "-S
aext:{0},abr -x --audio-format {0}" creates
options "--get-audio" and "-X" that takes an
argument (ARG0) and expands to "-S
aext:ARG0,abr -x --audio-format ARG0". All
defined aliases are listed in the --help
output. Alias options can trigger more
aliases; so be careful to avoid defining
recursive options. As a safety measure, each
alias may be triggered a maximum of 100
times. This option can be used multiple times
-t, --preset-alias PRESET Applies a predefined set of options. e.g.
--preset-alias mp3. The following presets
are available: mp3, aac, mp4, mkv, sleep.
See the "Preset Aliases" section at the end
for more info. This option can be used
multiple times
## Network Options:
--proxy URL Use the specified HTTP/HTTPS/SOCKS proxy. To
enable SOCKS proxy, specify a proper scheme,
e.g. socks5://user:pass@127.0.0.1:1080/.
Pass in an empty string (--proxy "") for
direct connection
--socket-timeout SECONDS Time to wait before giving up, in seconds
--source-address IP Client-side IP address to bind to
--impersonate CLIENT[:OS] Client to impersonate for requests. E.g.
chrome, chrome-110, chrome:windows-10. Pass
--impersonate="" to impersonate any client.
Note that forcing impersonation for all
requests may have a detrimental impact on
download speed and stability
--list-impersonate-targets List available clients to impersonate.
-4, --force-ipv4 Make all connections via IPv4
-6, --force-ipv6 Make all connections via IPv6
--enable-file-urls Enable file:// URLs. This is disabled by
default for security reasons.
## Geo-restriction:
--geo-verification-proxy URL Use this proxy to verify the IP address for
some geo-restricted sites. The default proxy
specified by --proxy (or none, if the option
is not present) is used for the actual
downloading
--xff VALUE How to fake X-Forwarded-For HTTP header to
try bypassing geographic restriction. One of
"default" (only when known to be useful),
"never", an IP block in CIDR notation, or a
two-letter ISO 3166-2 country code
## Video Selection:
-I, --playlist-items ITEM_SPEC Comma-separated playlist_index of the items
to download. You can specify a range using
"[START]:[STOP][:STEP]". For backward
compatibility, START-STOP is also supported.
Use negative indices to count from the right
and negative STEP to download in reverse
order. E.g. "-I 1:3,7,-5::2" used on a
playlist of size 15 will download the items
at index 1,2,3,7,11,13,15
--min-filesize SIZE Abort download if filesize is smaller than
SIZE, e.g. 50k or 44.6M
--max-filesize SIZE Abort download if filesize is larger than
SIZE, e.g. 50k or 44.6M
--date DATE Download only videos uploaded on this date.
The date can be "YYYYMMDD" or in the format
[now|today|yesterday][-N[day|week|month|year]].
E.g. "--date today-2weeks" downloads only
videos uploaded on the same day two weeks ago
--datebefore DATE Download only videos uploaded on or before
this date. The date formats accepted are the
same as --date
--dateafter DATE Download only videos uploaded on or after
this date. The date formats accepted are the
same as --date
--match-filters FILTER Generic video filter. Any "OUTPUT TEMPLATE"
field can be compared with a number or a
string using the operators defined in
"Filtering Formats". You can also simply
specify a field to match if the field is
present, use "!field" to check if the field
is not present, and "&" to check multiple
conditions. Use a "\" to escape "&" or
quotes if needed. If used multiple times,
the filter matches if at least one of the
conditions is met. E.g. --match-filters
!is_live --match-filters "like_count>?100 &
description~='(?i)\bcats \& dogs\b'" matches
only videos that are not live OR those that
have a like count more than 100 (or the like
field is not available) and also has a
description that contains the phrase "cats &
dogs" (caseless). Use "--match-filters -" to
interactively ask whether to download each
video
--no-match-filters Do not use any --match-filters (default)
--break-match-filters FILTER Same as "--match-filters" but stops the
download process when a video is rejected
--no-break-match-filters Do not use any --break-match-filters (default)
--no-playlist Download only the video, if the URL refers
to a video and a | text/markdown | null | pukkandan <pukkandan.ytdlp@gmail.com> | null | maintainers@yt-dlp.org, Grub4K <contact@grub4k.dev>, bashonly <bashonly@protonmail.com>, coletdjnz <coletdjnz@protonmail.com> | null | cli, downloader, sponsorblock, youtube-dl, youtube-downloader, yt-dlp | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Multimedia :: Video"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"build; extra == \"build\"",
"hatchling>=1.27.0; extra == \"build\"",
"pip; extra == \"build\"",
"setuptools>=71.0.2; extra == \"build\"",
"wheel; extra == \"build\"",
"curl-cffi==0.13.0; (sys_platform == \"darwin\" or (sys_platform == \"linux\" and platform_machine != \"armv7l\")) and extra == \"build-curl-cffi\"",
"curl-cffi==0.14.0; (sys_platform == \"win32\" or (sys_platform == \"linux\" and platform_machine == \"armv7l\")) and extra == \"build-curl-cffi\"",
"curl-cffi!=0.6.*,!=0.7.*,!=0.8.*,!=0.9.*,<0.15,>=0.5.10; implementation_name == \"cpython\" and extra == \"curl-cffi\"",
"brotli; implementation_name == \"cpython\" and extra == \"default\"",
"brotlicffi; implementation_name != \"cpython\" and extra == \"default\"",
"certifi; extra == \"default\"",
"mutagen; extra == \"default\"",
"pycryptodomex; extra == \"default\"",
"requests<3,>=2.32.2; extra == \"default\"",
"urllib3<3,>=2.0.2; extra == \"default\"",
"websockets>=13.0; extra == \"default\"",
"yt-dlp-ejs==0.4.0; extra == \"default\"",
"deno>=2.6.6; extra == \"deno\"",
"autopep8~=2.0; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest-rerunfailures~=14.0; extra == \"dev\"",
"pytest~=8.1; extra == \"dev\"",
"ruff~=0.15.0; extra == \"dev\"",
"pyinstaller>=6.17.0; extra == \"pyinstaller\"",
"cffi; extra == \"secretstorage\"",
"secretstorage; extra == \"secretstorage\"",
"autopep8~=2.0; extra == \"static-analysis\"",
"ruff~=0.15.0; extra == \"static-analysis\"",
"pytest-rerunfailures~=14.0; extra == \"test\"",
"pytest~=8.1; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://github.com/yt-dlp/yt-dlp#readme",
"Repository, https://github.com/yt-dlp/yt-dlp",
"Tracker, https://github.com/yt-dlp/yt-dlp/issues",
"Funding, https://github.com/yt-dlp/yt-dlp/blob/master/Maintainers.md#maintainers"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T00:00:08.503766 | yt_dlp-2026.2.20.235452.dev0.tar.gz | 3,112,932 | 07/4d/6bdd15badfa61bc5c8a820dee91cc60ba462a4e3ddb1e05d208ad6b90ecb/yt_dlp-2026.2.20.235452.dev0.tar.gz | source | sdist | null | false | 2c2fbaf00bdb2937df39d9a5badd1772 | 2cb4a264543b7131e5ff89b58e6abd109c6cd63749fd1eeafb52b7f1b8d986b7 | 074d6bdd15badfa61bc5c8a820dee91cc60ba462a4e3ddb1e05d208ad6b90ecb | Unlicense | [
"LICENSE"
] | 7,683 |
2.4 | openap-top | 1.10.1 | Trajectory OPtimizer based on OpenAP model | # OpenAP Trajectory Optimizer
This repository contains the flight trajectory optimizer module based on the [OpenAP](https://github.com/junzis/openap) package.
This tool uses non-linear optimal control direct collocation algorithms from the `casadi` library. It provides simple interfaces to generate different optimal trajectories. For example, this tool can generate any of the following trajectories (or combinations thereof):
- Complete flight trajectories or flight segments at different flight phases
- Fuel-optimal trajectories
- Wind-optimal trajectories
- Cost index optimized trajectories
- Trajectories optimized using customized 4D cost functions (contrails, weather)
- Flight trajectories with constrained altitude, constant Mach number, etc.
What's more, you can also design your own objective functions and constraints to optimize the flight trajectory.
## 🕮 User Guide
A more detailed user guide is available in the OpenAP handbook: <https://openap.dev/optimize>.
## Install
1. Install from PyPI:
```sh
pip install --upgrade openap-top
```
2. Install the development branch from GitHub (also ensures the development branch of `openap`):
```sh
pip install --upgrade git+https://github.com/junzis/openap
pip install --upgrade git+https://github.com/junzis/openap-top
```
The `top` package is an extension of `openap` and will be placed in the `openap` namespace.
## Quick Start
### A simple optimal flight
The following is a piece of example code that generates a fuel-optimal flight between two airports, with a take-off mass of 85% of MTOW:
```python
from openap import top
optimizer = top.CompleteFlight("A320", "EHAM", "LGAV", m0=0.85)
flight = optimizer.trajectory(objective="fuel")
```
Other predefined objective functions are available, for example:
```python
# Cost index 30 (out of max 100)
flight = optimizer.trajectory(objective="ci:30")
# Global warming potential over 100 years
flight = optimizer.trajectory(objective="gwp100")
# Global temperature potential over 100 years
flight = optimizer.trajectory(objective="gtp100")
```
The final `flight` object is a Pandas DataFrame. Here is an example:

### Using Wind Data
To include wind in our optimization, first download meteorological data in `grib` format from ECMWF, such as the ERA5 reanalysis data.
Once the grib files are ready, we can read them and enable wind for our optimizer with this example code:
```python
from openap import top
fgrib = "path_to_the_wind_data.grib"
windfield = top.tools.read_grids(fgrib)
optimizer = top.CompleteFlight("A320", "EHAM", "LGAV", m0=0.85)
optimizer.enable_wind(windfield)
flight = optimizer.trajectory() # default objective is fuel
```
Next, we can visualize the trajectory with wind barbs:
```python
top.vis.trajectory(flight, windfield=windfield, barb_steps=15)
```

| text/markdown | null | Junzi Sun <git@junzis.com> | null | null | GNU LGPL v3 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"cartopy>=0.23.0",
"casadi>=3.6",
"openap>=2.4",
"pyproj>=3.4",
"scikit-learn>=1.4.0",
"xarray>=2024.0"
] | [] | [] | [] | [
"homepage, https://openap.dev",
"repository, https://github.com/junzis/openap-top",
"issues, https://github.com/junzis/openap-top/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:59:57.887738 | openap_top-1.10.1.tar.gz | 3,171,400 | 84/41/28bd9ffb929d9387d57163027d7d54ad0428500f8895ccd9c3e8b937e47f/openap_top-1.10.1.tar.gz | source | sdist | null | false | 53e34221a6c3c8d7a12c7bce0defeacc | 9893cde4d08b0f605fffea5fd4e0b2dcd6ef95ed21ce1e45bdd9426fbac8ff26 | 844128bd9ffb929d9387d57163027d7d54ad0428500f8895ccd9c3e8b937e47f | null | [
"LICENSE"
] | 238 |
2.4 | wagtail-hallo | 0.7.0 | Wagtail Hallo - The legacy richtext editor for Wagtail. | # Wagtail Hallo - Rich Text Editor
[](https://opensource.org/licenses/MIT) [](https://github.com/wagtail-nest/wagtail-hallo/actions/workflows/ci.yml) [](https://badge.fury.io/py/wagtail-hallo)
This is the legacy rich text editor for the Wagtail CMS. Based on [Hallo.js](http://hallojs.org/).
**As of [Wagtail 2.0, the hallo.js editor is deprecated](https://docs.wagtail.org/en/stable/releases/2.0.html#new-rich-text-editor).**
**Status** See [supported versions](#supported-versions) for Wagtail compatibility, however, this package will no longer receive bug fixes or be actively maintained. Pull requests will be accepted and if maintainers wish to support this outside of the core Wagtail team, please raise an issue to discuss this.
## Major risks of using this package
- Please be aware of the [known hallo.js issues](https://github.com/wagtail/wagtail/issues?q=is%3Aissue+hallo+is%3Aclosed+label%3A%22status%3AWon%27t+Fix%22) should you want to keep using it.
- Hallo.js handles HTML and editor input poorly: it is unreliable, behaves inconsistently across browsers, provides a poor user experience, and is not accessible.
- This package is a source of security concerns (XSS injections, not CSP compatible) and allows injection of undesirable content or formatting (e.g. images in headings, or headings in lists).
- There is no guarantee that this package will be compatible with Wagtail beyond the supported versions listed below.
## Release Notes
- See the [Changelog](https://github.com/wagtail-nest/wagtail-hallo/blob/main/CHANGELOG.md).
## Supported Versions
- Python 3.9, 3.10, 3.11, 3.12
- Django 4.2, 5.0
- Wagtail 6.1, 6.2, 6.3, 6.4, 7.0, 7.1
The wagtail-hallo package should work on a wider range of versions than those listed above, but there are a few places where changes in Wagtail have introduced breaking changes in wagtail-hallo.
- If you need support for Wagtail 3.0 while you are upgrading, please use wagtail-hallo 0.2.0.
- For Wagtail 4, use wagtail-hallo 0.3.0.
- For Wagtail 5, use wagtail-hallo 0.4.0.
- For Wagtail 6.0, use wagtail-hallo 0.5.0.
- For Wagtail 6.4, use wagtail-hallo 0.6.0.
## Installing the Hallo Editor
- `pip install wagtail-hallo`
- Add `'wagtail_hallo'` to your settings.py `INSTALLED_APPS`
To use wagtail-hallo on Wagtail, add the following to your settings:
```python
WAGTAILADMIN_RICH_TEXT_EDITORS = {
'hallo': {
'WIDGET': 'wagtail_hallo.hallo.HalloRichTextArea'
}
}
```
### Using the Hallo Editor in `RichTextField`
```python
# models.py
from wagtail.admin.panels import FieldPanel
from wagtail.fields import RichTextField
from wagtail.models import Page
class MyHalloPage(Page):
body = RichTextField(editor='hallo')
content_panels = Page.content_panels + [
FieldPanel('body', classname='full'),
]
```
<!-- prettier-ignore-start -->
```html
{% extends "base.html" %}
{% load wagtailcore_tags wagtailimages_tags %}
{% block content %}
{% include "base/include/header.html" %}
<div class="container">
<div class="row">
<div class="col-md-7">{{ page.body|richtext }}</div>
</div>
</div>
{% endblock content %}
```
<!-- prettier-ignore-end -->
### Using the Hallo Editor in `StreamField` via `RichTextBlock`
```python
# models.py
from wagtail.models import Page
from wagtail.blocks import CharBlock, RichTextBlock
from wagtail.admin.panels import FieldPanel
from wagtail.fields import StreamField
class MyOtherHalloPage(Page):
body = StreamField([
('heading', CharBlock(form_classname="full title")),
('paragraph', RichTextBlock(editor='hallo')),
], blank=True)
content_panels = Page.content_panels + [
FieldPanel('body'),
]
```
<!-- prettier-ignore-start -->
```html
{% extends "base.html" %}
{% load wagtailcore_tags wagtailimages_tags %}
{% block content %}
{% include "base/include/header.html" %}
<div class="container">
<div class="row">
<div class="col-md-7">{{ page.body }}</div>
</div>
</div>
{% endblock content %}
```
<!-- prettier-ignore-end -->
## Extending the Hallo Editor
The legacy hallo.js editor’s functionality can be extended through plugins. For information on developing custom `hallo.js` plugins, see the project's page: <https://github.com/bergie/hallo>.
Once the plugin has been created, it should be registered through the feature registry's `register_editor_plugin(editor, feature_name, plugin)` method. For a `hallo.js` plugin, the `editor` parameter should always be `'hallo'`.
A plugin `halloblockquote`, implemented in `myapp/js/hallo-blockquote.js`, that adds support for the `<blockquote>` tag, would be registered under the feature name `block-quote` as follows:
```python
from wagtail import hooks
from wagtail_hallo.plugins import HalloPlugin
@hooks.register('register_rich_text_features')
def register_embed_feature(features):
features.register_editor_plugin(
'hallo', 'block-quote',
HalloPlugin(
name='halloblockquote',
js=['myapp/js/hallo-blockquote.js'],
)
)
```
The constructor for `HalloPlugin` accepts the following keyword arguments:
- `name` - the plugin name as defined in the JavaScript code. `hallo.js` plugin names are prefixed with the `"IKS."` namespace, but the name passed here should be without the prefix.
- `options` - a dictionary (or other JSON-serialisable object) of options to be passed to the JavaScript plugin code on initialisation
- `js` - a list of JavaScript files to be imported for this plugin, defined in the same way as a [Django form media](https://docs.djangoproject.com/en/4.0/topics/forms/media/) definition
- `css` - a dictionary of CSS files to be imported for this plugin, defined in the same way as a [Django form media](https://docs.djangoproject.com/en/4.0/topics/forms/media/) definition
- `order` - an index number (default 100) specifying the order in which plugins should be listed, which in turn determines the order buttons will appear in the toolbar
When writing the front-end code for the plugin, Wagtail’s Hallo implementation offers two extension points:
- In JavaScript, use the `[data-hallo-editor]` attribute selector to target the editor, e.g. `var editor = document.querySelector('[data-hallo-editor]');`.
- In CSS, use the `.halloeditor` class selector.
## Whitelisting rich text elements
After extending the editor to support a new HTML element, you'll need to add it to the whitelist of permitted elements - Wagtail's standard behaviour is to strip out unrecognised elements, to prevent editors from inserting styles and scripts (either deliberately, or inadvertently through copy-and-paste) that the developer didn't account for.
Elements can be added to the whitelist through the feature registry's `register_converter_rule(converter, feature_name, ruleset)` method. When the `hallo.js` editor is in use, the `converter` parameter should always be `'editorhtml'`. The `feature_name` is the name of your plugin.
The following code will add the `<blockquote>` element to the whitelist whenever the `block-quote` feature is active:
```python
from wagtail.admin.rich_text.converters.editor_html import WhitelistRule
from wagtail.whitelist import allow_without_attributes
@hooks.register('register_rich_text_features')
def register_blockquote_feature(features):
features.register_converter_rule('editorhtml', 'block-quote', [
WhitelistRule('blockquote', allow_without_attributes),
])
```
`WhitelistRule` is passed the element name, and a callable which will perform some kind of manipulation of the element whenever it is encountered. This callable receives the element as a [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) Tag object.
The `wagtail.whitelist` module provides a few helper functions to assist in defining these handlers: `allow_without_attributes`, a handler which preserves the element but strips out all of its attributes, and `attribute_rule` which accepts a dict specifying how to handle each attribute, and returns a handler function. This dict will map attribute names to either True (indicating that the attribute should be kept), False (indicating that it should be dropped), or a callable (which takes the initial attribute value and returns either a final value for the attribute, or None to drop the attribute).
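To illustrate the semantics described above, here is a minimal, Wagtail-independent sketch of how an `attribute_rule`-style handler maps an attribute dict. (The real Wagtail helper operates on a BeautifulSoup Tag object, not a plain dict; this sketch only demonstrates the True/False/callable rule behaviour.)

```python
def attribute_rule(rules):
    """Build a handler that filters a tag's attributes according to `rules`.

    Each rule is True (keep the attribute), False (drop it), or a callable
    mapping the old value to a new value (or None to drop the attribute).
    """
    def handler(attrs):
        kept = {}
        for name, value in attrs.items():
            rule = rules.get(name, False)  # unlisted attributes are dropped
            if rule is True:
                kept[name] = value
            elif callable(rule):
                new_value = rule(value)
                if new_value is not None:
                    kept[name] = new_value
        return kept
    return handler

# Keep href, drop style, and keep class only when it starts with "rich-".
handler = attribute_rule({
    "href": True,
    "style": False,
    "class": lambda v: v if v.startswith("rich-") else None,
})
```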
## Contributing
All contributions are welcome as the Wagtail core team will no longer be actively maintaining this project.
### Development instructions
- To make changes to this project, first clone this repository `git clone git@github.com:wagtail/wagtail-hallo.git`.
### Python (Django / Wagtail)
- `pip3 install -e ../path/to/wagtail-hallo/` -> this installs the package locally as [editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)
- Ensure `'wagtail_hallo'` is added to your settings.py `INSTALLED_APPS`
- You will need to have a test application (e.g. [Bakery Demo](https://github.com/wagtail/bakerydemo)) and have a Page model to work with, along with a template.
- see `test/testapp/models.py` for a reference model
- see `test/testapp/templates/hallo_test_page.html` for a reference template
- After creating the model, remember to run `python manage.py makemigrations` and `python manage.py migrate`
- Run tests `python testmanage.py test`
- Run migrations for test models `django-admin makemigrations --settings=wagtail_hallo.test.settings`
- Run linting `flake8 wagtail_hallo`
- Run formatting `black wagtail_hallo`
### JavaScript & CSS (Frontend)
Currently the frontend tooling is based on Node & NPM and is only used to format and check code. This repository intentionally does not use any build tools and as such JavaScript and CSS must be written without that requirement.
- `nvm use` - Ensures you are on the right node version
- `npm install --no-save` - Install NPM packages
- `npm run fix` - Parses through JS/CSS files to fix anything it can
- `npm run lint` - Runs linting
- `npm run format` - Runs Prettier formatting on most files (non-Python)
- `npm test` - Runs tests (Jest)
- `npm test -- --watch` - Runs tests in watch mode (Jest)
- `npm run preflight` - Runs all the linting/formatting/jest checks and must be done before committing code
### Release checklist
- [ ] Update `VERSION` in `wagtail_hallo/__init__.py`
- [ ] Update `tox.ini`, `setup.py`, `README.md`, `package.json` and `workflows/ci.yml` with new supported Python, Django, or Wagtail versions
- [ ] Run `npm install` to ensure the `package-lock.json` is updated
- [ ] Update classifiers in `setup.py` (e.g. `"Development Status :: # - Alpha"`, where `#` is a number reflecting the project status)
- [ ] Update `setup.py` with new release version
- [ ] Update `CHANGELOG.md` with the release date
- [ ] Push to PyPI
- `pip install twine`
- `python3 setup.py clean --all sdist bdist_wheel`
- `twine upload dist/*` <-- pushes to PyPI
- [ ] Create a stable release branch (e.g. `stable/1.0.x`)
- [ ] Add a Github release (e.g. `v1.0.0`)
## Thanks
Many thanks to all of our supporters, [contributors](https://github.com/wagtail-nest/wagtail-hallo/blob/main/CONTRIBUTORS.md), and users of Wagtail who built upon the amazing Hallo.js editor. We are thankful to the Wagtail core team and developers at Torchbox who sponsored the majority of the initial development. Thank you to LB, who transferred the Hallo.js integration from Wagtail to the wagtail-hallo package. And a very special thanks to the original creator of the Hallo.js editor.
| text/markdown | Wagtail core team | hello@wagtail.org | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Framework :: Django",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Framework :: Wagtail",
"Framework :: Wagtail :: 4",
"Framework :: Wagtail :: 5",
"Framework :: Wagtail :: 6",
"Framework :: Wagtail :: 7"
] | [] | https://github.com/wagtail-nest/wagtail-hallo | null | null | [] | [] | [] | [
"Django<7.0,>=4.2",
"Wagtail<8.0,>=4.0",
"html5lib==1.1; extra == \"testing\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T23:59:37.832276 | wagtail_hallo-0.7.0.tar.gz | 68,611 | cb/a2/9a38f9a977b30d3c4aea9c87dbf614ab8a44d209a682b1e8fbc687cf573e/wagtail_hallo-0.7.0.tar.gz | source | sdist | null | false | 518663e4dc22d00fef991bfbf496b1e3 | beb3d1ae336b7f6ab3f5538bf99156dae4732b232d2a15115eac5714370968b4 | cba29a38f9a977b30d3c4aea9c87dbf614ab8a44d209a682b1e8fbc687cf573e | null | [
"LICENSE"
] | 243 |
2.4 | tiledb-client | 3.0b11 | TileDB Python client | # tiledb-client
The next generation Python client for TileDB.
This project provides a `tiledb.client` module and a `tiledb.cloud` module. The
latter offers some backwards compatibility by re-exporting `tiledb.client`
names under the `tiledb.cloud` namespace. Installing the tiledb-client package
installs both modules.
tiledb-client is incompatible with tiledb-cloud versions < 1 (all versions on
PyPI). Avoid installing tiledb-cloud in Python environments where tiledb-client
will be installed.
## Installation
`pip install tiledb-client`
## Quickstart
```python
import tiledb.client
# First, configure your credentials (this saves them to a profile)
tiledb.client.configure(
username="USERNAME",
password="PASSWORD",
workspace="WORKSPACE"
)
# Then login using the stored credentials
tiledb.client.login()
# Now you can use TileDB Client
tiledb.client.teamspaces.list_teamspaces()
```
## Documentation
API documentation is hosted on GitHub: https://tiledb-inc.github.io/TileDB-Client-Py/.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"attrs>=21.4.0",
"certifi",
"importlib-metadata",
"jsonschema",
"packaging",
"pandas>=1.2.4",
"pyarrow>=3.0.0",
"python-dateutil",
"six>=1.10",
"tblib>=1.7",
"tiledb>=0.36.0",
"tiledb-cloud==0.0.1",
"typing-extensions",
"urllib3>=2.0",
"networkx>=2; extra == \"viz-tiledb\"",
"pydot<3; extra == \"viz-tiledb\"",
"tiledb-plot-widget>=0.1.7; extra == \"viz-tiledb\"",
"networkx>=2; extra == \"viz-plotly\"",
"plotly>=4; extra == \"viz-plotly\"",
"pydot<3; extra == \"viz-plotly\"",
"networkx>=2; extra == \"all\"",
"plotly>=4; extra == \"all\"",
"pydot<3; extra == \"all\"",
"tiledb-plot-widget>=0.1.7; extra == \"all\"",
"tiledbsoma>=1.17.1; extra == \"life-sciences\"",
"quartodoc; extra == \"docs\"",
"black; extra == \"dev\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"boto3; extra == \"tests\"",
"cloudpickle; extra == \"tests\"",
"pytest-cov; extra == \"tests\"",
"pytest-explicit; extra == \"tests\"",
"pytest-split; extra == \"tests\"",
"pytest-random-order; extra == \"tests\"",
"pytz; extra == \"tests\"",
"xarray; extra == \"tests\""
] | [] | [] | [] | [
"homepage, https://tiledb.com",
"repository, https://github.com/TileDB-Inc/TileDB-Client-Py"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:59:00.129291 | tiledb_client-3.0b11.tar.gz | 604,513 | 83/90/e77b0f387aff9f9925d13dbb646c3e3887d31bd1777aa7b6662abde57b3f/tiledb_client-3.0b11.tar.gz | source | sdist | null | false | 54f0ac50e2e1abcc1e0cbf0441ed30e2 | cb77b6d8161ea9c35bea1ba086c6e88e14f47807ca0367fa16f8e114a4168194 | 8390e77b0f387aff9f9925d13dbb646c3e3887d31bd1777aa7b6662abde57b3f | null | [
"LICENSE"
] | 218 |
2.1 | django-psa | 0.21.1a0 | Django app for working with various PSA REST API. Defines models (tickets, companies, etc.) and callbacks. | # django-psa
Django app for working with various PSA REST API. Defines models (tickets, companies, etc.) and callbacks. Used in https://www.topleft.team/
It will provide a sync interface that lets applications selectively implement PSA APIs: just import the sync app
and whichever PSA apps you need.
| text/markdown | TopLeft Technologies Ltd. | sam@topleft.team | null | null | MIT | django connectwise halo autotask rest api python | [
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Development Status :: 3 - Alpha"
] | [] | https://github.com/topleft-team/django-psa | null | null | [] | [] | [] | [
"requests",
"django",
"setuptools",
"python-dateutil",
"retrying",
"redis",
"django-extensions"
] | [] | [] | [] | [] | twine/5.0.0 CPython/3.12.12 | 2026-02-20T23:58:53.844078 | django_psa-0.21.1a0.tar.gz | 48,325 | 6d/7c/2df4a42fbf33f0d8a7b095b1c6eea7af0103a9f536c505380f4e4977b20e/django_psa-0.21.1a0.tar.gz | source | sdist | null | false | a03f894e51dd4ef2a97895d19ecf536c | a5cf16e3f8ecdf2ccb236b4ec0af326b678485e7a8fa1c3c7cef8254a41d686e | 6d7c2df4a42fbf33f0d8a7b095b1c6eea7af0103a9f536c505380f4e4977b20e | null | [] | 235 |
2.4 | sift-gateway | 0.2.8 | Keep big MCP responses out of your context window. Query them. | # Sift
**Artifact gateway** - Structured memory for AI agents. Keeps context usable in multi-step workflows.
[](https://www.python.org/downloads/)
[](https://pypi.org/project/sift-gateway/)
[](LICENSE)
---
AI agents break when their tools return too much data. A single MCP call or CLI command can return 30-100 KB of JSON. That is roughly 8,000-25,000 tokens spent before the agent can do the next step. After a few calls, the model starts dropping details or making bad calls. See [Why Sift exists](docs/why.md) for research and open issues behind this pattern.
Sift stores tool output as artifacts, infers a schema, and returns a compact reference with field types and sample values. The agent can see the data shape without carrying full payloads in context. When it needs details, it runs focused Python queries against stored artifacts.
Sift works with MCP clients (Claude Desktop, Claude Code, Cursor, VS Code, Windsurf, Zed) and CLI agents (OpenClaw, terminal automation). Same artifact store, same query interface, two entry points.
```
┌─────────────────────┐
MCP tool call ──────────▶│ │──────────▶ Upstream MCP Server
CLI command ──────────▶│ Sift │──────────▶ Shell command
│ │
│ ┌─────────────┐ │
│ │ Artifacts │ │
│ │ (SQLite) │ │
│ └─────────────┘ │
└─────────────────────┘
│
▼
Small output? return inline
Large output? return schema reference
Agent queries what it needs via code
```
## Quick start
### MCP agents
```bash
pipx install sift-gateway
sift-gateway init --from claude
```
Restart your MCP client. Sift mirrors upstream tools, persists outputs as artifacts, and returns either the full payload (for small responses) or a schema reference (for large responses). The agent can query stored artifacts with `artifact(action="query", query_kind="code", ...)`.
`--from` shortcuts: `claude`, `claude-code`, `cursor`, `vscode`, `windsurf`, `zed`, `auto`, or an explicit path.
### CLI agents (OpenClaw, terminal automation)
```bash
pipx install sift-gateway
sift-gateway run -- kubectl get pods -A -o json
```
Large output is stored and returned as an artifact ID plus compact schema. Example:
```bash
sift-gateway code <artifact_id> '$.items' --code "def run(data, schema, params): return {'rows': len(data)}"
```
Another capture example:
```bash
sift-gateway run -- curl -s api.example.com/events
```
For OpenClaw, see the [OpenClaw Integration Pack](docs/openclaw/README.md).
## Example workflow
You ask an agent to check what is failing in prod:
```
datadog.list_monitors(tag="service:payments")
```
Without Sift, 70 KB of monitor configs and metadata can go straight into context. That is about 18,000 tokens before the next tool call.
With Sift, the agent gets a schema reference:
```json
{
"response_mode": "schema_ref",
"artifact_id": "art_9b2c...",
"schemas_compact": [{"rp": "$.monitors", "f": [
{"p": "$.name", "t": ["string"]},
{"p": "$.status", "t": ["string"], "examples": ["Alert", "OK", "Warn"]},
{"p": "$.type", "t": ["string"]},
{"p": "$.last_triggered", "t": ["datetime"]}
]}],
"schema_legend": {"schema": {"rp": "root_path"}, "field": {"p": "path", "t": "types"}}
}
```
The agent can then run a focused query:
```python
artifact(
action="query",
query_kind="code",
artifact_id="art_9b2c...",
root_path="$.monitors",
code="def run(data, schema, params): return [m for m in data if m.get('status') == 'Alert']",
)
```
In this example, two calls use about 400 tokens and still leave room for follow-up steps.
## How it works
Sift runs one processing pipeline for MCP and CLI:
1. Execute the tool call or command.
2. Parse JSON output.
3. Detect pagination from the raw response.
4. Redact sensitive values (enabled by default).
5. Persist the artifact to SQLite.
6. Map the schema (field types, sample values, cardinality).
7. Choose response mode: `full` (inline) or `schema_ref` (compact reference).
8. Return the artifact-centric response.
### Response mode selection
Sift chooses between inline and reference automatically:
- If the response has upstream pagination: always `schema_ref`.
- If the full response exceeds the configured cap (default 8 KB): `schema_ref`.
- If the schema reference is at least 50% smaller than full: `schema_ref`.
- Otherwise: `full` (inline payload).
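The selection rules above can be sketched as a small decision function. This is illustrative only: the function name, parameters, and byte-size comparison are assumptions, not Sift's internals.

```python
def choose_response_mode(full_bytes, schema_ref_bytes, has_pagination, cap=8192):
    """Pick 'schema_ref' or 'full' following the rules above."""
    if has_pagination:
        return "schema_ref"      # paginated responses always return a reference
    if full_bytes > cap:
        return "schema_ref"      # inline payload would exceed the configured cap
    if schema_ref_bytes <= full_bytes * 0.5:
        return "schema_ref"      # reference is at least 50% smaller than full
    return "full"                # small payload: return it inline
```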
## Pagination
When upstream tools or APIs paginate, Sift handles continuation explicitly.
MCP:
```python
artifact(action="next_page", artifact_id="art_9b2c...")
```
CLI:
```bash
sift-gateway run --continue-from art_9b2c... -- gh api repos/org/repo/pulls --after NEXT_CURSOR
```
Each page creates a new artifact linked to the previous one through lineage metadata. The agent can run code queries across the full chain.
## Code queries
Both MCP and CLI agents can analyze stored artifacts with Python.
MCP:
```python
artifact(
action="query",
query_kind="code",
artifact_id="art_123",
root_path="$.items",
code="def run(data, schema, params): return {'count': len(data)}",
)
```
CLI:
```bash
# Function mode
sift-gateway code art_123 '$.items' --code "def run(data, schema, params): return {'count': len(data)}"
# File mode
sift-gateway code art_123 '$.items' --file ./analysis.py
```
Multi-artifact query example:
```python
artifact(
action="query",
query_kind="code",
artifact_ids=["art_users", "art_orders"],
root_paths={"art_users": "$.users", "art_orders": "$.orders"},
code="""
def run(artifacts, schemas, params):
users = {u["id"]: u["name"] for u in artifacts["art_users"]}
return [{"user": users.get(o["user_id"]), "amount": o["amount"]}
for o in artifacts["art_orders"]]
""",
)
```
### Import allowlist
Code queries run with a configurable import allowlist. Default allowed import roots include `math`, `json`, `re`, `collections`, `statistics`, `heapq`, `numpy`, `pandas`, `jmespath`, `datetime`, `itertools`, `functools`, `operator`, `decimal`, `csv`, `io`, `string`, `textwrap`, `copy`, `typing`, `dataclasses`, `enum`, `fractions`, `bisect`, `random`, `base64`, and `urllib.parse`. Third-party modules are usable only when installed in Sift's runtime environment.
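An allowlist over import roots amounts to a prefix match on the dotted module path. The following is a sketch of that check (not Sift's actual implementation, and the allowlist shown is abbreviated); note that listing `urllib.parse` admits that submodule without admitting all of `urllib`.

```python
ALLOWED = {"math", "json", "re", "collections", "statistics", "datetime",
           "itertools", "functools", "urllib.parse"}  # abbreviated list

def import_allowed(module, allowed=ALLOWED):
    """Allow `module` if any dotted prefix of it appears in the allowlist."""
    parts = module.split(".")
    return any(".".join(parts[:i + 1]) in allowed for i in range(len(parts)))
```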
Install additional packages:
```bash
sift-gateway install scipy matplotlib
```
## Security
Code queries use AST validation, an import allowlist, timeout enforcement, and memory limits. This is not a full OS-level sandbox.
Outbound secret redaction is enabled by default to reduce accidental leakage of API keys from upstream tool responses.
See [SECURITY.md](SECURITY.md) for the full security policy.
## Configuration
| Env var | Default | Description |
|---|---|---|
| `SIFT_GATEWAY_DATA_DIR` | `.sift-gateway` | Root data directory |
| `SIFT_GATEWAY_PASSTHROUGH_MAX_BYTES` | `8192` | Inline response cap |
| `SIFT_GATEWAY_SECRET_REDACTION_ENABLED` | `true` | Redact secrets from tool output |
| `SIFT_GATEWAY_AUTH_TOKEN` | unset | Required for non-local HTTP binds |
Full reference: [docs/config.md](docs/config.md)
## Documentation
| Doc | Covers |
|---|---|
| [Why Sift Exists](docs/why.md) | Research and ecosystem context |
| [Quick Start](docs/quickstart.md) | Install, init, first artifact |
| [Recipes](docs/recipes.md) | Practical usage patterns |
| [OpenClaw Pack](docs/openclaw/README.md) | OpenClaw skill, quickstart, templates |
| [API Contracts](docs/api_contracts.md) | MCP + CLI public contract |
| [Configuration](docs/config.md) | All settings and env vars |
| [Deployment](docs/deployment.md) | Transport modes, auth, ops |
| [Errors](docs/errors.md) | Error codes and troubleshooting |
| [Observability](docs/observability.md) | Structured logging and metrics |
| [Architecture](docs/architecture.md) | Design and invariants |
## Development
```bash
git clone https://github.com/lourencomaciel/sift-gateway.git
cd sift-gateway
uv sync --extra dev
uv run python -m pytest tests/unit/ -q
uv run python -m ruff check src tests
uv run python -m mypy src
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for the full development guide.
## License
MIT - see [LICENSE](LICENSE).
| text/markdown | zmaciel | zmaciel <46382529+zmaciel@users.noreply.github.com> | null | null | null | mcp, model-context-protocol, gateway, artifacts, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastmcp>=2.0.0",
"ijson>=3.3.0",
"orjson>=3.10.0",
"structlog>=24.0.0",
"pydantic>=2.0.0",
"pydantic-settings>=2.0.0",
"detect-secrets>=1.5.0",
"pandas>=2.2.0; extra == \"code\"",
"numpy>=1.26.0; extra == \"code\"",
"jmespath>=1.0.1; extra == \"code\"",
"pandas>=2.2.0; extra == \"data-science\"",
"numpy>=1.26.0; extra == \"data-science\"",
"jmespath>=1.0.1; extra == \"data-science\"",
"sift-gateway[code]; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"mypy>=1.13.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/lourencomaciel/sift-gateway",
"Repository, https://github.com/lourencomaciel/sift-gateway",
"Issues, https://github.com/lourencomaciel/sift-gateway/issues",
"Changelog, https://github.com/lourencomaciel/sift-gateway/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:56:49.856487 | sift_gateway-0.2.8.tar.gz | 252,818 | 19/16/d708c7f8d59f1bc148a8fddd9e1260edf2d04b837307bd734d1a418b6816/sift_gateway-0.2.8.tar.gz | source | sdist | null | false | 85042fe007fecdc41e53a859ed21fcc2 | 833f807ebb8fe02be30d789ea7323ef9a691329a0fab43d6188b4e53b8ffadfc | 1916d708c7f8d59f1bc148a8fddd9e1260edf2d04b837307bd734d1a418b6816 | MIT | [
"LICENSE"
] | 248 |
2.4 | obra | 2.21.9 | Obra - Cloud-native AI orchestration platform for autonomous software development | # Obra
AI orchestration for autonomous software development.
## Install
```bash
pipx install obra
```
## Get Started
```bash
obra briefing
```
This opens the quickstart guide with everything you need to begin.
When running `obra` from a project, the CLI prefers `--dir`, then the stored session/project working_dir, then the current shell directory, and prompts only when the stored and current directories differ.
Completion output is phase-aware; escalations show warnings and omit quality scores. File totals come from the CLI git-status footer.
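The working-directory precedence can be sketched as follows. This is illustrative only; the function and parameter names are assumptions, not Obra's internals.

```python
def resolve_working_dir(cli_dir, stored_dir, cwd):
    """Return (directory, should_prompt) following the precedence above."""
    if cli_dir:                 # --dir always wins, no prompt
        return cli_dir, False
    if stored_dir is None:      # nothing stored: fall back to the shell cwd
        return cwd, False
    if stored_dir != cwd:       # stored and current differ: ask the user
        return stored_dir, True
    return stored_dir, False
```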
## Plan Visibility
```bash
obra sessions plan <session_id> # View execution plan tree
obra sync plan <session_id> # Export session plan (YAML/JSON)
```
## License
Proprietary - All Rights Reserved. Copyright (c) 2024-2025 Unpossible Creations, Inc.
| text/markdown | Unpossible Creations, Inc. | null | null | null | Proprietary - All Rights Reserved | ai, orchestration, llm, automation, cloud, workflow, claude | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"typer>=0.9.0",
"rich>=13.0.0",
"requests>=2.31.0",
"pyyaml>=6.0",
"pydantic>=2.0.0",
"psutil>=5.9.0",
"watchdog>=3.0.0",
"python-dotenv>=1.0.0",
"textual>=0.47.0",
"tiktoken>=0.5.0",
"packaging>=21.0",
"pytest>=7.4.0; extra == \"test\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest-mock>=3.11.0; extra == \"test\"",
"obra[test]; extra == \"dev\"",
"mypy>=1.6.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pyright>=1.1.0; extra == \"dev\"",
"vulture>=2.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"types-psutil; extra == \"dev\"",
"types-PyYAML; extra == \"dev\"",
"types-requests; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://obra.dev"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T23:55:43.957573 | obra-2.21.9-py3-none-any.whl | 1,602,190 | 81/6b/d2f2b9410de271737c0bb66bd4cab24960f8c3b0c81534707a6e71c80e45/obra-2.21.9-py3-none-any.whl | py3 | bdist_wheel | null | false | ae183a42448dde6ebad18ab716204d1f | d798d1dfa2b32c09ff2ae861a002c07a2f62ed3ad96c31b32ac953964c2f12e2 | 816bd2f2b9410de271737c0bb66bd4cab24960f8c3b0c81534707a6e71c80e45 | null | [] | 81 |
2.4 | phellem-semantic-kernel | 0.0.1 | Semantic Kernel adapter for the Phellem security framework | # phellem-semantic-kernel
Semantic Kernel adapter for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:55:23.929711 | phellem_semantic_kernel-0.0.1.tar.gz | 1,789 | 51/33/28ce77c1320b7d4f3bd81e7066740518cab64e41031d1243b4679581c655/phellem_semantic_kernel-0.0.1.tar.gz | source | sdist | null | false | 667211f6c75f96e88eeffeb1dbe6e9fe | c3f4968d57290ba2303fddcb36d749dddfdd25f6cd664190e030f880c4c6073d | 513328ce77c1320b7d4f3bd81e7066740518cab64e41031d1243b4679581c655 | null | [
"LICENSE"
] | 266 |
2.4 | phellem-runtime | 0.0.1 | Execution runtime for the Phellem security framework | # phellem-runtime
Execution runtime for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:55:19.903213 | phellem_runtime-0.0.1.tar.gz | 1,742 | 8f/96/e2baefa4f322f7558eec83ceca2415a73c6ea915a00a7e443553c82fac7a/phellem_runtime-0.0.1.tar.gz | source | sdist | null | false | bbeda41c583cc3c9f342fc9e74dcbbc1 | de5dc504198c587966621a53dcb0cbfaf61344e6454b960086773f3a1ef612fe | 8f96e2baefa4f322f7558eec83ceca2415a73c6ea915a00a7e443553c82fac7a | null | [
"LICENSE"
] | 267 |
2.4 | phellem-proof | 0.0.1 | Proof generation for the Phellem security framework | # phellem-proof
Proof generation for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:55:15.494395 | phellem_proof-0.0.1.tar.gz | 1,747 | 1f/86/68d385276a9fb2095512e8102da50046841ceb94718979838463fa947876/phellem_proof-0.0.1.tar.gz | source | sdist | null | false | 1ff980e36692a58ab5a5465ec637414f | 154516bbbf41fac816f992faa3eb66aeadbc56812716cf5de5a860160313a326 | 1f8668d385276a9fb2095512e8102da50046841ceb94718979838463fa947876 | null | [
"LICENSE"
] | 267 |
2.4 | phellem-policy | 0.0.1 | Policy engine for the Phellem security framework | # phellem-policy
Policy engine for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:55:11.606881 | phellem_policy-0.0.1.tar.gz | 1,737 | 45/f0/98eafc3a8188f84031b2fa928e746460460d4ab33b607308329abb357a84/phellem_policy-0.0.1.tar.gz | source | sdist | null | false | 8f68f34ddb48d7b68a431d9cf61fbd8e | 10ed6ad77a90c036b28d81d05ad36e66e1c51d2db10bffa5d3e4bdb2df4f44f4 | 45f098eafc3a8188f84031b2fa928e746460460d4ab33b607308329abb357a84 | null | [
"LICENSE"
] | 267 |
2.4 | web-in-python-lol | 0.2.2 | A responsive, Python-only UI engine for rapid dashboards |
# 🌑 web-in-python-lol Engine
A lightweight, **Python-only** UI engine designed for building rapid dashboards and web interfaces without touching HTML, CSS, or JavaScript. Built on top of a standard HTTP server with zero external Python dependencies.
## ✨ Features
* **Python-Native**: Write your entire UI in pure Python classes.
* **Zero Dependencies**: Uses only the standard library (`http.server`, `sqlite3`, etc.).
* **Hot Reloading**: Automatic page refreshes when the database state changes.
* **Built-in Persistence**: SQLite3 backend integrated directly into the `WebApp` class.
* **Responsive by Default**: Modern flexbox/grid components that work on mobile and desktop.
* **Lucide Icons**: Integrated professional SVG icons out of the box.
---
## 🚀 Quick Start
### 1. Installation
```bash
pip install web-in-python-lol
```
### 2. Create your first App
```python
from Engine.core import WebApp, Container, Card, Text, Button, Navbar

# Initialize the App
app = WebApp(name="MyDashboard")

@app.page("/")
def home(instance, params):
    return [
        Navbar("App", [("Home", "/"), ("Settings", "/settings")]),
        Container([
            Card([
                Text("Welcome to ShadowUI").font_size("24px").weight("bold"),
                Text("Building UIs in Python has never been this easy."),
                Button("Get Started").m_top("20px")
            ])
        ])
    ]

if __name__ == "__main__":
    app.start(port=8080)
```
---
## 🛠 Component Toolkit
The engine provides a declarative way to build layouts. Every component supports **method chaining** for styling.
| Component | Description |
| --- | --- |
| `Container` | Centers content with a max-width (ideal for main pages). |
| `Row` / `Column` | Flexbox-based layouts for horizontal or vertical stacking. |
| `Grid` | Responsive CSS Grid for cards and galleries. |
| `Card` | A styled container with borders and padding. |
| `Navbar` | Responsive navigation bar with mobile hamburger menu support. |
| `Icon` | Embed 1,000+ professional icons via [Lucide](https://lucide.dev). |
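The chaining style used throughout the toolkit can be illustrated with a minimal, self-contained sketch. This is a toy pattern demo, not the engine's actual implementation: each styling method records a CSS property and returns `self`, so calls compose left to right.

```python
# Toy illustration of the method-chaining pattern (NOT the real engine code).
class Component:
    def __init__(self, content=""):
        self.content = content
        self.styles = {}  # CSS property -> value

    def _set(self, prop, value):
        self.styles[prop] = value
        return self  # returning self is what enables chaining

    def font_size(self, v):
        return self._set("font-size", v)

    def weight(self, v):
        return self._set("font-weight", v)

    def m_top(self, v):
        return self._set("margin-top", v)

    def render(self):
        css = "; ".join(f"{k}: {v}" for k, v in self.styles.items())
        return f'<div style="{css}">{self.content}</div>'

title = Component("Welcome").font_size("24px").weight("bold")
print(title.render())
# -> <div style="font-size: 24px; font-weight: bold">Welcome</div>
```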
---
## 💾 Database & State
The engine includes a thread-safe SQLite wrapper. Use it to store settings or app state; changes trigger automatic page reloads.
```python
# Save data
app.store("user_theme", "dark")
# Fetch data
theme = app.fetch("user_theme", default="light")
```
---
## 📱 Mobile Support
The `Navbar` includes a built-in "hamburger" menu system: when the screen width drops below `768px`, it automatically collapses into a mobile-friendly toggle.
---
## 🤝 Contributing
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## 📄 License
Distributed under the MIT License. See `LICENSE` for more information.
| text/markdown | null | Basel Ezzat <lmcteam206@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T23:55:10.335752 | web_in_python_lol-0.2.2.tar.gz | 8,825 | b6/c4/784e6d8b6072f2e0ed2469d3ad5ce9fdcf9755df4188a28490d4a65ed24f/web_in_python_lol-0.2.2.tar.gz | source | sdist | null | false | a6cd0dfbc08ef4e5a398e7deae83195b | a8642aa473c05f782a69107210aa17e340863254f230eb427366084fd7fb717b | b6c4784e6d8b6072f2e0ed2469d3ad5ce9fdcf9755df4188a28490d4a65ed24f | null | [
"LICENSE"
] | 240 |
2.4 | phellem-persistence | 0.0.1 | State persistence for the Phellem security framework | # phellem-persistence
State persistence for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:55:07.490105 | phellem_persistence-0.0.1.tar.gz | 1,743 | d6/e3/5a37e78f63a2cb243dbd9aee55739f03fe7e0bcbf01eaa2ff33fbfe0b5fc/phellem_persistence-0.0.1.tar.gz | source | sdist | null | false | fa77da7a1f8c2c1faea08e115a0b5447 | 5d118d6e0bdc4edc9dc3a8075c8a56893c8050219ba5e8e60a088d6cb41df480 | d6e35a37e78f63a2cb243dbd9aee55739f03fe7e0bcbf01eaa2ff33fbfe0b5fc | null | [
"LICENSE"
] | 259 |
2.4 | phellem-macros | 0.0.1 | Utility macros for the Phellem security framework | # phellem-macros
Utility macros for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:55:03.618126 | phellem_macros-0.0.1.tar.gz | 1,739 | 11/ce/feab0757c2a3b7d2f3cc406496ca9707eeadcba94bab83bccd0b10d3c526/phellem_macros-0.0.1.tar.gz | source | sdist | null | false | bfea162ef1201fd8ec3b54763e783d73 | 098232419521cfa7ba3679abbd08114537056ff942c5b866c9b534e5bef089e1 | 11cefeab0757c2a3b7d2f3cc406496ca9707eeadcba94bab83bccd0b10d3c526 | null | [
"LICENSE"
] | 264 |
2.4 | phellem-licensing | 0.0.1 | License management for the Phellem security framework | # phellem-licensing
License management for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:59.727270 | phellem_licensing-0.0.1.tar.gz | 1,740 | 27/ed/2f54f6532a4cd81b280a24ac1a89ac7e1e9eef678e2dede75ee00fab51f8/phellem_licensing-0.0.1.tar.gz | source | sdist | null | false | 37e13445d4d9e9cfa3ae6d05ac3dae05 | 594ed782617b1c1cfe07d415db8f8ce7f86f970822e3204405ede7261e8040fc | 27ed2f54f6532a4cd81b280a24ac1a89ac7e1e9eef678e2dede75ee00fab51f8 | null | [
"LICENSE"
] | 259 |
2.4 | phellem-langchain | 0.0.1 | LangChain adapter for the Phellem security framework | # phellem-langchain
LangChain adapter for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:55.838409 | phellem_langchain-0.0.1.tar.gz | 1,754 | 22/2f/2df37f8706c72030fcfb9fce0e0568713078d3b2e2601af495186b87344e/phellem_langchain-0.0.1.tar.gz | source | sdist | null | false | 8ba1b0339cc22ad9115134d872ac8528 | 63cd6912c41c70b77c9da37459b2180a10c07ad803bf1d694bc63122bd5e999c | 222f2df37f8706c72030fcfb9fce0e0568713078d3b2e2601af495186b87344e | null | [
"LICENSE"
] | 257 |
2.4 | phellem-kernel | 0.0.1 | Kernel module for the Phellem security framework | # phellem-kernel
Kernel module for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:51.536172 | phellem_kernel-0.0.1.tar.gz | 1,736 | 0e/f8/b0ee4df7820495756dc2e107638634285a7194ef76bc022cdb10de5d6f10/phellem_kernel-0.0.1.tar.gz | source | sdist | null | false | bbaf48d35d25d929edda264fc4e2862b | 34c420d874a4f32d76c004372880bb0f828f0d25bb8b1f48a3b1b2f49cc61ee1 | 0ef8b0ee4df7820495756dc2e107638634285a7194ef76bc022cdb10de5d6f10 | null | [
"LICENSE"
] | 271 |
2.4 | phellem-hardening | 0.0.1 | Security hardening for the Phellem security framework | # phellem-hardening
Security hardening for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:47.086406 | phellem_hardening-0.0.1.tar.gz | 1,745 | 7c/9e/eb6122af3f7795667fdcc68590f83ff4aeb6f44b5afdaeb3abe763a607f6/phellem_hardening-0.0.1.tar.gz | source | sdist | null | false | 53908062cbba07c05d5adac1b04d52e2 | fcbdc10b8fe50351492a1ea44e474ff0c76e9e97b29b988a8901dd336fefc949 | 7c9eeb6122af3f7795667fdcc68590f83ff4aeb6f44b5afdaeb3abe763a607f6 | null | [
"LICENSE"
] | 261 |
2.4 | phellem-fips | 0.0.1 | FIPS 140-3 support for the Phellem security framework | # phellem-fips
FIPS 140-3 support for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:42.935162 | phellem_fips-0.0.1.tar.gz | 1,741 | b0/33/356b278491ca69789ab1750e2b7283276bb6991e91fe77d02e76a7a4aed5/phellem_fips-0.0.1.tar.gz | source | sdist | null | false | a34c19099f3137b96072154b74d21bf4 | 5ab31f7d6485a473c23466ce9b1b8b8391f4c06a4ce4d0384c8b76fd0e40c95d | b033356b278491ca69789ab1750e2b7283276bb6991e91fe77d02e76a7a4aed5 | null | [
"LICENSE"
] | 261 |
2.4 | phellem-crypto | 0.0.1 | Cryptographic primitives for the Phellem security framework | # phellem-crypto
Cryptographic primitives for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:39.146905 | phellem_crypto-0.0.1.tar.gz | 1,745 | fd/8d/58de9d1a7a5b9020fefe81a0577c62ec6dd6f1c63a4fc3e073f8865b34b4/phellem_crypto-0.0.1.tar.gz | source | sdist | null | false | d0ab7d3ae8b5f9e2da3bc8fe0a3e92f0 | b5f64826c3aed7c7744f8ef4ff37406eb8671c86148a583c2d6ba520def05d95 | fd8d58de9d1a7a5b9020fefe81a0577c62ec6dd6f1c63a4fc3e073f8865b34b4 | null | [
"LICENSE"
] | 262 |
2.4 | phellem-core | 0.0.1 | Core runtime for the Phellem security framework | # phellem-core
Core runtime for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:35.057026 | phellem_core-0.0.1.tar.gz | 1,726 | 2d/48/19b1be7b9545b32aa72462f45e9a86306d67a86dc7c888e18f3f602a7579/phellem_core-0.0.1.tar.gz | source | sdist | null | false | a1854d67708d4aa89757bc64ad4e05e3 | 48c6e7febf5ac1f4b50f6d06796b2b8ceb534676776098296ce54684d5a38cce | 2d4819b1be7b9545b32aa72462f45e9a86306d67a86dc7c888e18f3f602a7579 | null | [
"LICENSE"
] | 253 |
2.4 | phellem-contracts | 0.0.1 | Shared types and contracts for the Phellem security framework | # phellem-contracts
Shared types and contracts for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:30.750135 | phellem_contracts-0.0.1.tar.gz | 1,750 | 45/e6/e8946619497dfa98b2856f4082d60000e792fccb6426bfd7f7d54182a9ef/phellem_contracts-0.0.1.tar.gz | source | sdist | null | false | 75807d66074ae20ba985716282c90bee | 59ece52a8887e575bd14f11b2679cfa0c895534ec152d1b650a5b5199e6fd872 | 45e6e8946619497dfa98b2856f4082d60000e792fccb6426bfd7f7d54182a9ef | null | [
"LICENSE"
] | 255 |
2.4 | phellem-cli | 0.0.1 | Command-line interface for the Phellem security framework | # phellem-cli
Command-line interface for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:26.397533 | phellem_cli-0.0.1.tar.gz | 1,740 | 47/1d/075a40bc47d47daa2bd3f0677b25ad8a1becbd35da542beb2088ac106232/phellem_cli-0.0.1.tar.gz | source | sdist | null | false | 82ccea72a17eaa62f7893b05ab74bfe3 | adf0c9bb4fa17078dad033327e458956833263fd6ff754c8f6f893ac3b25a978 | 471d075a40bc47d47daa2bd3f0677b25ad8a1becbd35da542beb2088ac106232 | null | [
"LICENSE"
] | 257 |
2.4 | phellem-autogen | 0.0.1 | AutoGen adapter for the Phellem security framework | # phellem-autogen
AutoGen adapter for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:22.560319 | phellem_autogen-0.0.1.tar.gz | 1,742 | 4f/9d/0530555ed0a5df66ff56d00c2d93d5212c7e8c1d6be6ccd2a439167d4ffa/phellem_autogen-0.0.1.tar.gz | source | sdist | null | false | a42979c1a7bd1aaf731792697e72344c | 4991e62ff76748f070c5da6abf7858dfbfee7484fe3a7bf66f22c64c5381da25 | 4f9d0530555ed0a5df66ff56d00c2d93d5212c7e8c1d6be6ccd2a439167d4ffa | null | [
"LICENSE"
] | 260 |
2.4 | phellem-adapters | 0.0.1 | Framework adapters for the Phellem security framework | # phellem-adapters
Framework adapters for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:18.253259 | phellem_adapters-0.0.1.tar.gz | 1,739 | c2/15/8ec72d52842fe8112718c24cb08990cdd9b8c5909467fa8f72c06fff6329/phellem_adapters-0.0.1.tar.gz | source | sdist | null | false | 09899e3892d31c4d55154997bdf60d26 | c040614127aab0f3d6f3dd246666f7a0d57485f1f6d24381f84ba9f6dc146839 | c2158ec72d52842fe8112718c24cb08990cdd9b8c5909467fa8f72c06fff6329 | null | [
"LICENSE"
] | 256 |
2.4 | phellem-actuarial | 0.0.1 | Risk modeling for the Phellem security framework | # phellem-actuarial
Risk modeling for the Phellem security framework
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:12.858581 | phellem_actuarial-0.0.1.tar.gz | 1,751 | 0c/12/0f62392c3b56adb8eda16f91bcb526f5ee54391eaf74e982cca47550784b/phellem_actuarial-0.0.1.tar.gz | source | sdist | null | false | 792dcb4e8f9f518359f6698ddc3cdd9b | 19d70c08695966046c8411e97de33e2a6a99861d076a17b3797b44fdaca03bf8 | 0c120f62392c3b56adb8eda16f91bcb526f5ee54391eaf74e982cca47550784b | null | [
"LICENSE"
] | 255 |
2.4 | phellem | 0.0.1 | Security framework for AI agent systems | # phellem
Security framework for AI agent systems
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:08.729102 | phellem-0.0.1.tar.gz | 1,712 | df/18/b1b6f43010bad1479073eff5a540aa5a691d63f3ef20d9ca299f69ed2a3c/phellem-0.0.1.tar.gz | source | sdist | null | false | e1703e575ce5a820a46b800ba39eb083 | d576d5880c54b6c2ab4bbfc9b4443a799945f35655c325ee5863183228a22806 | df18b1b6f43010bad1479073eff5a540aa5a691d63f3ef20d9ca299f69ed2a3c | null | [
"LICENSE"
] | 247 |
2.4 | casparian | 0.0.1 | Software library by Casparian Systems Inc. | # casparian
Software library by Casparian Systems Inc.
Under active development by [Casparian Systems Inc](https://casparian.systems).
## License
Proprietary — Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
See [LICENSE](LICENSE) for details.
| text/markdown | Casparian Systems Inc. | null | null | null | Copyright (c) 2026 Casparian Systems Inc. All rights reserved.
This software is proprietary and confidential. No part of this software
may be reproduced, distributed, or transmitted in any form or by any
means, including photocopying, recording, or other electronic or
mechanical methods, without the prior written permission of
Casparian Systems Inc, except as expressly permitted by a separate
license agreement.
For licensing enquiries: legal@casparian.systems
| null | [
"Development Status :: 1 - Planning",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://casparian.systems",
"Repository, https://github.com/casparian-systems/phellem"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T23:54:04.627513 | casparian-0.0.1.tar.gz | 1,711 | 3d/26/37f192a3d4c5458d6803aa01f6964d9a211f3bc9333e52e0c8c863840f89/casparian-0.0.1.tar.gz | source | sdist | null | false | e159c9555ce71ed8ac6c30fee26229be | 5bc59e57e88e7b48b61917eacda9b481ff4f91feadc245c3288dd9e4f41af24a | 3d2637f192a3d4c5458d6803aa01f6964d9a211f3bc9333e52e0c8c863840f89 | null | [
"LICENSE"
] | 249 |
2.1 | sgraph-ai-app-send | 0.5.0 | SGraph-AI__App__Send | # SGraph Send
**Zero-knowledge encrypted file sharing.** The server never sees your files.
[send.sgraph.ai](https://send.sgraph.ai) | Currently in private beta
---
## How It Works
A complete walkthrough of the upload-to-download flow. The server never sees your plaintext, your file name, or your decryption key at any point.
### Step 1: Select a file
Drop a file into the upload zone or click to browse. No account required.
<img width="600" alt="Upload page with drop zone and test files section" src="https://github.com/user-attachments/assets/f6a37952-fddf-4842-877a-4d59b9ee81ee" />
### Step 2: Encrypt and upload
Your file is shown with its size. Click "Encrypt & Upload" -- encryption happens entirely in your browser using AES-256-GCM before anything leaves your device.
<img width="600" alt="File selected, showing test-data.json ready for encryption" src="https://github.com/user-attachments/assets/fb5e68b7-a741-4aff-a816-99974695a067" />
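The encrypt-before-upload step can be sketched in Python with the `cryptography` package. This is an illustrative model of the scheme (AES-256-GCM with a random key that never leaves the client), not SGraph Send's actual Web Crypto code; the function names are invented here:

```python
# Sketch: encrypt a file client-side with AES-256-GCM so the server
# only ever receives ciphertext. Assumed shape, not the app's real code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes) -> tuple[bytes, bytes]:
    """Return (blob_to_upload, key_to_share_out_of_band)."""
    key = AESGCM.generate_key(bit_length=256)   # never sent to the server
    nonce = os.urandom(12)                      # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext, key

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

Because GCM is authenticated, any tampering with the stored blob makes decryption fail rather than yield corrupted plaintext.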
### Step 3: Share the link and key separately
After upload, you get two things: a download link and a decryption key, each with its own copy button. The security tip reminds you to share these through different channels. The transparency panel proves what was stored (encrypted file, size) and what was NOT stored (file name, decryption key, raw IP).
<img width="600" alt="File sent with download link, decryption key, and transparency panel" src="https://github.com/user-attachments/assets/f5bcbf42-36d1-4c4d-abc0-83bc3083c48a" />
### Step 4: Recipient opens the download link
The recipient sees the encrypted file metadata and a field to paste the decryption key. The server never sees the key -- it is shared out-of-band between sender and recipient.
<img width="600" alt="Download page showing encrypted file and decryption key input" src="https://github.com/user-attachments/assets/2bcf7574-8eb7-4456-9a0f-442bd5d7a644" />
### Step 5: File decrypted locally
The file is decrypted in the recipient's browser. The transparency panel confirms: file content was encrypted (the server could not read it), the decryption key was NOT stored (only you have it), and the file name was never sent to the server.
<img width="600" alt="Download confirmation with transparency panel showing zero-knowledge proof" src="https://github.com/user-attachments/assets/a7e52ea7-d310-4f9a-8c77-8ca752dc77e0" />
### Step 6: Original file, intact
The downloaded file is identical to the original. The server only ever had encrypted bytes -- it could not read, modify, or inspect the contents at any point.
<img width="600" alt="Original test-data.json opened in text editor, content intact" src="https://github.com/user-attachments/assets/2b8f882f-1526-4084-928d-90c9602227e5" />
---
## Why This Exists
Most file sharing services require you to trust the provider with your unencrypted data. SGraph Send takes a different approach: the server is architecturally unable to read what you share.
- No accounts required
- No tracking, no cookies, no local storage
- The server stores only encrypted bytes it cannot decrypt
- IP addresses are hashed with a daily rotating salt — stored as one-way hashes, never in the clear
---
## Architecture
| Component | Detail |
|-----------|--------|
| **Two Lambda functions** | User-facing (transfers, health, static UI) and Admin (tokens, stats) |
| **Endpoints** | Lambda Function URLs — direct HTTPS, no API Gateway |
| **Storage** | S3 via Memory-FS abstraction (pluggable: memory, disk, S3) |
| **Encryption** | Web Crypto API, AES-256-GCM, client-side only |
| **Frontend** | IFD Web Components — vanilla JS, zero framework dependencies |
| **Backend** | FastAPI + Mangum via [osbot-fast-api](https://github.com/owasp-sbot/OSBot-Fast-API) |
| **Type system** | Type_Safe from [osbot-utils](https://github.com/owasp-sbot/OSBot-Utils) (no Pydantic) |
Three UIs serve different audiences: the user workflow, power user tools, and an admin console.
---
## The Agentic Team
This project is built and maintained by a **15-role AI agentic team** coordinated through Claude Code, with a human stakeholder (Dinis Cruz) providing direction through written briefs.
**Roles:** Architect, Dev, QA, DevOps, AppSec, GRC, DPO, Advocate, Sherpa, Ambassador, Journalist, Historian, Cartographer, Librarian, and Conductor.
Each role produces structured review documents, tracks decisions, and operates within defined boundaries. The team's work is fully visible in the repo:
- [`team/roles/`](team/roles/) — all role definitions and review documents
- [`team/humans/dinis_cruz/briefs/`](team/humans/dinis_cruz/briefs/) — stakeholder briefs driving priorities
- [`.claude/CLAUDE.md`](.claude/CLAUDE.md) — agent guidance, stack rules, and project conventions
---
## Key Documents
| Document | Path |
|----------|------|
| Project brief | [`library/docs/_to_process/project - Secure Send Service brief.md`](library/docs/_to_process/project%20-%20Secure%20Send%20Service%20brief.md) |
| Phase roadmap | [`library/roadmap/phases/v0.1.1__phase-overview.md`](library/roadmap/phases/v0.1.1__phase-overview.md) |
| Agent guidance | [`.claude/CLAUDE.md`](.claude/CLAUDE.md) |
| Development guides | [`library/guides/`](library/guides/) |
| Issue tracking | [`.issues/`](.issues/) |
---
## Project Structure
```
sgraph_ai_app_send/ # Application code
lambda__admin/ # Admin Lambda (FastAPI + Mangum)
lambda__user/ # User Lambda (FastAPI + Mangum)
sgraph_ai_app_send__ui__admin/ # Admin UI (static assets)
sgraph_ai_app_send__ui__user/ # User UI (static assets)
tests/unit/ # Tests (no mocks, real in-memory stack)
.issues/ # File-based issue tracking
library/ # Specs, guides, roadmap, dependencies
team/ # Agentic team roles, reviews, briefs
```
---
## Development
Requires **Python 3.12**.
```bash
# Install dependencies
poetry install
# Run tests
poetry run pytest tests/unit/ -v
```
All tests use real implementations with an in-memory storage backend. No mocks, no patches. The full stack starts in under 3 seconds.
---
## Stack
| Layer | Technology |
|-------|-----------|
| Runtime | Python 3.12 / arm64 |
| Web framework | FastAPI via osbot-fast-api / osbot-fast-api-serverless |
| Lambda adapter | Mangum |
| Storage | Memory-FS (pluggable: memory, disk, S3) |
| AWS operations | osbot-aws |
| Type system | Type_Safe (osbot-utils) |
| Frontend | Vanilla JS + Web Components (IFD) |
| Encryption | Web Crypto API (AES-256-GCM) |
| Testing | pytest, in-memory stack, no mocks |
| CI/CD | GitHub Actions (test, tag, deploy) |
---
## Status
**v0.2.21** — S3 persistent storage live. End-to-end encryption working. CI/CD pipeline deploying automatically. 56 tests passing. Private beta phase.
---
## License
Apache 2.0
| text/markdown | Dinis Cruz | dinis.cruz@owasp.org | null | null | Apache 2.0 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/the-cyber-boardroom/SGraph-AI__App__Send | null | <4.0,>=3.12 | [] | [] | [] | [
"issues-fs-cli",
"mgraph-ai-service-cache",
"mgraph-ai-service-cache-client",
"osbot-fast-api-serverless",
"osbot-utils"
] | [] | [] | [] | [
"Repository, https://github.com/the-cyber-boardroom/SGraph-AI__App__Send"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T23:54:00.655656 | sgraph_ai_app_send-0.5.0.tar.gz | 46,031 | 88/ba/fb65f3f6c0337e17989a978300bbd23b8bf1e2c5e9e7ead1d6af6dc50004/sgraph_ai_app_send-0.5.0.tar.gz | source | sdist | null | false | cbd038dc3baaa30677a36a0ea28512e9 | e9db939c8bd679138d898ee168305a81b2bdaf08b9551c805e58749e6da34e38 | 88bafb65f3f6c0337e17989a978300bbd23b8bf1e2c5e9e7ead1d6af6dc50004 | null | [] | 230 |
2.4 | proxy-shadow-keys | 0.1.5 | CLI tool to manage proxy shadow keys | # Proxy Shadow Keys
A CLI tool and local proxy service designed to intercept network requests and transparently replace "shadow keys" (e.g., `shadow_my_api_key`) with your real, secret keys stored securely in the macOS Keychain.
This tool allows developers to use placeholder keys in their application code or environment files, preventing accidental exposure or commits of sensitive API keys to version control.
## Intentions
- **Security**: Never hardcode real API keys in your `.env` files, scripts, or repositories.
- **Convenience**: Use consistent placeholder keys (like `shadow_stripe_secret`) across your projects and let the proxy handle inserting the real sensitive values behind the scenes.
- **Fallbacks**: If a shadow key isn't found in your secure vault, the proxy sends the original placeholder unchanged, preventing unexpected crashes and making debugging easy.
## Project Structure
```text
proxy-shadow-keys/
├── src/
│ └── proxy_shadow_keys/
│ ├── cli.py # Click-based CLI commands (set, rm, start, stop, install-cert)
│ ├── interceptor.py # mitmproxy addon that intercepts and replaces shadow keys
│ └── system_proxy.py # Cross-platform utility to toggle system proxy settings (Windows, macOS, Linux)
├── tests/
│ ├── features/ # Behavior-Driven Development (BDD) feature specifications
│ └── step_defs/ # pytest-bdd step definitions
└── pyproject.toml # Project configuration and dependencies
```
## How It Works
1. **Store Keys**: Use the CLI to store a real key mapping in your system's keyring (macOS Keychain, Windows Credential Locker, or Linux Secret Service).
2. **Start Proxy**: The CLI starts a local `mitmproxy` instance in the background and configures your OS system proxy (macOS `networksetup`, Windows Registry, or Linux `gsettings`) to route traffic through it.
3. **Intercept & Replace**: As your system makes HTTP/HTTPS requests, the mitmproxy addon parses request headers, JSON bodies, and URL query parameters. When it detects a string starting with `shadow_`, it queries the local keyring to swap the placeholder with the real API key before forwarding the request to its destination.
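The replacement logic in step 3 can be sketched like this (an assumed shape, not the package's actual mitmproxy addon): scan request values for `shadow_` tokens, swap each for its real key from the vault, and fall back to the placeholder when no mapping exists.

```python
# Sketch: substitute shadow_ placeholders in header values with real keys.
import re

SHADOW = re.compile(r"shadow_[A-Za-z0-9_]+")

def resolve(value: str, vault: dict[str, str]) -> str:
    # Unknown tokens are left unchanged -- the fallback behaviour above.
    return SHADOW.sub(lambda m: vault.get(m.group(0), m.group(0)), value)

def rewrite_headers(headers: dict[str, str], vault: dict[str, str]) -> dict[str, str]:
    return {k: resolve(v, vault) for k, v in headers.items()}
```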
## Requirements
- macOS, Windows, or Linux (GNOME)
- Python 3.9+
- `mitmproxy` installed and available
## Installation
The recommended way to install and run the CLI globally in an isolated environment is using `pipx` or `uv`:
```bash
# Using uv (fastest)
uv tool install proxy-shadow-keys
# Or using pipx
pipx install proxy-shadow-keys
```
### Standard Installation
You can install `proxy-shadow-keys` via `pip`:
```bash
pip install proxy-shadow-keys
```
### One-liner (curl)
If you have `pip` installed, you can use this one-liner to download and install the latest version directly:
```bash
curl -sSL https://raw.githubusercontent.com/Tavernari/proxy-shadow-keys/main/scripts/install.sh | bash
```
### Development or Source Installation
If you want to contribute or use the latest unreleased changes:
```bash
# Clone the repository
git clone git@github.com:Tavernari/proxy-shadow-keys.git
cd proxy-shadow-keys
# Install in editable mode
pip install -e .
```
## Usage
### 1. Certificate Installation
For the proxy to inspect encrypted HTTPS traffic, you must first install and trust the mitmproxy CA certificate. Run this command once (requires sudo privileges):
```bash
proxy-shadow-keys install-cert
```
### 2. Managing Shadow Keys
Map a placeholder `shadow_` key to your real API key (stored securely via `keyring`):
```bash
proxy-shadow-keys set shadow_openai_key sk-proj-123456789...
```
To remove a mapped key:
```bash
proxy-shadow-keys rm shadow_openai_key
```
### 3. Proxy Lifecycle
Start the proxy. This will launch `mitmproxy` in the background and configure your local network settings to use the local proxy:
```bash
proxy-shadow-keys start
```
*(Default port is 8080. You can optionally pass `--port 8081`)*
When you're done, stop the proxy to restore your standard network settings:
```bash
proxy-shadow-keys stop
```
## Testing
The project uses a BDD approach with `pytest` and `pytest-bdd`. All functionality is strictly defined in feature files before implementation.
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run all tests
pytest
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.1.7",
"keyring>=24.3.0",
"mitmproxy>=10.2.1",
"bcrypt<4.0.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-bdd>=7.0.0; extra == \"dev\"",
"build>=1.0.0; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\"",
"tomli>=2.0.1; python_version < \"3.11\" and extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:52:36.683311 | proxy_shadow_keys-0.1.5.tar.gz | 11,936 | 4c/e5/ecb8fddd0f930c066630fd458e3062efb5d479e6e1a321b8af8d337399ba/proxy_shadow_keys-0.1.5.tar.gz | source | sdist | null | false | fa0c0a2bae7bdfd4fc5a8dc7b3eb67ea | 69b2736def9e5cb52a4dcf930a92a58a2d25f6cd2d359b7a89cbd6c58558652a | 4ce5ecb8fddd0f930c066630fd458e3062efb5d479e6e1a321b8af8d337399ba | null | [] | 242 |
2.4 | cloudscope | 0.3.0 | TUI for browsing and syncing files across S3, GCS, and Google Drive | # CloudScope
A terminal UI for browsing, downloading, uploading, and bi-directionally syncing files across **AWS S3**, **Google Cloud Storage**, and **Google Drive**.
Built with [Textual](https://textual.textualize.io/) for a keyboard-driven, Linear.app-inspired dark interface.
## Features
- **Multi-backend browsing** — Switch between S3, GCS, and Google Drive with a single keypress (`1`, `2`, `3`)
- **Lazy-loading tree sidebar** — Expand buckets/drives on demand without loading everything upfront
- **Sortable file table** — Click column headers to sort by name, size, modified date, or type
- **File preview** — Metadata panel appears when a file is highlighted (size, modified date, ETag, content type)
- **Downloads and uploads** — File picker dialogs for selecting local paths
- **Folder creation** — Create new folders/prefixes directly from the TUI
- **Bi-directional sync** — Three-way diff engine with conflict resolution (newer wins, local wins, remote wins, keep both, or ask)
- **Command palette** — Press `Ctrl+K` to search and run any command
- **Settings screen** — Configure default backend, AWS profile/region, GCP project, Drive OAuth credentials
- **Auth setup** — Guided authentication testing for each backend
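The three-way diff behind bi-directional sync can be sketched as a per-path classification against the last-synced base state (an illustrative model, not CloudScope's actual engine):

```python
# Sketch: classify one path by comparing local and remote against the base.
def classify(base, local, remote):
    """base/local/remote are content hashes, or None if the file is absent."""
    if local == remote:
        return "in_sync"
    if local == base:
        return "pull"       # only the remote side changed
    if remote == base:
        return "push"       # only the local side changed
    return "conflict"       # both sides changed -> apply a resolution policy
```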
## Requirements
- Python 3.11+
- AWS credentials (for S3) — environment variables, `~/.aws/credentials`, or IAM role
- GCP credentials (for GCS) — Application Default Credentials (`gcloud auth application-default login`)
- OAuth2 client secrets (for Google Drive) — `client_secrets.json` placed in config directory
## Installation
### From source
```bash
git clone https://github.com/your-username/cloudscope.git
cd cloudscope
pip install -e .
```
### With dev dependencies
```bash
pip install -e ".[dev]"
```
### Run
```bash
cloudscope
```
Or via module:
```bash
python -m cloudscope
```
## Authentication Setup
### AWS S3
CloudScope uses standard boto3 credential resolution. Any of the following will work:
- Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
- Shared credentials file (`~/.aws/credentials`)
- AWS config file (`~/.aws/config`) with named profiles
- IAM instance role (on EC2)
Configure a specific profile in Settings (`,`) or pass it through the config file at `~/.config/cloudscope/config.toml`.
### Google Cloud Storage
Uses [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials):
```bash
gcloud auth application-default login
```
Or point to a service account key in Settings.
### Google Drive
Requires OAuth2 consent flow:
1. Create OAuth credentials in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials)
2. Download the client secrets JSON
3. Place it at `~/.config/cloudscope/drive_client_secrets.json` (or configure the path in Settings)
4. Press `a` in CloudScope to run the auth flow — a browser window will open for consent
The token is persisted at `~/.config/cloudscope/drive_token.json` and refreshed automatically.
## Keyboard Shortcuts
### Browsing
| Key | Action |
|---|---|
| `1` | Switch to S3 |
| `2` | Switch to GCS |
| `3` | Switch to Google Drive |
| `d` | Download selected file |
| `u` | Upload a file |
| `n` | Create new folder |
| `Delete` | Delete selected file |
| `r` | Refresh current view |
| `Tab` | Focus next panel |
| `Shift+Tab` | Focus previous panel |
| `Ctrl+K` | Open command palette |
| `s` | Open sync configuration |
| `,` | Open settings |
| `a` | Open auth setup |
| `?` | Show help |
| `q` | Quit |
### In file table
| Key | Action |
|---|---|
| `Up`/`Down` | Navigate files |
| `Enter` | Open folder or select file |
| Click column header | Sort by that column |
## Configuration
Settings are stored at `~/.config/cloudscope/config.toml`:
```toml
default_backend = "s3"
max_concurrent_transfers = 3
[backends.s3]
profile = "default"
region = "us-east-1"
[backends.gcs]
project = "my-project-id"
[backends.drive]
client_secrets_path = "/path/to/client_secrets.json"
```
All settings can also be edited from the TUI via the Settings screen (`,`).
## Project Structure
```
src/cloudscope/
├── app.py # Main Textual application
├── config.py # TOML config management
├── __main__.py # CLI entry point
├── auth/
│ ├── aws.py # AWS profile listing and client creation
│ ├── gcp.py # GCS client creation (ADC or service account)
│ └── drive_oauth.py # Google Drive OAuth2 flow
├── backends/
│ ├── base.py # CloudBackend Protocol and error hierarchy
│ ├── registry.py # Backend factory (register/get/list)
│ ├── s3.py # AWS S3 backend (boto3)
│ ├── gcs.py # Google Cloud Storage backend
│ └── drive.py # Google Drive backend (path-to-ID cache, export)
├── models/
│ ├── cloud_file.py # CloudFile, CloudFileType, CloudLocation
│ ├── transfer.py # TransferJob, TransferDirection, TransferStatus
│ └── sync_state.py # SyncRecord, SyncConflict, SyncPlan, etc.
├── sync/
│ ├── state.py # SQLite-backed sync state persistence
│ ├── differ.py # Three-way diff algorithm
│ ├── resolver.py # Conflict resolution strategies
│ ├── plan.py # Converts SyncDiff to executable SyncPlan
│ └── engine.py # Sync orchestrator (diff → plan → execute)
├── transfer/
│ ├── manager.py # Concurrent transfer queue (asyncio.Semaphore)
│ └── progress.py # Progress adapter for SDK callbacks
└── tui/
├── commands.py # Command palette provider (Ctrl+K)
├── screens/
│ ├── browse.py # Main browsing screen
│ ├── settings.py # Settings form
│ ├── sync_config.py # Sync configuration and execution
│ └── auth_setup.py # Guided auth testing
├── modals/
│ ├── confirm_dialog.py
│ ├── download_dialog.py
│ ├── upload_dialog.py
│ ├── new_folder.py
│ └── sync_dialog.py # Sync conflict resolution
├── widgets/
│ ├── status_bar.py # AppHeader (slim 1-line header)
│ ├── app_footer.py # Context-sensitive keybind hints
│ ├── cloud_tree.py # Lazy-loading tree sidebar
│ ├── file_table.py # Sortable file listing (DataTable)
│ ├── breadcrumb.py # Path breadcrumb with › separators
│ ├── preview_panel.py # File metadata display
│   ├── transfer_panel.py # Transfer progress bar
└── styles/
└── cloudscope.tcss # Linear-inspired dark theme
```
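The concurrent transfer queue noted above (`transfer/manager.py`) can be sketched with an `asyncio.Semaphore` bounding in-flight jobs to `max_concurrent_transfers` (assumed shape, not CloudScope's actual code):

```python
# Sketch: run transfer jobs with at most `max_concurrent` in flight.
import asyncio

async def run_transfers(jobs, max_concurrent=3):
    sem = asyncio.Semaphore(max_concurrent)

    async def run(job):
        async with sem:          # blocks while max_concurrent jobs are active
            return await job()

    # gather preserves submission order in its results.
    return await asyncio.gather(*(run(j) for j in jobs))
```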
## Development
```bash
# Install with dev dependencies
pip install -e ".[dev]"
# Lint
ruff check src/
# Type check
mypy src/cloudscope/
# Test
pytest
# Run with Textual dev tools (live CSS reloading, etc.)
textual run --dev cloudscope.app:CloudScopeApp
```
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiofiles>=23.0.0",
"boto3>=1.28.0",
"google-api-python-client>=2.90.0",
"google-auth-httplib2>=0.1.0",
"google-auth-oauthlib>=1.0.0",
"google-cloud-storage>=2.10.0",
"humanize>=4.0.0",
"platformdirs>=3.0.0",
"textual>=1.0.0",
"moto>=4.0; extra == \"dev\"",
"mypy>=1.5; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"textual-dev>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:52:17.891262 | cloudscope-0.3.0.tar.gz | 42,301 | a7/f3/261c471efd386f86e4838b4822f73ed06961e459d8da988d0423fd06fa70/cloudscope-0.3.0.tar.gz | source | sdist | null | false | 55c7068a802cbef8cd6ae385b24c7516 | ee8f28de6e8ce0919b224fdcb8a7e9398b7c3a4393ae4076710d63433d8eef40 | a7f3261c471efd386f86e4838b4822f73ed06961e459d8da988d0423fd06fa70 | null | [] | 238 |
2.4 | zotmd | 0.3.0 | Sync your Zotero library to Markdown files with automatic updates and PDF annotation extraction | # ZotMD
**Sync your Zotero library to Markdown files with automatic updates and PDF annotation extraction.**
Built for Obsidian and compatible with any Markdown-based note-taking app.
## Features
- **Library Sync**: Keeps your Zotero library and your Obsidian folder of Markdown notes in sync. Incremental sync (the default) updates only changed items and also tracks modified highlights and annotations.
- **PDF Annotations**: Extracts highlights and notes created in Zotero
- **Customizable Templates**: Uses Jinja2 to create markdown notes with custom templates
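Rendering a note from a Jinja2 template can be sketched as below. The variable names (`title`, `authors`, `annotations`) are illustrative assumptions, not zotmd's actual template context:

```python
# Sketch: render a Markdown note from a custom Jinja2 template.
from jinja2 import Template

TEMPLATE = Template(
    "# {{ title }}\n"
    "Authors: {{ authors | join(', ') }}\n\n"
    "{% for a in annotations %}> {{ a }}\n{% endfor %}"
)

note = TEMPLATE.render(
    title="A Paper",
    authors=["Ada", "Grace"],
    annotations=["First highlight", "Second highlight"],
)
```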
## Quick Start
```bash
# Install with uv (https://docs.astral.sh/uv/)
uv tool install zotmd
# Set up configuration
zotmd config
# Sync your library
zotmd sync
```
## Requirements
- Python 3.13+
- [Better BibTeX](https://retorque.re/zotero-better-bibtex/) (Zotero plugin)
- [Zotero API access](https://www.zotero.org/settings/security)
## [Documentation](https://adbX.github.io/zotmd/)
## License
MIT License - see [LICENSE](LICENSE) for details. | text/markdown | null | Adhithya Bhaskar <adhithyabhaskar@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"alive-progress>=3.3.0",
"click>=8.3.1",
"jinja2>=3.1.6",
"platformdirs>=4.5.1",
"pyzotero>=1.7.6"
] | [] | [] | [] | [] | uv/0.7.4 | 2026-02-20T23:51:13.342702 | zotmd-0.3.0.tar.gz | 129,223 | 2d/55/7897f9bab0141728e0557a5e0ca8d8b508fb593d4669bf562176babd1953/zotmd-0.3.0.tar.gz | source | sdist | null | false | fd1e66fe273b551a63ef8ca4d395083d | b52cad9d70e696bfc5e1046e0f41a9845db0b25d8353f16cdee432c093ab3782 | 2d557897f9bab0141728e0557a5e0ca8d8b508fb593d4669bf562176babd1953 | null | [
"LICENSE"
] | 241 |
2.1 | lager-cli | 0.3.28 | Lager CLI - Box and Docker connectivity | # Lager CLI
A powerful command-line interface for controlling embedded hardware, test equipment, and development boards through Lager Data box devices.
[](https://badge.fury.io/py/lager-cli)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Features
### Hardware Control
- **Power Management**: Control power supplies, battery simulators, solar simulators, and electronic loads
- **I/O Operations**: ADC/DAC, GPIO, thermocouple sensors
- **Test Instruments**: Oscilloscopes, logic analyzers, multimeters, signal generators
### Embedded Development
- **Debugging**: ARM Cortex-M debugging with J-Link, CMSIS-DAP, ST-Link support
- **Firmware Flashing**: Flash firmware via debug probes
- **Serial Communication**: UART terminal with test framework integration
- **Robotics**: Robot arm control for automated testing
### Wireless & Connectivity
- **Bluetooth LE**: Scan, connect, and interact with BLE devices
- **USB**: Programmable USB hub control
- **Webcam**: Video streaming from box devices
## Installation
Install the Lager CLI using pip:
```bash
pip install lager-cli
```
Or upgrade to the latest version:
```bash
pip install -U lager-cli
```
## Quick Start
1. **Configure your box**:
```bash
lager defaults set box <your-box-id>
```
2. **Test connectivity**:
```bash
lager hello
```
3. **List available instruments**:
```bash
lager instruments
```
4. **Control a power supply**:
```bash
lager supply <net> voltage 3.3 --box <box-id>
```
## Core Commands
### Power Supply Control
```bash
# Set voltage and enable output
lager supply <net> voltage 3.3 --yes
# Set current limit
lager supply <net> current 0.5
# Check power state
lager supply <net> state
```
### ADC/DAC Operations
```bash
# Read ADC voltage
lager adc <net>
# Set DAC output
lager dac <net> 1.8
```
### Embedded Debugging
```bash
# Connect to debug probe
lager debug <net> connect --box <box>
# Flash firmware (auto-connects if needed)
lager debug <net> flash --hex firmware.hex --box <box>
# Reset and halt target
lager debug <net> reset --halt --box <box>
# Stream RTT logs
lager debug <net> rtt --box <box>
# Read memory
lager debug <net> memrd 0x08000000 256 --box <box>
```
### Oscilloscope & Logic Analyzer
```bash
# Measure frequency on scope channel
lager scope <net> measure freq
# Configure edge trigger
lager logic <net> trigger edge --slope rising --level 1.8
```
### Battery & Solar Simulation
```bash
# Set battery state of charge
lager battery <net> soc 80
# Configure solar irradiance
lager solar <net> irradiance 1000
```
### Serial Communication
```bash
# Connect to UART
lager uart --baudrate 115200
# Interactive mode with test runner
lager uart -i --test-runner unity
# Show the tty path for a UART net
lager uart <net> serial-port
# Create a UART net for an adapter without a USB serial number (store /dev path directly)
lager nets create <net> uart /dev/ttyUSB0 <label> --box <box>
# Warning: tty names can change after reboot; prefer device-serial mode when available
```
### Bluetooth LE
```bash
# Scan for BLE devices
lager ble scan --timeout 5.0
# Connect to device
lager ble connect <address>
```
## Configuration
### Box Setup
The CLI can connect to boxes via:
- **Cloud API**: Using box IDs
- **Direct IP**: Using Tailscale or VPN IP addresses
Create a `.lager` file in your project directory:
```json
{
"boxes": {
"my-box": "box-abc123",
"local-box": "<BOX_IP>"
}
}
```
### Direct IP Access
For direct IP connections, ensure SSH key authentication is configured:
```bash
# Configure SSH key for a box
ssh-copy-id lagerdata@<box-ip>
# Then connect via the CLI
lager ssh --box <box-ip>
```
### Environment Variables
- `LAGER_BOX`: Default box ID or IP address
- `LAGER_DEBUG`: Enable debug output
- `LAGER_COMMAND_DATA`: Command data (used internally)
## Net Management
Lager uses "nets" to represent physical test points or signals on your PCB:
```bash
# List all configured nets
lager nets
# Create a new power supply net
lager nets create VDD_3V3 supply 1 USB0::0x1AB1::0x0E11::DP8C0000001
# Auto-discover and create all nets
lager nets create-all
# Interactive TUI for net management
lager nets tui
```
## Advanced Features
### Remote Python Execution
```bash
# Run a Python script on the box
lager python my_script.py --box <box-id>
# Run with port forwarding
lager python --port 5000:5000/tcp server.py
```
### Development Environment
```bash
# Create a development environment
lager devenv create --image python:3.10
# Open interactive terminal
lager devenv terminal
```
### Package Management
```bash
# Install packages on box
lager pip install numpy
```
## Supported Hardware
### Debug Probes
- SEGGER J-Link
- ARM CMSIS-DAP
- ST-Link v2/v3
- Xilinx XDS110
### Power Supplies
- Rigol DP800 series
- Keysight E36200/E36300 series
- Keithley 2200/2280 series
### Battery Simulators
- Keithley 2281S
### Solar Simulators
- EA PSI/EL series (two-quadrant)
### Oscilloscopes
- Rigol MSO5000 series
### I/O Hardware
- LabJack T7 (ADC/DAC/GPIO)
- MCC USB-202 (ADC/DAC/GPIO)
### USB Hubs
- Acroname USBHub3+
- YKUSH
### Robotics
- Rotrics Dexarm
### Temperature
- Phidget Thermocouples
## Target Microcontrollers
Supports debugging and flashing for:
- STM32 (F0/F1/F2/F3/F4/F7/G0/G4/H7/L0/L1/L4/WB/WL series)
- Nordic nRF52/nRF91
- Atmel/Microchip SAM D/E/4S/70
- Texas Instruments CC32xx
- NXP i.MX RT, LPC54xx/55xx
- Silicon Labs EFM32
- Microchip PIC32MM
## Authentication & Access
The CLI authenticates to boxes via VPN access (Tailscale or similar). Access control is managed by your VPN permissions - if you have VPN access to a box, you can control it with the CLI.
### Prerequisites
1. **VPN Access**: Connect to your organization's VPN (Tailscale, etc.)
2. **SSH Keys**: Configure SSH key authentication for direct box access:
```bash
ssh-copy-id lagerdata@<box-ip>
```
3. **SSH to Box**: Use the CLI to connect:
```bash
lager ssh --box <box-ip-or-name>
```
### Verify Connectivity
```bash
# Test box connectivity
lager hello --box <box-id-or-ip>
# Check box status
lager status
```
## Documentation
For detailed documentation, visit: [https://docs.lagerdata.com](https://docs.lagerdata.com)
### Command Help
Every command has built-in help:
```bash
lager --help # Show all commands
lager supply --help # Show supply command options
lager debug --help # Show debug command options
```
## Examples
### Automated Test Script
```bash
#!/bin/bash
BOX="my-box"
# Configure power supply
lager supply VDD voltage 3.3 --box $BOX --yes
# Flash firmware
lager debug DEBUG_SWD flash --hex build/firmware.hex --box $BOX
# Reset and start
lager debug DEBUG_SWD reset --box $BOX
# Monitor UART output
lager uart --baudrate 115200 --test-runner unity --box $BOX
# Read sensor values
voltage=$(lager adc SENSOR_OUT --box $BOX)
temp=$(lager thermocouple TEMP1 --box $BOX)
echo "Voltage: $voltage V"
echo "Temperature: $temp °C"
# Disable power
lager supply VDD disable --box $BOX
```
### Interactive Python Control
```python
# example_test.py - Run on box with: lager python example_test.py
from lager.supply import supply
from lager.adc import adc
import time
# Set power supply voltage
supply.set_voltage("VDD_3V3", 3.3)
supply.enable("VDD_3V3")
# Wait for stabilization
time.sleep(0.1)
# Measure voltage
voltage = adc.read("VOUT")
print(f"Output voltage: {voltage:.3f} V")
# Disable supply
supply.disable("VDD_3V3")
```
## Troubleshooting
### Connection Issues
```bash
# Test box connectivity
lager hello --box <box-id>
# Check box status
lager status --box <box-id>
```
### Permission Errors
For Tailscale/direct IP connections, ensure SSH keys are configured:
```bash
# Set up SSH keys
ssh-copy-id lagerdata@<box-ip>
# Test SSH access
lager ssh --box <box-ip-or-name>
```
### Debug Probe Not Found
Verify J-Link GDB Server is installed on the box:
```bash
# Download J-Link to /tmp/ on your local machine
# Visit: https://www.segger.com/downloads/jlink/
# Download: JLink_Linux_V794a_x86_64.tgz to /tmp/
# Deploy box (J-Link will be installed automatically)
cd deployment
./setup_and_deploy_box.sh <box-ip>
```
### Authentication Issues
If you encounter connection issues:
1. **Verify VPN connection**: Ensure you're connected to the correct VPN
2. **Check SSH keys**: Verify SSH key authentication is configured
```bash
ssh-copy-id lagerdata@<box-ip>
```
3. **Test SSH access**: Try connecting to the box
```bash
lager ssh --box <box-ip-or-name>
```
4. **Test connectivity**: Use `lager hello` to verify the box is reachable
```bash
lager hello --box <box-ip-or-name>
```
## Contributing
We welcome contributions! Please see our contribution guidelines for more information.
## Support
- **Documentation**: https://docs.lagerdata.com
- **Issues**: Report bugs and request features via your support channel
- **Email**: hello@lagerdata.com
## License
MIT License - Copyright (c) Lager Data LLC
## Testing
Comprehensive test suites are available in the `test/` directory:
```bash
# Hardware-dependent tests (require instruments)
cd test
./supply.sh <BOX> <SUPPLY_NET>
./battery.sh <BOX> <BATTERY_NET>
./debug.sh <BOX> <DEBUG_NET> <HEXFILE> <ELFFILE>
./labjack.sh <BOX> <GPIO_NET> <ADC_NET> <DAC_NET>
# Box-only tests (no instruments required)
./deployment.sh <box-ip>
```
See `test/README.md` for test format and how to write new tests.
## Changelog
### Recent Updates
- Renamed test scripts for clarity (`test_*_commands.sh` → `*.sh`)
- Unified box deployment script (`setup_and_deploy_box.sh`)
- Added comprehensive test documentation (`test/README.md`)
- Enhanced debug command with RTT streaming and memory operations
- Improved error handling and validation across all commands
See full changelog in the [releases](https://github.com/lagerdata/lager-cli/releases).
| text/markdown | Lager Data LLC | hello@lagerdata.com | Lager Data LLC | hello@lagerdata.com | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development"
] | [] | https://github.com/lagerdata/lager-mono | null | >=3.10 | [] | [] | [] | [
"async-generator>=1.10",
"pymongo>=4.0",
"certifi>=2020.6.20",
"chardet>=5.2.0",
"click>=8.1.2",
"colorama>=0.4.3",
"h11>=0.16",
"idna>=3.4",
"ipaddress>=1.0.23",
"Jinja2>=3.1.2",
"multidict>=6.0.2",
"outcome>=1.0.1",
"pigpio>=1.78",
"python-dateutil>=2.8.1",
"PyYAML>=6.0.1",
"requests>=2.31.0",
"requests-toolbelt>=1.0.0",
"six>=1.16.0",
"sniffio>=1.3.1",
"sortedcontainers>=2.2.2",
"tenacity>=6.2.0",
"texttable>=1.6.2",
"trio>=0.27.0",
"lager-trio-websocket>=0.9.0.dev0",
"urllib3<3.0.0,>=1.26.20",
"wsproto>=0.14.1",
"yarl>=1.8.1",
"boto3",
"textual>=3.2.0",
"python-socketio>=5.10.0",
"websocket-client>=1.0.0",
"mcp>=1.20.0; extra == \"mcp\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.18 | 2026-02-20T23:50:36.089114 | lager_cli-0.3.28.tar.gz | 449,509 | 6e/c9/68623079f693b480f7b4e8c3c682e5e3993b266cca7909710790defb39ac/lager_cli-0.3.28.tar.gz | source | sdist | null | false | 9a0914c948ad2bfdbdc157ca7847f03c | e83b2946f9736944f8c0d99891c1248bbd9f7e011e5e465b5a4fb231f98467eb | 6ec968623079f693b480f7b4e8c3c682e5e3993b266cca7909710790defb39ac | null | [] | 250 |
2.4 | axiom-lang | 0.2.0 | Python bindings for Axiom - a verification-first policy engine for AI agents | # axiom-py
Python bindings for Axiom — a verification-first policy engine for AI agents.
---
## What Axiom Guarantees
Cite these properties when documenting your agent's safety posture:
| ID | Guarantee |
|---|---|
| **G1** | **Pre-flight pure query.** `verify()` has zero side effects. The engine never reads files, opens sockets, or mutates state. |
| **G2** | **Effect-class-based, not heuristic.** A `WRITE` intent is a `WRITE` intent whether the LLM calls it `SaveDocument`, `OutputData`, or `UpdateFile`. Axiom works on what an action structurally does — not on names or content, both of which are bypassable by rephrasing. |
| **G3** | **Deterministic.** Same input always produces the same verdict. Thread-safe. |
| **G4** | **Monotonic ratchet.** Restrictions only accumulate. A policy can add conscience predicates but never silently drop them. |
| **G5** | **Specific denial.** `verdict.reason` cites the exact predicate, value, and reason. `verdict.guidance` provides the human-readable category. |
---
## Installation
```bash
pip install axiom-lang
```
Or build from source:
```bash
cd axiom_py
pip install maturin
maturin develop
```
---
## Quickstart: Under 5 Lines
**Option 1 — Preset (recommended for common cases):**
```python
from axiom.presets import filesystem_readonly
engine = filesystem_readonly(allowed_paths=["/home/user/project"])
verdict = engine.verify("ReadFile", {"path": "/etc/passwd"})
# verdict.allowed == False
```
**Option 2 — Decorator:**
```python
from axiom import guard, AxiomDenied
@guard(effect="READ", conscience=["path_safety"])
def read_file(path: str) -> str:
with open(path) as f:
return f.read()
read_file("/etc/shadow") # raises AxiomDenied
```
**Option 3 — Builder:**
```python
from axiom import PolicyBuilder, Effect
engine = (
PolicyBuilder()
.intent("ReadFile", effect=Effect.READ, conscience=["path_safety"])
.build()
)
verdict = engine.verify("ReadFile", {"path": "/etc/shadow"})
# verdict.allowed == False
```
---
## Effect Class Table
| Effect | Meaning | Default conscience (presets) |
|---|---|---|
| `READ` | Read data from a resource | `path_safety` |
| `WRITE` | Write or modify a resource | `path_safety`, `no_exfiltrate` |
| `EXECUTE` | Execute code or a command | `no_harm`, `no_bypass_verification` |
| `NETWORK` | Send data over the network | `no_exfiltrate` |
| `NOOP` | Pure computation, no side effects | *(none)* |
---
## Conscience Predicates
Conscience predicates are named safety policies evaluated automatically against
an intent's effect class and field values.
| Predicate | Effects | What it blocks | Used in preset |
|---|---|---|---|
| `path_safety` | READ, WRITE | `/etc` `/proc` `/sys` `/boot` `/root` `/dev`; path traversal (`../`); URL-encoded variants (`%2e%2e`); fullwidth unicode path components | `filesystem_readonly`, `filesystem_readwrite`, `agent_standard`, `coding_assistant` |
| `no_exfiltrate` | NETWORK, WRITE | Any destination not in the declared-channel registry; fields: `url destination endpoint address target host uri remote`; blocks `/mnt/` `/net/` writes | `filesystem_readwrite`, `network_egress`, `agent_standard`, `coding_assistant` |
| `no_harm` | WRITE, EXECUTE, NETWORK | Destructive intent names (`Delete Drop Erase Format Kill Purge Remove Shutdown Terminate Truncate Wipe`) unless `authorized=true` in fields | `code_execution_sandboxed`, `coding_assistant` |
| `no_bypass_verification` | EXECUTE | Code/script/command/payload execution unless `verified=true` in fields or trust level ≥ `TRUSTED_INTERNAL` | `code_execution_sandboxed`, `coding_assistant` |
| `baseline_allow` | NOOP, READ | Applied automatically — permits safe read and no-op operations without an explicit allow rule | *(all)* |
`verdict.reason` contains the specific technical string from the predicate that denied the
action. `verdict.guidance` contains the higher-level category message.
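The kind of check `path_safety` describes can be sketched in plain Python. This is an illustrative approximation only — the real predicate is evaluated inside the Axiom engine, and the function name and constant below are hypothetical:

```python
from urllib.parse import unquote

# Illustrative only — not the engine's implementation.
FORBIDDEN_PREFIXES = ("/etc", "/proc", "/sys", "/boot", "/root", "/dev")

def looks_unsafe(path: str) -> bool:
    # Decode URL-encoded variants such as %2e%2e before inspecting the path.
    decoded = unquote(path)
    if ".." in decoded:  # path traversal
        return True
    return decoded.startswith(FORBIDDEN_PREFIXES)

print(looks_unsafe("/etc/passwd"))       # True
print(looks_unsafe("%2e%2e/secret"))     # True
print(looks_unsafe("/home/user/notes"))  # False
```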
---
## Presets Reference
| Function | Intents | Returns | Notes |
|---|---|---|---|
| `filesystem_readonly(allowed_paths=None)` | `ReadFile` | `GuardedEngine` | Allow-list optional |
| `filesystem_readwrite(allowed_paths=None)` | `ReadFile`, `WriteFile` | `GuardedEngine` | Allow-list optional |
| `network_egress()` | `HttpRequest` | `AxiomEngine` | — |
| `code_execution_sandboxed()` | `ExecuteCode` | `AxiomEngine` | — |
| `agent_standard(allowed_paths=None)` | `ReadFile`, `WriteFile`, `ProcessData` | `GuardedEngine` | Allow-list optional |
| `coding_assistant(project_root)` | `ReadFile`, `WriteFile`, `RunCommand` | `GuardedEngine` | `project_root` required |
```python
from axiom.presets import (
filesystem_readonly,
filesystem_readwrite,
network_egress,
code_execution_sandboxed,
agent_standard,
coding_assistant,
)
```
`GuardedEngine` enforces `allowed_paths` by resolving symlinks (`os.path.realpath`) at
construction time. Paths outside the allow-list receive a synthetic denial before the
engine is consulted.
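The allow-list behavior described above can be sketched as follows. The function name is hypothetical and this is not `GuardedEngine`'s code — it only illustrates why resolving symlinks with `os.path.realpath` defeats `../` and symlink escapes:

```python
import os

# Illustrative sketch, not the library's implementation.
def within_allowed(path: str, allowed_paths: list[str]) -> bool:
    real = os.path.realpath(path)  # resolves symlinks and ../ components
    return any(
        real == root or real.startswith(root + os.sep)
        for root in (os.path.realpath(p) for p in allowed_paths)
    )

print(within_allowed("/workspace/notes.txt", ["/workspace"]))      # True
print(within_allowed("/workspace/../etc/passwd", ["/workspace"]))  # False
```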
---
## Builder API
```python
from axiom import PolicyBuilder, Effect, Conscience, IntentBuilder
engine = (
PolicyBuilder(module_name="my_policy") # optional name
.intent(
"ReadFile",
effect=Effect.READ, # or "READ"
conscience=[Conscience.PATH_SAFETY], # or ["path_safety"]
takes=[("path", "String")], # optional — for introspection
gives=[("content", "String")], # optional
pre=["length(path) > 0"], # optional pre-condition expressions
bound="time(5s), memory(64mb)", # optional resource bounds
)
.intent("WriteFile", effect=Effect.WRITE, conscience=["path_safety", "no_exfiltrate"])
.build() # returns AxiomEngine
)
# Async variant
engine = await PolicyBuilder().intent(...).build_async()
# Inspect generated source
print(PolicyBuilder().intent("ReadFile", effect="READ", conscience=["path_safety"]).source())
```
`Conscience` constants: `PATH_SAFETY`, `NO_EXFILTRATE`, `NO_HARM`, `NO_BYPASS_VERIFICATION`.
---
## Decorator API
```python
from axiom import guard, AxiomDenied
@guard(effect="READ", conscience=["path_safety"])
def read_file(path: str) -> str:
with open(path) as f:
return f.read()
# Async functions work transparently
@guard(effect="WRITE")
async def write_file(path: str, content: str) -> None:
...
# Handle denials
try:
read_file("/etc/shadow")
except AxiomDenied as e:
print(e.reason) # specific predicate failure
print(e.guidance) # human-readable category
print(e.category) # e.g. "ResourcePolicy"
print(e.verdict) # full Verdict object
```
**`guard` parameters:**
| Parameter | Default | Description |
|---|---|---|
| `effect` | *(required)* | Effect class string |
| `conscience` | effect-based default | Conscience predicate list |
| `intent_name` | PascalCase of function name | Intent name in the policy |
| `engine` | built at decoration time | Existing engine to reuse |
| `field_map` | `None` | Rename function params before passing to `verify()` |
| `coerce` | `str` | Callable applied to each argument value |
**Coercion note:** `coerce=str` means `str(True)` → `"True"` (capital T).
Predicates that test `authorized=true` (lowercase) require explicit string `"true"`.
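The mismatch is plain-Python behavior, so it can be demonstrated directly:

```python
# coerce=str applied to a Python bool produces "True" (capital T),
# which does not match a predicate comparing against lowercase "true".
assert str(True) == "True"
assert str(True) != "true"

# Passing the string explicitly avoids the mismatch:
fields = {"authorized": "true"}  # matches a lowercase comparison
```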
---
## Integrations
### LangChain
```python
from axiom.presets import filesystem_readonly
from axiom.integrations.langchain import AxiomGuardedTool
engine = filesystem_readonly(allowed_paths=["/workspace"])
guarded = AxiomGuardedTool(
base_tool=my_langchain_tool,
engine=engine,
intent_name="ReadFile", # optional — defaults to tool.name
on_deny="raise", # "raise" | "return_none" | "return_denial"
)
# Use inside a LangChain agent executor as a drop-in replacement
result = guarded._run(path="/workspace/notes.txt")
```
`AxiomGuardedTool` exposes `name`, `description`, `args_schema`, `_run()`, and `_arun()`
— sufficient for all LangChain agent executors. No hard LangChain import required.
### OpenAI
```python
from openai import OpenAI
from axiom.presets import filesystem_readonly
from axiom.integrations.openai import AxiomInterceptor
client = AxiomInterceptor(
OpenAI(),
engine=filesystem_readonly(allowed_paths=["/workspace"]),
auto_verify=True, # raises AxiomDenied before returning any response with denied tool calls
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[...],
tools=[...],
)
# ^ raises AxiomDenied if the model requested a disallowed tool call
# Manual verification
results = client.verify_tool_calls(response)
for tool_call, verdict in results:
print(tool_call.function.name, verdict.allowed)
client.assert_tool_calls_safe(response) # raises on first denial
```
**Limitation:** `auto_verify=True` only works with non-streaming completions.
---
## VS Code Syntax Highlighting
A TextMate grammar for `.axm` files is included in `editors/vscode/`.
```bash
# One-time VS Code install (no marketplace needed):
ln -s $(pwd)/editors/vscode ~/.vscode/extensions/axiom
# Reload VS Code window → .axm files gain syntax highlighting
```
---
## vs. Content Filtering
Most agent safety tooling works on intent names or content — both bypassable by
renaming or rephrasing. Axiom works on what an action structurally does.
A `WRITE` intent is a `WRITE` intent whether the LLM calls it `SaveDocument`,
`OutputData`, or `UpdateFile`. Effect-class enforcement is structural, not
heuristic, and cannot be bypassed by rewording the request.
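The distinction can be illustrated in plain Python (the registry and function below are hypothetical, not Axiom's internals): because enforcement consults the declared effect class, every name bound to `WRITE` receives the same verdict.

```python
# Illustrative only — not Axiom code. Each intent is registered with an
# effect class; enforcement consults the class, never the name.
REGISTRY = {
    "SaveDocument": "WRITE",
    "OutputData": "WRITE",
    "UpdateFile": "WRITE",
}

def allowed(intent_name: str, write_permitted: bool) -> bool:
    effect = REGISTRY[intent_name]
    return effect != "WRITE" or write_permitted

# All three names share one verdict because they share one effect class.
assert {allowed(name, False) for name in REGISTRY} == {False}
assert {allowed(name, True) for name in REGISTRY} == {True}
```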
---
## .axm pre/post Functions
The following functions are available in `pre:` and `post:` clauses when writing
`.axm` policy source (via `AxiomEngine.from_source()` or `PolicyBuilder`):
| Function | Returns | Description |
|---|---|---|
| `length(s)` | Int | String or array length |
| `path_exists(p)` | Bool | Whether the path exists on the filesystem |
| `path_is_safe(p)` | Bool | `path_safety` conscience check on a path value |
| `space_available(p, bytes)` | Bool | Whether sufficient disk space is available at path |
| `schema_is_registered(s)` | Bool | Whether a schema name is present in the registry |
| `structural_valid(data, schema)` | Bool | Whether data matches the structure of the named schema |
---
## License
Apache-2.0
| text/markdown; charset=UTF-8; variant=GFM | Axiom Contributors | null | null | null | Apache-2.0 | policy, ai, safety, verification, agents | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Security"
] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/latentcollapse/Axiom",
"Repository, https://github.com/latentcollapse/Axiom"
] | maturin/1.11.5 | 2026-02-20T23:49:39.414476 | axiom_lang-0.2.0-cp312-abi3-manylinux_2_34_x86_64.whl | 663,084 | 24/93/b247b530940ce82a69e5be24e4ddd5636f7da7aed8a13581d74973c79be4/axiom_lang-0.2.0-cp312-abi3-manylinux_2_34_x86_64.whl | cp312 | bdist_wheel | null | false | 73bd8b1416bda16f38d16ac3cfe96ba7 | 88bd730cefffa65e61ffa32ba0895c6ffd99a03f8c2cdc9df83c60de0c5b7618 | 2493b247b530940ce82a69e5be24e4ddd5636f7da7aed8a13581d74973c79be4 | null | [] | 83 |
2.2 | mcp-curl | 0.3.2 | A curl-like CLI tool for interacting with Model Context Protocol (MCP) servers | # murl - MCP Curl
[](https://github.com/turlockmike/murl/actions/workflows/test.yml)
A curl-like CLI tool for interacting with Model Context Protocol (MCP) servers.
**POSIX Agent Standard Compliant:** This tool implements [Level 2 (Agent-Optimized)](https://github.com/turlockmike/posix-agent-standard) compliance, making it natively compatible with AI agents.
<p align="center">
<img src="images/logo.png" alt="murl logo" width="400">
</p>
## What is MCP?
MCP (Model Context Protocol) is an open standard developed by Anthropic for AI models to access external data sources, tools, and services. It provides a universal way for large language models (LLMs) to interact with various resources securely and efficiently.
## Quick Start
### Try with Public Demo Server (No Setup Required)
Test murl immediately with a public MCP server:
```bash
# Install murl
curl -sSL https://raw.githubusercontent.com/turlockmike/murl/master/install.sh | bash
# Or using pip: pip install mcp-curl
# List tools on the public Fetch server
murl https://remote.mcpservers.org/fetch/mcp/tools
# Fetch a webpage and convert to markdown
murl https://remote.mcpservers.org/fetch/mcp/tools/fetch -d url=https://example.com
```
**Public demo servers:**
- Fetch Server: `https://remote.mcpservers.org/fetch/mcp` - Simple server for fetching web content
- DeepWiki: `https://mcp.deepwiki.com/mcp` - GitHub repository documentation
### Quick Local Setup with mcp-proxy
Get started with murl in minutes using a local MCP server:
```bash
# Step 1: Install murl
curl -sSL https://raw.githubusercontent.com/turlockmike/murl/master/install.sh | bash
# Or using pip: pip install mcp-curl
# Step 2: Install mcp-proxy to expose MCP servers over HTTP
pip install mcp-proxy
# Step 3: Start a local time server example (in one terminal)
mcp-proxy --port 3000 uvx mcp-server-time
# Step 4: Test with murl (in another terminal)
# List available tools
murl http://localhost:3000/tools
# Call the get_current_time tool
murl http://localhost:3000/tools/get_current_time
# Call with a timezone argument
murl http://localhost:3000/tools/get_current_time -d timezone=America/New_York
```
**What's happening:**
- `mcp-proxy` wraps any stdio-based MCP server and exposes it over HTTP
- `uvx mcp-server-time` is a simple MCP server that provides time-related tools
- `murl` connects to the HTTP endpoint and makes MCP requests
**Try other MCP servers:**
```bash
# Filesystem server (access files)
mcp-proxy --port 3001 uvx mcp-server-filesystem /path/to/directory
# Sequential thinking server
mcp-proxy --port 3002 npx -y @modelcontextprotocol/server-sequential-thinking
```
## Installation
### Quick Install (Recommended)
Install murl with a single command:
```bash
curl -sSL https://raw.githubusercontent.com/turlockmike/murl/master/install.sh | bash
```
This will automatically download and install murl from source.
### Using pip
Install murl from PyPI:
```bash
pip install mcp-curl
```
To upgrade to the latest version:
```bash
pip install --upgrade mcp-curl
```
### Upgrade
To upgrade murl to the latest version:
```bash
murl --upgrade
```
This command downloads and runs the installation script to update murl to the latest release from GitHub.
### From Source
```bash
git clone https://github.com/turlockmike/murl.git
cd murl
pip install -e .
```
## Usage
`murl` provides a curl-like interface for interacting with MCP servers over HTTP. It abstracts the JSON-RPC 2.0 protocol, making it easy to call MCP methods using intuitive REST-like paths.
### Basic Syntax
```bash
murl <url> [options]
```
Where `<url>` is the MCP server endpoint with a virtual path (e.g., `http://localhost:3000/tools`).
### Options
- `-d, --data <key=value>` - Add data to the request. Can be used multiple times.
- `-H, --header <key: value>` - Add custom HTTP headers (e.g., for authentication).
- `-v, --verbose` - Enable verbose output (prints request/response details to stderr).
- `--agent` - Enable agent-compatible mode (pure JSON output, structured errors). See [Agent Mode](#agent-mode) below.
- `--version` - Show detailed version information (includes Python version and installation path).
- `--upgrade` - Upgrade murl to the latest version from GitHub releases.
- `--help` - Show help message.
### Agent Mode
murl implements the [POSIX Agent Standard (Level 2)](https://github.com/turlockmike/posix-agent-standard) for AI agent compatibility. Use the `--agent` flag to enable agent-optimized behavior:
**Key Features:**
- **Pure JSON output:** Compact JSON to stdout (no pretty-printing)
- **JSON Lines (NDJSON):** List operations output one JSON object per line
- **Structured errors:** JSON error objects to stderr with error codes
- **Non-interactive:** No prompts or progress indicators
- **Semantic exit codes:**
- `0` = Success
- `1` = General error (connection, timeout, server error)
- `2` = Invalid arguments (malformed URL, invalid data format)
- `100` = MCP server error (reported via JSON `code` field, not exit code)
**Examples:**
```bash
# Get agent-optimized help
murl --agent --help
# List tools (JSON Lines output)
murl --agent http://localhost:3000/tools
# Call a tool with compact JSON output
murl --agent http://localhost:3000/tools/echo -d message=hello
# Process NDJSON output with jq (one JSON object per line)
murl --agent http://localhost:3000/tools | jq -c '.'
# Handle errors programmatically
if ! result=$(murl --agent http://localhost:3000/tools/invalid 2>&1); then
echo "Error: $result" | jq -r '.message'
fi
```
**Agent Mode vs Human Mode:**
| Feature | Human Mode | Agent Mode (`--agent`) |
|---------|-----------|------------------------|
| JSON Output | Pretty-printed (indented) | Compact (no spaces) |
| List Output | JSON array | JSON Lines (NDJSON) |
| Error Output | Friendly message to stderr | Structured JSON to stderr |
| Exit Codes | 0, 1, or 2 (2 for invalid arguments) | Semantic (0, 1, 2) |
### Examples
#### List Available Tools
```bash
murl http://localhost:3000/tools
```
This sends a `tools/list` request to the MCP server.
#### Call a Tool with Arguments
```bash
murl http://localhost:3000/tools/echo -d message=hello
```
This sends a `tools/call` request with the tool name "echo" and arguments `{"message": "hello"}`.
#### Call a Tool with Multiple Arguments
```bash
murl http://localhost:3000/tools/weather -d city=Paris -d metric=true
```
Arguments are automatically type-coerced (strings, numbers, booleans).
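The coercion described here can be sketched in a few lines of Python. This is an illustration of the idea, not murl's source:

```python
import json

# Illustrative sketch of -d value coercion, not murl's implementation.
def coerce(value: str):
    """Map a -d string value to a bool, number, or plain string."""
    lowered = value.lower()
    if lowered in ("true", "false"):
        return lowered == "true"
    try:
        return json.loads(value)  # handles ints and floats
    except json.JSONDecodeError:
        return value              # fall back to a plain string

print(coerce("true"))   # True
print(coerce("42"))     # 42
print(coerce("Paris"))  # Paris
```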
#### Call a Tool with JSON Data
```bash
murl http://localhost:3000/tools/config -d '{"settings": {"theme": "dark"}}'
```
You can pass complex JSON objects directly.
#### List Available Resources
```bash
murl http://localhost:3000/resources
```
This sends a `resources/list` request.
#### Read a Resource
```bash
murl http://localhost:3000/resources/path/to/file
```
This sends a `resources/read` request with the file path. The path is automatically converted to a `file://` URI.
#### List Available Prompts
```bash
murl http://localhost:3000/prompts
```
This sends a `prompts/list` request.
#### Get a Prompt
```bash
murl http://localhost:3000/prompts/greeting -d name=Alice
```
This sends a `prompts/get` request with the prompt name "greeting" and arguments.
#### Add Authorization Headers
```bash
murl http://localhost:3000/tools -H "Authorization: Bearer token123"
```
Custom headers can be added for authentication or other purposes.
#### Verbose Mode
```bash
murl http://localhost:3000/tools -v
```
Verbose mode prints the JSON-RPC request payload and HTTP headers to stderr, useful for debugging.
### URL Mapping
`murl` automatically maps REST-like paths to MCP JSON-RPC methods:
| URL Path | MCP Method | Parameters |
| ------------------------- | ----------------- | ----------------------------------------- |
| `/tools` | `tools/list` | `{}` |
| `/tools/<name>` | `tools/call` | `{name: "<name>", arguments: {...}}` |
| `/resources` | `resources/list` | `{}` |
| `/resources/<path>` | `resources/read` | `{uri: "file:///<path>"}` (three slashes) |
| `/prompts` | `prompts/list` | `{}` |
| `/prompts/<name>` | `prompts/get` | `{name: "<name>", arguments: {...}}` |
### Piping Output
`murl` outputs raw JSON to stdout, making it pipe-friendly:
```bash
# Use with jq to format output
murl http://localhost:3000/tools | jq .
# Extract specific fields
murl http://localhost:3000/tools | jq '.[0].name'
```
## Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/turlockmike/murl.git
cd murl
# Install in development mode with dev dependencies
pip install -e ".[dev]"
```
### Running Tests
```bash
pytest
```
### Running Tests with Coverage
```bash
pytest --cov=murl --cov-report=html
```
## How It Works
`murl` works by:
1. **Parsing the URL** to extract the base endpoint and the MCP virtual path
2. **Mapping the virtual path** to the appropriate MCP JSON-RPC method
3. **Parsing data flags** (`-d`) into method parameters with type coercion
4. **Constructing a JSON-RPC 2.0 request** with the method and parameters
5. **Sending an HTTP POST request** to the base endpoint with the JSON-RPC payload
6. **Extracting the result** from the JSON-RPC response and printing it as JSON
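Steps 1–4 can be sketched in plain Python. The function below is an illustration of the mapping described above, not murl's actual source:

```python
from urllib.parse import urlsplit

# Illustrative sketch of URL parsing and JSON-RPC construction.
def build_request(url: str, data: dict) -> tuple[str, dict]:
    """Split a murl-style URL into a base endpoint and a JSON-RPC payload."""
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    kind = segments[0]  # tools | resources | prompts
    base = f"{parts.scheme}://{parts.netloc}"
    if len(segments) == 1:
        method, params = f"{kind}/list", {}
    elif kind == "tools":
        method, params = "tools/call", {"name": segments[1], "arguments": data}
    elif kind == "prompts":
        method, params = "prompts/get", {"name": segments[1], "arguments": data}
    else:  # resources/<path> → resources/read with a file:// URI
        method, params = "resources/read", {"uri": "file:///" + "/".join(segments[1:])}
    return base, {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

base, payload = build_request("http://localhost:3000/tools/echo", {"message": "hello"})
# base == "http://localhost:3000"; payload["method"] == "tools/call"
```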
## Using murl with MCP Servers
`murl` supports the Streamable HTTP transport protocol used by modern MCP servers. This allows murl to work with MCP servers that implement HTTP-based transport.
### Streamable HTTP Support
murl's support for the MCP Streamable HTTP transport includes:
- Sends `Accept: application/json, text/event-stream` header
- Handles both immediate JSON responses and Server-Sent Events (SSE) streams
- **Supports session-based SSE** for compatibility with mcp-proxy
- Automatically tries session-based SSE first, then falls back to regular HTTP POST
- Compatible with MCP servers implementing the Streamable HTTP specification
### Direct HTTP MCP Servers
murl works best with MCP servers that expose a direct HTTP JSON-RPC endpoint. For example, if you have a server running at `http://localhost:3000` that implements MCP over HTTP:
```bash
# List available tools
murl http://localhost:3000/tools
# Call a tool with arguments
murl http://localhost:3000/tools/my_tool -d param1=value1 -d param2=value2
# List resources
murl http://localhost:3000/resources
# Get a prompt
murl http://localhost:3000/prompts/my_prompt -d arg1=value
```
### Using murl with mcp-proxy
Many MCP servers are implemented as stdio (standard input/output) programs. To use these with murl, you can expose them via HTTP using [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy):
```bash
# Install mcp-proxy
pip install mcp-proxy
# Start mcp-proxy to expose a stdio MCP server on HTTP port 3000
mcp-proxy --port 3000 python my_mcp_server.py
# Or for a Node.js MCP server
mcp-proxy --port 3000 node path/to/mcp-server.js
```
Once mcp-proxy is running, you can use murl to interact with your stdio MCP server:
```bash
# List available tools
murl http://localhost:3000/tools
# Call a tool with arguments
murl http://localhost:3000/tools/my_tool -d param1=value1 -d param2=value2
# List resources
murl http://localhost:3000/resources
# Get a prompt
murl http://localhost:3000/prompts/my_prompt -d arg1=value
```
**How it works**: murl automatically detects mcp-proxy's session-based SSE architecture and handles it transparently:
1. Connects to the SSE endpoint to get a session ID
2. Posts the request to the session-specific endpoint
3. Reads the response from the SSE stream
4. Each murl invocation creates and closes its own ephemeral session
For more information about MCP transport protocols, see the [official MCP documentation](https://modelcontextprotocol.io/specification/basic/transports).
## Requirements
- Python 3.10 or higher
- `click` - For CLI argument parsing
- `mcp` - Model Context Protocol SDK
## License
MIT License - see LICENSE file for details
## Releasing
For maintainers: To create a new release, update the version in `pyproject.toml` and `murl/__init__.py`, then create and push a git tag:
```bash
git tag v0.2.1
git push origin v0.2.1
```
This will automatically trigger a GitHub Actions workflow that builds the package and creates a GitHub release with the artifacts.
| text/markdown | turlockmike | null | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"mcp>=1.0.0",
"exceptiongroup>=1.0.0; python_version < \"3.11\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"requests>=2.31.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/turlockmike/murl",
"Repository, https://github.com/turlockmike/murl",
"Issues, https://github.com/turlockmike/murl/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:49:30.991046 | mcp_curl-0.3.2.tar.gz | 24,292 | be/e7/8307ea4e605679318fbf14cbe0faf898db14423bc48c009deedd1d97e3b0/mcp_curl-0.3.2.tar.gz | source | sdist | null | false | 284d87df002af5e1ad9ce98ab8e0c76c | c5ad9b2e05cb8c0502cce013525e238239365356e73b14e445a554879fcf5e7d | bee78307ea4e605679318fbf14cbe0faf898db14423bc48c009deedd1d97e3b0 | null | [] | 234 |
2.4 | smartx-rfid | 5.4.1 | SmartX RFID library | # SmartX RFID



Python library for RFID device integration and data management.
## Installation
```bash
pip install smartx-rfid
```
## Quick Start
```python
from smartx_rfid.devices import X714
import asyncio
async def on_tag_read(name: str, tag_data: dict):
print(f"Tag: {tag_data['epc']} | RSSI: {tag_data['rssi']}dBm")
async def main():
reader = X714(name="RFID Reader", start_reading=True)
reader.on_event = lambda name, event_type, data: (
asyncio.create_task(on_tag_read(name, data))
if event_type == "tag" else None
)
await reader.connect()
while True:
await asyncio.sleep(1)
asyncio.run(main())
```
## Features
### Supported Devices
- **X714 RFID Reader** - Serial, TCP, Bluetooth LE connections
- **R700 IOT** - HTTP REST API integration
- **Generic Serial/TCP** - Custom protocol support
### Core Components
- **Device Management** - Async communication with auto-reconnection
- **Database Integration** - SQLAlchemy with multiple database support
- **Webhook System** - HTTP notifications with retry logic
- **Tag Management** - Thread-safe tag list with deduplication
## Device Examples
### X714 RFID Reader
```python
from smartx_rfid.devices import X714
# Serial connection (auto-detect)
reader = X714(name="X714-Serial")
# TCP connection
reader = X714(
name="X714-TCP",
connection_type="TCP",
ip="192.168.1.100"
)
# Bluetooth LE
reader = X714(
name="X714-BLE",
connection_type="BLE"
)
def on_event(name: str, event_type: str, data: dict):
if event_type == "tag":
print(f"EPC: {data['epc']}, Antenna: {data['ant']}")
reader.on_event = on_event
await reader.connect()
```
### R700 IOT Reader
```python
from smartx_rfid.devices import R700_IOT, R700_IOT_config_example
reader = R700_IOT(
name="R700-Reader",
ip="192.168.1.200",
config=R700_IOT_config_example
)
reader.on_event = on_event
await reader.connect()
```
## Database Integration
```python
from smartx_rfid.db import DatabaseManager
from sqlalchemy import Column, String, Float, Integer, DateTime
from sqlalchemy.orm import DeclarativeBase
from datetime import datetime, timezone
class Base(DeclarativeBase):
    pass
class TagModel(Base):
    __tablename__ = 'rfid_tags'
    id = Column(Integer, primary_key=True)
    epc = Column(String(50), unique=True, nullable=False)
    tid = Column(String(50))
    ant = Column(Integer)
    rssi = Column(Float)
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
# Initialize database
db = DatabaseManager("sqlite:///rfid_tags.db")
db.register_models(TagModel)
db.create_tables()
# Use with sessions
with db.get_session() as session:
tag = TagModel(epc="E200001175000001", ant=1, rssi=-45.2)
session.add(tag)
# Raw SQL queries
results = db.execute_query_fetchall(
"SELECT * FROM rfid_tags WHERE rssi > :threshold",
params={"threshold": -50}
)
```
### Supported Databases
- PostgreSQL: `postgresql://user:pass@localhost/db`
- MySQL: `mysql+pymysql://user:pass@localhost/db`
- SQLite: `sqlite:///path/to/database.db`
## Webhook Integration
```python
from smartx_rfid.webhook import WebhookManager
webhook = WebhookManager("https://api.example.com/rfid-events")
# Send tag data
success = await webhook.post("device_01", "tag_read", {
"epc": "E200001175000001",
"rssi": -45.2,
"antenna": 1,
"timestamp": "2026-01-15T10:30:00Z"
})
if success:
print("Webhook sent successfully")
```
## Tag Management
```python
from smartx_rfid.utils import TagList
# Create thread-safe tag list
tags = TagList(unique_identifier="epc")
def on_tag(device: str, tag_data: dict):
new_tag, tag = tags.add(tag_data, device=device)
if new_tag:
print(f"New tag: {tag['epc']}")
# Add custom data
tag['product_name'] = "Widget ABC"
else:
print(f"Existing tag: {tag['epc']}")
# Use with device events
reader.on_event = lambda name, event_type, data: (
on_tag(name, data) if event_type == "tag" else None
)
```
## Complete Integration Example
```python
import asyncio
from smartx_rfid.devices import X714
from smartx_rfid.db import DatabaseManager
from smartx_rfid.webhook import WebhookManager
from smartx_rfid.utils import TagList
async def rfid_system():
# Initialize components
reader = X714(name="Production-Scanner", start_reading=True)
db = DatabaseManager("postgresql://localhost/rfid_production")
webhook = WebhookManager("https://api.internal.com/rfid")
tags = TagList()
async def process_tag(name: str, tag_data: dict):
# Check if new tag
new_tag, tag = tags.add(tag_data, device=name)
if new_tag:
# Save to database (TagModel as defined in the Database Integration section)
with db.get_session() as session:
session.add(TagModel(**tag_data))
# Send notification
await webhook.post(name, "new_tag", tag_data)
print(f"New tag processed: {tag_data['epc']}")
reader.on_event = lambda n, t, d: (
asyncio.create_task(process_tag(n, d)) if t == "tag" else None
)
await reader.connect()
asyncio.run(rfid_system())
```
## Configuration
### Device Configuration
```python
# High-performance settings
reader = X714(
name="FastScanner",
read_power=30, # Max power
session=2, # Session config
read_interval=100 # Fast scanning
)
# Database with connection pooling
db = DatabaseManager(
database_url="postgresql://user:pass@localhost/db",
pool_size=10,
max_overflow=20,
echo=True # Enable SQL logging
)
```
### Logging Setup
```python
import logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
```
## API Reference
### Core Modules
- `smartx_rfid.devices` - Device communication classes
- `smartx_rfid.db` - Database management
- `smartx_rfid.webhook` - HTTP notification system
- `smartx_rfid.utils` - Utility classes and helpers
### Event System
All devices use a consistent event callback system:
```python
def on_event(device_name: str, event_type: str, event_data: dict):
"""
Event types:
- "connected": Device connected successfully
- "disconnected": Device disconnected
- "tag": RFID tag detected
- "error": Error occurred
"""
pass
device.on_event = on_event
```
## Examples
The `examples/` directory contains working examples for all supported devices and features:
```
examples/
├── devices/
│ ├── RFID/ # X714, R700_IOT examples
│ └── generic/ # Serial, TCP examples
├── db/ # Database integration examples
└── utils/ # Tag management examples
```
Run examples:
```bash
python examples/devices/RFID/X714_SERIAL.py
python examples/db/showcase.py
```
## Requirements
- Python 3.11+
- Dependencies are installed automatically with pip
## License
MIT License
## Support
- **Repository**: [https://github.com/ghpascon/smartx_rfid](https://github.com/ghpascon/smartx_rfid)
- **Issues**: [GitHub Issues](https://github.com/ghpascon/smartx_rfid/issues)
- **Email**: [gh.pascon@gmail.com](mailto:gh.pascon@gmail.com)
| text/markdown | Gabriel Henrique Pascon | gh.pascon@gmail.com | null | null | MIT | python, library, RFID, smartx, packaging | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"bleak<2.0.0,>=1.1.1",
"httpx==0.28.1",
"psycopg2<3.0.0,>=2.9.11",
"pydantic<3.0.0,>=2.12.5",
"pyepc==0.5.0",
"pymysql==1.1.1",
"pyserial==3.5",
"pyserial-asyncio==0.6",
"sqlalchemy==2.0.29"
] | [] | [] | [] | [
"Documentation, https://github.com/ghpascon/smartx_rfid#readme",
"Homepage, https://github.com/ghpascon/smartx_rfid",
"Repository, https://github.com/ghpascon/smartx_rfid"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T23:47:48.954843 | smartx_rfid-5.4.1.tar.gz | 59,895 | 18/53/272f8ea5a88f7188a2a72d684e0e596cfcdd066bb2f6225c1eb003e26d8a/smartx_rfid-5.4.1.tar.gz | source | sdist | null | false | b9f978c7dfbfc5b1d6c7f7b89ce325ac | 6ad22a46269332eede597749b0ec2aa4961a256638f3dc76343acfb3147feb8e | 1853272f8ea5a88f7188a2a72d684e0e596cfcdd066bb2f6225c1eb003e26d8a | null | [
"LICENSE"
] | 249 |
2.4 | wse-server | 1.0.3 | Real-time WebSocket engine for Python. Up to 1M msg/s. End-to-end encrypted. Rust-accelerated. | # WSE -- WebSocket Engine
**A complete, out-of-the-box solution for building reactive interfaces with React and Python.**
Two packages. Four lines of code. Your frontend and backend talk in real time.
[](https://github.com/silvermpx/wse/actions/workflows/ci.yml)
[](https://pypi.org/project/wse-server/)
[](https://www.npmjs.com/package/wse-client)
[](LICENSE)
---
## Why WSE?
Building real-time features between React and Python is painful. You need WebSocket handling, reconnection logic, message ordering, authentication, encryption, offline support, health monitoring. That's weeks of work before you ship a single feature.
**WSE gives you all of this out of the box.**
Install `wse-server` on your backend, `wse-client` on your frontend. Everything works immediately: auto-reconnection, message encryption, sequence ordering, offline queues, health monitoring. No configuration required for the defaults. Override what you need.
The engine is Rust-accelerated via PyO3. Up to **1M msg/s** burst throughput. 285K msg/s sustained with JSON.
---
## Quick Start
### Server (Python)
```python
from fastapi import FastAPI
from wse_server import create_wse_router, WSEConfig
app = FastAPI()
wse = create_wse_router(WSEConfig(
redis_url="redis://localhost:6379",
))
app.include_router(wse, prefix="/wse")
# Publish from anywhere in your app
await wse.publish("notifications", {"text": "Order shipped!", "order_id": 42})
```
### Client (React)
```tsx
import { useEffect } from 'react';
import { useWSE } from 'wse-client';
function Dashboard() {
const { isConnected, connectionHealth } = useWSE({
topics: ['notifications', 'live_data'],
endpoints: ['ws://localhost:8000/wse'],
});
useEffect(() => {
const handler = (e: CustomEvent) => {
console.log('New notification:', e.detail);
};
window.addEventListener('notifications', handler);
return () => window.removeEventListener('notifications', handler);
}, []);
return <div>Status: {connectionHealth}</div>;
}
```
That's it. Your React app receives real-time updates from your Python backend.
---
## What You Get Out of the Box
Everything listed below works the moment you install. No extra setup.
### Reactive Interface
Real-time data flow from Python to React. One hook (`useWSE`) on the client, one `publish()` call on the server. Events appear in your components instantly.
### Auto-Reconnection
Exponential backoff with jitter. Connection drops? The client reconnects automatically. No lost messages -- offline queue with IndexedDB persistence stores messages while disconnected and replays them on reconnect.
### End-to-End Encryption
AES-256-GCM per channel, HMAC-SHA256 message signing. Encrypted before it leaves the server, decrypted in the browser. No plaintext on the wire. Pluggable encryption and token providers via Python protocols.
### Message Ordering
Sequence numbers with gap detection and reordering buffer. Messages arrive in order even under high load or network instability. Out-of-order messages are buffered and delivered once the gap fills.
### Authentication
JWT-based with configurable claims. Per-connection, per-topic access control. Plug in your own auth handler or use the built-in one. Cookie-based token extraction for seamless browser auth.
### Health Monitoring
Connection quality scoring (excellent / good / fair / poor), latency tracking, jitter analysis, packet loss detection. Your UI knows when the connection is degraded and can react accordingly.
### Scaling
Redis pub/sub for multi-process fan-out. Run multiple server workers behind a load balancer. Clients get messages from any worker. Fire-and-forget delivery with sub-millisecond latency.
### Rust Performance
Compression, sequencing, filtering, rate limiting, and the WebSocket server itself are implemented in Rust via PyO3. Python API stays the same. Rust accelerates transparently.
---
## Full Feature List
### Server (Python + Rust)
| Feature | Description |
|---------|-------------|
| **Drain Mode** | Batch-polling inbound events from Rust. One GIL acquisition per batch (up to 256 messages) instead of per-message Python callbacks. Condvar-based wakeup for zero busy-wait. |
| **Write Coalescing** | Outbound pipeline: `feed()` + batch `try_recv()` + single `flush()`. Reduces syscalls under load by coalescing multiple messages into one write. |
| **Ping/Pong in Rust** | Heartbeat handled entirely in Rust with zero Python round-trips. Configurable intervals. TCP_NODELAY on accept for minimal latency. |
| **5-Level Priority Queue** | CRITICAL(10), HIGH(8), NORMAL(5), LOW(3), BACKGROUND(1). Smart dropping under backpressure: lower-priority messages are dropped first. Batch dequeue ordered by priority. |
| **Dead Letter Queue** | Redis-backed DLQ for failed messages. 7-day TTL, 1000-message cap per channel. Manual replay via `replay_dlq_message()`. Prometheus metrics for DLQ size and replay count. |
| **MongoDB-like Filters** | 14 operators: `$eq`, `$ne`, `$gt`, `$lt`, `$gte`, `$lte`, `$in`, `$nin`, `$regex`, `$exists`, `$contains`, `$startswith`, `$endswith`. Logical: `$and`, `$or`. Dot-notation for nested fields (`payload.price`). Compiled regex cache. |
| **Event Sequencer** | Monotonic sequence numbers with AHashSet dedup. Size-based and age-based eviction. Gap detection on both server and client. |
| **Compression** | Flate2 zlib with configurable levels (1-9). Adaptive threshold -- only compress when payload exceeds size limit. Binary mode via msgpack (rmp-serde) for 30% smaller payloads. |
| **Rate Limiter** | Atomic token-bucket rate limiter in Rust. Per-connection rate enforcement. 100K tokens capacity, 10K tokens/sec refill. |
| **Message Deduplication** | AHashSet-backed dedup with bounded queue. Prevents duplicate delivery across reconnections and Redis fan-out. |
| **Wire Envelope** | Protocol v2: `{t, id, ts, seq, p, v}`. Generic payload extraction with automatic type conversion (UUID, datetime, Enum, bytes to JSON-safe primitives). Latency tracking (`latency_ms` field). |
| **Snapshot Provider** | Protocol for initial state delivery. Implement `get_snapshot(user_id, topics)` and clients receive current state immediately on subscribe -- no waiting for the next publish cycle. |
| **Circuit Breaker** | Three-state machine (CLOSED / OPEN / HALF_OPEN). Sliding-window failure tracking. Automatic recovery probes. Prevents cascade failures when downstream services are unhealthy. |
| **Message Categories** | `S` (snapshot), `U` (update), `WSE` (system). Category prefixing for client-side routing and filtering. |
| **PubSub Bus** | Redis pub/sub with PSUBSCRIBE pattern matching. orjson fast-path serialization. Custom JSON encoder for UUID, datetime, Decimal. Non-blocking handler invocation. |
| **Pluggable Security** | `EncryptionProvider` and `TokenProvider` protocols. Bring your own encryption or token signing. Default: HMAC-SHA256 with auto-generated secrets. Rust-accelerated SHA-256 and HMAC. |
| **Connection Metrics** | Prometheus-compatible stubs for: messages sent/received, publish latency, DLQ size, handler errors, circuit breaker state. Drop-in Prometheus integration or use the built-in stubs. |
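To make the filter semantics in the table concrete, here is a tiny pure-Python matcher covering a subset of the operators (`$eq`, `$gt`, `$in`) with dot-notation field access; it sketches the behavior only and is not the library's Rust implementation:

```python
# Minimal illustration of MongoDB-style filter semantics (subset only).
# Not the library's matcher; just the documented behavior, simplified.
def get_path(doc, path: str):
    for key in path.split("."):
        if not isinstance(doc, dict):
            return None
        doc = doc.get(key)
    return doc

def matches(doc: dict, flt: dict) -> bool:
    for path, cond in flt.items():
        value = get_path(doc, path)
        if not isinstance(cond, dict):
            cond = {"$eq": cond}  # bare value is shorthand for $eq
        for op, expected in cond.items():
            if op == "$eq" and value != expected:
                return False
            if op == "$gt" and not (value is not None and value > expected):
                return False
            if op == "$in" and value not in expected:
                return False
    return True

event = {"t": "price_update", "payload": {"symbol": "AAPL", "price": 187.42}}
assert matches(event, {"payload.price": {"$gt": 100}})
assert not matches(event, {"payload.symbol": {"$in": ["MSFT", "GOOG"]}})
```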
### Client (React + TypeScript)
| Feature | Description |
|---------|-------------|
| **useWSE Hook** | Single React hook for the entire WebSocket lifecycle. Accepts topics, endpoints, auth tokens. Returns `isConnected`, `connectionHealth`, connection controls. |
| **Connection Pool** | Multi-endpoint support with health-scored failover. Three load-balancing strategies: weighted-random, least-connections, round-robin. Automatic health checks with latency tracking. |
| **Adaptive Quality Manager** | Adjusts React Query defaults based on connection quality. Excellent: `staleTime: Infinity` (pure WebSocket). Poor: aggressive polling fallback. Dispatches `wse:quality-change` events. Optional QueryClient integration. |
| **Offline Queue** | IndexedDB-backed persistent queue. Messages are stored when disconnected and replayed on reconnect, ordered by priority. Configurable max size and TTL. |
| **Network Monitor** | Real-time latency, jitter, and packet-loss analysis. Determines connection quality (excellent / good / fair / poor). Generates diagnostic suggestions. |
| **Event Sequencer** | Client-side sequence validation with gap detection. Out-of-order buffer for reordering. Duplicate detection via seen-ID window with age-based eviction. |
| **Circuit Breaker** | Client-side circuit breaker for connection attempts. Prevents reconnection storms when the server is down. Configurable failure threshold and recovery timeout. |
| **Compression + msgpack** | Client-side decompression (pako zlib) and msgpack decoding. Automatic detection of binary vs JSON frames. |
| **Zustand Stores** | `useWSEStore` for connection state, latency history, diagnostics. `useMessageQueueStore` for message buffering with priority. Lightweight, no boilerplate. |
| **Rate Limiter** | Client-side token-bucket rate limiter for outbound messages. Prevents flooding the server. |
| **Security Manager** | Client-side HMAC verification and optional decryption. Validates message integrity before dispatching to handlers. |
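Both tables mention a three-state circuit breaker (CLOSED / OPEN / HALF_OPEN). As a sketch of those semantics, here is a minimal counter-based version; the package itself uses sliding-window failure tracking, and the class and parameter names below are illustrative, not its API:

```python
import time

# Illustrative CLOSED -> OPEN -> HALF_OPEN state machine. A simple failure
# counter stands in for the real sliding-window tracking.
class CircuitBreaker:
    def __init__(self, threshold: int = 3, recovery_s: float = 1.0):
        self.threshold = threshold
        self.recovery_s = recovery_s
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def allow(self) -> bool:
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.recovery_s:
                self.state = "HALF_OPEN"  # let one recovery probe through
                return True
            return False
        return True

    def record(self, ok: bool):
        if ok:
            self.failures = 0
            self.state = "CLOSED"
        else:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.threshold:
                self.state = "OPEN"
                self.opened_at = time.monotonic()

cb = CircuitBreaker(threshold=2, recovery_s=0.01)
cb.record(ok=False)
cb.record(ok=False)      # second failure trips the breaker
assert cb.state == "OPEN" and not cb.allow()
time.sleep(0.02)
assert cb.allow() and cb.state == "HALF_OPEN"
cb.record(ok=True)       # successful probe closes it again
assert cb.state == "CLOSED"
```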
---
## Performance
Rust-accelerated engine via PyO3. Benchmarked on Apple M3, single process, 1KB JSON.
| Mode | Throughput | Latency (p50) | Latency (p99) |
|------|-----------|---------------|---------------|
| **Rust (binary)** | **1,000,000 msg/s** | **0.009 ms** | **0.04 ms** |
| **Rust (JSON)** | **285,000 msg/s** | **0.03 ms** | **0.15 ms** |
| Pure Python | 106,000 msg/s | 0.09 ms | 0.4 ms |
---
## Use Cases
WSE works for any real-time communication between frontend and backend:
- **Live dashboards** -- stock prices, sensor data, analytics, monitoring panels
- **Notifications** -- order updates, alerts, system events pushed to the browser
- **Collaborative apps** -- shared cursors, document editing, whiteboarding
- **Chat and messaging** -- group chats, DMs, typing indicators, read receipts
- **IoT and telemetry** -- device status, real-time metrics, command and control
- **Gaming** -- game state sync, leaderboards, matchmaking updates
---
## Installation
```bash
# Server (Python) -- includes prebuilt Rust engine
pip install wse-server
# Client (React/TypeScript)
npm install wse-client
```
Prebuilt wheels for Linux (x86_64, aarch64), macOS (Intel, Apple Silicon), and Windows.
Python 3.12+ (ABI3 stable -- one wheel per platform).
---
## Architecture
```
Client (React + TypeScript) Server (Python + Rust)
======================== ========================
useWSE hook FastAPI Router (/wse)
| |
v v
ConnectionPool Rust Engine (PyO3)
| (multi-endpoint, | (drain mode,
| health scoring) | write coalescing)
v v
ConnectionManager EventTransformer
| (auto-reconnect, | (wire envelope,
| circuit breaker) | type conversion)
v v
MessageProcessor PriorityQueue
| (decompress, verify, | (5 levels,
| sequence, dispatch) | smart dropping)
v v
AdaptiveQualityManager Sequencer + Dedup
| (quality scoring, | (AHashSet,
| React Query tuning) | gap detection)
v v
Zustand Store Compression + Rate Limiter
| | (flate2, token bucket)
v v
React Components PubSub Bus (Redis)
|
v
Dead Letter Queue
```
**Wire format (v1):**
```json
{
"v": 1,
"id": "019503a1-...",
"t": "price_update",
"ts": "2026-01-15T10:30:00Z",
"seq": 42,
"p": { "symbol": "AAPL", "price": 187.42 }
}
```
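For illustration, the envelope above can be assembled and unpacked with plain dicts. In this stdlib-only sketch the helper name is invented, and the real server assigns `seq` from its Rust sequencer:

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative construction of the wire envelope shown above.
# `make_envelope` is not part of wse-server; field names follow the spec.
def make_envelope(event_type: str, payload: dict, seq: int) -> str:
    return json.dumps({
        "v": 1,
        "id": str(uuid.uuid4()),
        "t": event_type,
        "ts": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
        "seq": seq,
        "p": payload,
    })

raw = make_envelope("price_update", {"symbol": "AAPL", "price": 187.42}, seq=42)
msg = json.loads(raw)
assert msg["t"] == "price_update" and msg["seq"] == 42
assert msg["p"]["symbol"] == "AAPL"
```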
---
## Packages
| Package | Registry | Language | Install |
|---------|----------|----------|---------|
| `wse-server` | [PyPI](https://pypi.org/project/wse-server/) | Python + Rust | `pip install wse-server` |
| `wse-client` | [npm](https://www.npmjs.com/package/wse-client) | TypeScript + React | `npm install wse-client` |
Both packages are standalone. No shared dependencies between server and client.
---
## Documentation
| Document | Description |
|----------|-------------|
| [Protocol Spec](docs/PROTOCOL.md) | Wire format, versioning, encryption |
| [Architecture](docs/ARCHITECTURE.md) | System design, data flow |
| [Benchmarks](docs/BENCHMARKS.md) | Methodology, results, comparisons |
| [Security Model](docs/SECURITY.md) | Encryption, auth, threat model |
| [Integration Guide](docs/INTEGRATION.md) | FastAPI setup, Redis, deployment |
---
## Technology Stack
| Component | Technology | Purpose |
|-----------|-----------|---------|
| Rust engine | PyO3 + maturin | Compression, sequencing, filtering, rate limiting, WebSocket server |
| Server framework | FastAPI + Starlette | ASGI WebSocket handling |
| Serialization | orjson (Rust) | Zero-copy JSON |
| Binary protocol | msgpack (rmp-serde) | 30% smaller payloads |
| Encryption | AES-256-GCM (Rust) | Per-channel E2E encryption |
| Message signing | HMAC-SHA256 (Rust) | Per-message integrity verification |
| Authentication | PyJWT | Token verification |
| Pub/Sub backbone | Redis Pub/Sub | Multi-process fan-out |
| Dead Letter Queue | Redis Lists | Failed message recovery |
| Client state | Zustand | Lightweight React store |
| Client hooks | React 18+ | useWSE hook with TypeScript |
| Offline storage | IndexedDB | Persistent offline queue |
| Build system | maturin | Rust+Python hybrid wheels |
---
## License
MIT
| text/markdown; charset=UTF-8; variant=GFM | silvermpx | null | null | null | MIT | websocket, engine, real-time, pubsub, fastapi, encrypted, rust | [
"Development Status :: 5 - Production/Stable",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Topic :: System :: Networking",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"fastapi>=0.115",
"starlette>=0.40",
"orjson>=3.10",
"redis>=5.0",
"cryptography>=43.0",
"pyjwt[crypto]>=2.9",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"httpx>=0.27; extra == \"dev\"",
"uvicorn>=0.30; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\"",
"mypy>=1.13; extra == \"dev\"",
"maturin>=1.7; extra == \"dev\"",
"msgpack>=1.0; extra == \"msgpack\"",
"prometheus-client>=0.21; extra == \"prometheus\""
] | [] | [] | [] | [
"Changelog, https://github.com/silvermpx/wse/releases",
"Documentation, https://github.com/silvermpx/wse/tree/main/docs",
"Homepage, https://github.com/silvermpx/wse",
"Issues, https://github.com/silvermpx/wse/issues",
"Repository, https://github.com/silvermpx/wse"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:47:08.092747 | wse_server-1.0.3.tar.gz | 93,986 | 42/6b/4836e897af0a05c311a9b57e8c88bd4f8ad1613cf5d66af2505ff72162ce/wse_server-1.0.3.tar.gz | source | sdist | null | false | be2e84ee7918b2d0e555b8a10013a51e | 0e9939cd920f6a66dbaf73028216b4273d8a3745c57349286ef1cd41c31af385 | 426b4836e897af0a05c311a9b57e8c88bd4f8ad1613cf5d66af2505ff72162ce | null | [
"LICENSE"
] | 422 |
2.4 | migratorxpress-mcp | 0.1.0 | A Model Context Protocol (MCP) server for MigratorXpress, enabling database migration between heterogeneous database systems. | <!-- mcp-name: io.github.aetperf/migratorxpress-mcp -->
# MigratorXpress MCP Server
A [Model Context Protocol](https://modelcontextprotocol.io/) (MCP) server for [MigratorXpress](https://aetperf.github.io/MigratorXpress-Documentation/), enabling database migration between heterogeneous database systems through AI assistants.
MigratorXpress supports migrating from Oracle, PostgreSQL, SQL Server, and Netezza to PostgreSQL or SQL Server targets.
## Installation
```bash
pip install -e .
```
Or install dependencies directly:
```bash
pip install -r requirements.txt
```
## Configuration
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `MIGRATORXPRESS_PATH` | `./MigratorXpress` | Path to MigratorXpress binary |
| `MIGRATORXPRESS_TIMEOUT` | `3600` | Command execution timeout in seconds |
| `MIGRATORXPRESS_LOG_DIR` | `./logs` | Directory for execution logs |
| `LOG_LEVEL` | `INFO` | Server logging level |
Copy `.env.example` to `.env` and adjust values:
```bash
cp .env.example .env
```
### Claude Code Configuration
Add to your Claude Code MCP settings:
```json
{
"mcpServers": {
"migratorxpress": {
"command": "python",
"args": ["-m", "src.server"],
"cwd": "/path/to/migratorxpress-mcp",
"env": {
"MIGRATORXPRESS_PATH": "/path/to/MigratorXpress"
}
}
}
}
```
## Tools
### 1. `preview_command`
Build and preview a MigratorXpress CLI command without executing it. License text is automatically masked in the display output.
**Required parameters:** `auth_file`, `source_db_auth_id`, `source_db_name`, `target_db_auth_id`, `target_db_name`, `migration_db_auth_id`
### 2. `execute_command`
Execute a previously previewed command. Requires `confirmation: true` as a safety mechanism.
### 3. `validate_auth_file`
Validate that an authentication file exists, is valid JSON, and optionally check for specific `auth_id` entries.
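The auth file schema is not shown in this README; purely for illustration, assume a JSON object keyed by `auth_id`. A stdlib sketch of the three checks (file exists, valid JSON, required `auth_id` entries present), with the schema assumption repeated in the comments:

```python
import json
import tempfile
from pathlib import Path

# Illustrative version of the three checks the tool performs. The real
# auth file schema may differ; keying by auth_id is an assumption here.
def validate_auth_file(path: str, required_auth_ids=()) -> list[str]:
    p = Path(path)
    if not p.exists():
        return [f"auth file not found: {path}"]
    try:
        data = json.loads(p.read_text())
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    return [
        f"missing auth_id: {auth_id}"
        for auth_id in required_auth_ids
        if auth_id not in data
    ]

tmp = Path(tempfile.mkdtemp()) / "auth.json"
tmp.write_text(json.dumps({"src_ora": {}, "tgt_pg": {}}))
assert validate_auth_file(str(tmp), ["src_ora", "tgt_pg"]) == []
assert validate_auth_file(str(tmp), ["missing"]) == ["missing auth_id: missing"]
```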
### 4. `list_capabilities`
List supported source/target databases, tasks, migration DB modes, load modes, and FK modes.
### 5. `suggest_workflow`
Given a source database type, target database type, and optional constraint flag, suggest the full sequence of migration tasks with example commands.
### 6. `get_version`
Report MigratorXpress version and capabilities.
## Workflow Example
A typical migration from Oracle to PostgreSQL:
```
Step 1: translate — Translate Oracle DDL to PostgreSQL-compatible DDL
Step 2: create — Create target tables from translated DDL
Step 3: transfer — Transfer data from source to target
Step 4: diff — Verify row counts match between source and target
Step 5: copy_pk — Copy primary key constraints
copy_ak — Copy alternate key (unique) constraints
copy_fk — Copy foreign key constraints
```
Or run all steps in a single invocation with `--task_list all`.
## Development
### Running Tests
```bash
pip install -e ".[dev]"
python -m pytest tests/ -v
```
## License
MIT
| text/markdown | Arpe.io | null | null | null | null | database, etl, mcp, migration, migratorxpress, model-context-protocol, schema | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.11.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://aetperf.github.io/MigratorXpress-Documentation/",
"Repository, https://github.com/aetperf/migratorxpress-mcp",
"Issues, https://github.com/aetperf/migratorxpress-mcp/issues"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T23:46:56.979625 | migratorxpress_mcp-0.1.0.tar.gz | 22,831 | 80/c5/f57a03c5e31507ad5fab5694a24c28fe0f409b2ac9e3339f62e3fcbe2346/migratorxpress_mcp-0.1.0.tar.gz | source | sdist | null | false | d3c32094a142a3565cb86d216130fc9e | 3fbc7cf56dc12941702f234ecfd4e5025e7d2191c2fe5d4718105a9fc517ceb3 | 80c5f57a03c5e31507ad5fab5694a24c28fe0f409b2ac9e3339f62e3fcbe2346 | MIT | [
"LICENSE"
] | 252 |
2.4 | ai-citer | 1.0.4 | AI-powered fact extraction and citation mapping for documents (PDF, Word, web, text) | # ai-citer
AI-powered fact extraction and citation mapping for documents — PDF, Word, web pages, and plain text.
Built on FastAPI + Anthropic Claude. Extracts verbatim-quoted facts from documents, maps each quote back to its exact character offset, and optionally assigns PDF page numbers.
## Install
```bash
pip install ai-citer
```
Requires Python 3.11+ and a PostgreSQL database.
## Quick start
### Run as a standalone server
Set environment variables (or create a `.env` file):
```bash
ANTHROPIC_API_KEY=sk-ant-...
DATABASE_URL=postgresql://user:pass@localhost/ai_citer
```
```bash
ai-citer serve # starts on :3001
ai-citer serve --port 8080 --reload
```
Or with uvicorn directly:
```bash
uvicorn ai_citer.main:app --port 3001
```
### Embed the router in your own FastAPI app
```python
from fastapi import FastAPI
from ai_citer import documents_router
app = FastAPI()
app.include_router(documents_router, prefix="/ai-citer")
```
> **Note:** the router reads `app.state.pool` (asyncpg pool) and `app.state.anthropic_client`
> from the FastAPI app state. Use the lifespan from `ai_citer.main` as a reference, or set them up yourself.
### Use the core functions directly
```python
import anthropic
import asyncio
from ai_citer import (
create_pool, init_db,
extract_facts, map_citations, assign_page_numbers,
parse_pdf, parse_word, parse_web, parse_text,
)
async def main():
pool = await create_pool("postgresql://localhost/mydb")
await init_db(pool)
client = anthropic.AsyncAnthropic(api_key="sk-ant-...")
# Parse a PDF
with open("report.pdf", "rb") as f:
content = parse_pdf(f.read())
# Extract facts
extraction, usage = await extract_facts(client, content.rawText)
# Map quotes back to character offsets
facts = map_citations(content.rawText, extraction.facts)
print(facts[0].citations[0].charOffset) # exact position in raw text
print(f"Cost: ${usage.costUsd:.4f}")
asyncio.run(main())
```
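The idea behind `map_citations` is that each extracted fact carries a verbatim quote, so the quote can be located in the raw text by exact string search. A naive stdlib sketch of that idea; the real function is richer, and nothing below is the library's internals, it only shows what a `charOffset` means:

```python
# Naive illustration of mapping verbatim quotes to character offsets.
# find() returns -1 when a quote is not verbatim; the library does more.
def find_offsets(raw_text: str, quotes: list[str]) -> dict[str, int]:
    return {q: raw_text.find(q) for q in quotes}

raw = "Revenue grew 12% in Q3. Margins held steady at 41%."
offsets = find_offsets(raw, ["Revenue grew 12% in Q3.", "41%"])
assert offsets["Revenue grew 12% in Q3."] == 0
assert raw[offsets["41%"]:offsets["41%"] + 3] == "41%"
```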
## REST API
When running as a server, the following endpoints are available under `/api/documents`:
| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/` | List all documents |
| `POST` | `/` | Upload a file (`multipart/form-data`) or URL (`url` form field) |
| `GET` | `/:id` | Get a document (includes `pdfData` for PDFs) |
| `POST` | `/:id/extract` | Run fact extraction (optional `{ "prompt": "..." }` body) |
| `GET` | `/:id/facts` | Get all accumulated facts for a document |
| `POST` | `/:id/chat` | Chat with a document (`{ "message": "...", "history": [] }`) |
## MCP server
ai-citer ships an [MCP](https://modelcontextprotocol.io) server that exposes extraction tools to AI assistants (Claude Desktop, etc.):
```bash
ai-citer mcp
```
Tools: `upload_document_url`, `extract_facts`, `get_facts`, `list_documents`.
## Environment variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `ANTHROPIC_API_KEY` | Yes | — | Anthropic API key |
| `DATABASE_URL` | Yes | — | PostgreSQL connection string |
## Development
```bash
git clone https://github.com/czawora/ai-citer
cd ai-citer/server
pip install -e ".[dev]"
pytest
```
| text/markdown | czawora | null | null | null | MIT | ai, anthropic, citations, fact-extraction, fastapi, pdf, rag | [
"Development Status :: 4 - Beta",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Text Processing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anthropic>=0.39.0",
"asyncpg>=0.30.0",
"beautifulsoup4>=4.12.0",
"fastapi>=0.115.0",
"httpx>=0.27.0",
"mammoth>=1.8.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"pymupdf>=1.24.0",
"python-dotenv>=1.0.0",
"python-multipart>=0.0.12",
"uvicorn[standard]>=0.32.0",
"build>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest>=8.3.0; extra == \"dev\"",
"python-docx>=1.1.2; extra == \"dev\"",
"twine>=5.0.0; extra == \"dev\"",
"playwright>=1.40.0; extra == \"js\""
] | [] | [] | [] | [
"Homepage, https://github.com/czawora/ai-citer",
"Repository, https://github.com/czawora/ai-citer",
"Issues, https://github.com/czawora/ai-citer/issues"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-20T23:46:50.359409 | ai_citer-1.0.4.tar.gz | 24,996 | 6e/85/c614c6f7a43ddcef7b62b170781d951165bd111e69eb7dbe83f7d032d0ce/ai_citer-1.0.4.tar.gz | source | sdist | null | false | 1c03c4d4ad6db09fb9a4dbb8f6e930b1 | 1fda07d5eee509da601cc493e965bcb0c2254a102a88d3c90aafad9e0fe364ff | 6e85c614c6f7a43ddcef7b62b170781d951165bd111e69eb7dbe83f7d032d0ce | null | [] | 243 |
2.4 | exec-sandbox | 0.13.0 | Secure code execution in microVMs with QEMU | # exec-sandbox
Secure code execution in isolated lightweight VMs (QEMU microVMs). Python library for running untrusted Python, JavaScript, and shell code with 8-layer security isolation.
[](https://github.com/dualeai/exec-sandbox/actions/workflows/test.yml)
[](https://codecov.io/gh/dualeai/exec-sandbox)
[](https://pypi.org/project/exec-sandbox/)
[](https://pypi.org/project/exec-sandbox/)
[](https://opensource.org/licenses/Apache-2.0)
## Highlights
- **Hardware isolation** - Each execution runs in a dedicated lightweight VM (QEMU with KVM/HVF hardware acceleration), not containers
- **Fast startup** - 400ms fresh start, 1-2ms with pre-started VMs (warm pool)
- **Simple API** - `run()` for one-shot execution, `session()` for stateful multi-step workflows with file I/O; plus `sbx` CLI for quick testing
- **Streaming output** - Real-time output as code runs
- **Smart caching** - Local + S3 remote cache for VM snapshots
- **Network control** - Disabled by default, optional domain allowlisting with defense-in-depth filtering (DNS + TLS SNI + DNS cross-validation to prevent spoofing)
- **Memory optimization** - Compressed memory (zram) + unused memory reclamation (balloon) for ~30% more capacity, ~80% smaller snapshots
## Installation
```bash
uv add exec-sandbox # Core library
uv add "exec-sandbox[s3]" # + S3 snapshot caching
```
```bash
# Install QEMU runtime
brew install qemu # macOS
apt install qemu-system # Ubuntu/Debian
```
## Quick Start
### CLI
The `sbx` command provides quick access to sandbox execution from the terminal:
```bash
# Run Python code
sbx run 'print("Hello from sandbox")'
# Run JavaScript
sbx run -l javascript 'console.log("Hello from sandbox")'
# Run a file (language auto-detected from extension)
sbx run script.py
sbx run app.js
# From stdin
echo 'print(42)' | sbx run -
# With packages
sbx run -p requests -p pandas 'import pandas; print(pandas.__version__)'
# With timeout and memory limits
sbx run -t 60 -m 512 long_script.py
# Enable network with domain allowlist
sbx run --network --allow-domain api.example.com fetch_data.py
# Expose ports (guest:8080 → host:dynamic)
sbx run --expose 8080 --json 'print("ready")' | jq '.exposed_ports[0].url'
# Expose with explicit host port (guest:8080 → host:3000)
sbx run --expose 8080:3000 --json 'print("ready")' | jq '.exposed_ports[0].external'
# Start HTTP server with port forwarding (runs until timeout)
sbx run -t 60 --expose 8080 'import http.server; http.server.test(port=8080, bind="0.0.0.0")'
# JSON output for scripting
sbx run --json 'print("test")' | jq .exit_code
# Environment variables
sbx run -e API_KEY=secret -e DEBUG=1 script.py
# Multiple sources (run concurrently)
sbx run 'print(1)' 'print(2)' script.py
# Multiple inline codes
sbx run -c 'print(1)' -c 'print(2)'
```
**CLI Options:**
| Option | Short | Description | Default |
|--------|-------|-------------|---------|
| `--language` | `-l` | python, javascript, raw | auto-detect |
| `--code` | `-c` | Inline code (repeatable, alternative to positional) | - |
| `--package` | `-p` | Package to install (repeatable) | - |
| `--timeout` | `-t` | Timeout in seconds | 30 |
| `--memory` | `-m` | Memory in MB | 256 |
| `--env` | `-e` | Environment variable KEY=VALUE (repeatable) | - |
| `--network` | | Enable network access | false |
| `--allow-domain` | | Allowed domain (repeatable) | - |
| `--expose` | | Expose port `INTERNAL[:EXTERNAL][/PROTOCOL]` (repeatable) | - |
| `--json` | | JSON output | false |
| `--quiet` | `-q` | Suppress progress output | false |
| `--no-validation` | | Skip package allowlist validation | false |
| `--upload` | | Upload file `LOCAL:GUEST` (repeatable) | - |
| `--download` | | Download file `GUEST:LOCAL` or `GUEST` (repeatable) | - |
### Python API
#### Basic Execution
```python
from exec_sandbox import Scheduler
async with Scheduler() as scheduler:
result = await scheduler.run(
code="print('Hello, World!')",
language="python", # or "javascript", "raw"
)
print(result.stdout) # Hello, World!
print(result.exit_code) # 0
```
#### Sessions (Stateful Multi-Step)
Sessions keep a VM alive across multiple `exec()` calls — variables, imports, and state persist.
```python
from exec_sandbox import Scheduler
async with Scheduler() as scheduler:
async with await scheduler.session(language="python") as session:
await session.exec("import math")
await session.exec("x = math.pi * 2")
result = await session.exec("print(f'{x:.4f}')")
print(result.stdout) # 6.2832
print(session.exec_count) # 3
```
Sessions support all three languages:
```python
# JavaScript/TypeScript — variables and functions persist
async with await scheduler.session(language="javascript") as session:
await session.exec("const greet = (name: string): string => `Hello, ${name}!`")
result = await session.exec("console.log(greet('World'))")
# Shell (Bash) — env vars, cwd, and functions persist
async with await scheduler.session(language="raw") as session:
await session.exec("cd /tmp && export MY_VAR=hello")
result = await session.exec("echo $MY_VAR from $(pwd)")
```
Sessions auto-close after idle timeout (default: 300s, configurable via `session_idle_timeout_seconds`).
#### File I/O
Sessions support reading, writing, and listing files inside the sandbox.
```python
from pathlib import Path
from exec_sandbox import Scheduler
async with Scheduler() as scheduler:
async with await scheduler.session(language="python") as session:
# Write a file into the sandbox
await session.write_file("input.csv", b"name,score\nAlice,95\nBob,87")
# Write from a local file
await session.write_file("model.pkl", Path("./local_model.pkl"))
# Execute code that reads input and writes output
await session.exec("data = open('input.csv').read().upper()")
await session.exec("open('output.csv', 'w').write(data)")
# Read a file back from the sandbox
await session.read_file("output.csv", destination=Path("./output.csv"))
# List files in a directory
files = await session.list_files("") # sandbox root
for f in files:
print(f"{f.name} {'dir' if f.is_dir else f'{f.size}B'}")
```
CLI file I/O uses sessions under the hood:
```bash
# Upload a local file, run code, download the result
sbx run --upload ./local.csv:input.csv --download output.csv:./result.csv \
-c "open('output.csv','w').write(open('input.csv').read().upper())"
# Download to ./output.csv (shorthand, no local path)
sbx run --download output.csv -c "open('output.csv','w').write('data')"
```
#### With Packages
The first run installs the packages and creates a snapshot; subsequent runs restore it in under 400ms.
```python
async with Scheduler() as scheduler:
result = await scheduler.run(
code="import pandas; print(pandas.__version__)",
language="python",
packages=["pandas==2.2.0", "numpy==1.26.0"],
)
print(result.stdout) # 2.2.0
```
#### Streaming Output
```python
async with Scheduler() as scheduler:
result = await scheduler.run(
code="for i in range(5): print(i)",
language="python",
on_stdout=lambda chunk: print(f"[OUT] {chunk}", end=""),
on_stderr=lambda chunk: print(f"[ERR] {chunk}", end=""),
)
```
#### Network Access
```python
async with Scheduler() as scheduler:
result = await scheduler.run(
code="import urllib.request; print(urllib.request.urlopen('https://httpbin.org/ip').read())",
language="python",
allow_network=True,
allowed_domains=["httpbin.org"], # Domain allowlist
)
```
#### Port Forwarding
Expose VM ports to the host for health checks, API testing, or service validation.
```python
from exec_sandbox import Scheduler, PortMapping
async with Scheduler() as scheduler:
# Port forwarding without internet (isolated)
result = await scheduler.run(
code="print('server ready')",
expose_ports=[PortMapping(internal=8080, external=3000)], # Guest:8080 → Host:3000
allow_network=False, # No outbound internet
)
print(result.exposed_ports[0].url) # http://127.0.0.1:3000
# Dynamic port allocation (OS assigns external port)
result = await scheduler.run(
code="print('server ready')",
expose_ports=[8080], # external=None → OS assigns port
)
print(result.exposed_ports[0].external) # e.g., 52341
# Long-running server with port forwarding
result = await scheduler.run(
code="import http.server; http.server.test(port=8080, bind='0.0.0.0')",
expose_ports=[PortMapping(internal=8080)],
timeout_seconds=60, # Server runs until timeout
)
```
**Security:** Port forwarding works independently of internet access. When `allow_network=False`, guest VMs cannot initiate outbound connections (all outbound TCP/UDP blocked), but host-to-guest port forwarding still works.
#### Production Configuration
```python
from exec_sandbox import Scheduler, SchedulerConfig
config = SchedulerConfig(
warm_pool_size=1, # Pre-started VMs per language (0 disables)
default_memory_mb=512, # Per-VM memory
default_timeout_seconds=60, # Execution timeout
s3_bucket="my-snapshots", # Remote cache for package snapshots
s3_region="us-east-1",
)
async with Scheduler(config) as scheduler:
result = await scheduler.run(...)
```
#### Error Handling
```python
from exec_sandbox import Scheduler, VmTimeoutError, PackageNotAllowedError, SandboxError
async with Scheduler() as scheduler:
try:
result = await scheduler.run(code="while True: pass", language="python", timeout_seconds=5)
except VmTimeoutError:
print("Execution timed out")
except PackageNotAllowedError as e:
print(f"Package not in allowlist: {e}")
except SandboxError as e:
print(f"Sandbox error: {e}")
```
## Asset Downloads
exec-sandbox requires VM images (kernel, initramfs, qcow2) and binaries (gvproxy-wrapper) to run. These assets are **automatically downloaded** from GitHub Releases on first use.
### How it works
1. On first `Scheduler` initialization, exec-sandbox checks if assets exist in the cache directory
2. If missing, it queries the GitHub Releases API for the matching version (`v{__version__}`)
3. Assets are downloaded over HTTPS, verified against SHA256 checksums (provided by GitHub API), and decompressed
4. Subsequent runs use the cached assets (no re-download)
### Cache locations
| Platform | Location |
|----------|----------|
| macOS | `~/Library/Caches/exec-sandbox/` |
| Linux | `~/.cache/exec-sandbox/` (or `$XDG_CACHE_HOME/exec-sandbox/`) |
### Environment variables
| Variable | Description |
|----------|-------------|
| `EXEC_SANDBOX_CACHE_DIR` | Override cache directory |
| `EXEC_SANDBOX_OFFLINE` | Set to `1` to disable auto-download (fail if assets missing) |
| `EXEC_SANDBOX_ASSET_VERSION` | Force specific release version |
### Pre-downloading for offline use
Use `sbx prefetch` to download all assets ahead of time:
```bash
sbx prefetch # Download all assets for current arch
sbx prefetch --arch aarch64 # Cross-arch prefetch
sbx prefetch -q # Quiet mode (CI/Docker)
```
**Dockerfile example:**
```dockerfile
FROM ghcr.io/astral-sh/uv:python3.12-bookworm
RUN uv pip install --system exec-sandbox
RUN sbx prefetch -q
ENV EXEC_SANDBOX_OFFLINE=1
# Assets cached, no network needed at runtime
```
### Security
Assets are verified against SHA256 checksums and built with [provenance attestations](https://docs.github.com/en/actions/security-guides/using-artifact-attestations-to-establish-provenance-for-builds).
## Documentation
- [QEMU Documentation](https://www.qemu.org/docs/master/) - Virtual machine emulator
- [KVM](https://www.linux-kvm.org/page/Documents) - Linux hardware virtualization
- [HVF](https://developer.apple.com/documentation/hypervisor) - macOS hardware virtualization (Hypervisor.framework)
- [cgroups v2](https://docs.kernel.org/admin-guide/cgroup-v2.html) - Linux resource limits
- [seccomp](https://man7.org/linux/man-pages/man2/seccomp.2.html) - System call filtering
## Configuration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `warm_pool_size` | 0 | Pre-started VMs per language (Python, JavaScript). Set >0 to enable |
| `default_memory_mb` | 256 | VM memory (128-2048 MB). Effective ~25% higher with memory compression (zram) |
| `default_timeout_seconds` | 30 | Execution timeout (1-300s) |
| `session_idle_timeout_seconds` | 300 | Session idle timeout (10-3600s). Auto-closes inactive sessions |
| `images_dir` | auto | VM images directory |
| `snapshot_cache_dir` | /tmp/exec-sandbox-cache | Local snapshot cache |
| `s3_bucket` | None | S3 bucket for remote snapshot cache |
| `s3_region` | us-east-1 | AWS region |
| `max_concurrent_s3_uploads` | 4 | Max concurrent background S3 uploads (1-16) |
| `enable_package_validation` | True | Validate against top 10k packages (PyPI for Python, npm for JavaScript) |
| `auto_download_assets` | True | Auto-download VM images from GitHub Releases |
Environment variables: `EXEC_SANDBOX_MAX_CONCURRENT_VMS`, `EXEC_SANDBOX_IMAGES_DIR`, etc.
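These variables can be exported in the shell or set in-process before the `Scheduler` is constructed. A sketch (variable names are from the list above; the values are illustrative):

```python
import os

# Apply documented overrides before creating the Scheduler
os.environ["EXEC_SANDBOX_MAX_CONCURRENT_VMS"] = "8"
os.environ["EXEC_SANDBOX_IMAGES_DIR"] = "/opt/exec-sandbox/images"
```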
## Memory Optimization
VMs include automatic memory optimization (no configuration required):
- **Compressed swap (zram)** - ~25% more usable memory via lz4 compression
- **Memory reclamation (virtio-balloon)** - 70-90% smaller snapshots
## Execution Result
| Field | Type | Description |
|-------|------|-------------|
| `stdout` | str | Captured output (max 1MB) |
| `stderr` | str | Captured errors (max 100KB) |
| `exit_code` | int | Process exit code (0 = success, 128+N = killed by signal N) |
| `execution_time_ms` | int | Duration reported by VM |
| `external_cpu_time_ms` | int | CPU time measured by host |
| `external_memory_peak_mb` | int | Peak memory measured by host |
| `timing.setup_ms` | int | Resource setup (filesystem, limits, network) |
| `timing.boot_ms` | int | VM boot time |
| `timing.execute_ms` | int | Code execution |
| `timing.total_ms` | int | End-to-end time |
| `warm_pool_hit` | bool | Whether a pre-started VM was used |
| `exposed_ports` | list | Port mappings with `.internal`, `.external`, `.host`, `.url` |
Exit codes follow Unix conventions:
- `0`: success
- `> 128`: killed by signal N, where N = exit_code - 128 (e.g., 137 = SIGKILL, 139 = SIGSEGV)
- `-1`: internal error (exit status could not be retrieved)
- any other non-zero value: program error
```python
result = await scheduler.run(code="...", language="python")
if result.exit_code == 0:
pass # Success
elif result.exit_code > 128:
signal_num = result.exit_code - 128 # e.g., 9 for SIGKILL
elif result.exit_code == -1:
pass # Internal error (see result.stderr)
else:
pass # Program exited with error
```
## FileInfo
Returned by `Session.list_files()`.
| Field | Type | Description |
|-------|------|-------------|
| `name` | str | File or directory name |
| `is_dir` | bool | True if entry is a directory |
| `size` | int | File size in bytes (0 for directories) |
## Exceptions
| Exception | Description |
|-----------|-------------|
| `SandboxError` | Base exception for all sandbox errors |
| `TransientError` | Retryable errors — may succeed on retry |
| `PermanentError` | Non-retryable errors |
| `VmTimeoutError` | VM boot timed out |
| `VmCapacityError` | VM pool at capacity |
| `VmConfigError` | Invalid VM configuration |
| `SessionClosedError` | Session already closed |
| `CommunicationError` | Guest communication failed |
| `GuestAgentError` | Guest agent returned error |
| `PackageNotAllowedError` | Package not in allowlist |
| `SnapshotError` | Snapshot operation failed |
| `SandboxDependencyError` | Optional dependency missing (e.g., aioboto3) |
| `AssetError` | Asset download/verification failed |
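The `TransientError` / `PermanentError` split above suggests retrying only transient failures. An illustrative retry wrapper; the exception classes below are local stand-ins so the sketch is self-contained — with the real library, import them from `exec_sandbox` instead:

```python
import asyncio

# Stand-ins for exec_sandbox's SandboxError / TransientError hierarchy
class SandboxError(Exception): ...
class TransientError(SandboxError): ...

async def run_with_retry(fn, attempts=3, base_delay=0.5):
    """Retry `fn` on TransientError with exponential backoff.

    Non-transient errors (e.g. PermanentError) propagate immediately,
    since they are not subclasses of TransientError.
    """
    for attempt in range(attempts):
        try:
            return await fn()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of attempts
            await asyncio.sleep(base_delay * 2 ** attempt)
```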
## Pitfalls
```python
# run() creates a fresh VM each time - state doesn't persist across calls
result1 = await scheduler.run("x = 42", language="python")
result2 = await scheduler.run("print(x)", language="python") # NameError!
# Fix: use sessions for multi-step stateful execution
async with await scheduler.session(language="python") as session:
await session.exec("x = 42")
result = await session.exec("print(x)") # Works! x persists
# Pre-started VMs (warm pool) only work without packages
config = SchedulerConfig(warm_pool_size=1)
await scheduler.run(code="...", packages=["pandas==2.2.0"]) # Bypasses warm pool, fresh start (400ms)
await scheduler.run(code="...") # Uses warm pool (1-2ms)
# Version specifiers are required (security + caching)
packages=["pandas==2.2.0"] # Valid, cacheable
packages=["pandas"] # PackageNotAllowedError! Must pin version
# Streaming callbacks must be fast (blocks async execution)
on_stdout=lambda chunk: time.sleep(1) # Blocks!
on_stdout=lambda chunk: buffer.append(chunk) # Fast
# Memory overhead: pre-started VMs use warm_pool_size × 2 languages × 256MB
# warm_pool_size=5 → 5 VMs/lang × 2 × 256MB = 2.5GB for warm pool alone
# Memory can exceed configured limit due to compressed swap
default_memory_mb=256 # Code can actually use ~280-320MB thanks to compression
# Don't rely on memory limits for security - use timeouts for runaway allocations
# Network without domain restrictions is risky
allow_network=True # Full internet access
allow_network=True, allowed_domains=["api.example.com"] # Controlled
# Port forwarding binds to localhost only
expose_ports=[8080] # Binds to 127.0.0.1, not 0.0.0.0
# If you need external access, use a reverse proxy on the host
# multiprocessing.Pool works, but single vCPU means no CPU-bound speedup
from multiprocessing import Pool
Pool(2).map(lambda x: x**2, [1, 2, 3]) # Works (cloudpickle handles lambda serialization)
# For CPU-bound parallelism, use multiple VMs via scheduler.run() concurrently instead
```
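The last pitfall recommends multiple VMs for CPU-bound parallelism. A sketch of that fan-out pattern, where `run_one` is a placeholder coroutine standing in for `await scheduler.run(code, language="python")` (each real `run()` boots its own VM, so the calls execute in true parallel):

```python
import asyncio

async def run_one(code: str) -> str:
    await asyncio.sleep(0)  # stands in for VM boot + execution
    return f"ran: {code}"

async def run_many(snippets: list[str]) -> list[str]:
    # asyncio.gather preserves input order in its results
    return await asyncio.gather(*(run_one(c) for c in snippets))

results = asyncio.run(run_many(["print(1)", "print(2)"]))
```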
## Limits
| Resource | Limit |
|----------|-------|
| Max code size | 1MB |
| Max stdout | 1MB |
| Max stderr | 100KB |
| Max packages | 50 |
| Max env vars | 100 |
| Max exposed ports | 10 |
| Max file size (I/O) | 50MB |
| Max file path length | 255 chars |
| Execution timeout | 1-300s |
| VM memory | 128MB minimum (no upper bound) |
| Max concurrent VMs | Resource-aware (auto-computed from host memory + CPU) |
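The sandbox enforces these limits itself; for friendlier error messages you can pre-check requests client-side before submitting. An illustrative sketch (the function and constant names are hypothetical, not part of the library's API):

```python
# Hypothetical client-side pre-checks mirroring the documented limits above
MAX_CODE_BYTES = 1 * 1024 * 1024  # 1MB
MAX_PACKAGES = 50
MAX_ENV_VARS = 100

def precheck(code, packages=(), env=None):
    """Return a list of limit violations before calling scheduler.run()."""
    errors = []
    if len(code.encode("utf-8")) > MAX_CODE_BYTES:
        errors.append("code exceeds 1MB")
    if len(packages) > MAX_PACKAGES:
        errors.append("more than 50 packages")
    if env is not None and len(env) > MAX_ENV_VARS:
        errors.append("more than 100 env vars")
    return errors
```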
## Security Architecture
| Layer | Technology | Protection |
|-------|------------|------------|
| 1 | Hardware virtualization (KVM/HVF) | CPU isolation enforced by hardware |
| 2 | Unprivileged QEMU | No root privileges, minimal exposure |
| 3 | Non-root REPL (UID 1000) | Blocks mount, ptrace, raw sockets, kernel modules |
| 4 | System call filtering (seccomp) | Blocks unauthorized OS calls |
| 5 | Resource limits (cgroups v2) | Memory, CPU, process limits |
| 6 | Process isolation (namespaces) | Separate process, network, filesystem views |
| 7 | Security policies (AppArmor/SELinux) | When available |
| 8 | Socket authentication (SO_PEERCRED/LOCAL_PEERCRED) | Verifies QEMU process identity |
**Guarantees:**
- Fresh VM per `run()`, destroyed immediately after. Sessions reuse the same VM across `exec()` calls (same isolation, persistent state)
- Network disabled by default - requires explicit `allow_network=True`
- Domain allowlisting with 3-layer outbound filtering — DNS resolution blocked for non-allowed domains, TLS SNI inspection on port 443, and DNS cross-validation to prevent SNI spoofing
- Package validation - only top 10k Python/JavaScript packages allowed by default
- Port forwarding isolation - when `expose_ports` is used without `allow_network`, guest cannot initiate any outbound connections (all outbound TCP/UDP blocked)
## Requirements
| Requirement | Supported |
|-------------|-----------|
| Python | 3.12, 3.13, 3.14 (including free-threaded) |
| Linux | x64, arm64 |
| macOS | x64, arm64 |
| QEMU | 8.0+ |
| Hardware acceleration | KVM (Linux) or HVF (macOS) recommended, 10-50x faster |
Verify hardware acceleration is available:
```bash
ls /dev/kvm # Linux
sysctl kern.hv_support # macOS
```
Without hardware acceleration, QEMU uses software emulation (TCG), which is 10-50x slower.
### Linux Setup (Optional Security Hardening)
For enhanced security on Linux, exec-sandbox can run QEMU as an unprivileged `qemu-vm` user. This isolates the VM process from your user account.
```bash
# Create qemu-vm system user
sudo useradd --system --no-create-home --shell /usr/sbin/nologin qemu-vm
# Add qemu-vm to kvm group (for hardware acceleration)
sudo usermod -aG kvm qemu-vm
# Add your user to qemu-vm group (for socket access)
sudo usermod -aG qemu-vm $USER
# Re-login or activate group membership
newgrp qemu-vm
```
**Why is this needed?** When `qemu-vm` user exists, exec-sandbox runs QEMU as that user for process isolation. The host needs to connect to QEMU's Unix sockets (0660 permissions), which requires group membership. This follows the [libvirt security model](https://wiki.archlinux.org/title/Libvirt).
If `qemu-vm` user doesn't exist, exec-sandbox runs QEMU as your user (no additional setup required, but less isolated).
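The fallback described above can be sketched in Python (illustrative only, not the library's actual detection code; `pwd` is POSIX-only):

```python
import getpass
import pwd

def qemu_run_as_user() -> str:
    """Mirror the detection above: prefer qemu-vm when that user exists."""
    try:
        pwd.getpwnam("qemu-vm")
        return "qemu-vm"            # isolated: QEMU runs unprivileged
    except KeyError:
        return getpass.getuser()    # fallback: current user, less isolated
```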
## VM Images
Pre-built images from [GitHub Releases](https://github.com/dualeai/exec-sandbox/releases):
| Image | Runtime | Package Manager | Size | Description |
|-------|---------|-----------------|------|-------------|
| `python-3.14-base` | Python 3.14 | uv | ~140MB | Full Python environment with C extension support |
| `node-1.3-base` | Bun 1.3 | bun | ~57MB | Fast JavaScript/TypeScript runtime with Node.js compatibility |
| `raw-base` | Bash | None | ~15MB | Shell scripts and custom runtimes |
All images are based on **Alpine Linux 3.21** (Linux 6.12 LTS, musl libc) and include common tools for AI agent workflows.
### Common Tools (all images)
| Tool | Purpose |
|------|---------|
| `git` | Version control, clone repositories |
| `curl` | HTTP requests, download files |
| `jq` | JSON processing |
| `bash` | Shell scripting |
| `coreutils` | Standard Unix utilities (ls, cp, mv, etc.) |
| `tar`, `gzip`, `unzip` | Archive extraction |
| `file` | File type detection |
### Python Image
| Component | Version | Notes |
|-----------|---------|-------|
| Python | 3.14 | [python-build-standalone](https://github.com/astral-sh/python-build-standalone) (musl) |
| uv | 0.9+ | 10-100x faster than pip ([docs](https://docs.astral.sh/uv/)) |
| gcc, musl-dev | Alpine | For C extensions (numpy, pandas, etc.) |
| cloudpickle | 3.1 | Serialization for `multiprocessing` in REPL ([docs](https://github.com/cloudpipe/cloudpickle)) |
**Usage notes:**
- Use `uv pip install` instead of `pip install` (pip not included)
- Python 3.14 includes t-strings, deferred annotations, free-threading support
- `multiprocessing.Pool` works out of the box — cloudpickle handles serialization of REPL-defined functions, lambdas, and closures. Single vCPU means no CPU-bound speedup, but I/O-bound parallelism and `Pool`-based APIs work correctly
### JavaScript Image
| Component | Version | Notes |
|-----------|---------|-------|
| Bun | 1.3 | Runtime, bundler, package manager ([docs](https://bun.com/docs)) |
**Usage notes:**
- Bun is a Node.js-compatible runtime (not Node.js itself)
- Built-in TypeScript/JSX support, no transpilation needed
- Use `bun install` for packages, `bun run` for scripts
- Near-complete Node.js API compatibility
### Raw Image
Minimal Alpine Linux with common tools only. Use for:
- Shell script execution (`language="raw"`) — runs under **GNU Bash**, full bash syntax supported
- Custom runtime installation
- Lightweight workloads
Build from source:
```bash
./scripts/build-images.sh
# Output: ./images/dist/python-3.14-base.qcow2, ./images/dist/node-1.3-base.qcow2, ./images/dist/raw-base.qcow2
```
## Security
- [Security Policy](./SECURITY.md) - Vulnerability reporting
- [Dependency list (SBOM)](https://github.com/dualeai/exec-sandbox/releases) - Full list of included software, attached to releases
## Contributing
Contributions welcome! Please open an issue first to discuss changes.
```bash
make install # Setup environment
make test # Run tests
make lint # Format and lint
```
## License
[Apache-2.0](https://opensource.org/licenses/Apache-2.0)
| text/markdown | null | Duale AI <hello@duale.ai> | null | null | null | code-execution, isolation, microvm, qemu, sandbox, security, virtualization, vm | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Security",
"Topic :: Software Development :: Interpreters",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiofiles~=24.0",
"aiohttp~=3.11",
"aiojobs~=1.3",
"backports-zstd; python_version < \"3.14\"",
"click~=8.0",
"psutil~=7.2",
"pydantic-settings~=2.0",
"pydantic~=2.0",
"qemu-qmp~=0.0.5",
"tenacity~=8.0",
"aioresponses~=0.7; extra == \"dev\"",
"hypothesis~=6.0; extra == \"dev\"",
"moto[s3,server]~=5.0; extra == \"dev\"",
"py-spy~=0.4; extra == \"dev\"",
"pyright~=1.1; extra == \"dev\"",
"pytest-asyncio~=0.24; extra == \"dev\"",
"pytest-cov~=6.0; extra == \"dev\"",
"pytest-timeout~=2.3; extra == \"dev\"",
"pytest-xdist~=3.5; extra == \"dev\"",
"pytest~=8.0; extra == \"dev\"",
"ruff~=0.7; extra == \"dev\"",
"twine~=6.1; extra == \"dev\"",
"vulture~=2.14; extra == \"dev\"",
"aioboto3~=13.0; extra == \"s3\""
] | [] | [] | [] | [
"Homepage, https://github.com/dualeai/exec-sandbox",
"Documentation, https://github.com/dualeai/exec-sandbox#readme",
"Repository, https://github.com/dualeai/exec-sandbox.git",
"Changelog, https://github.com/dualeai/exec-sandbox/releases",
"Issues, https://github.com/dualeai/exec-sandbox/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:45:25.655277 | exec_sandbox-0.13.0.tar.gz | 743,870 | 5f/82/e90bdd2e211687299e1dbc7377570f58874198ecedb2fb092b4dd53398f1/exec_sandbox-0.13.0.tar.gz | source | sdist | null | false | 46d6663f7a8a58eb60af2998cb6bf6b3 | 88b7c46abbc357e392d2c244a2877ee7cbb029f975a73c28d4372749a97810f5 | 5f82e90bdd2e211687299e1dbc7377570f58874198ecedb2fb092b4dd53398f1 | Apache-2.0 | [
"LICENSE"
] | 243 |
2.4 | rfswarm-reporter | 1.6.0a20260220100740 | RFSwarm Reporter | # rfswarm (Robot Framework Swarm)
## About
rfswarm is a testing tool that allows you to use [Robot Framework](https://robotframework.org/) test cases for performance or load testing.
> _Swarm being the collective noun for Robots, just as Flock is for Birds and Herd for Sheep, so it made sense to use swarm for a performance testing tool using Robot Framework, hence rfswarm_
While Robot Framework is normally used for functional or regression testing, sharing test scripts between functional and performance testing has long been considered a holy grail of testing: effort spent creating test cases for one should not need to be duplicated for the other, as is currently the norm.
rfswarm aims to solve this problem by allowing you to take an existing functional or regression test case written in Robot Framework, make some minor adjustments to make the test case suitable for performance testing and then run the Robot Framework test case with as many virtual users (robots) as needed to generate load on the application under test.
rfswarm is written entirely in Python, so if you are already using Robot Framework you will already have most of what you need to use rfswarm, and you will be familiar with using pip to install any extra components.
To learn more about rfswarm please refer to the [Documentation](https://github.com/damies13/rfswarm/blob/master/Doc/README.md)
## Getting Help
### Community Support
- [rfswarm Documentation](https://github.com/damies13/rfswarm/blob/master/Doc/README.md)
- [Discord](https://discord.gg/jJfCMrqCsT)
- [Slack](https://robotframework.slack.com/archives/C06J2Q0LGEM)
- [Reporting Issues / Known Issues](https://github.com/damies13/rfswarm/issues)
<kbd align="centre">
<img align="centre" height="350" alt="Manager and Agent" src="https://github.com/damies13/rfswarm/blob/master/Doc/Images/Manager&Agent_Example.png">
</kbd><br>
An example of how your rfswarm setup might look.
### Commercial Support
- The easiest way to get commercial support is to sponsor this project on [GitHub](https://github.com/sponsors/damies13?frequency=recurring&sponsor=damies13)
## Donations
If you would like to thank me for this project please consider using one of the sponsorship methods:
- [GitHub](https://github.com/sponsors/damies13/)
- [PayPal.me](https://paypal.me/damies13/5) (the $5 is a suggestion, feel free to change to any amount you would like)
| text/markdown | damies13 | damies13+rfswarm@gmail.com | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: Robot Framework",
"Framework :: Robot Framework :: Tool",
"Topic :: Software Development :: Testing",
"Programming Language :: Python :: 3.9",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"configparser<8.0.0,>=7.2.0",
"lxml<7.0.0,>=6.0.2",
"matplotlib<4.0.0,>=3.9.0",
"openpyxl<4.0.0,>=3.1.5",
"pillow>=9.1.0",
"python-docx<2.0.0,>=1.2.0",
"pyyaml<7.0.0,>=6.0.3",
"tzlocal>=4.1"
] | [] | [] | [] | [
"Getting help, https://github.com/damies13/rfswarm#getting-help",
"Homepage, https://github.com/damies13/rfswarm",
"Say Thanks!, https://github.com/damies13/rfswarm#donations",
"Source, https://github.com/damies13/rfswarm"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:44:57.996924 | rfswarm_reporter-1.6.0a20260220100740.tar.gz | 223,866 | 59/da/088acded1a0ee1657b9c5f03d984ee19ebff2a9118e429aedfb7b4f733c0/rfswarm_reporter-1.6.0a20260220100740.tar.gz | source | sdist | null | false | 27443f422a08b5073f983a4fb67d8adb | 42edfef8f2cf8bec4a50661b19ff557320e3f137b8e3f257b634d29306f664eb | 59da088acded1a0ee1657b9c5f03d984ee19ebff2a9118e429aedfb7b4f733c0 | GPL-3.0-only | [
"LICENSE"
] | 230 |
2.4 | rfswarm-manager | 1.6.0a20260220100740 | RFSwarm Manager | # rfswarm (Robot Framework Swarm)
## About
rfswarm is a testing tool that allows you to use [Robot Framework](https://robotframework.org/) test cases for performance or load testing.
> _Swarm being the collective noun for Robots, just as Flock is for Birds and Herd for Sheep, so it made sense to use swarm for a performance testing tool using Robot Framework, hence rfswarm_
While Robot Framework is normally used for functional or regression testing, sharing test scripts between functional and performance testing has long been considered a holy grail of testing: effort spent creating test cases for one should not need to be duplicated for the other, as is currently the norm.
rfswarm aims to solve this problem by allowing you to take an existing functional or regression test case written in Robot Framework, make some minor adjustments to make the test case suitable for performance testing and then run the Robot Framework test case with as many virtual users (robots) as needed to generate load on the application under test.
rfswarm is written entirely in Python, so if you are already using Robot Framework you will already have most of what you need to use rfswarm, and you will be familiar with using pip to install any extra components.
To learn more about rfswarm please refer to the [Documentation](https://github.com/damies13/rfswarm/blob/master/Doc/README.md)
## Getting Help
### Community Support
- [rfswarm Documentation](https://github.com/damies13/rfswarm/blob/master/Doc/README.md)
- [Discord](https://discord.gg/jJfCMrqCsT)
- [Slack](https://robotframework.slack.com/archives/C06J2Q0LGEM)
- [Reporting Issues / Known Issues](https://github.com/damies13/rfswarm/issues)
<kbd align="centre">
<img align="centre" height="350" alt="Manager and Agent" src="https://github.com/damies13/rfswarm/blob/master/Doc/Images/Manager&Agent_Example.png">
</kbd><br>
An example of how your rfswarm setup might look.
### Commercial Support
- The easiest way to get commercial support is to sponsor this project on [GitHub](https://github.com/sponsors/damies13?frequency=recurring&sponsor=damies13)
## Donations
If you would like to thank me for this project please consider using one of the sponsorship methods:
- [GitHub](https://github.com/sponsors/damies13/)
- [PayPal.me](https://paypal.me/damies13/5) (the $5 is a suggestion, feel free to change to any amount you would like)
| text/markdown | damies13 | damies13+rfswarm@gmail.com | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: Robot Framework",
"Framework :: Robot Framework :: Tool",
"Topic :: Software Development :: Testing",
"Programming Language :: Python :: 3.9",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"configparser<8.0.0,>=7.2.0",
"httpserver<2.0.0,>=1.1.0",
"matplotlib<4.0.0,>=3.9.0",
"pillow>=9.1.0",
"psutil<8.0.0,>=7.1.3",
"pyyaml<7.0.0,>=6.0.3"
] | [] | [] | [] | [
"Getting help, https://github.com/damies13/rfswarm#getting-help",
"Homepage, https://github.com/damies13/rfswarm",
"Say Thanks!, https://github.com/damies13/rfswarm#donations",
"Source, https://github.com/damies13/rfswarm"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:44:57.201866 | rfswarm_manager-1.6.0a20260220100740.tar.gz | 236,440 | 54/ae/96388c03ab9bb7236d00b39c4d2681a1f561b3cc256574e75cf31c8bea4c/rfswarm_manager-1.6.0a20260220100740.tar.gz | source | sdist | null | false | ec6ab21dc59a4f4f5c61cfb2c64aa9e2 | 3d7ba987bd5cfdc518305a96eb7ae010ee9aed3bd430d87b761ab1c490819aff | 54ae96388c03ab9bb7236d00b39c4d2681a1f561b3cc256574e75cf31c8bea4c | GPL-3.0-only | [
"LICENSE"
] | 228 |
2.4 | rfswarm-agent | 1.6.0a20260220100740 | RFSwarm Agent | # rfswarm (Robot Framework Swarm)
## About
rfswarm is a testing tool that allows you to use [Robot Framework](https://robotframework.org/) test cases for performance or load testing.
> _Swarm being the collective noun for Robots, just as Flock is for Birds and Herd for Sheep, so it made sense to use swarm for a performance testing tool using Robot Framework, hence rfswarm_
While Robot Framework is normally used for functional or regression testing, sharing test scripts between functional and performance testing has long been considered a holy grail of testing: effort spent creating test cases for one should not need to be duplicated for the other, as is currently the norm.
rfswarm aims to solve this problem by allowing you to take an existing functional or regression test case written in Robot Framework, make some minor adjustments to make the test case suitable for performance testing and then run the Robot Framework test case with as many virtual users (robots) as needed to generate load on the application under test.
rfswarm is written entirely in Python, so if you are already using Robot Framework you will already have most of what you need to use rfswarm, and you will be familiar with using pip to install any extra components.
To learn more about rfswarm please refer to the [Documentation](https://github.com/damies13/rfswarm/blob/master/Doc/README.md)
## Getting Help
### Community Support
- [rfswarm Documentation](https://github.com/damies13/rfswarm/blob/master/Doc/README.md)
- [Discord](https://discord.gg/jJfCMrqCsT)
- [Slack](https://robotframework.slack.com/archives/C06J2Q0LGEM)
- [Reporting Issues / Known Issues](https://github.com/damies13/rfswarm/issues)
<kbd align="centre">
<img align="centre" height="350" alt="Manager and Agent" src="https://github.com/damies13/rfswarm/blob/master/Doc/Images/Manager&Agent_Example.png">
</kbd><br>
An example of how your rfswarm setup might look.
### Commercial Support
- The easiest way to get commercial support is to sponsor this project on [GitHub](https://github.com/sponsors/damies13?frequency=recurring&sponsor=damies13)
## Donations
If you would like to thank me for this project please consider using one of the sponsorship methods:
- [GitHub](https://github.com/sponsors/damies13/)
- [PayPal.me](https://paypal.me/damies13/5) (the $5 is a suggestion, feel free to change to any amount you would like)
| text/markdown | damies13 | damies13+rfswarm@gmail.com | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: Robot Framework",
"Framework :: Robot Framework :: Tool",
"Topic :: Software Development :: Testing",
"Programming Language :: Python :: 3.9",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"configparser<8.0.0,>=7.2.0",
"psutil<8.0.0,>=7.1.3",
"pyyaml<7.0.0,>=6.0.3",
"requests<3.0.0,>=2.32.5",
"robotframework<8.0.0,>=7.3.2"
] | [] | [] | [] | [
"Getting help, https://github.com/damies13/rfswarm#getting-help",
"Homepage, https://github.com/damies13/rfswarm",
"Say Thanks!, https://github.com/damies13/rfswarm#donations",
"Source, https://github.com/damies13/rfswarm"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:44:56.429137 | rfswarm_agent-1.6.0a20260220100740.tar.gz | 154,799 | ad/38/8bbbfb8524aa82dabdfd72286057dc7e75cda98cd5949262214bea57454e/rfswarm_agent-1.6.0a20260220100740.tar.gz | source | sdist | null | false | eee2c7bab03716be2b3129f0c2fcee7a | d484b26084b738b50a1cd4cefdf5d05c9ea4c247fdb8b11305eff527dcc7d814 | ad388bbbfb8524aa82dabdfd72286057dc7e75cda98cd5949262214bea57454e | GPL-3.0-only | [
"LICENSE"
] | 229 |
2.4 | yaacli | 0.19.3 | TUI reference implementation for ya-agent-sdk | # YAACLI CLI
TUI reference implementation for [ya-agent-sdk](https://github.com/wh1isper/ya-agent-sdk).
## Usage
With uvx, run:
```bash
uvx yaacli
```
Or to install yaacli globally with uv, run:
```bash
uv tool install yaacli
...
yaacli
```
To update to the latest version:
```bash
uv tool upgrade yaacli
```
Or with pip, run:
```bash
pip install yaacli
...
yaacli
```
Or run as a module:
```bash
python -m yaacli
```
## Development
This package is part of the ya-agent-sdk monorepo. To develop locally:
```bash
cd ya-agent-sdk
uv sync --all-packages
```
## License
BSD 3-Clause License - see [LICENSE](LICENSE) for details.
| text/markdown | null | wh1isper <jizhongsheng957@gmail.com> | null | null | null | ai-agent, cli, python, tui | [
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"click>=8.0",
"pydantic-ai",
"pydantic-settings>=2.0",
"pydantic>=2.0",
"y-agent-environment>=0.1.0",
"ya-agent-sdk[all]==0.19.3"
] | [] | [] | [] | [
"Repository, https://github.com/wh1isper/ya-agent-sdk"
] | uv/0.6.2 | 2026-02-20T23:44:11.393275 | yaacli-0.19.3.tar.gz | 189,998 | ce/c6/415caabfbb6e72c5e37d3b8504a68962313a6577a057e664e4863e0d465d/yaacli-0.19.3.tar.gz | source | sdist | null | false | d8c74dee45ca900f6ca06531d4e86c59 | 6cbac4fe7c378acc889972a0c7d653644d773d333c2b6c50e92e7b5bd227ce60 | cec6415caabfbb6e72c5e37d3b8504a68962313a6577a057e664e4863e0d465d | null | [
"LICENSE"
] | 255 |
2.4 | ya-agent-sdk | 0.19.3 | Application framework for building AI agents with Pydantic AI - environment abstractions, session management, and hierarchical agent patterns | # Ya Agent SDK
> Yet Another Agent SDK
[](https://img.shields.io/github/v/release/wh1isper/ya-agent-sdk)
[](https://github.com/wh1isper/ya-agent-sdk/actions/workflows/main.yml?query=branch%3Amain)
[](https://codecov.io/gh/wh1isper/ya-agent-sdk)
[](https://img.shields.io/github/commit-activity/m/wh1isper/ya-agent-sdk)
[](https://img.shields.io/github/license/wh1isper/ya-agent-sdk)
Yet Another Agent SDK for building AI agents with [Pydantic AI](https://ai.pydantic.dev/). Used in my homelab for research and prototyping.
## Key Features
- **Environment-based Architecture**: Protocol-based design for file operations, shell access, and resources. Built-in `LocalEnvironment` and `DockerEnvironment`, easily extensible for custom backends (SSH, S3, cloud VMs, etc.)
- **Fully Typed**: Complete type annotations validated with pyright (standard mode). Enjoy full IDE autocompletion and catch errors before runtime
- **Resumable Sessions**: Export and restore `AgentContext` state for multi-turn conversations across restarts
- **Hierarchical Agents**: Subagent system with task delegation, tool inheritance, and markdown-based configuration
- **Skills System**: Markdown-based instruction files with hot reload and progressive loading
- **Human-in-the-Loop**: Built-in approval workflows for sensitive tool operations
- **Toolset Architecture**: Extensible tool system with pre/post hooks for logging, validation, and error handling
- **Resumable Resources**: Export and restore resource states (like browser sessions) across process restarts
- **Browser Automation**: Docker-based headless Chrome sandbox for safe browser automation
- **Streaming Support**: Real-time streaming of agent responses and tool executions
## Installation
```bash
# Recommended: install with all optional dependencies
pip install ya-agent-sdk[all]
uv add ya-agent-sdk[all]
# Or install individual extras as needed
pip install ya-agent-sdk[docker] # Docker sandbox support
pip install ya-agent-sdk[web] # Web tools (tavily, firecrawl, markitdown)
pip install ya-agent-sdk[document] # Document processing (pymupdf, markitdown)
```
## Project Structure
This repository contains:
- **ya_agent_sdk/** - Core SDK with environment abstraction, toolsets, and session management
- **yaacli/** - Reference CLI implementation with TUI for interactive agent sessions
- **examples/** - Code examples demonstrating SDK features
- **docs/** - Documentation for SDK architecture and APIs
## Quick Start
### Using the SDK
```python
from ya_agent_sdk.agents import create_agent, stream_agent
# create_agent returns AgentRuntime (not a context manager)
runtime = create_agent("openai:gpt-4o")
# stream_agent manages runtime lifecycle automatically
async with stream_agent(runtime, "Hello") as streamer:
    async for event in streamer:
        print(event)
```
### Using YAACLI CLI
For a ready-to-use terminal interface, try [yaacli](yaacli/README.md) - a TUI reference implementation built on top of ya-agent-sdk:
```bash
# Run directly with uvx (no installation needed)
uvx yaacli
# Or install globally
uv tool install yaacli
pip install yaacli
```
Features:
- Rich terminal UI with syntax highlighting and streaming output
- Built-in tool approval workflows (human-in-the-loop)
- Session management with conversation history
- Browser automation support via Docker sandbox
- MCP (Model Context Protocol) server integration
## Examples
Check out the [examples/](examples/) directory:
| Example | Description |
| ------------------------------------------- | ----------------------------------------------------------------------- |
| [general.py](examples/general.py) | Complete pattern with streaming, HITL approval, and session persistence |
| [deepresearch.py](examples/deepresearch.py) | Autonomous research agent with web search and content extraction |
| [browser_use.py](examples/browser_use.py) | Browser automation with Docker-based headless Chrome sandbox |
## For Agent Users
If you're using an AI agent (e.g., Claude, Cursor) that supports skills:
- **Clone this repo**: The [SKILL.md](SKILL.md) file in the repository root provides comprehensive guidance for agents
- **Download release package**: Get the latest `SKILL.zip` from the [Releases](https://github.com/wh1isper/ya-agent-sdk/releases) page (automatically built during each release)
## Configuration
Copy `examples/.env.example` to `examples/.env` and configure your API keys.
## Documentation
- [AgentContext & Sessions](docs/context.md) - Session state, resumable sessions, extending context
- [Streaming & Hooks](docs/streaming.md) - Real-time streaming, lifecycle hooks, event handling
- [Toolset Architecture](docs/toolset.md) - Create tools, use hooks, handle errors, extend Toolset
- [Subagent System](docs/subagent.md) - Hierarchical agents, builtin presets, markdown configuration
- [Message Bus](docs/message-bus.md) - Inter-agent communication, user steering during execution
- [Skills System](docs/skills.md) - Markdown-based skills, hot reload, pre-scan hooks
- [Custom Environments](docs/environment.md) - Environment lifecycle, resource management
- [Resumable Resources](docs/resumable-resources.md) - Export and restore resource states across restarts
- [Model Configuration](docs/model.md) - Provider setup, gateway mode
- [Logging Configuration](docs/logging.md) - Configure SDK logging levels
## Development
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.
| text/markdown | null | wh1isper <jizhongsheng957@gmail.com> | null | null | null | python | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"anyio>=4.12.0",
"cdp-use<2.0.0,>=1.4.4",
"httpx>=0.28.1",
"jinja2>=3.0.0",
"pathspec>=0.12.0",
"pillow>=10.0.0",
"pydantic-ai-slim>=1.59.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.12.5",
"pyyaml>=6.0.0",
"tenacity>=9.0.0",
"typing-extensions>=4.15.0",
"y-agent-environment==0.4.0",
"boto3>=1.34; extra == \"all\"",
"docker>=7.0.0; extra == \"all\"",
"firecrawl-py>=4.0.0; extra == \"all\"",
"markitdown[all]>=0.1.5; extra == \"all\"",
"pydantic-ai; extra == \"all\"",
"pymupdf-layout>=1.26.6; extra == \"all\"",
"pymupdf4llm>=0.2.8; extra == \"all\"",
"pymupdf>=1.26.6; extra == \"all\"",
"python-dotenv; extra == \"all\"",
"tavily-python>=0.7.0; extra == \"all\"",
"docker>=7.0.0; extra == \"docker\"",
"markitdown[all]>=0.1.5; extra == \"document\"",
"pymupdf-layout>=1.26.6; extra == \"document\"",
"pymupdf4llm>=0.2.8; extra == \"document\"",
"pymupdf>=1.26.6; extra == \"document\"",
"pydantic-ai; extra == \"examples\"",
"python-dotenv; extra == \"examples\"",
"boto3>=1.34; extra == \"s3\"",
"firecrawl-py>=4.0.0; extra == \"web\"",
"markitdown[all]>=0.1.5; extra == \"web\"",
"tavily-python>=0.7.0; extra == \"web\""
] | [] | [] | [] | [
"Repository, https://github.com/wh1isper/ya-agent-sdk"
] | uv/0.6.2 | 2026-02-20T23:44:08.052680 | ya_agent_sdk-0.19.3.tar.gz | 729,984 | f5/2d/b83d48ee7bb062e7c305e0b5aaf63e08f50595fd94f1937a7f11e8feaaf2/ya_agent_sdk-0.19.3.tar.gz | source | sdist | null | false | 39fe03ee6daeb4251f9cdb717db8cc34 | ac6b0f137ecb51c7b8732b00bac3b37d4365242cc9e90347a559adbfb166d269 | f52db83d48ee7bb062e7c305e0b5aaf63e08f50595fd94f1937a7f11e8feaaf2 | null | [
"LICENSE"
] | 260 |
2.2 | Ripple-hpc | 1.2.4 | High-performance molecular dynamics trajectory analysis (RDF, SSF, VHF, MSD/cMSD, diffusion tensors). | # Ripple
[](https://badge.fury.io/py/Ripple)
[](LICENSE)
Ripple is a high-performance Python package for scalable analysis of particle‐motion correlation functions. Exploiting multicore parallelism on HPC systems, Ripple rapidly computes correlation metrics for exceptionally large trajectory datasets—such as those produced by machine‐learning force fields. Trajectories are read through ASE, guaranteeing broad compatibility with virtually any supported file format. Computations are dispatched across multiple CPUs, and the resulting correlation data are exported as HDF5 files to a user-specified directory for downstream processing.
---
## Functions
- Mean square displacement (MSD)
- Collective mean square displacement (cMSD)
- Displacement cross correlation function
- Haven ratio
- Time averaged radial distribution function (RDF)
- Time averaged static structure factor (SSF)
- Van Hove correlation function self & distinct part (VHF_s, VHF_d)
- Intermediate scattering function self part & total (ISF_s, ISF_tot)
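For orientation, the MSD listed above is the squared particle displacement averaged over atoms and time origins. A minimal NumPy sketch of that definition (NumPy is a Ripple dependency; this is not Ripple's parallel implementation, and the `(n_frames, n_atoms, 3)` array shape is an assumption):

```python
import numpy as np

def msd(positions: np.ndarray) -> np.ndarray:
    """Time-origin-averaged MSD; positions has shape (n_frames, n_atoms, 3)."""
    n_frames = positions.shape[0]
    out = np.zeros(n_frames)
    for lag in range(1, n_frames):
        disp = positions[lag:] - positions[:-lag]  # displacements at this lag
        out[lag] = np.mean(np.sum(disp ** 2, axis=-1))
    return out

# Toy trajectory: a 3D random walk of 10 atoms over 100 frames
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(100, 10, 3)), axis=0)
curve = msd(traj)
print(curve.shape)  # one MSD value per lag time
```

For a random walk the curve grows roughly linearly in lag time, which is the diffusive regime from which diffusion coefficients are extracted.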
---
## Installation
Install via pip:
```bash
pip install git+https://github.com/Frost-group/Ripple
```
---
## Usage Example
Here’s a simple case:
```python
import ase.io
from ripple import correlation

if __name__ == '__main__':
    v = 0
    N_frame = 4000
    save_dir = 'Ripple_VHF_d/'
    target_atoms1 = 'Li'
    target_atoms2 = 'Li'
    r_max = 12
    dr = 0.02
    timestep = 0.1
    N_workers = 64
    for t in [200,250,300,350,400,450,500,550,600,650,700]:
        for r in [0,42,123,161,1234]:
            trajectory = ase.io.read(f'diffusion_rand={r}/MaceMD_{t}K_{v}vacancies_trajactory.xyz', format='extxyz', index=f':{N_frame}')
            trajectory_tag = f'{v}v_{r}randn_{t}K'
            correlation.vhf_distinct_cal(trajectory, trajectory_tag, save_dir, target_atoms1, target_atoms2, r_max, dr, timestep, N_workers)
```
After calculation, you will obtain an HDF5 file for each trajectory.
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
import matplotlib as mpl
import h5py
v=0
randn = [0, 42, 123, 161, 1234]
t = 400 #temperature = [200,250,300,350,400,450,500,550,600,650,700]
target_atom1 = 'Li'
target_atom2 = 'Li'
G_d = np.empty(len(randn), dtype=object)
for r in range(len(randn)):
    file = h5py.File(f'Ripple_VHF_d/vhfd_{target_atom1}_{target_atom2}_{v}v_{randn[r]}randn_{t}K.hdf5', 'r')
    G_d[r] = np.array(file[f'vhfd_{target_atom1}_{target_atom2}'])
    N_frame = int(np.array(file['N_frame']))
    timestep = float(np.array(file['timestep']))
    r_max = float(np.array(file['r_max']))
    file.close()
G_d = np.mean(G_d, axis=0)  # average over the different random seeds
font_path = fm.findfont(fm.FontProperties(family='Times New Roman'))
font = fm.FontProperties(fname=font_path, size=16)
fig = plt.figure("plot", figsize=(8, 6), dpi=100)
plt.subplots_adjust(top=0.95, bottom=0.105, left=0.11, right=0.98, hspace=0.05, wspace=0.05)
plt.rcParams['font.family'] = 'Times New Roman'
plt.rcParams['font.size'] = 16
fig.tight_layout(pad=0.0)
ax = fig.add_subplot(111)
im = ax.imshow(G_d.T, vmin=0.0, vmax=2.0, origin='lower', cmap='coolwarm', extent=[0, N_frame*timestep, 0, r_max], aspect='auto')
ax.set_yticks(np.linspace(0, r_max, 5))
ax.set_xticks(np.linspace(0, N_frame*timestep, 5))
ax.tick_params(axis='both', which='both', direction='inout', length=5.0, width=2.0, color='black')
ax.set_ylabel('r (Å)', fontproperties=font)
ax.set_xlabel('t (ps)', fontproperties=font)
plt.colorbar(im)
fig.show()
```
 | text/markdown | Frost research group | null | null | null | MIT License
Copyright (c) 2025 Frost research group
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.26.4",
"joblib>=1.5",
"tqdm>=4.66"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:44:03.106862 | ripple_hpc-1.2.4.tar.gz | 38,465,996 | 4d/40/afcc825e817e1eb286e191baf27b3f1f394774512328d2279f58a3eb66ad/ripple_hpc-1.2.4.tar.gz | source | sdist | null | false | bd48975feff051ca8d02b1b3900a80eb | 3596da2280ce8344b14c4a073605e81ed0bb53c475096bfae89b2beb93d423c0 | 4d40afcc825e817e1eb286e191baf27b3f1f394774512328d2279f58a3eb66ad | null | [] | 0 |
2.4 | PermutiveAPI | 5.4.5 | A Python wrapper for the Permutive API. | # PermutiveAPI
[](https://pypi.org/project/PermutiveAPI/)
[](https://pypi.org/project/PermutiveAPI/)
[](https://opensource.org/licenses/MIT)
PermutiveAPI is a Python module to interact with the Permutive API. It provides a set of classes and methods to manage users, imports, cohorts, and workspaces within the Permutive ecosystem.
## Table of Contents
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [Importing the Module](#importing-the-module)
- [Managing Workspaces](#managing-workspaces)
- [Managing Cohorts](#managing-cohorts)
- [Managing Segments](#managing-segments)
- [Managing Imports](#managing-imports)
- [Managing Users](#managing-users)
- [Evaluating Segmentation](#evaluating-segmentation)
- [Evaluating Context Segmentation](#evaluating-context-segmentation)
- [Working with pandas DataFrames](#working-with-pandas-dataframes)
- [Batch Helpers and Progress Callbacks](#batch-helpers-and-progress-callbacks)
- [Error Handling](#error-handling)
- [Development](#development)
- [Contributing](#contributing)
- [License](#license)
## Installation
You can install the PermutiveAPI module using pip:
```sh
pip install PermutiveAPI --upgrade
```
> **Note**
> PermutiveAPI depends on [`pandas`](https://pandas.pydata.org/) for its DataFrame
> export helpers. The dependency is installed automatically with the package,
> but make sure your runtime environment includes it before using the
> `to_pd_dataframe` utilities described below.
## Configuration
Before using the library, you need to configure your credentials.
1. **Copy the environment file**:
```sh
cp _env .env
```
2. **Set your credentials path**:
Edit the `.env` file and set the `PERMUTIVE_APPLICATION_CREDENTIALS` environment variable to the absolute path of your workspace JSON file.
```sh
PERMUTIVE_APPLICATION_CREDENTIALS="/absolute/path/to/your/workspace.json"
```
The workspace credentials JSON can be downloaded from the Permutive dashboard under **Settings → API keys**. Save the file somewhere secure. The `apiKey` inside this JSON is used to authenticate API calls.
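Reading the credentials file needs nothing beyond the standard library. A hedged sketch (the workspace JSON below is fabricated for illustration; in practice `PERMUTIVE_APPLICATION_CREDENTIALS` points at the file you downloaded from the dashboard):

```python
import json
import os
import tempfile

# Fabricate a workspace JSON for this sketch only; normally you download it
# from the Permutive dashboard under Settings -> API keys.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump({"apiKey": "demo-api-key", "workspace_id": "demo-workspace"}, fh)
    demo_path = fh.name

# Normally set via the .env file described above.
os.environ["PERMUTIVE_APPLICATION_CREDENTIALS"] = demo_path

with open(os.environ["PERMUTIVE_APPLICATION_CREDENTIALS"]) as fh:
    workspace_config = json.load(fh)

api_key = workspace_config["apiKey"]  # used to authenticate API calls
print(api_key)
```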
## Usage
### Importing the Module
To use the PermutiveAPI module, import the necessary classes. The main classes are exposed at the top level of the `PermutiveAPI` package:
```python
from PermutiveAPI import (
    Alias,
    Cohort,
    Identity,
    Import,
    Segment,
    Source,
    Workspace,
    ContextSegment,
)
```
### Managing Workspaces
The `Workspace` class is the main entry point for interacting with your Permutive workspace.
```python
# Create a workspace instance
workspace = Workspace(
    name="Main",
    organisation_id="your-org-id",
    workspace_id="your-workspace-id",
    api_key="your-api-key",
)
# List cohorts in the workspace (including child workspaces)
all_cohorts = workspace.cohorts()
print(f"Found {len(all_cohorts)} cohorts.")
# List imports in the workspace
all_imports = workspace.imports()
print(f"Found {len(all_imports)} imports.")
# List segments for a specific import
segments_in_import = workspace.segments(import_id="your-import-id")
print(f"Found {len(segments_in_import)} segments.")
```
### Managing Cohorts
You can create, retrieve, and list cohorts using the `Cohort` class.
```python
# List all cohorts
all_cohorts = Cohort.list(api_key="your_api_key")
print(f"Found {len(all_cohorts)} cohorts.")
# Get a specific cohort by ID
cohort_id = "your-cohort-id"
cohort = Cohort.get_by_id(id=cohort_id, api_key="your_api_key")
print(f"Retrieved cohort: {cohort.name}")
# Create a new cohort
new_cohort = Cohort(
    name="High-Value Customers",
    query={"type": "segment", "id": "segment-id-for-high-value-customers"}
)
new_cohort.create(api_key="your_api_key")
print(f"Created cohort with ID: {new_cohort.id}")
```
### Managing Segments
The `Segment` class allows you to interact with audience segments.
```python
# List all segments for a given import
import_id = "your-import-id"
segments = Segment.list(api_key="your_api_key", import_id=import_id)
print(f"Found {len(segments)} segments in import {import_id}.")
# Get a specific segment by ID
segment_id = "your-segment-id"
segment = Segment.get_by_id(import_id=import_id, segment_id=segment_id, api_key="your_api_key")
print(f"Retrieved segment: {segment.name}")
```
### Managing Imports
You can list and retrieve imports using the `Import` class.
```python
# List all imports
all_imports = Import.list(api_key="your_api_key")
for imp in all_imports:
    print(f"Import ID: {imp.id}, Code: {imp.code}, Source Type: {imp.source.type}")
# Get a specific import by ID
import_id = "your-import-id"
import_instance = Import.get_by_id(id=import_id, api_key="your_api_key")
print(f"Retrieved import: {import_instance.id}, Source Type: {import_instance.source.type}")
```
### Managing Users
The `Identity` and `Alias` classes are used to manage user profiles.
```python
# Create an alias for a user
alias = Alias(id="user@example.com", tag="email", priority=1)
# Create an identity for the user
identity = Identity(user_id="internal-user-id-123", aliases=[alias])
# Send the identity information to Permutive
try:
    identity.identify(api_key="your-api-key")
    print("Successfully identified user.")
except Exception as e:
    print(f"Error identifying user: {e}")
```
### Evaluating Segmentation
The segmentation helpers expose the low-level CCS segmentation endpoint so you
can evaluate arbitrary event streams against your configured audiences. Start by
describing each event with the `Event` dataclass and then submit the request with
the `Segmentation` helper.
```python
from PermutiveAPI import Event, Segmentation
event = Event(
name="SlotViewable",
time="2025-07-01T15:39:11.594Z",
properties={"campaign_id": "3747123491"},
)
request = Segmentation(user_id="user-123", events=[event])
# Submit the request to retrieve segment membership
response = request.send(api_key="your-api-key")
print(response["segments"]) # [{"id": "segment-id", "name": "Segment Name"}, ...]
```
The segmentation endpoint accepts two optional query parameters that you can
control directly from the helper:
| Parameter | Default | What it does |
|-----------|---------|--------------|
| `activations` | `False` | Include any activated cohorts in the response payload. |
| `synchronous-validation` | `False` | Validate events against their schemas before segmentation, which is useful for debugging but adds latency. |
Set them when constructing the request or override them per call:
```python
# Opt in for activations and synchronous validation on every request
request = Segmentation(
    user_id="user-123",
    events=[event],
    activations=True,
    synchronous_validation=True,
)
# Or override when sending if you only need them occasionally
response = request.send(
    api_key="your-api-key",
    activations=True,
    synchronous_validation=True,
)
```
`Event.session_id` and `Event.view_id` are optional—include them only when you
need to tie events together across sessions or page views. When present, they
are forwarded as part of the event payload.
For high-volume workloads, use `Segmentation.batch_send` to process multiple
requests concurrently. The helper integrates with the shared batch runner
described in the next section so you can surface throughput metrics via
`progress_callback` while respecting rate limits.
### Evaluating Context Segmentation
Use the `ContextSegment` helper to call the Context API endpoint
(`https://api.permutive.com/ctx/v1/segment`) with a page URL and page
properties payload.
```python
from PermutiveAPI import ContextSegment
request = ContextSegment(
    url="https://example.com/article/sports-news",
    page_properties={
        "client": {
            "url": "https://example.com/article/sports-news",
            "domain": "example.com",
            "referrer": "https://example.com",
            "type": "web",
            "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
            "title": "Latest Sports News",
        },
        "category": "sports",
        "tags": ["football", "premier-league"],
    },
)
response = request.send(api_key="your-api-key")
print(response["segments"])
```
### Working with pandas DataFrames
The list models expose helpers for quick DataFrame exports when you need to
analyze your data using pandas. Each list class provides a `to_pd_dataframe`
method that returns a `pandas.DataFrame` populated with the model attributes:
```python
from PermutiveAPI import Cohort, CohortList
cohorts = CohortList(
    [
        Cohort(name="C1", id="1", code="c1", tags=["t1"]),
        Cohort(name="C2", id="2", description="second cohort"),
    ]
)
df = cohorts.to_pd_dataframe()
print(df[["id", "name"]])
```
The same helper is available on `SegmentList` and `ImportList` for consistency
across the API.
### Batch Helpers and Progress Callbacks
High-volume workflows often rely on the ``batch_*`` helpers to run requests
concurrently. Every helper accepts an optional ``progress_callback`` that is
invoked after each request completes with a
:class:`~PermutiveAPI._Utils.http.Progress` snapshot describing aggregate
throughput. The dataclass includes counters for completed requests, failure
totals, elapsed time, and the estimated seconds required to process 1,000
requests, making it straightforward to surface both reliability and latency
trends in dashboards or logs. Most workloads achieve a good balance between
throughput and API friendliness with ``max_workers=4``. Increase the pool size
gradually (for example to 6 or 8 workers) only after observing stable latency
and error rates because the Permutive API enforces rate limits.
```python
from PermutiveAPI import Cohort
from PermutiveAPI._Utils.http import Progress
def on_progress(progress: Progress) -> None:
    """Render a concise progress snapshot."""
    avg = progress.average_per_thousand_seconds
    avg_display = f"{avg:.2f}s" if avg is not None else "n/a"
    print(
        f"{progress.completed}/{progress.total} completed; "
        f"errors: {progress.errors}, avg/1k: {avg_display}"
    )
cohorts = [
    Cohort(name="VIP Customers", query={"type": "users"}),
    Cohort(name="Returning Visitors", query={"type": "visitors"}),
]
responses, failures = Cohort.batch_create(
    cohorts,
    api_key="your-api-key",
    max_workers=4,  # recommended starting point for concurrent writes
    progress_callback=on_progress,
)
if failures:
    print(f"Encountered {len(failures)} failures.")
```
The same callback shape is shared across helpers such as
``Identity.batch_identify`` and ``Segment.batch_create``, enabling reuse of
progress reporting utilities that surface throughput, error counts, and
latency projections. The helpers delegate to
:func:`PermutiveAPI._Utils.http.process_batch`, so they automatically benefit
from the shared retry/backoff configuration used by the underlying request
helpers. When the API responds with ``HTTP 429`` (rate limiting), the helper
retries using the exponential backoff already built into the package before
surfacing the error in the ``failures`` list. The segmentation helper,
``Segmentation.batch_send``, also consumes the same callback so you can track
progress consistently across segmentation workloads.
#### Configuring batch defaults
Two environment variables allow you to tune the default behaviour without
touching application code:
- ``PERMUTIVE_BATCH_MAX_WORKERS`` controls the worker pool size used by the
shared batch runner when ``max_workers`` is omitted. Provide a positive
integer to cap concurrency or leave it unset to use Python's default
heuristic.
- ``PERMUTIVE_BATCH_TIMEOUT_SECONDS`` controls the default timeout applied to
each ``PermutiveAPI._Utils.http.BatchRequest``. Set it to a positive
float (in seconds) to align the HTTP timeout with your infrastructure's
expectations.
Invalid values raise ``ValueError`` during initialisation to surface mistakes
early in the development cycle.
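For example, these defaults could be set in the shell before launching your process (the values are illustrative, not recommendations):

```sh
export PERMUTIVE_BATCH_MAX_WORKERS=4       # worker pool size when max_workers is omitted
export PERMUTIVE_BATCH_TIMEOUT_SECONDS=30  # default per-request HTTP timeout in seconds
```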
#### Configuring retry defaults
Transient failure handling can also be adjusted through environment variables.
When unset, the package uses the standard ``RetryConfig`` defaults.
- ``PERMUTIVE_RETRY_MAX_RETRIES`` sets the number of attempts performed by the
HTTP helpers before surfacing an error. Provide a positive integer.
- ``PERMUTIVE_RETRY_BACKOFF_FACTOR`` controls the exponential multiplier applied
after each failed attempt. Provide a positive number (floats are accepted).
- ``PERMUTIVE_RETRY_INITIAL_DELAY_SECONDS`` specifies the starting delay in
seconds before retrying. Provide a positive number.
Supplying invalid values for any of these variables raises ``ValueError`` when
the retry configuration is evaluated, helping catch misconfiguration early.
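As with the batch defaults, the retry settings can be exported before starting your application (the values are illustrative, not recommendations):

```sh
export PERMUTIVE_RETRY_MAX_RETRIES=5              # attempts before surfacing an error
export PERMUTIVE_RETRY_BACKOFF_FACTOR=2           # exponential multiplier between attempts
export PERMUTIVE_RETRY_INITIAL_DELAY_SECONDS=0.5  # starting delay before the first retry
```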
Segmentation workflows follow the same pattern. For example, you can create
multiple segments for a given import in one request batch while reporting
progress back to an observability system:
```python
from PermutiveAPI import Segment
segments = [
    Segment(
        import_id="import-123",
        name="Frequent Flyers",
        query={"type": "users", "filter": {"country": "US"}},
    ),
    Segment(
        import_id="import-123",
        name="Dormant Subscribers",
        query={"type": "users", "filter": {"status": "inactive"}},
    ),
]
segment_responses, segment_failures = Segment.batch_create(
    segments,
    api_key="your-api-key",
    max_workers=4,
    progress_callback=on_progress,
)
if segment_failures:
    print(f"Encountered {len(segment_failures)} failures during segment creation.")
```
You can also evaluate multiple users in parallel while reporting progress back
to an observability system:
```python
from PermutiveAPI import Event, Segmentation
events = [
    Event(
        name="SlotViewable",
        time="2025-07-01T15:39:11.594Z",
        properties={"campaign_id": "3747123491"},
        session_id="f19199e4-1654-4869-b740-703fd5bafb6f",
        view_id="d30ccfc5-c621-4ac4-a282-9a30ac864c8a",
    )
]
requests = [
    Segmentation(user_id="user-1", events=events),
    Segmentation(user_id="user-2", events=events),
]
segmentation_responses, segmentation_failures = Segmentation.batch_send(
    requests,
    api_key="your-api-key",
    max_workers=4,
    progress_callback=on_progress,
)
if segmentation_failures:
    print(f"Encountered {len(segmentation_failures)} failures during segmentation.")
```
### Error Handling
The package raises purpose-specific exceptions that are also available at the
top level of the package for convenience:
```python
from PermutiveAPI import (
    PermutiveAPIError,
    PermutiveAuthenticationError,
    PermutiveBadRequestError,
    PermutiveRateLimitError,
    PermutiveResourceNotFoundError,
    PermutiveServerError,
)

try:
    # make an API call via the high-level classes
    Cohort.list(api_key="your_api_key")
except PermutiveBadRequestError as e:
    # e.status, e.url, and e.response are available for debugging
    print(e.status, e.url, e)
except PermutiveAPIError as e:
    print("Unhandled API error:", e)
```
## Development
To set up a development environment, install the development dependencies:
```sh
pip install ".[dev]"
```
### Running Tests
Before committing any changes, please run the following checks to ensure code quality and correctness.
**Style Checks:**
```bash
pydocstyle PermutiveAPI
black --check .
```
**Static Type Analysis:**
```bash
pyright PermutiveAPI
```
**Unit Tests and Coverage:**
```bash
pytest -q --cov=PermutiveAPI --cov-report=term-missing --cov-fail-under=70
```
All checks must pass before a pull request can be merged.
## Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and pull request guidelines.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
| text/markdown | null | fatmambo33 <fatmambo33@gmail.com> | null | null | MIT License
Copyright (c) 2023 fatmambo33
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests",
"python-dotenv",
"pandas",
"pydocstyle; extra == \"dev\"",
"pytest-pydocstyle; extra == \"dev\"",
"pyright; extra == \"dev\"",
"black; extra == \"dev\"",
"pytest-cov; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T23:43:36.018108 | permutiveapi-5.4.5.tar.gz | 53,697 | be/bf/b77748cbf0c6712f9669e522fde9f39d377c1b38d71e13a4818db3ec894d/permutiveapi-5.4.5.tar.gz | source | sdist | null | false | b7f06071b1f285c8c7468f18564cf273 | c7db72cdf247cfb4ad19978c3346f6843ddbc09f0f98a9c7bd74d53e36a702a5 | bebfb77748cbf0c6712f9669e522fde9f39d377c1b38d71e13a4818db3ec894d | null | [
"LICENSE"
] | 0 |
2.4 | scrapli | 2026.2.20 | Fast, flexible, sync/async, Python 3.7+ screen scraping client specifically for network devices | <p center><a href=""><img src=https://github.com/carlmontanari/scrapli/blob/main/scrapli.svg?sanitize=true/></a></p>
[](https://pypi.org/project/scrapli)
[](https://badge.fury.io/py/scrapli)
[](https://github.com/carlmontanari/scrapli/actions?query=workflow%3A%22Weekly+Build%22)
[](https://codecov.io/gh/carlmontanari/scrapli)
[](https://github.com/ambv/black)
[](https://opensource.org/licenses/MIT)
scrapli
=======
---
**Documentation**: <a href="https://carlmontanari.github.io/scrapli" target="_blank">https://carlmontanari.github.io/scrapli</a>
**Source Code**: <a href="https://github.com/carlmontanari/scrapli" target="_blank">https://github.com/carlmontanari/scrapli</a>
**Examples**: <a href="https://github.com/carlmontanari/scrapli/tree/master/examples" target="_blank">https://github.com/carlmontanari/scrapli/tree/master/examples</a>
---
scrapli -- scrap(e c)li -- is a Python 3.9+ library focused on connecting to devices, specifically network devices
(routers/switches/firewalls/etc.) via Telnet or SSH.
#### Key Features:
- __Easy__: It's easy to get going with scrapli -- check out the documentation and example links above, and you'll be
connecting to devices in no time.
- __Fast__: Do you like to go fast? Of course you do! All of scrapli is built with speed in mind, but if you really
feel the need for speed, check out the `ssh2` transport plugin to take it to the next level!
- __Great Developer Experience__: scrapli has great editor support thanks to being fully typed; that plus thorough
docs make developing with scrapli a breeze.
- __Well Tested__: Perhaps out of paranoia, but regardless of the reason, scrapli has lots of tests! Unit tests
cover the basics, and regularly run functional tests connect to virtual routers to ensure that everything works IRL!
- __Pluggable__: scrapli provides a pluggable transport system -- don't like the currently available transports,
simply extend the base classes and add your own! Need additional device support? Create a simple "platform" in
[scrapli_community](https://github.com/scrapli/scrapli_community) to easily add new device support!
- __But wait, there's more!__: Have NETCONF devices in your environment, but love the speed and simplicity of
scrapli? You're in luck! Check out [scrapli_netconf](https://github.com/scrapli/scrapli_netconf)!
- __Concurrency on Easy Mode__: [Nornir's](https://github.com/nornir-automation/nornir)
[scrapli plugin](https://github.com/scrapli/nornir_scrapli) gives you all the normal benefits of scrapli __plus__
all the great features of Nornir.
- __Sounds great, but I am a Gopher__: For our Go loving friends out there, check out
[scrapligo](https://github.com/scrapli/scrapligo) for a similar experience, but in Go!
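To make the pluggable-transport idea concrete, here is a hypothetical sketch — not scrapli's actual base class or API — of how a transport system built on a small abstract base can be extended with a custom implementation:

```python
from abc import ABC, abstractmethod


class BaseTransport(ABC):
    """Toy stand-in for a pluggable transport base class."""

    def __init__(self, host: str, port: int = 22):
        self.host = host
        self.port = port

    @abstractmethod
    def open(self) -> None:
        """Establish the channel to the device."""

    @abstractmethod
    def close(self) -> None:
        """Tear the channel down."""


class RecordingTransport(BaseTransport):
    """A custom transport that just records its lifecycle events."""

    def __init__(self, host: str, port: int = 22):
        super().__init__(host, port)
        self.events: list[str] = []

    def open(self) -> None:
        self.events.append(f"open {self.host}:{self.port}")

    def close(self) -> None:
        self.events.append("close")


t = RecordingTransport("172.18.0.11")
t.open()
t.close()
print(t.events)  # ['open 172.18.0.11:22', 'close']
```

Because callers only depend on the base-class interface, swapping in a different transport (system SSH, `ssh2`, telnet, or your own) doesn't change the calling code.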
## Requirements
MacOS or \*nix<sup>1</sup>, Python 3.9+
scrapli "core" has no requirements other than the Python standard library<sup>2</sup>.
<sup>1</sup> Although many parts of scrapli *do* run on Windows, Windows is not officially supported
<sup>2</sup> While Python 3.6 has been dropped, it *probably* still works, but requires the `dataclass`
backport as well as the third-party `async_generator` library; Python 3.7+ has no external dependencies for scrapli "core"
## Installation
```
pip install scrapli
```
See the [docs](https://carlmontanari.github.io/scrapli/user_guide/installation) for other installation methods/details.
## A Simple Example
```python
from scrapli import Scrapli
device = {
"host": "172.18.0.11",
"auth_username": "scrapli",
"auth_password": "scrapli",
"auth_strict_key": False,
"platform": "cisco_iosxe"
}
conn = Scrapli(**device)
conn.open()
print(conn.get_prompt())
```
<small>* Bunny artwork by Caroline Montanari, inspired by [@egonelbre](https://github.com/egonelbre/gophers).
The bunny/rabbit is a nod to/inspired by the white rabbit in `Monty Python and the Holy Grail`, because there
are enough snake logos already!</small>
| text/markdown | null | Carl Montanari <carl.r.montanari@gmail.com> | null | null | MIT License
Copyright (c) 2021 Carl Montanari
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| arista, automation, cisco, eos, iosxe, iosxr, juniper, junos, netconf, network, nxos, ssh, telnet | [
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"black<25.0.0,>=23.3.0; extra == \"dev-darwin\"",
"darglint<2.0.0,>=1.8.1; extra == \"dev-darwin\"",
"isort<6.0.0,>=5.10.1; extra == \"dev-darwin\"",
"mypy<2.0.0,>=1.4.1; extra == \"dev-darwin\"",
"nox==2024.4.15; extra == \"dev-darwin\"",
"pydocstyle<7.0.0,>=6.1.1; extra == \"dev-darwin\"",
"pyfakefs<6.0.0,>=5.4.1; extra == \"dev-darwin\"",
"pylint<4.0.0,>=3.0.0; extra == \"dev-darwin\"",
"pytest-asyncio<1.0.0,>=0.17.0; extra == \"dev-darwin\"",
"pytest-cov<5.0.0,>=3.0.0; extra == \"dev-darwin\"",
"pytest<8.0.0,>=7.0.0; extra == \"dev-darwin\"",
"scrapli-cfg==2023.7.30; extra == \"dev-darwin\"",
"scrapli-replay==2023.7.30; extra == \"dev-darwin\"",
"toml<1.0.0,>=0.10.2; extra == \"dev-darwin\"",
"types-paramiko<4.0.0,>=2.8.6; extra == \"dev-darwin\"",
"ntc-templates<8.0.0,>=1.1.0; extra == \"dev-darwin\"",
"textfsm<2.0.0,>=1.1.0; extra == \"dev-darwin\"",
"genie>=20.2; (sys_platform != \"win32\" and python_version < \"3.13\") and extra == \"dev-darwin\"",
"pyats>=20.2; (sys_platform != \"win32\" and python_version < \"3.13\") and extra == \"dev-darwin\"",
"ttp<1.0.0,>=0.5.0; extra == \"dev-darwin\"",
"paramiko<4.0.0,>=2.6.0; extra == \"dev-darwin\"",
"asyncssh<3.0.0,>=2.2.1; extra == \"dev-darwin\"",
"scrapli_community>=2021.01.30; extra == \"dev-darwin\"",
"black<25.0.0,>=23.3.0; extra == \"dev\"",
"darglint<2.0.0,>=1.8.1; extra == \"dev\"",
"isort<6.0.0,>=5.10.1; extra == \"dev\"",
"mypy<2.0.0,>=1.4.1; extra == \"dev\"",
"nox==2024.4.15; extra == \"dev\"",
"pydocstyle<7.0.0,>=6.1.1; extra == \"dev\"",
"pyfakefs<6.0.0,>=5.4.1; extra == \"dev\"",
"pylint<4.0.0,>=3.0.0; extra == \"dev\"",
"pytest-asyncio<1.0.0,>=0.17.0; extra == \"dev\"",
"pytest-cov<5.0.0,>=3.0.0; extra == \"dev\"",
"pytest<8.0.0,>=7.0.0; extra == \"dev\"",
"scrapli-cfg==2023.7.30; extra == \"dev\"",
"scrapli-replay==2023.7.30; extra == \"dev\"",
"toml<1.0.0,>=0.10.2; extra == \"dev\"",
"types-paramiko<4.0.0,>=2.8.6; extra == \"dev\"",
"ntc-templates<8.0.0,>=1.1.0; extra == \"dev\"",
"textfsm<2.0.0,>=1.1.0; extra == \"dev\"",
"genie>=20.2; (sys_platform != \"win32\" and python_version < \"3.13\") and extra == \"dev\"",
"pyats>=20.2; (sys_platform != \"win32\" and python_version < \"3.13\") and extra == \"dev\"",
"ttp<1.0.0,>=0.5.0; extra == \"dev\"",
"paramiko<4.0.0,>=2.6.0; extra == \"dev\"",
"ssh2-python<2.0.0,>=0.23.0; python_version < \"3.12\" and extra == \"dev\"",
"asyncssh<3.0.0,>=2.2.1; extra == \"dev\"",
"scrapli_community>=2021.01.30; extra == \"dev\"",
"mdx-gh-links<1.0,>=0.2; extra == \"docs\"",
"mkdocs<2.0.0,>=1.2.3; extra == \"docs\"",
"mkdocs-gen-files<1.0.0,>=0.4.0; extra == \"docs\"",
"mkdocs-literate-nav<1.0.0,>=0.5.0; extra == \"docs\"",
"mkdocs-material<10.0.0,>=8.1.6; extra == \"docs\"",
"mkdocs-material-extensions<2.0.0,>=1.0.3; extra == \"docs\"",
"mkdocs-section-index<1.0.0,>=0.3.4; extra == \"docs\"",
"mkdocstrings[python]<1.0.0,>=0.19.0; extra == \"docs\"",
"ntc-templates<8.0.0,>=1.1.0; extra == \"textfsm\"",
"textfsm<2.0.0,>=1.1.0; extra == \"textfsm\"",
"genie>=20.2; (sys_platform != \"win32\" and python_version < \"3.13\") and extra == \"genie\"",
"pyats>=20.2; (sys_platform != \"win32\" and python_version < \"3.13\") and extra == \"genie\"",
"ttp<1.0.0,>=0.5.0; extra == \"ttp\"",
"paramiko<4.0.0,>=2.6.0; extra == \"paramiko\"",
"ssh2-python<2.0.0,>=0.23.0; python_version < \"3.12\" and extra == \"ssh2\"",
"asyncssh<3.0.0,>=2.2.1; extra == \"asyncssh\"",
"scrapli_community>=2021.01.30; extra == \"community\""
] | [] | [] | [] | [
"Changelog, https://carlmontanari.github.io/scrapli/changelog",
"Docs, https://carlmontanari.github.io/scrapli/",
"Homepage, https://github.com/carlmontanari/scrapli"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T23:42:57.414325 | scrapli-2026.2.20.tar.gz | 104,525 | cc/20/5a16997013f9aca4100e41493a93af774b1e73ef2c0fafc312a250b7425a/scrapli-2026.2.20.tar.gz | source | sdist | null | false | 4a90751bdb1f417194505b600caae0b2 | b6f8492110601c11ca8394d8540814464bb97681b87e92b30bb6853b5213558a | cc205a16997013f9aca4100e41493a93af774b1e73ef2c0fafc312a250b7425a | null | [
"LICENSE"
] | 4,001 |
2.4 | lumina-tools | 1.0.0 | Lithophane & Spiral Betty Generator | # 🌟 Lumina
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[]()
**Lumina** is a powerful Python toolkit for makers, artists, and 3D printing enthusiasts. Transform your images into stunning physical art pieces.
## ✨ What Can You Create?
| Feature | Description | Output |
|---------|-------------|--------|
| 🖼️ **Lithophanes** | 3D-printable light art | `.stl` mesh |
| 🌀 **Spiral Betty** | Spiral art for laser/CNC engraving | `.png` image |
<p align="center">
<img src="assets/images/readme/slicer.png" alt="Classic Litho Features Presentation" width="800">
</p>
## 🚀 Quick Start
### Installation
```bash
pip install lumina-tools
```
Or from source:
```bash
git clone https://github.com/AtaCanYmc/lumina.git
cd lumina
pip install -e .
```
### Create a Lithophane
```python
from lumina import flat_lithophane
mesh = flat_lithophane(
"photo.jpg",
shape="heart", # rect, circle, heart
width_mm=100,
max_thickness=3.0
)
mesh.save("lithophane.stl")
```
<p align="center">
<img src="assets/images/readme/classic.png" alt="Classic Litho Features Presentation" width="800">
</p>
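Under the hood, a lithophane maps pixel brightness to material thickness: darker pixels get more material so they block more light when backlit. A minimal sketch of that mapping — illustrative only, using the `min_thickness`/`max_thickness` parameters shown above, not lumina's actual implementation:

```python
def pixel_thickness(gray: int, min_thickness: float = 0.5,
                    max_thickness: float = 3.0) -> float:
    """Map an 8-bit grayscale value (0=black, 255=white) to a
    thickness in mm; darker pixels come out thicker."""
    darkness = 1.0 - gray / 255.0
    return min_thickness + darkness * (max_thickness - min_thickness)


print(pixel_thickness(0))    # black pixel  -> 3.0 mm (thickest)
print(pixel_thickness(255))  # white pixel  -> 0.5 mm (thinnest)
```

Applying this per-pixel over a resized grayscale image yields the height field that becomes the STL mesh.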
### Create Spiral Art
```python
from lumina import generate_spiral_betty_png
import cv2
spiral = generate_spiral_betty_png("portrait.jpg", radius_mm=50)
cv2.imwrite("spiral_art.png", spiral)
```
### Spiral then Lithophane (chain example)
You can chain the spiral generator and the lithophane generator: first produce a Spiral Betty PNG, then feed that PNG to `flat_lithophane` to create an STL.
```python
from lumina import generate_spiral_betty_png, flat_lithophane
import cv2
# 1) Generate a spiral PNG from an input image
input_photo = "portrait.jpg" # your source image
spiral_png = "portrait_spiral.png"
spiral_img = generate_spiral_betty_png(image_path=input_photo, radius_mm=50)
cv2.imwrite(spiral_png, spiral_img)
# 2) Create a lithophane from the generated spiral PNG
mesh = flat_lithophane(
image_path=spiral_png,
shape="rect",
width_mm=100,
max_thickness=3.0,
min_thickness=0.5,
)
mesh.save("portrait_spiral_lithophane.stl")
```
<p align="center">
<img src="assets/images/readme/spiral.png" alt="Spiral Betty Features Presentation" width="800">
</p>
## 💻 CLI Usage
```bash
# Lithophane
python -m lumina.cli flat photo.jpg --shape circle --width 120
# Spiral art
python -m lumina.cli spiral portrait.jpg --radius 100 --lines 40
```
## 🎨 Features
- **Multiple Shapes**: Rectangle, Circle, Heart
- **Smart Framing**: Auto-generated shape-conforming frames
- **True Mesh Cutting**: Clean edges without artifacts
- **Flexible Output**: STL meshes, PNG images
- **CLI & Python API**: Use from terminal or integrate into your projects
## 🧪 Continuous Integration (CI) & Local Checks
This repository includes GitHub Actions workflows and `pre-commit` hooks to keep code quality high and releases reproducible. Below is the current, recommended workflow for local development and what CI enforces.
What CI does now
- Installs runtime dependencies and the package itself (editable) so tests can import `lumina`:
- `pip install -r requirements.txt` then `pip install -e .`
- Runs style checks in *check-only* mode (so CI does not mutate files):
- `isort --check-only .`
- `black --check .`
- `ruff check .`
- Runs `pytest` with coverage and uploads the generated `coverage.xml` artifact. The artifact name is generated per matrix job/run (to avoid 409 conflicts) and CI uploads to Codecov if configured.
Why this matters
- CI no longer runs `pre-commit run --all-files` in a way that modifies files; instead it enforces that the repository is already formatted. Developers must run formatting locally and commit the results to avoid CI failures.
- The workflow installs the package under test so tests won't fail with `ModuleNotFoundError: No module named 'lumina'`.
Quick local setup (recommended)
1. Create and activate a virtualenv:
```bash
python -m venv .venv
source .venv/bin/activate
```
2. Install dependencies and the package in editable mode:
```bash
pip install --upgrade pip
pip install -r requirements.txt
pip install -e .
```
3. Install and use pre-commit hooks (one-time):
```bash
pip install pre-commit
pre-commit install
# Apply hooks & auto-fixes locally
pre-commit run --all-files
# Stage and commit the changes made by hooks
git add -A
git commit -m "Apply pre-commit formatting"
```
4. Run tests & coverage locally:
```bash
pip install pytest pytest-cov
pytest -q --cov=lumina --cov-report=xml:coverage.xml
```
Publishing to PyPI
- To publish, create a git tag (`git tag vX.Y.Z && git push --tags`) and ensure repository secret `PYPI_API_TOKEN` is set (token created on PyPI).
- The `publish.yml` workflow builds sdist and wheel and publishes to PyPI.
Secrets for CI
- `PYPI_API_TOKEN`: required for automatic PyPI publishing.
- `CODECOV_TOKEN` (optional): set this if you want Codecov upload to use a token (for private repos); for public repos Codecov may work without it.
Notes & recommendations
- Run `pre-commit run --all-files` locally before pushing; CI will reject pushes that aren’t formatted.
- If you prefer CI to be less strict about formatting, remove `--fix` from the `ruff` pre-commit hook or make lint checks non-blocking in CI.
- Artifact uploads in CI are named uniquely per matrix job and set to `overwrite: true` to avoid `409 Conflict` when re-running or retrying jobs.
## 📖 Documentation
- [CLI Reference](docs/CLI.md)
- [Python API](docs/API.md)
- [Contributing](docs/CONTRIBUTING.md)
## 🤝 Contributing
We welcome contributions! See [CONTRIBUTING.md](docs/CONTRIBUTING.md) for guidelines.
## 📄 License
MIT License - see [LICENSE](LICENSE) for details.
---
Made with ❤️ for the maker community
| text/markdown | null | Ata Can Yaymacı <atacanymc@gmail.com> | null | null | null | lithophane, 3d-printing, spiral-betty, stl, maker, gift | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Topic :: Multimedia :: Graphics :: 3D Modeling"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.20.0",
"opencv-python>=4.5.0",
"numpy-stl>=2.16.0",
"click>=8.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/AtaCanYmc/lumina",
"Bug Tracker, https://github.com/AtaCanYmc/lumina/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T23:42:41.829582 | lumina_tools-1.0.0.tar.gz | 20,189 | 16/7f/7c08b9d8064b41a6807bdf3dfd8cd2b8d352871139233314d98f860ccb05/lumina_tools-1.0.0.tar.gz | source | sdist | null | false | f86f115151182f0f7b5f259dbb278094 | 08c1a72dfcc59dbfe4ab23dd3fa511160ab1a30cc4c6b92fd1b2485eceb908d8 | 167f7c08b9d8064b41a6807bdf3dfd8cd2b8d352871139233314d98f860ccb05 | MIT | [
"LICENSE"
] | 247 |
2.4 | pyDiffTools | 0.1.17 | Diff tools | pydifftools
===========
:Info: See <https://github.com/jmfranck/pyDiffTools>
:Author: J. M. Franck <https://github.com/jmfranck>
.. _vim: http://www.vim.org
this is a set of tools to help with merging, mostly for use with vim_.
The scripts are accessed with the command ``pydifft``
included are (listed in order of fun/utility):
- `pydifft cpb <filename.md>` ("continuous pandoc build")
This continuously monitors
`filename.md`, builds the result,
and displays it in your browser.
This works *very well* together
with the `g/` vim command
(supplied by our standard vimrc
gist) to search for phrases (for
example `g/ n sp me` to find "new
spectroscopic methodology" -- this
works *much better* than you
would expect)
For this to work, you need to
**install selenium with** `pip
install selenium`, *not conda*.
Then go to `the selenium page <https://pypi.org/project/selenium/>`_
and download the chrome driver.
Note that from there, it can be hard to find the
chrome driver -- as of this update,
the drivers are `here <https://googlechromelabs.github.io/chrome-for-testing/#stable>`_,
but it seems like google is moving them around.
You also need to install `pandoc <https://pandoc.org/installing.html>`_
as well as `pandoc-crossref <https://github.com/lierdakil/pandoc-crossref>`_
(currently tested on windows with *version 3.5* of the former,
*not the latest installer*,
since crossref isn't built with the most recent version).
- `pydifft wgrph <graph.yaml>` watches a YAML flowchart description,
rebuilds the DOT/SVG output using GraphViz, and keeps a browser window
refreshed as you edit the file. This wraps the former
``flowchart/watch_graph.py`` script so all of its functionality is now
available through the main ``pydifft`` entry point.
- `pydifft tex2qmd file.tex` converts LaTeX sources to Quarto markdown.
The converter preserves custom observation blocks and errata tags while
translating verbatim/python environments into fenced code blocks so the
result is ready for the Pandoc-based builder.
- `pydifft qmdb [--watch] [--no-browser] [--webtex]` runs the relocated
``fast_build.py`` logic from inside the package. Without ``--watch`` it
performs a single build of the configured `_quarto.yml` targets into the
``_build``/``_display`` directories; with ``--watch`` it starts the HTTP
server and automatically rebuilds the staged fragments whenever you edit
a ``.qmd`` file.
- `pydifft qmdinit [directory]` scaffolds a new Quarto-style project using
the bundled templates and example ``project1`` hierarchy, then downloads
MathJax into ``_template/mathjax`` so the builder can run immediately.
This is analogous to ``git init`` for markdown notebooks.
- `pydifft wr <filename.tex|md>` (wrap)
This provides a standardized (and
short) line
wrapping, ideal for when you are
working on manuscripts that you
are version tracking with git.
- `pydifft wmatch` ("whitespace match"): a script that matches whitespace between two text files.
* pandoc can convert between markdown/latex/word, but doing this messes with your whitespace and gvimdiff comparisons.
* this allows you to use an original file with good whitespace formatting as a "template" onto which you can match another file (e.g. a pandoc-converted file)
- `pydifft wd` ("word diff"): generate "track changes" word files starting from pandoc markdown in a git history. Assuming that you have copied diff-doc.js (copied + licensed from elsewhere) into your home directory, this will use pandoc to convert the markdown files to MS Word, then use the MS Word comparison tool to generate a document where all relevant changes are shown with "track changes."
* by default, this uses the file `template.docx` in the current directory as a pandoc word template
- `pydifft sc` ("split conflicts"): a very basic merge tool that takes a conflicted file and generates a .merge_head and .merge_new file
* you can use these files directly in a standard gvimdiff merge
* less complex than the gvimdiff merge tool used with git
* works with "onewordify," below
- a script that searches a notebook for numbered tasks, and sees whether or not they match (this is for organizing a lab notebook, to be described)
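For intuition, the "split conflicts" behavior can be sketched roughly as follows — an illustrative re-implementation of the idea, not pydifftools' actual code:

```python
def split_conflicts(text):
    """Split a git-conflicted file into a "head" version and a "new"
    version by walking the <<<<<<< / ======= / >>>>>>> markers."""
    head, new = [], []
    state = "both"  # both | head | new
    for line in text.splitlines():
        if line.startswith("<<<<<<<"):
            state = "head"
        elif line.startswith("=======") and state == "head":
            state = "new"
        elif line.startswith(">>>>>>>"):
            state = "both"
        elif state == "head":
            head.append(line)
        elif state == "new":
            new.append(line)
        else:  # outside any conflict: keep the line in both versions
            head.append(line)
            new.append(line)
    return "\n".join(head), "\n".join(new)


conflicted = "a\n<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> branch\nb"
h, n = split_conflicts(conflicted)
print(h)  # a / ours / b
print(n)  # a / theirs / b
```

The two returned texts correspond to the .merge_head and .merge_new files, which can then be compared side by side in gvimdiff.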
Future versions will include:
- Scripts for converting Word HTML comments to LaTeX commands.
- converting to/from one-word-per-line files (for doing things like wdiff, but with more control)
| text/x-rst | J M Franck | null | null | null | Copyright (c) 2015, jmfranck
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of pyDiffTools nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [] | [] | null | null | null | [] | [] | [] | [
"selenium",
"fuzzywuzzy[speedup]",
"PyYAML>=6.0",
"watchdog",
"pydot",
"python-dateutil",
"jinja2",
"nbformat",
"nbconvert",
"pygments",
"ansi2html",
"lxml"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:42:24.158542 | pydifftools-0.1.17.tar.gz | 1,518,590 | 00/f9/598a65125321c5e7020e7ccff105be21205d6bfad97a528ecbbd14fd316f/pydifftools-0.1.17.tar.gz | source | sdist | null | false | 1fb6522eef6122d0b27fb5be5cf78388 | 499eb5eb3985d5a836b7e7294cbe7b4c9cd7086816e2c9f3d9819085eb262a2e | 00f9598a65125321c5e7020e7ccff105be21205d6bfad97a528ecbbd14fd316f | null | [
"LICENSE.md"
] | 0 |
2.4 | databricks-langchain | 0.16.0 | Support for Databricks AI support in LangChain | # 🦜🔗 Databricks LangChain Integration
The `databricks-langchain` package provides seamless integration of Databricks AI features into LangChain applications. This repository is now the central hub for all Databricks-related LangChain components, consolidating previous packages such as `langchain-databricks` and `langchain-community`.
## Installation
### From PyPI
```sh
pip install databricks-langchain
```
### From Source
```sh
pip install git+https://git@github.com/databricks/databricks-ai-bridge.git#subdirectory=integrations/langchain
```
## Key Features
- **LLMs Integration:** Use Databricks-hosted large language models (LLMs) like Llama and Mixtral through `ChatDatabricks`.
- **Vector Search:** Store and query vector representations using `DatabricksVectorSearch`.
- **Embeddings:** Generate embeddings with `DatabricksEmbeddings`.
- **Genie:** Use [Genie](https://www.databricks.com/product/ai-bi/genie) in LangChain.
## Getting Started
### Use LLMs on Databricks
```python
from databricks_langchain import ChatDatabricks
llm = ChatDatabricks(endpoint="databricks-meta-llama-3-1-70b-instruct")
```
### Use a Genie Space as an Agent (Preview)
> **Note:** Requires Genie API Private Preview. Contact your Databricks account team for enablement.
```python
from databricks_langchain.genie import GenieAgent
genie_agent = GenieAgent(
"space-id", "Genie",
description="This Genie space has access to sales data in Europe"
)
```
---
## Contribution Guide
We welcome contributions! Please see our [contribution guidelines](https://github.com/databricks/databricks-ai-bridge/tree/main/integrations/langchain) for details.
## License
This project is licensed under the [MIT License](LICENSE).
Thank you for using Databricks LangChain!
| text/markdown | null | Databricks <agent-feedback@databricks.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"databricks-ai-bridge>=0.13.0",
"databricks-mcp>=0.5.1",
"databricks-openai>=0.9.0",
"databricks-sdk>=0.65.0",
"databricks-vectorsearch>=0.50",
"langchain-mcp-adapters>=0.1.13",
"langchain>=1.0.0",
"mlflow>=3.0.0",
"openai>=1.99.9",
"pydantic>2.10.0",
"unitycatalog-langchain[databricks]>=0.3.0",
"databricks-ai-bridge[memory]>=0.10.0; extra == \"memory\"",
"langgraph-checkpoint-postgres>=2.0.5; extra == \"memory\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:41:43.452380 | databricks_langchain-0.16.0.tar.gz | 32,874 | 76/24/369a959d296514ab3ac8a0f21f2b9ca054b3e4fc7b0018371478f5d27f7e/databricks_langchain-0.16.0.tar.gz | source | sdist | null | false | 684a1a0e14962fa65686f157236b8082 | 00630401dbe9462c21854f585fd49745e3caec74268ac2297a89ae7f22284308 | 7624369a959d296514ab3ac8a0f21f2b9ca054b3e4fc7b0018371478f5d27f7e | null | [] | 16,698 |
2.4 | databricks-mcp | 0.9.0 | MCP helpers for Databricks | # Databricks MCP Library
The `databricks-mcp` package provides useful helpers to integrate MCP Servers in Databricks
## Installation
### From PyPI
```sh
pip install databricks-mcp
```
### From Source
```sh
pip install git+https://git@github.com/databricks/databricks-ai-bridge.git#subdirectory=databricks_mcp
```
## Key Features
- **OAuth Provider**: Enables authentication across Databricks Notebooks, Model Serving, and local environments using the Databricks CLI.
---
## Contribution Guide
We welcome contributions! Please see our [contribution guidelines](https://github.com/databricks/databricks-ai-bridge/tree/main/mcp) for details.
## License
This project is licensed under the [MIT License](LICENSE).
Thank you for using MCP Servers on Databricks!
| text/markdown | null | Databricks <agent-feedback@databricks.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"databricks-ai-bridge>=0.4.2",
"databricks-sdk>=0.49.0",
"mcp>=1.13.0",
"mlflow>=3.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:41:39.857539 | databricks_mcp-0.9.0.tar.gz | 10,898 | f0/6b/fe9c6291bbbbccd14267bd05ade1f4522b59fc3380d16253476b9984d1a1/databricks_mcp-0.9.0.tar.gz | source | sdist | null | false | 7ac6b36e22c7a0854aeeebd34b93945f | e311c70221306116af7fc9adb2bfedf0fb2f181d7cf12ff1e46682334150a98a | f06bfe9c6291bbbbccd14267bd05ade1f4522b59fc3380d16253476b9984d1a1 | null | [] | 18,825 |
2.4 | databricks-ai-bridge | 0.15.0 | Official Python library for Databricks AI support | # Databricks AI Bridge library
The Databricks AI Bridge library provides a shared layer of APIs to interact with Databricks AI features, such as [Databricks AI/BI Genie ](https://www.databricks.com/product/ai-bi/genie) and [Vector Search](https://docs.databricks.com/en/generative-ai/vector-search.html). Use these packages to help [author agents with Agent Framework](https://docs.databricks.com/aws/en/generative-ai/agent-framework/author-agent#requirements) on Databricks.
## Integration Packages
If you are using LangChain/LangGraph or the OpenAI SDK, we provide these integration packages for seamless integration of Databricks AI features.
- [`databricks-langchain`](./integrations/langchain/README.md) - For LangChain/LangGraph users
- [`databricks-openai`](./integrations/openai/README.md) - For OpenAI SDK users
## Installation
If you're using LangChain/LangGraph or OpenAI:
```sh
pip install databricks-langchain
pip install databricks-openai
```
For frameworks without dedicated integration packages:
```sh
pip install databricks-ai-bridge
```
### Install from source
With https:
```sh
# For LangChain/LangGraph users (recommended):
pip install git+https://git@github.com/databricks/databricks-ai-bridge.git#subdirectory=integrations/langchain
# For OpenAI users (recommended):
pip install git+https://git@github.com/databricks/databricks-ai-bridge.git#subdirectory=integrations/openai
# Generic installation (only if needed):
pip install git+https://git@github.com/databricks/databricks-ai-bridge.git
```
| text/markdown | null | Databricks <agent-feedback@databricks.com> | null | null | Databricks License
Copyright (2024) Databricks, Inc.
Definitions.
Agreement: The agreement between Databricks, Inc., and you governing
the use of the Databricks Services, as that term is defined in
the Master Cloud Services Agreement (MCSA) located at
www.databricks.com/legal/mcsa.
Licensed Materials: The source code, object code, data, and/or other
works to which this license applies.
Scope of Use. You may not use the Licensed Materials except in
connection with your use of the Databricks Services pursuant to
the Agreement. Your use of the Licensed Materials must comply at all
times with any restrictions applicable to the Databricks Services,
generally, and must be used in accordance with any applicable
documentation. You may view, use, copy, modify, publish, and/or
distribute the Licensed Materials solely for the purposes of using
the Licensed Materials within or connecting to the Databricks Services.
If you do not agree to these terms, you may not view, use, copy,
modify, publish, and/or distribute the Licensed Materials.
Redistribution. You may redistribute and sublicense the Licensed
Materials so long as all use is in compliance with these terms.
In addition:
- You must give any other recipients a copy of this License;
- You must cause any modified files to carry prominent notices
stating that you changed the files;
- You must retain, in any derivative works that you distribute,
all copyright, patent, trademark, and attribution notices,
excluding those notices that do not pertain to any part of
the derivative works; and
- If a "NOTICE" text file is provided as part of its
distribution, then any derivative works that you distribute
must include a readable copy of the attribution notices
contained within such NOTICE file, excluding those notices
that do not pertain to any part of the derivative works.
You may add your own copyright statement to your modifications and may
provide additional license terms and conditions for use, reproduction,
or distribution of your modifications, or for any such derivative works
as a whole, provided your use, reproduction, and distribution of
the Licensed Materials otherwise complies with the conditions stated
in this License.
Termination. This license terminates automatically upon your breach of
these terms or upon the termination of your Agreement. Additionally,
Databricks may terminate this license at any time on notice. Upon
termination, you must permanently delete the Licensed Materials and
all copies thereof.
DISCLAIMER; LIMITATION OF LIABILITY.
THE LICENSED MATERIALS ARE PROVIDED "AS-IS" AND WITH ALL FAULTS.
DATABRICKS, ON BEHALF OF ITSELF AND ITS LICENSORS, SPECIFICALLY
DISCLAIMS ALL WARRANTIES RELATING TO THE LICENSED MATERIALS, EXPRESS
AND IMPLIED, INCLUDING, WITHOUT LIMITATION, IMPLIED WARRANTIES,
CONDITIONS AND OTHER TERMS OF MERCHANTABILITY, SATISFACTORY QUALITY OR
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. DATABRICKS AND
ITS LICENSORS TOTAL AGGREGATE LIABILITY RELATING TO OR ARISING OUT OF
YOUR USE OF OR DATABRICKS' PROVISIONING OF THE LICENSED MATERIALS SHALL
BE LIMITED TO ONE THOUSAND ($1,000) DOLLARS. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE LICENSED MATERIALS OR
THE USE OR OTHER DEALINGS IN THE LICENSED MATERIALS. | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"databricks-sdk>=0.49.0",
"databricks-vectorsearch>=0.57",
"mlflow-skinny>=2.19.0",
"pandas>=2.2.0",
"pydantic>=2.10.0",
"tabulate>=0.9.0",
"tiktoken>=0.8.0",
"typing-extensions>=4.15.0",
"psycopg[binary,pool]>=3.2.10; extra == \"memory\"",
"sqlalchemy[asyncio]>=2.0.0; extra == \"memory\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:41:36.032502 | databricks_ai_bridge-0.15.0.tar.gz | 26,511 | 48/ed/e9e6e9e003e89ae2558a4d4e3957cf397d5e5ffdab4e59426616b1366ad3/databricks_ai_bridge-0.15.0.tar.gz | source | sdist | null | false | 85134f29bad1a58a567fc33e37fed912 | ec06c59cee14a0b99ca29294a77f54ba29dd6a496b90e4b105a7a91f1ea728ad | 48ede9e6e9e003e89ae2558a4d4e3957cf397d5e5ffdab4e59426616b1366ad3 | null | [
"LICENSE.txt",
"NOTICE.txt"
] | 21,028 |
2.4 | nv-ingest-client | 2026.2.20.dev20260220 | Python client for the nv-ingest service | <!--
SPDX-FileCopyrightText: Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES.
All rights reserved.
SPDX-License-Identifier: Apache-2.0
-->
# NV-Ingest-Client
NV-Ingest-Client is a tool designed for efficient ingestion and processing of large datasets. It provides both a Python API and a command-line interface to cater to various ingestion needs.
> [!NOTE]
> NV-Ingest is also known as NVIDIA Ingest and NeMo Retriever extraction.
## Table of Contents
1. [Installation](#installation)
2. [Usage](#usage)
- [CLI Tool](#cli-tool)
- [API Libraries](#api-libraries)
3. [Command Line Interface (CLI)](#command-line-interface-cli)
- [Command Overview](#command-overview)
- [Options](#options)
4. [Examples](#examples)
## Installation
To install NV-Ingest-Client, run the following command in your terminal:
```bash
pip install [REPO_ROOT]/client
```
This command installs both the API libraries and the `nv-ingest-cli` tool, which can then be invoked from the
command line.
## API Libraries
### nv_ingest_client.primitives.jobs
#### JobSpec
Specification for creating a job for submission to the nv-ingest microservice.
- **Parameters**:
- `payload` (Dict): The payload data for the job.
- `tasks` (Optional[List], optional): A list of tasks to be added to the job. Defaults to None.
- `source_id` (Optional[str], optional): An identifier for the source of the job. Defaults to None.
- `source_name` (Optional[str], optional): A name for the source of the job. Defaults to None.
- `document_type` (Optional[str], optional): Type of the document. Defaults to 'txt'.
- `job_id` (Optional[Union[UUID, str]], optional): A unique identifier for the job. Defaults to a new UUID.
- `extended_options` (Optional[Dict], optional): Additional options for job processing. Defaults to None.
- **Attributes**:
- `_payload` (Dict): Storage for the payload data.
- `_tasks` (List): Storage for the list of tasks.
- `_source_id` (str): Storage for the source identifier.
- `_job_id` (UUID): Storage for the job's unique identifier.
- `_extended_options` (Dict): Storage for the additional options.
- **Methods**:
- **to_dict() -> Dict**:
- **Description**: Converts the job specification to a dictionary for JSON serialization.
- **Returns**: `Dict`: Dictionary representation of the job specification.
- **add_task(task)**:
- **Description**: Adds a task to the job specification.
- **Parameters**:
- `task`: The task to be added. Assumes the task has a `to_dict()` method.
- **Raises**:
- `ValueError`: If the task does not have a `to_dict()` method or is not an instance of `Task`.
- **Properties**:
- `payload`: Getter/Setter for the payload data.
- `job_id`: Getter/Setter for the job's unique identifier.
- `source_id`: Getter/Setter for the source identifier.
- `source_name`: Getter/Setter for the source name.
- **Example Usage**:
```python
job_spec = JobSpec(
payload={"data": "Example data"},
tasks=[extract_task, split_task],
source_id="12345",
job_id="abcd-efgh-ijkl-mnop",
extended_options={"tracing_options": {"trace": True}}
)
print(job_spec.to_dict())
```
### nv_ingest_client.primitives.tasks
#### Task Factory
- **Function**: `task_factory(task_type, **kwargs)`
- **Description**: Factory method for creating task objects based on the provided task type. It dynamically selects
the appropriate task class from a mapping and initializes it with any additional keyword arguments.
- **Parameters**:
- `task_type` (TaskType or str): The type of the task to create. Can be an enum member of `TaskType` or a string
representing a valid task type.
- `**kwargs` (dict): Additional keyword arguments to pass to the task's constructor.
- **Returns**:
- `Task`: An instance of the task corresponding to the given task type.
- **Raises**:
- `ValueError`: If an invalid task type is provided, or if any unexpected keyword arguments are passed that do
not match the task constructor's parameters.
- **Example**:
```python
# Assuming TaskType has 'Extract' and 'Split' as valid members and corresponding classes are defined.
extract_task = task_factory('extract', document_type='PDF', extract_text=True)
split_task = task_factory('split', split_by='sentence', split_length=100)
```
#### ExtractTask
Object for document extraction tasks, extending the `Task` class.
- **Method**: `__init__(document_type, extract_method='pdfium', extract_text=False, extract_images=False,
extract_tables=False, extract_page_as_image=False)`
- **Parameters**:
- `document_type`: Type of document.
- `extract_method`: Method used for extraction. Default is 'pdfium'.
- `extract_text`: Boolean indicating if text should be extracted. Default is False.
- `extract_images`: Boolean indicating if images should be extracted. Default is False.
- `extract_tables`: Boolean indicating if tables should be extracted. Default is False.
- `extract_page_as_image`: Boolean indicating if each page should be rendered as an image for embedding. Default is False.
- **Description**: Sets up configuration for the extraction task.
- **Method: `to_dict()`**
- **Description**: Converts task details to a dictionary for submission to the message client, including handling for specific extraction methods and document types.
- **Returns**: `Dict`: Dictionary containing task type and properties.
- **Example**:
```python
extract_task = ExtractTask(
document_type=file_type,
extract_text=True,
extract_images=True,
extract_tables=True
)
```
#### SplitTask
Object for document splitting tasks, extending the `Task` class.
- **Method**: `__init__(split_by=None, split_length=None, split_overlap=None, max_character_length=None,
sentence_window_size=None)`
- **Parameters**:
- `split_by`: Criterion for splitting, e.g., 'word', 'sentence', 'passage'.
- `split_length`: The length of each split segment.
- `split_overlap`: Overlap length between segments.
- `max_character_length`: Maximum character length for a split.
- `sentence_window_size`: Window size for sentence-based splits.
- **Description**: Sets up configuration for the splitting task.
- **Method: `to_dict()`**
- **Description**: Converts task details to a dictionary for submission to the message client.
- **Returns**: `Dict`: Dictionary containing task type and properties.
- **Example**:
```python
split_task = SplitTask(
split_by="word",
split_length=300,
split_overlap=10,
max_character_length=5000,
sentence_window_size=0,
)
```
### nv_ingest_client.client.client
The `NvIngestClient` class provides a comprehensive suite of methods to handle job submission and retrieval processes
efficiently. Below are the public methods available:
### Initialization
- **`__init__`**:
Initializes the NvIngestClient with a customizable message client allocator and connection configuration.
- **Parameters**:
- `message_client_allocator`: A callable that returns an instance of the client used for communication.
- `message_client_hostname`: Hostname of the message client server. Defaults to "localhost".
- `message_client_port`: Port number of the message client server. Defaults to 7670.
- `message_client_kwargs`: Additional keyword arguments for the message client.
- `msg_counter_id`: Redis key for tracking message counts. Defaults to "nv-ingest-message-id".
- `worker_pool_size`: Number of worker processes in the pool. Defaults to 1.
- **Example**:
```python
client = NvIngestClient(
message_client_hostname="localhost", # Host where nv-ingest-ms-runtime is running
message_client_port=7670 # REST port, defaults to 7670
)
```
## Submission Methods
### submit_job
Submits a job to a specified job queue. This method can optionally wait for a response if blocking is set to True.
- **Parameters**:
- `job_id`: The unique identifier of the job to be submitted.
- `job_queue_id`: The ID of the job queue where the job will be submitted.
- **Returns**:
- Optional[Dict]: The job result if blocking is True and a result is available before the timeout, otherwise None.
- **Raises**:
- Exception: If submitting the job fails.
- **Example**:
```python
client.submit_job(job_id, "ingest_task_queue")
```
### submit_jobs
Submits multiple jobs to a specified job queue. This method does not wait for any of the jobs to complete.
- **Parameters**:
- `job_ids`: A list of job IDs to be submitted.
- `job_queue_id`: The ID of the job queue where the jobs will be submitted.
- **Returns**:
- List[Union[Dict, None]]: A list of job results if blocking is True and results are available before the timeout,
otherwise None.
- **Example**:
```python
client.submit_jobs([job_id0, job_id1], "ingest_task_queue")
```
### submit_job_async
Asynchronously submits one or more jobs to a specified job queue using a thread pool. This method accepts either a
single job ID or a list of job IDs.
- **Parameters**:
- `job_ids`: A single job ID or a list of job IDs to be submitted.
- `job_queue_id`: The ID of the job queue where the jobs will be submitted.
- **Returns**:
- Dict[Future, str]: A dictionary mapping futures to their respective job IDs for later retrieval of outcomes.
- **Notes**:
- This method queues the jobs for asynchronous submission and returns a mapping of futures to job IDs.
- It does not wait for any of the jobs to complete.
- Ensure that each job is in the proper state before submission.
- **Example**:
```python
client.submit_job_async(job_id, "ingest_task_queue")
```
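Because `submit_job_async` returns a `Dict[Future, str]`, results are typically collected with `concurrent.futures.as_completed`. The sketch below uses a plain `ThreadPoolExecutor` and a stand-in submission function to illustrate the pattern; `_submit` is illustrative and not part of the nv-ingest API.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in for the client's internal submission call; purely illustrative.
def _submit(job_id: str) -> str:
    return f"submitted {job_id}"

job_ids = ["job-0", "job-1", "job-2"]
results = {}
with ThreadPoolExecutor(max_workers=2) as pool:
    # Mirror the Dict[Future, str] mapping that submit_job_async returns.
    future_to_job_id = {pool.submit(_submit, jid): jid for jid in job_ids}
    for future in as_completed(future_to_job_id):
        jid = future_to_job_id[future]
        results[jid] = future.result()  # raises here if submission failed
```

The same loop applies to the real client's futures: index the returned mapping by each completed future to recover the job ID it belongs to.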
## Job Retrieval
### fetch_job_result
- **Description**: Fetches the job result from a message client, handling potential errors and state changes.
- **Method**: `fetch_job_result(job_id, timeout=10, data_only=True)`
- **Parameters**:
- `job_id` (str): The identifier of the job.
- `timeout` (float, optional): Timeout for the fetch operation in seconds. Defaults to 10.
- `data_only` (bool, optional): If true, only returns the data part of the job result.
- **Returns**:
- Tuple[Dict, str]: The job result and the job ID.
- **Raises**:
- `ValueError`: If there is an error in decoding the job result.
- `TimeoutError`: If the fetch operation times out.
- `Exception`: For all other unexpected issues.
- **Example**:
```python
job_id = client.add_job(job_spec)
client.submit_job(job_id, TASK_QUEUE)
generated_metadata = client.fetch_job_result(
job_id, timeout=DEFAULT_JOB_TIMEOUT
)
```
### fetch_job_result_async
- **Description**: Fetches job results for a list or a single job ID asynchronously and returns a mapping of futures to
job IDs.
- **Method**: `fetch_job_result_async(job_ids, timeout=10, data_only=True)`
- **Parameters**:
- `job_ids` (Union[str, List[str]]): A single job ID or a list of job IDs.
- `timeout` (float, optional): Timeout for fetching each job result, in seconds. Defaults to 10.
- `data_only` (bool, optional): Whether to return only the data part of the job result.
- **Returns**:
- Dict[Future, str]: A dictionary mapping each future to its corresponding job ID.
- **Raises**:
- No explicit exceptions raised but leverages the exceptions from `fetch_job_result`.
- **Example**:
```python
job_id = client.add_job(job_spec)
client.submit_job(job_id, TASK_QUEUE)
generated_metadata = client.fetch_job_result_async(
job_id, timeout=DEFAULT_JOB_TIMEOUT
)
```
## Job and Task Management
### job_count
- **Description**: Returns the number of jobs currently tracked by the client.
- **Method**: `job_count()`
- **Returns**: Integer representing the total number of jobs.
- **Example**:
```python
client.job_count()
```
### add_job
- **Description**: Adds a job specification to the job tracking system.
- **Method**: `add_job(job_spec)`
- **Parameters**:
- `job_spec` (JobSpec, optional): The job specification to add. If not provided, a new job ID will be generated.
- **Returns**: String representing the job ID of the added job.
- **Raises**:
- `ValueError`: If a job with the specified job ID already exists.
- **Example**:
```python
extract_task = ExtractTask(
document_type=file_type,
extract_text=True,
extract_images=True,
extract_tables=True,
text_depth="document",
extract_tables_method="yolox",
)
job_spec.add_task(extract_task)
job_id = client.add_job(job_spec)
```
### create_job
- **Description**: Creates a new job with specified parameters and adds it to the job tracking dictionary.
- **Method**: `create_job(payload, source_id, source_name, document_type, tasks, job_id, extended_options)`
- **Parameters**:
- `payload` (str): The payload associated with the job.
- `source_id` (str): The source identifier for the job.
- `source_name` (str): The unique name of the job's source data.
- `document_type` (str, optional): The type of document to be processed.
- `tasks` (list, optional): A list of tasks to be associated with the job.
- `job_id` (uuid.UUID | str, optional): The unique identifier for the job.
- `extended_options` (dict, optional): Additional options for job creation.
- **Returns**: String representing the job ID.
- **Raises**:
- `ValueError`: If a job with the specified job ID already exists.
### add_task
- **Description**: Adds a task to an existing job.
- **Method**: `add_task(job_id, task)`
- **Parameters**:
- `job_id` (str): The job ID to which the task will be added.
- `task` (Task): The task to add.
- **Raises**:
- `ValueError`: If the job does not exist or is not in the correct state.
- **Example**:
```python
job_spec = JobSpec(
document_type=file_type,
payload=file_content,
source_id=SAMPLE_PDF,
source_name=SAMPLE_PDF,
extended_options={
"tracing_options": {
"trace": True,
"ts_send": time.time_ns(),
}
},
)
extract_task = ExtractTask(
document_type=file_type,
extract_text=True,
extract_images=True,
extract_tables=True,
text_depth="document",
extract_tables_method="yolox",
)
job_spec.add_task(extract_task)
```
### create_task
- **Description**: Creates a task with specified parameters and adds it to an existing job.
- **Method**: `create_task(job_id, task_type, task_params)`
- **Parameters**:
- `job_id` (uuid.UUID | str): The unique identifier of the job.
- `task_type` (TaskType): The type of the task.
- `task_params` (dict, optional): Parameters for the task.
- **Raises**:
- `ValueError`: If the job does not exist or if an attempt is made to modify a job after its submission.
- **Example**:
```python
job_id = client.add_job(job_spec)
client.create_task(job_id, DedupTask, {"content_type": "image", "filter": True})
```
## CLI Tool
After installation, you can use the `nv-ingest-cli` tool from the command line to manage and process datasets.
### CLI Options
The CLI provides the following options:
- `--batch_size`: Specifies the number of documents to process in a single batch. Default is 10. Must be 1 or more.
- `--doc`: Adds a new document to be processed. Supports multiple entries. Files must exist.
- `--dataset`: Specifies the path to a dataset definition file.
- `--client`: Sets the client type with choices including REST, Redis, Kafka. Default is Redis.
- `--client_host`: Specifies the DNS name or URL for the endpoint.
- `--client_port`: Sets the port number for the client endpoint.
- `--client_kwargs`: Provides additional arguments to pass to the client. Default is `{}`.
- `--concurrency_n`: Defines the number of inflight jobs to maintain at one time. Default is 1.
- `--dry_run`: Enables a dry run without executing actions.
- `--output_directory`: Specifies the output directory for results.
- `--log_level`: Sets the log level. Choices are DEBUG, INFO, WARNING, ERROR, CRITICAL. Default is INFO.
- `--shuffle_dataset`: Shuffles the dataset before processing if enabled. Default is true.
- `--task`: Allows for specification of tasks in JSON format. Supports multiple tasks.
- `--collect_profiling_traces`: Collect the tracing profile for the run after processing.
- `--zipkin_host`: Host used to connect to Zipkin to gather tracing profiles.
- `--zipkin_port`: Port used to connect to Zipkin to gather tracing profiles.
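Putting several of these options together, a typical invocation might look like the following. The file path, host, and port are placeholders; run `nv-ingest-cli --help` for the authoritative option list.

```bash
# Hypothetical invocation: requires nv-ingest-cli installed and a reachable
# nv-ingest runtime at the given host and port.
nv-ingest-cli \
  --doc ./data/sample.pdf \
  --client REST \
  --client_host localhost \
  --client_port 7670 \
  --concurrency_n 4 \
  --output_directory ./processed \
  --log_level INFO
```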
## Examples
You can find a notebook with examples that use the CLI client in [the client examples folder](client/client_examples/examples/).
| text/markdown | null | Jeremy Dyer <jdyer@nvidia.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"build>=1.2.2",
"charset-normalizer>=3.4.1",
"click>=8.1.8",
"fsspec>=2025.2.0",
"httpx>=0.28.1",
"pydantic>2.0.0",
"pydantic-settings>2.0.0",
"requests>=2.28.2",
"setuptools>=78.1.1",
"tqdm>=4.67.1",
"lancedb>=0.25.3",
"pymilvus==2.5.10; extra == \"milvus\"",
"pymilvus[bulk_writer,model]; extra == \"milvus\"",
"minio>=7.2.15; extra == \"minio\""
] | [] | [] | [] | [
"homepage, https://github.com/NVIDIA/nv-ingest",
"repository, https://github.com/NVIDIA/nv-ingest",
"documentation, https://docs.nvidia.com/nv-ingest"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T23:41:31.702897 | nv_ingest_client-2026.2.20.dev20260220.tar.gz | 129,217 | 7a/1d/499da4b9383e31fc71852e5052ea22b052dc8c2e8a7507797862dfda319e/nv_ingest_client-2026.2.20.dev20260220.tar.gz | source | sdist | null | false | 81d42f6aa629bb765d15fb8a289ca911 | 6ac9e796d090ccae9c6f2c35b97909df59aabef4a589478276ba05781d0501bf | 7a1d499da4b9383e31fc71852e5052ea22b052dc8c2e8a7507797862dfda319e | null | [
"LICENSE"
] | 222 |
2.4 | nv-ingest-api | 2026.2.20.dev20260220 | Python module with core document ingestion functions. | # nv-ingest-api
Provides a common set of
- Pythonic Objects
- Common Functions
- Utilities
- Core Logic
All implemented in pure Python, so they can be imported and used directly or as part of future frameworks and runtimes.
| text/markdown | null | Jeremy Dyer <jdyer@nvidia.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | null | [] | [] | [] | [
"backoff==2.2.1",
"pandas>=2.0",
"pydantic>2.0.0",
"pydantic-settings>2.0.0",
"fsspec>=2025.5.1",
"universal_pathlib>=0.2.6",
"ffmpeg-python==0.2.0",
"tritonclient",
"glom",
"pypdfium2>=4.30.0",
"moviepy==2.2.1; extra == \"test\""
] | [] | [] | [] | [
"homepage, https://github.com/NVIDIA/nv-ingest",
"repository, https://github.com/NVIDIA/nv-ingest",
"documentation, https://docs.nvidia.com/nv-ingest"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T23:41:29.002245 | nv_ingest_api-2026.2.20.dev20260220.tar.gz | 263,464 | 38/a1/06a740f203c194ccd346f32cf0a68791b49896083c0ca0f9d62d36accc8f/nv_ingest_api-2026.2.20.dev20260220.tar.gz | source | sdist | null | false | 88d2b40bcb5d55a97c36c0e50a2fdaef | 11ea513c51a548f515ebc89df981577756c618cb92704602a20956fe13b1cd4e | 38a106a740f203c194ccd346f32cf0a68791b49896083c0ca0f9d62d36accc8f | null | [
"LICENSE"
] | 230 |
2.4 | personanexus | 0.0.1 | Define AI agent personalities in YAML, not code. Psychological frameworks (OCEAN/DISC), behavioral modes, mood states, and interaction protocols — compiled to system prompts for any LLM. | # PersonaNexus
Define AI agent personalities in YAML, not code.
**Coming soon.** Full release with OCEAN/DISC personality frameworks, behavioral modes, mood states, and interaction protocols — compiled to system prompts for any LLM.
- 🌐 [personanexus.ai](https://personanexus.ai)
- 📦 [GitHub](https://github.com/jcrowan3/PersonaNexus)
- 🐦 [@PersonaNexus](https://x.com/PersonaNexus)
| text/markdown | Jim Rowan | null | null | null | MIT | ai, agents, personality, ocean, disc, llm, yaml | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://personanexus.ai",
"Repository, https://github.com/jcrowan3/PersonaNexus"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T23:40:31.261700 | personanexus-0.0.1.tar.gz | 1,623 | 33/27/314e2b46e54712ec4ee9dcde144513158c80e297e01210b723b90e3a5eb1/personanexus-0.0.1.tar.gz | source | sdist | null | false | 0bffb35db8c6d953860fffa320e279db | 6e1c3d02f134347e168b7567447f7e1f3c01a1218cb3cd20deb0f25108403af3 | 3327314e2b46e54712ec4ee9dcde144513158c80e297e01210b723b90e3a5eb1 | null | [] | 253 |
2.4 | runpod-flash | 1.3.0 | A Python library for distributed inference and serving of machine learning models | # Flash: Serverless computing for AI workloads
Runpod Flash is a Python SDK that streamlines the development and deployment of AI workflows on Runpod's [Serverless infrastructure](http://docs.runpod.io/serverless/overview). Write Python functions locally, and Flash handles the infrastructure, provisioning GPUs and CPUs, managing dependencies, and transferring data, allowing you to focus on building AI applications.
You can find a repository of prebuilt Flash examples at [runpod/flash-examples](https://github.com/runpod/flash-examples).
> [!Note]
> **New feature - Consolidated template management:** `PodTemplate` overrides now seamlessly integrate with `ServerlessResource` defaults, providing more consistent resource configuration and reducing deployment complexity.
## Table of contents
- [Overview](#overview)
- [Get started](#get-started)
- [Create Flash API endpoints](#create-flash-api-endpoints)
- [CLI Reference](#cli-reference)
- [Key concepts](#key-concepts)
- [How it works](#how-it-works)
- [Advanced features](#advanced-features)
- [Configuration](#configuration)
- [Workflow examples](#workflow-examples)
- [Use cases](#use-cases)
- [Limitations](#limitations)
- [Contributing](#contributing)
- [Troubleshooting](#troubleshooting)
## Overview
There are two basic modes for using Flash. You can:
- Build and run standalone Python scripts using the `@remote` decorator.
- Create Flash API endpoints with FastAPI (using the same script syntax).
Follow the steps in the next section to install Flash and create your first script before learning how to [create Flash API endpoints](#create-flash-api-endpoints).
To learn more about how Flash works, see [Key concepts](#key-concepts).
## Get started
Before you can use Flash, you'll need:
- Python 3.9 (or higher) installed on your local machine.
- A Runpod account with API key ([sign up here](https://runpod.io/console)).
- Basic knowledge of Python and async programming.
### Step 1: Install Flash
```bash
pip install runpod-flash
```
### Step 2: Set your API key
Generate an API key from the [Runpod account settings](https://docs.runpod.io/get-started/api-keys) page and set it as an environment variable:
```bash
export RUNPOD_API_KEY=[YOUR_API_KEY]
```
Or save it in a `.env` file in your project directory:
```bash
echo "RUNPOD_API_KEY=[YOUR_API_KEY]" > .env
```
### Step 3: Create your first Flash function
Add the following code to a new Python file:
```python
import asyncio
from runpod_flash import remote, LiveServerless
from dotenv import load_dotenv
# Uncomment if using a .env file
# load_dotenv()
# Configure GPU resources
gpu_config = LiveServerless(name="flash-quickstart")
@remote(
resource_config=gpu_config,
dependencies=["torch", "numpy"]
)
def gpu_compute(data):
import torch
import numpy as np
# This runs on a GPU in Runpod's cloud
tensor = torch.tensor(data, device="cuda")
result = tensor.sum().item()
return {
"result": result,
"device": torch.cuda.get_device_name(0)
}
async def main():
# This runs locally
result = await gpu_compute([1, 2, 3, 4, 5])
print(f"Sum: {result['result']}")
print(f"Computed on: {result['device']}")
if __name__ == "__main__":
asyncio.run(main())
```
Run the example:
```bash
python your_script.py
```
The first time you run the script, it will take significantly longer than subsequent runs (about one minute for the first run vs. one second thereafter), because your endpoint must be initialized.
When it's finished, you should see output similar to this:
```bash
2025-11-19 12:35:15,109 | INFO | Created endpoint: rb50waqznmn2kg - flash-quickstart-fb
2025-11-19 12:35:15,112 | INFO | URL: https://console.runpod.io/serverless/user/endpoint/rb50waqznmn2kg
2025-11-19 12:35:15,114 | INFO | LiveServerless:rb50waqznmn2kg | API /run
2025-11-19 12:35:15,655 | INFO | LiveServerless:rb50waqznmn2kg | Started Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2
2025-11-19 12:35:15,762 | INFO | Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2 | Status: IN_QUEUE
2025-11-19 12:35:16,301 | INFO | Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2 | .
2025-11-19 12:35:17,756 | INFO | Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2 | ..
2025-11-19 12:35:22,610 | INFO | Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2 | ...
2025-11-19 12:35:37,163 | INFO | Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2 | ....
2025-11-19 12:35:59,248 | INFO | Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2 | .....
2025-11-19 12:36:09,983 | INFO | Job:b0b341e7-e460-4305-9acd-fc2dfd1bd65c-u2 | Status: COMPLETED
2025-11-19 12:36:10,068 | INFO | Worker:icmkdgnrmdf8gz | Delay Time: 51842 ms
2025-11-19 12:36:10,068 | INFO | Worker:icmkdgnrmdf8gz | Execution Time: 1533 ms
2025-11-19 17:36:07,485 | INFO | Installing Python dependencies: ['torch', 'numpy']
Sum: 15
Computed on: NVIDIA GeForce RTX 4090
```
## Create Flash API endpoints
You can use Flash to deploy and serve API endpoints that compute responses using GPU and CPU Serverless workers. Use `flash run` for local development of `@remote` functions, then `flash deploy` to deploy your full application to Runpod Serverless for production.
These endpoints use the same Python `@remote` decorator syntax [demonstrated above](#get-started).
### Step 1: Initialize a new project
Use the `flash init` command to generate a project template with example worker files.
Run this command to initialize a new project directory:
```bash
flash init my_project
```
You can also initialize your current directory:
```bash
flash init
```
For complete CLI documentation, see the [Flash CLI Reference](src/runpod_flash/cli/docs/README.md).
### Step 2: Explore the project template
This is the structure of the project template created by `flash init`:
```txt
my_project/
├── gpu_worker.py # GPU worker with @remote function
├── cpu_worker.py # CPU worker with @remote function
├── .env # Environment variable template
├── .gitignore # Git ignore patterns
├── .flashignore # Flash deployment ignore patterns
├── pyproject.toml # Python dependencies (uv/pip compatible)
└── README.md # Project documentation
```
This template includes:
- Example worker files with `@remote` decorated functions.
- Templates for Python dependencies, `.env`, `.gitignore`, etc.
- Each worker file contains:
- Pre-configured worker scaling limits using the `LiveServerless()` object.
- A `@remote` decorated function that returns a response from a worker.
When you run `flash run`, it auto-discovers all `@remote` functions and generates a local development server at `.flash/server.py`. Queue-based workers are exposed at `/{file_prefix}/run_sync` (e.g., `/gpu_worker/run_sync`).
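The `/{file_prefix}/run_sync` convention can be sketched as a one-line mapping from worker filename to route (illustrative only; `flash run` performs this discovery for you):

```python
# Maps a worker file name to the local route that flash run exposes for it,
# following the /{file_prefix}/run_sync convention described above.
def route_for(worker_file: str) -> str:
    prefix = worker_file.removesuffix(".py")
    return f"/{prefix}/run_sync"

print(route_for("gpu_worker.py"))  # /gpu_worker/run_sync
```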
### Step 3: Install Python dependencies
After initializing the project, navigate into the project directory:
```bash
cd my_project
```
Install required dependencies using uv (recommended) or pip:
```bash
uv sync # recommended
# or
pip install -r requirements.txt
```
### Step 4: Configure your API key
Open the `.env` template file in a text editor and add your [Runpod API key](https://docs.runpod.io/get-started/api-keys):
```bash
# Use your text editor of choice, e.g.
cursor .env
```
Remove the `#` symbol from the beginning of the `RUNPOD_API_KEY` line and replace `your_api_key_here` with your actual Runpod API key:
```txt
RUNPOD_API_KEY=your_api_key_here
# FLASH_HOST=localhost
# FLASH_PORT=8888
# LOG_LEVEL=INFO
```
Save the file and close it.
### Step 5: Start the local API server
Use `flash run` to start the API server:
```bash
flash run
```
Open a new terminal tab or window and test your GPU API using cURL:
```bash
curl -X POST http://localhost:8888/gpu_worker/run_sync \
-H "Content-Type: application/json" \
-d '{"message": "Hello from the GPU!"}'
```
If you switch back to the terminal tab where you used `flash run`, you'll see the details of the job's progress.
For more `flash run` options and configuration, see the [flash run documentation](src/runpod_flash/cli/docs/flash-run.md).
### Faster testing with auto-provisioning
For development with multiple endpoints, use `--auto-provision` to deploy all resources before testing:
```bash
flash run --auto-provision
```
This eliminates cold-start delays by provisioning all serverless endpoints upfront. Endpoints are cached and reused across server restarts, making subsequent runs much faster. Resources are identified by name, so the same endpoint won't be re-deployed if configuration hasn't changed.
### Step 6: Open the API explorer
Besides starting the API server, `flash run` also starts an interactive API explorer. Point your web browser at [http://localhost:8888/docs](http://localhost:8888/docs) to explore the API.
To run remote functions in the explorer:
1. Expand one of the available endpoints (e.g., `/gpu_worker/run_sync`).
2. Click **Try it out** and then **Execute**.
You'll get a response from your workers right in the explorer.
### Step 7: Customize your API
To customize your API:
1. Create new `.py` files with `@remote` decorated functions.
2. Test the scripts individually by running `python your_worker.py`.
3. Run `flash run` to auto-discover all `@remote` functions and serve them.
## CLI Reference
Flash provides a command-line interface for project management, development, and deployment:
### Main Commands
- **`flash init`** - Initialize a new Flash project with template structure
- **`flash run`** - Start local development server to test your `@remote` functions with auto-reload
- **`flash build`** - Build deployment artifact with all dependencies
- **`flash deploy`** - Build and deploy your application to Runpod Serverless in one step
### Management Commands
- **`flash env`** - Manage deployment environments (dev, staging, production)
- `list`, `create`, `get`, `delete` subcommands
- **`flash app`** - Manage Flash applications (top-level organization)
- `list`, `create`, `get`, `delete` subcommands
- **`flash undeploy`** - Manage and remove deployed endpoints
### Quick Examples
```bash
# Initialize and run locally
flash init my-project
cd my-project
flash run --auto-provision
# Build and deploy to production
flash build
flash deploy --env production
# Manage environments
flash env create staging
flash env list
flash deploy --env staging
# Clean up
flash undeploy --interactive
flash env delete staging
```
### Complete Documentation
For complete CLI documentation including all options, examples, and troubleshooting:
**[Flash CLI Documentation](src/runpod_flash/cli/docs/README.md)**
Individual command references:
- [flash init](src/runpod_flash/cli/docs/flash-init.md) - Project initialization
- [flash run](src/runpod_flash/cli/docs/flash-run.md) - Development server
- [flash build](src/runpod_flash/cli/docs/flash-build.md) - Build artifacts
- [flash deploy](src/runpod_flash/cli/docs/flash-deploy.md) - Deployment
- [flash env](src/runpod_flash/cli/docs/flash-env.md) - Environment management
- [flash app](src/runpod_flash/cli/docs/flash-app.md) - App management
- [flash undeploy](src/runpod_flash/cli/docs/flash-undeploy.md) - Endpoint removal
## Key concepts
### Remote functions
The Flash `@remote` decorator marks functions for execution on Runpod's infrastructure. Everything inside the decorated function runs remotely, while code outside runs locally.
```python
@remote(resource_config=config, dependencies=["pandas"])
def process_data(data):
# This code runs remotely
import pandas as pd
df = pd.DataFrame(data)
return df.describe().to_dict()
async def main():
# This code runs locally
result = await process_data(my_data)
```
### Resource configuration
Flash provides fine-grained control over hardware allocation through configuration objects:
```python
from runpod_flash import LiveServerless, GpuGroup, CpuInstanceType, PodTemplate
# GPU configuration
gpu_config = LiveServerless(
name="ml-inference",
gpus=[GpuGroup.AMPERE_80], # A100 80GB
workersMax=5,
template=PodTemplate(containerDiskInGb=100) # Extra disk space
)
# CPU configuration
cpu_config = LiveServerless(
name="data-processor",
instanceIds=[CpuInstanceType.CPU5C_4_16], # 4 vCPU, 16GB RAM
workersMax=3
)
```
### Dependency management
Specify Python packages in the decorator, and Flash installs them automatically:
```python
@remote(
resource_config=gpu_config,
dependencies=["transformers==4.36.0", "torch", "pillow"]
)
def generate_image(prompt):
# Import inside the function
from transformers import pipeline
import torch
from PIL import Image
# Your code here
```
### Parallel execution
Run multiple remote functions concurrently using Python's async capabilities:
```python
# Process multiple items in parallel
results = await asyncio.gather(
process_item(item1),
process_item(item2),
process_item(item3)
)
```
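The pattern is plain `asyncio`; here is a self-contained sketch with a local coroutine standing in for an `@remote` function:

```python
import asyncio

async def process_item(item):
    # Stand-in for an @remote function; awaiting it works the same way.
    await asyncio.sleep(0)
    return item * 2

async def main():
    # Fan out three calls concurrently; results come back in call order.
    return await asyncio.gather(process_item(1), process_item(2), process_item(3))

results = asyncio.run(main())
print(results)  # [2, 4, 6]
```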
### Load-Balanced Endpoints with HTTP Routing
For API endpoints requiring low-latency HTTP access with direct routing, use load-balanced endpoints:
```python
from runpod_flash import LiveLoadBalancer, remote
api = LiveLoadBalancer(name="api-service")
@remote(api, method="POST", path="/api/process")
async def process_data(x: int, y: int):
return {"result": x + y}
@remote(api, method="GET", path="/api/health")
def health_check():
return {"status": "ok"}
# Call functions directly
result = await process_data(5, 3) # → {"result": 8}
```
**Key differences from queue-based endpoints:**
- **Direct HTTP routing** - Requests routed directly to workers, no queue
- **Lower latency** - No queuing overhead
- **Custom HTTP methods** - GET, POST, PUT, DELETE, PATCH support
- **No automatic retries** - Users handle errors directly
Load-balanced endpoints are ideal for REST APIs, webhooks, and real-time services. Queue-based endpoints are better for batch processing and fault-tolerant workflows.
For detailed information:
- **User guide:** [Using @remote with Load-Balanced Endpoints](docs/Using_Remote_With_LoadBalancer.md)
- **Runtime architecture:** [LoadBalancer Runtime Architecture](docs/LoadBalancer_Runtime_Architecture.md) - details on deployment, request flows, and execution
## How it works
Flash orchestrates workflow execution in several steps:
1. **Function identification**: The `@remote` decorator marks functions for remote execution, enabling Flash to distinguish between local and remote operations.
2. **Dependency analysis**: Flash automatically analyzes function dependencies to construct an optimal execution order, ensuring data flows correctly between sequential and parallel operations.
3. **Resource provisioning and execution**: For each remote function, Flash:
- Dynamically provisions endpoint and worker resources on Runpod's infrastructure.
- Serializes and securely transfers input data to the remote worker.
- Executes the function on the remote infrastructure with the specified GPU or CPU resources.
- Returns results to your local environment for further processing.
4. **Data orchestration**: Results flow seamlessly between functions according to your local Python code structure, maintaining the same programming model whether functions run locally or remotely.
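Step 3 can be illustrated with a toy serialize-execute-return loop (a conceptual sketch only; Flash's actual transport and wire format are not shown here):

```python
import pickle

def toy_remote_call(func, *args):
    # Serialize inputs as if sending them to a worker...
    payload = pickle.dumps(args)
    # ...deserialize on the "worker" and execute the function...
    received = pickle.loads(payload)
    result = func(*received)
    # ...then serialize the result back to the caller.
    return pickle.loads(pickle.dumps(result))

print(toy_remote_call(sum, [1, 2, 3]))  # 6
```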
## Advanced features
### Custom Docker images
`LiveServerless` resources use a fixed Docker image that's optimized for Flash runtime, and supports full remote code execution. For specialized environments that require a custom Docker image, use `ServerlessEndpoint` or `CpuServerlessEndpoint`:
```python
from runpod_flash import ServerlessEndpoint
custom_gpu = ServerlessEndpoint(
name="custom-ml-env",
imageName="pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime",
gpus=[GpuGroup.AMPERE_80]
)
```
Unlike `LiveServerless`, these endpoints only support dictionary payloads in the form of `{"input": {...}}` (similar to a traditional [Serverless endpoint request](https://docs.runpod.io/serverless/endpoints/send-requests)), and cannot execute arbitrary Python functions remotely.
### Persistent storage with network volumes
Attach [network volumes](https://docs.runpod.io/storage/network-volumes) for persistent storage across workers and endpoints:
```python
config = LiveServerless(
name="model-server",
networkVolumeId="vol_abc123", # Your volume ID
template=PodTemplate(containerDiskInGb=100)
)
```
### Environment variables
Pass configuration to remote functions:
```python
config = LiveServerless(
name="api-worker",
env={"HF_TOKEN": "your_token", "MODEL_ID": "gpt2"}
)
```
Environment variables are excluded from configuration hashing, which means changing environment values won't trigger endpoint recreation. This allows different processes to load environment variables from `.env` files without causing false drift detection. Only structural changes (like GPU type, image, or template modifications) trigger endpoint updates.
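Inside the remote function, these values are read like ordinary environment variables. A minimal sketch, simulating the worker environment locally (the variable names mirror the example config above):

```python
import os

# Simulate what env={"HF_TOKEN": ..., "MODEL_ID": "gpt2"} on the resource
# config provides to the worker process.
os.environ["MODEL_ID"] = "gpt2"

def load_model_id(default: str = "distilgpt2") -> str:
    # Read a configuration value passed via the endpoint's env mapping.
    return os.environ.get("MODEL_ID", default)

print(load_model_id())  # gpt2
```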
### Build Process
Flash uses a multi-stage build process to package your application for deployment.
#### How Flash Builds Your Application
When you run `flash build`, the following happens:
1. **Discovery**: Flash scans your code for `@remote` decorated functions
2. **Grouping**: Functions are grouped by their `resource_config`
3. **Manifest Creation**: A `flash_manifest.json` file maps functions to their endpoints
4. **Dependency Installation**: Python packages are installed with Linux x86_64 compatibility
5. **Packaging**: Everything is bundled into `artifact.tar.gz` for deployment
#### Cross-Platform Builds
Flash automatically handles cross-platform builds, ensuring your deployments work correctly regardless of your development platform:
- **Automatic Platform Targeting**: Dependencies are installed for Linux x86_64 (Runpod's serverless platform), even when building on macOS or Windows
- **Python Version Matching**: The build uses your current Python version to ensure package compatibility
- **Binary Wheel Enforcement**: Only pre-built binary wheels are used, preventing platform-specific compilation issues
This means you can build on macOS ARM64, Windows, or any other platform, and the resulting package will run correctly on Runpod serverless.
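The platform a wheel targets is encoded in its filename, which is what binary-wheel enforcement checks against. A stdlib-only sketch of extracting the tag (the filename below is just an example):

```python
# Wheel filenames follow {dist}-{version}-{python}-{abi}-{platform}.whl;
# a Linux x86_64 build keeps only wheels whose platform tag matches.
def wheel_platform_tag(filename: str) -> str:
    stem = filename.removesuffix(".whl")
    return stem.split("-")[-1]

tag = wheel_platform_tag(
    "numpy-2.0.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl"
)
print("manylinux" in tag and "x86_64" in tag)  # True
```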
#### Cross-Endpoint Function Calls
Flash enables functions on different endpoints to call each other. The runtime automatically discovers endpoints using the manifest and routes calls appropriately:
```python
# CPU endpoint function
@remote(resource_config=cpu_config)
def preprocess(data):
    clean_data = [item.strip() for item in data]  # placeholder cleaning step
    return clean_data

# GPU endpoint function
@remote(resource_config=gpu_config)
async def inference(data):
    # Can call the CPU endpoint function
    clean = preprocess(data)
    result = {"processed": clean}  # placeholder inference step
    return result
```
The runtime wrapper handles service discovery and routing automatically.
#### Build Artifacts
After `flash build` completes:
- `.flash/.build/`: Temporary build directory (removed unless `--keep-build`)
- `.flash/artifact.tar.gz`: Deployment package
- `.flash/flash_manifest.json`: Service discovery configuration
For information on load-balanced endpoints (required for HTTP services), see [docs/Load_Balancer_Endpoints.md](docs/Load_Balancer_Endpoints.md).
#### Troubleshooting Build Issues
**No @remote functions found:**
- Ensure your functions are decorated with `@remote(resource_config)`
- Check that Python files are not excluded by `.gitignore` or `.flashignore`
- Verify function decorators have valid syntax
**Build succeeded but deployment failed:**
- Verify all function imports work in the deployment environment
- Check that environment variables required by your functions are available
- Review the generated `flash_manifest.json` for correct function mappings
**Dependency installation failed:**
- If a package doesn't have pre-built Linux x86_64 wheels, the build will fail with an error
- For newer Python versions (3.13+), some packages may require manylinux_2_27 or higher
- Ensure you have standard pip installed (`python -m ensurepip --upgrade`) for best compatibility
- uv pip has known issues with newer manylinux tags - standard pip is recommended
- Check PyPI to verify the package supports your Python version on Linux
#### Managing Bundle Size
Runpod serverless has a **500MB deployment limit**. Exceeding this limit will cause deployment failures.
Use `--exclude` to skip packages already in your worker-flash Docker image:
```bash
# For GPU deployments (PyTorch pre-installed)
flash build --exclude torch,torchvision,torchaudio
# Check your resource config to determine which base image you're using
```
**Which packages to exclude depends on your resource config:**
- **GPU resources** → PyTorch images have torch/torchvision/torchaudio pre-installed
- **CPU resources** → Python slim images have NO ML frameworks pre-installed
- **Load-balanced** → Same as above, depends on GPU vs CPU variant
See [worker-flash](https://github.com/runpod-workers/worker-flash) for base image details.
## Configuration
### GPU configuration parameters
The following parameters can be used with `LiveServerless` (full remote code execution) and `ServerlessEndpoint` (dictionary payload only) to configure your Runpod GPU endpoints:
| Parameter | Description | Default | Example Values |
|--------------------|-------------------------------------------------|---------------|-------------------------------------|
| `name` | (Required) Name for your endpoint | `""` | `"stable-diffusion-server"` |
| `gpus` | GPU pool IDs that can be used by workers | `[GpuGroup.ANY]` | `[GpuGroup.ADA_24]` for RTX 4090 |
| `gpuCount` | Number of GPUs per worker | 1 | 1, 2, 4 |
| `workersMin` | Minimum number of workers | 0 | Set to 1 for persistence |
| `workersMax` | Maximum number of workers | 3 | Higher for more concurrency |
| `idleTimeout` | Seconds before scaling down | 60 | 300, 600, 1800 |
| `env` | Environment variables | `None` | `{"HF_TOKEN": "xyz"}` |
| `networkVolumeId` | Persistent storage ID | `None` | `"vol_abc123"` |
| `executionTimeoutMs`| Max execution time (ms) | 0 (no limit) | 600000 (10 min) |
| `scalerType` | Scaling strategy | `QUEUE_DELAY` | `REQUEST_COUNT` |
| `scalerValue` | Scaling parameter value | 4 | 1-10 range typical |
| `locations` | Preferred datacenter locations | `None` | `"us-east,eu-central"` |
| `imageName` | Custom Docker image (`ServerlessEndpoint` only) | Fixed for LiveServerless | `"pytorch/pytorch:latest"`, `"my-registry/custom:v1.0"` |
### CPU configuration parameters
The same GPU configuration parameters above apply to `LiveServerless` (full remote code execution) and `CpuServerlessEndpoint` (dictionary payload only), with these additional CPU-specific parameters:
| Parameter | Description | Default | Example Values |
|--------------------|-------------------------------------------------|---------------|-------------------------------------|
| `instanceIds` | CPU Instance Types (forces a CPU endpoint type) | `None` | `[CpuInstanceType.CPU5C_2_4]` |
| `imageName` | Custom Docker image (`CpuServerlessEndpoint` only) | Fixed for `LiveServerless` | `"python:3.11-slim"`, `"my-registry/custom:v1.0"` |
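Analogously, passing `instanceIds` forces a CPU endpoint; a minimal sketch (endpoint name and instance choices are illustrative):

```python
from runpod_flash import LiveServerless, CpuInstanceType

# Hypothetical CPU endpoint: prefer a 5th-gen 2 vCPU / 4GB instance,
# fall back to the 3rd-gen compute-optimized equivalent.
cpu_config = LiveServerless(
    name="etl-worker",
    instanceIds=[CpuInstanceType.CPU5C_2_4, CpuInstanceType.CPU3C_2_4],
    workersMax=2,
)
```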
### Resource class comparison
| Feature | LiveServerless | ServerlessEndpoint | CpuServerlessEndpoint |
|---------|----------------|-------------------|----------------------|
| **Remote code execution** | ✅ Full Python function execution | ❌ Dictionary payload only | ❌ Dictionary payload only |
| **Custom Docker images** | ❌ Fixed optimized images | ✅ Any Docker image | ✅ Any Docker image |
| **Use case** | Dynamic remote functions | Traditional API endpoints | Traditional CPU endpoints |
| **Function returns** | Any Python object | Dictionary only | Dictionary only |
| **@remote decorator** | Full functionality | Limited to payload passing | Limited to payload passing |
### Available GPU types
Some common GPU groups available through `GpuGroup`:
- `GpuGroup.ANY` - Any available GPU (default)
- `GpuGroup.ADA_24` - NVIDIA GeForce RTX 4090
- `GpuGroup.AMPERE_80` - NVIDIA A100 80GB
- `GpuGroup.AMPERE_48` - NVIDIA A40, RTX A6000
- `GpuGroup.AMPERE_24` - NVIDIA RTX A5000, L4, RTX 3090
### Available CPU instance types
- `CpuInstanceType.CPU3G_1_4` - (cpu3g-1-4) 3rd gen general purpose, 1 vCPU, 4GB RAM
- `CpuInstanceType.CPU3G_2_8` - (cpu3g-2-8) 3rd gen general purpose, 2 vCPU, 8GB RAM
- `CpuInstanceType.CPU3G_4_16` - (cpu3g-4-16) 3rd gen general purpose, 4 vCPU, 16GB RAM
- `CpuInstanceType.CPU3G_8_32` - (cpu3g-8-32) 3rd gen general purpose, 8 vCPU, 32GB RAM
- `CpuInstanceType.CPU3C_1_2` - (cpu3c-1-2) 3rd gen compute-optimized, 1 vCPU, 2GB RAM
- `CpuInstanceType.CPU3C_2_4` - (cpu3c-2-4) 3rd gen compute-optimized, 2 vCPU, 4GB RAM
- `CpuInstanceType.CPU3C_4_8` - (cpu3c-4-8) 3rd gen compute-optimized, 4 vCPU, 8GB RAM
- `CpuInstanceType.CPU3C_8_16` - (cpu3c-8-16) 3rd gen compute-optimized, 8 vCPU, 16GB RAM
- `CpuInstanceType.CPU5C_1_2` - (cpu5c-1-2) 5th gen compute-optimized, 1 vCPU, 2GB RAM
- `CpuInstanceType.CPU5C_2_4` - (cpu5c-2-4) 5th gen compute-optimized, 2 vCPU, 4GB RAM
- `CpuInstanceType.CPU5C_4_8` - (cpu5c-4-8) 5th gen compute-optimized, 4 vCPU, 8GB RAM
- `CpuInstanceType.CPU5C_8_16` - (cpu5c-8-16) 5th gen compute-optimized, 8 vCPU, 16GB RAM
### Logging
Flash automatically logs CLI activity to local files during development. Logs are written to `.flash/logs/activity.log` with daily rotation and 30-day retention by default.
**Configuration via environment variables:**
```bash
# Disable file logging (CLI continues with stdout-only)
export FLASH_FILE_LOGGING_ENABLED=false
# Keep only 7 days of logs
export FLASH_LOG_RETENTION_DAYS=7
# Use custom log directory
export FLASH_LOG_DIR=/var/log/flash
```
File logging is automatically disabled in deployed containers. See [flash-logging.md](src/runpod_flash/cli/docs/flash-logging.md) for complete documentation.
## Workflow examples
### Basic GPU workflow
```python
import asyncio

from runpod_flash import remote, LiveServerless

# Simple GPU configuration
gpu_config = LiveServerless(name="example-gpu-server")


@remote(
    resource_config=gpu_config,
    dependencies=["torch", "numpy"]
)
def gpu_compute(data):
    import torch

    # Convert to a tensor and perform the computation on the GPU
    tensor = torch.tensor(data, device="cuda")
    result = tensor.sum().item()

    # Get GPU info
    gpu_info = torch.cuda.get_device_properties(0)
    return {
        "result": result,
        "gpu_name": gpu_info.name,
        "cuda_version": torch.version.cuda,
    }


async def main():
    result = await gpu_compute([1, 2, 3, 4, 5])
    print(f"Result: {result['result']}")
    print(f"Computed on: {result['gpu_name']} with CUDA {result['cuda_version']}")


if __name__ == "__main__":
    asyncio.run(main())
```
### Advanced GPU workflow with template configuration
```python
import asyncio
import base64

from runpod_flash import remote, LiveServerless, GpuGroup, PodTemplate

# Advanced GPU configuration with consolidated template overrides
sd_config = LiveServerless(
    gpus=[GpuGroup.AMPERE_80],  # A100 80GB GPUs
    name="example_image_gen_server",
    template=PodTemplate(containerDiskInGb=100),  # Large disk for models
    workersMax=3,
    idleTimeout=10
)


@remote(
    resource_config=sd_config,
    dependencies=["diffusers", "transformers", "torch", "accelerate", "safetensors"]
)
def generate_image(prompt, width=512, height=512):
    import base64
    import io

    import torch
    from diffusers import StableDiffusionPipeline

    # Load pipeline (benefits from the large container disk)
    pipeline = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16
    )
    pipeline = pipeline.to("cuda")

    # Generate image
    image = pipeline(prompt=prompt, width=width, height=height).images[0]

    # Convert to base64 for return
    buffered = io.BytesIO()
    image.save(buffered, format="PNG")
    img_str = base64.b64encode(buffered.getvalue()).decode()
    return {"image": img_str, "prompt": prompt}


async def main():
    result = await generate_image("A serene mountain landscape at sunset")
    print(f"Generated image for: {result['prompt']}")

    # Save the image locally if needed
    # img_data = base64.b64decode(result["image"])
    # with open("output.png", "wb") as f:
    #     f.write(img_data)


if __name__ == "__main__":
    asyncio.run(main())
```
### Basic CPU workflow
```python
import asyncio

from runpod_flash import remote, LiveServerless, CpuInstanceType

# Simple CPU configuration
cpu_config = LiveServerless(
    name="example-cpu-server",
    instanceIds=[CpuInstanceType.CPU3G_2_8],  # 2 vCPU, 8GB RAM
)


@remote(
    resource_config=cpu_config,
    dependencies=["pandas", "numpy"]
)
def cpu_data_processing(data):
    import platform

    import numpy as np
    import pandas as pd

    # Process data using the CPU
    df = pd.DataFrame(data)
    return {
        "row_count": len(df),
        "column_count": len(df.columns) if not df.empty else 0,
        "mean_values": df.select_dtypes(include=[np.number]).mean().to_dict(),
        "system_info": platform.processor(),
        "platform": platform.platform(),
    }


async def main():
    sample_data = [
        {"name": "Alice", "age": 30, "score": 85},
        {"name": "Bob", "age": 25, "score": 92},
        {"name": "Charlie", "age": 35, "score": 78},
    ]
    result = await cpu_data_processing(sample_data)
    print(f"Processed {result['row_count']} rows on {result['platform']}")
    print(f"Mean values: {result['mean_values']}")


if __name__ == "__main__":
    asyncio.run(main())
```
### Advanced CPU workflow with template configuration
```python
import asyncio

import numpy as np

from runpod_flash import remote, LiveServerless, CpuInstanceType, PodTemplate

# Advanced CPU configuration with template overrides
data_processing_config = LiveServerless(
    name="advanced-cpu-processor",
    instanceIds=[CpuInstanceType.CPU5C_4_8, CpuInstanceType.CPU3C_4_8],  # Fallback options
    template=PodTemplate(
        containerDiskInGb=20,  # Extra disk space for data processing
        env=[{"key": "PYTHONPATH", "value": "/workspace"}]  # Custom environment
    ),
    workersMax=5,
    idleTimeout=15,
    env={"PROCESSING_MODE": "batch", "DEBUG": "false"}  # Additional env vars
)


@remote(
    resource_config=data_processing_config,
    dependencies=["pandas", "numpy", "scipy", "scikit-learn"]
)
def advanced_data_analysis(dataset, analysis_type="full"):
    import platform

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Create DataFrame
    df = pd.DataFrame(dataset)

    # Perform analysis based on type
    results = {
        "platform": platform.platform(),
        "dataset_shape": df.shape,
        "memory_usage": df.memory_usage(deep=True).sum(),
    }

    if analysis_type == "full":
        # Advanced statistical analysis
        numeric_cols = df.select_dtypes(include=[np.number]).columns
        if len(numeric_cols) > 0:
            # Standardize data
            scaler = StandardScaler()
            scaled_data = scaler.fit_transform(df[numeric_cols])

            # PCA analysis
            pca = PCA(n_components=min(len(numeric_cols), 3))
            pca_result = pca.fit_transform(scaled_data)

            results.update({
                "correlation_matrix": df[numeric_cols].corr().to_dict(),
                "pca_explained_variance": pca.explained_variance_ratio_.tolist(),
                "pca_shape": pca_result.shape,
            })

    return results


async def main():
    # Generate a sample dataset
    sample_data = [
        {"feature1": np.random.randn(), "feature2": np.random.randn(),
         "feature3": np.random.randn(), "category": f"cat_{i % 3}"}
        for i in range(1000)
    ]
    result = await advanced_data_analysis(sample_data, "full")
    print(f"Processed dataset with shape: {result['dataset_shape']}")
    print(f"Memory usage: {result['memory_usage']} bytes")
    print(f"PCA explained variance: {result.get('pca_explained_variance', 'N/A')}")


if __name__ == "__main__":
    asyncio.run(main())
```
### Hybrid GPU/CPU workflow
```python
import asyncio

import numpy as np

from runpod_flash import remote, LiveServerless, GpuGroup, CpuInstanceType, PodTemplate

# GPU configuration for model inference
gpu_config = LiveServerless(
    name="ml-inference-gpu",
    gpus=[GpuGroup.AMPERE_24],  # RTX 3090/A5000
    template=PodTemplate(containerDiskInGb=50),  # Space for models
    workersMax=2
)

# CPU configuration for data preprocessing
cpu_config = LiveServerless(
    name="data-preprocessor",
    instanceIds=[CpuInstanceType.CPU5C_4_8],  # 4 vCPU, 8GB RAM
    template=PodTemplate(
        containerDiskInGb=30,
        env=[{"key": "NUMPY_NUM_THREADS", "value": "4"}]
    ),
    workersMax=3
)


@remote(
    resource_config=cpu_config,
    dependencies=["pandas", "numpy", "scikit-learn"]
)
def preprocess_data(raw_data):
    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    # Data cleaning and preprocessing
    df = pd.DataFrame(raw_data)

    # Handle missing values
    df = df.fillna(df.mean(numeric_only=True))

    # Normalize numeric features
    numeric_cols = df.select_dtypes(include=[np.number]).columns
    if len(numeric_cols) > 0:
        scaler = StandardScaler()
        df[numeric_cols] = scaler.fit_transform(df[numeric_cols])

    return {
        "processed_data": df.to_dict("records"),
        "shape": df.shape,
        "columns": list(df.columns),
    }


@remote(
    resource_config=gpu_config,
    dependencies=["torch", "transformers", "numpy"]
)
def run_inference(processed_data):
    import numpy as np
    import torch

    # Simulate ML model inference on the GPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Convert to tensor
    data_array = np.array([list(item.values()) for item in processed_data["processed_data"]])
    tensor = torch.tensor(data_array, dtype=torch.float32).to(device)

    # Simple neural network simulation
    with torch.no_grad():
        # Simulate model computation
        result = torch.nn.functional.softmax(tensor.mean(dim=1), dim=0)
        predictions = result.cpu().numpy().tolist()

    return {
        "predictions": predictions,
        "device_used": str(device),
        "input_shape": tensor.shape,
    }


async def ml_pipeline(raw_dataset):
    """Complete ML pipeline: CPU preprocessing -> GPU inference."""
    print("Step 1: Preprocessing data on CPU...")
    preprocessed = await preprocess_data(raw_dataset)
    print(f"Preprocessed data shape: {preprocessed['shape']}")

    print("Step 2: Running inference on GPU...")
    results = await run_inference(preprocessed)
    print(f"Inference completed on: {results['device_used']}")

    return {
        "preprocessing": preprocessed,
        "inference": results,
    }


async def main():
    # Sample dataset
    raw_data = [
        {"feature1": np.random.randn(), "feature2": np.random.randn(),
         "feature3": np.random.randn(), "label": i % 2}
        for i in range(100)
    ]

    # Run the complete pipeline
    results = await ml_pipeline(raw_data)
    print("\nPipeline Results:")
    print(f"Data processed: {results['preprocessing']['shape']}")
    print(f"Predictions generated: {len(results['inference']['predictions'])}")
    print(f"GPU device: {results['inference']['device_used']}")


if __name__ == "__main__":
    asyncio.run(main())
```
### Multi-stage ML pipeline example
```python
import asyncio

from runpod_flash import remote, LiveServerless

# Configure Runpod resources
runpod_config = LiveServerless(name="multi-stage-pipeline-server")


# Feature extraction on GPU
@remote(
    resource_config=runpod_config,
    dependencies=["torch", "transformers"]
)
def extract_features(texts):
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.to("cuda")

    features = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt").to("cuda")
        with torch.no_grad():
            outputs = model(**inputs)
        # Use the [CLS] token embedding as the feature vector
        features.append(outputs.last_hidden_state[:, 0].cpu().numpy().tolist()[0])
    return features


# Classification on GPU
@remote(
    resource_config=runpod_config,
    dependencies=["torch", "scikit-learn"]
)
def classify(features, labels=None):
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # For inference calls, the first element of `features` is the trained
    # model's coefficients dict; the rest are the feature vectors to score.
    if labels is None and isinstance(features, list) and len(features) > 0 and isinstance(features[0], dict):
        features_np = np.array(features[1:])
    else:
        features_np = np.array(features)

    if labels is not None:
        # Training: fit a classifier and return its parameters
        labels_np = np.array(labels)
        classifier = LogisticRegression()
        classifier.fit(features_np, labels_np)
        coefficients = {
            "coef": classifier.coef_.tolist(),
            "intercept": classifier.intercept_.tolist(),
            "classes": classifier.classes_.tolist(),
        }
        return coefficients
    else:
        # Inference: rebuild the classifier from the passed-in parameters
        coefficients = features[0]
        classifier = LogisticRegression()
        classifier.coef_ = np.array(coefficients["coef"])
        classifier.intercept_ = np.array(coefficients["intercept"])
        classifier.classes_ = np.array(coefficients["classes"])

        # Predict
        predictions = classifier.predict(features_np)
        probabilities = classifier.predict_proba(features_np)
        return {
            "predictions": predictions.tolist(),
            "probabilities": probabilities.tolist(),
        }


# Complete pipeline
async def text_classification_pipeline(train_texts, train_labels, test_texts):
    train_features = await extract_features(train_texts)
    test_features = await extract_features(test_texts)

    model_coeffs = await classify(train_features, train_labels)

    # For inference, pass the model coefficients along with the test features.
    # `classify` expects a list whose first element is the model (coeffs)
    # and whose remaining elements are the features to predict on.
    predictions = await classify([model_coeffs] + test_features)
    return predictions
```
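A possible driver for this pipeline, with made-up texts and labels for illustration:

```python
import asyncio

async def main():
    train_texts = ["great product", "terrible support", "loved it", "awful experience"]
    train_labels = [1, 0, 1, 0]
    test_texts = ["really enjoyed this", "worst purchase ever"]

    # Runs extract_features and classify on the configured Runpod endpoint
    results = await text_classification_pipeline(train_texts, train_labels, test_texts)
    print(results["predictions"])

if __name__ == "__main__":
    asyncio.run(main())
```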
### More examples
You can find many more examples in the [flash-examples repository](https://github.com/runpod/flash-examples).
## Use cases
Flash is well-suited for a diverse range of AI and data processing workloads:
- **Multi-modal AI pipelines**: Orchestrate unified workflows combining text, image, and audio models with GPU acceleration.
- **Distributed model training**: Scale training operations across multiple GPU workers for faster model development.
- **AI research experimentation**: Rapidly protot | text/markdown | null | Runpod <engineer@runpod.io> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"cloudpickle>=3.1.1",
"runpod",
"python-dotenv>=1.0.0",
"pydantic>=2.0.0",
"rich>=14.0.0",
"typer>=0.12.0",
"questionary>=2.0.0",
"pathspec>=0.11.0",
"tomli>=2.0.0; python_version < \"3.11\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:40:26.397212 | runpod_flash-1.3.0.tar.gz | 202,436 | a5/38/56d720d7ecd083e81c2acda2a76d6e86e5bc56c2703a37d89fa6ed185a28/runpod_flash-1.3.0.tar.gz | source | sdist | null | false | 8b24d6b61b9b90658d1f87b256bcb9cb | 3f337e080bbc6c8ffbd9ddbfc479a513f19ec646fe23e7c8af5c6925f8b0a5b6 | a53856d720d7ecd083e81c2acda2a76d6e86e5bc56c2703a37d89fa6ed185a28 | null | [] | 276 |
2.4 | memorymesh | 3.1.0 | The SQLite of AI Memory - an embeddable, zero-dependency AI memory library for any LLM application | # MemoryMesh - The SQLite of AI Memory
<!-- Badges -->
[PyPI](https://pypi.org/project/memorymesh/) · [License: MIT](https://opensource.org/licenses/MIT) · [CI](https://github.com/sparkvibe-io/memorymesh/actions/workflows/ci.yml)
**MemoryMesh** is an embeddable AI memory library with zero required dependencies that gives any LLM application persistent, intelligent memory. Install it with `pip install memorymesh` and add long-term memory to your AI agents in three lines of code. It works with ANY LLM -- Claude, GPT, Gemini, Llama, Ollama, Mistral, and more. Runs everywhere Python runs (Linux, macOS, Windows). All data stays on your machine by default. No servers, no APIs, no cloud accounts required. Privacy-first by design.
---
## Why MemoryMesh?
Every AI application needs memory, but existing solutions come with heavy trade-offs:
| Solution | Approach | Trade-off |
|---|---|---|
| **Mem0** | SaaS / managed service | Requires cloud account, data leaves your machine, ongoing costs |
| **Letta / MemGPT** | Full agent framework | Heavy framework lock-in, complex setup, opinionated architecture |
| **Zep** | Memory server | Requires PostgreSQL, Docker, server infrastructure |
| **MemoryMesh** | **Embeddable library** | **Zero dependencies. Just SQLite. Works anywhere.** |
MemoryMesh takes a fundamentally different approach. Like SQLite revolutionized embedded databases, MemoryMesh brings the same philosophy to AI memory: a simple, reliable, embeddable library that just works. No infrastructure. No lock-in. No surprises.
---
## Quick Start
```python
from memorymesh import MemoryMesh
memory = MemoryMesh()
memory.remember("User prefers Python and dark mode")
results = memory.recall("What does the user prefer?")
```
That is it. Three lines to give your AI application persistent, semantic memory.
---
## How MemoryMesh Saves You Money
Without memory, every AI interaction requires re-sending the full conversation history. As conversations grow, so do your token costs -- linearly, every single turn.
**MemoryMesh flips this model.** Instead of sending thousands of tokens of raw conversation history, you recall only the top-k most relevant memories (typically 3-5 short passages) and inject them as context. The conversation itself stays short.
### Token cost comparison: 20-turn conversation
| Turn | Without Memory (full history) | With MemoryMesh (recall top-5) |
|------|-------------------------------|-------------------------------|
| 1 | ~250 tokens | ~250 tokens |
| 5 | ~1,500 tokens | ~400 tokens |
| 10 | ~4,000 tokens | ~400 tokens |
| 20 | ~10,000 tokens | ~450 tokens |
| 50 | ~30,000 tokens | ~500 tokens |
*Estimates based on typical conversational turns of ~250 tokens each, with MemoryMesh recalling 5 relevant memories (~50 tokens each) per turn.*
### How it works
1. **Store** -- After each interaction, `remember()` the key facts (not the full conversation).
2. **Recall** -- Before the next interaction, `recall()` retrieves only the most relevant memories ranked by semantic similarity, recency, and importance.
3. **Inject** -- Pass the recalled memories as system context to your LLM. The full conversation history is never needed.
**The result:** Your input token count stays roughly constant regardless of how long the conversation has been going. At $3/million input tokens (Claude Sonnet pricing), a 50-turn conversation costs ~$0.09 without memory vs. ~$0.0015 with MemoryMesh -- a **60x reduction**.
This is not just a cost saving. It also means your application stays within context window limits, responds faster (fewer tokens to process), and retrieves only what is actually relevant instead of forcing the LLM to sift through thousands of tokens of conversational noise.
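The arithmetic behind that claim can be checked directly. Token counts come from the table above; the price is the cited $3/million-input-token rate:

```python
PRICE_PER_MILLION_INPUT_TOKENS = 3.00  # cited Claude Sonnet input rate

def input_cost(tokens: int) -> float:
    """Dollar cost of sending `tokens` input tokens at the rate above."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_INPUT_TOKENS

# Turn 50: full conversation history vs. top-5 recalled memories
without_memory = input_cost(30_000)  # ~$0.09
with_memory = input_cost(500)        # ~$0.0015
print(f"{without_memory / with_memory:.0f}x reduction")  # prints "60x reduction"
```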
---
## Installation
```bash
# Base installation (no external dependencies, uses built-in keyword matching)
pip install memorymesh
# With local embeddings (sentence-transformers, runs entirely on your machine)
pip install "memorymesh[local]"
# With Ollama embeddings (connect to a local Ollama instance)
pip install "memorymesh[ollama]"
# With OpenAI embeddings
pip install "memorymesh[openai]"
# Everything
pip install "memorymesh[all]"
```
---
## Features
- **Simple API** -- `remember()`, `recall()`, `forget()`. That is the core interface. No boilerplate, no configuration ceremony.
- **SQLite-Based** -- All memory is stored in SQLite files. No database servers, no infrastructure. Automatic schema migrations keep existing databases up to date.
- **Framework-Agnostic** -- Works with any LLM, any framework, any architecture. Use it with LangChain, LlamaIndex, raw API calls, or your own custom setup.
- **Pluggable Embeddings** -- Choose the embedding provider that fits your needs: local models, Ollama, OpenAI, or plain keyword matching with zero dependencies.
- **Time-Based Decay** -- Memories naturally fade over time, just like human memory. Recent and frequently accessed memories are ranked higher.
- **Auto-Importance Scoring** -- Automatically detect and prioritize key information. MemoryMesh analyzes text for keywords, structure, and specificity to assign importance scores without manual tuning.
- **Episodic Memory** -- Group memories by conversation session. Recall with session context for better continuity across multi-turn interactions.
- **Memory Compaction** -- Detect and merge similar or redundant memories to keep your store lean. Reduces noise and improves recall accuracy over time.
- **Encrypted Storage** -- Optionally encrypt memory text and metadata at rest. All data stays protected on disk using application-level encryption with zero external dependencies.
- **Privacy-First** -- All data stays on your machine by default. No telemetry, no cloud calls, no data collection. You own your data.
- **Cross-Platform** -- Runs on Linux, macOS, and Windows. Anywhere Python runs, MemoryMesh runs.
- **MCP Support** -- Expose memory as an MCP (Model Context Protocol) server for seamless integration with AI assistants.
- **Multi-Tool Sync** -- Sync memories to Claude Code, OpenAI Codex CLI, and Google Gemini CLI simultaneously. Your knowledge follows you across tools.
- **Memory Categories** -- Automatic categorization with scope routing. Preferences and guardrails go to global scope; decisions and patterns stay in the project. MemoryMesh decides where memories belong.
- **Session Start** -- Structured context retrieval at the beginning of every AI session. Returns user profile, guardrails, common mistakes, and project context in one call.
- **Auto-Compaction** -- Transparent deduplication that runs automatically during normal use. Like SQLite's auto-vacuum, you never need to think about it.
- **CLI** -- Inspect, search, export, compact, and manage memories from the terminal. No Python code required.
- **Pin Support** -- Pin critical memories so they never decay and always rank at the top. Use for guardrails and non-negotiable rules.
- **Privacy Guard** -- Automatically detect secrets (API keys, tokens, passwords) before storing. Optionally redact them with `redact=True`.
- **Contradiction Detection** -- Catch conflicting facts when storing new memories. Choose to keep both, update, or skip.
- **Retrieval Filters** -- Filter recall by category, minimum importance, time range, or metadata key-value pairs.
- **Web Dashboard** -- Browse and search all your memories in a local web UI (`memorymesh ui`).
- **Evaluation Suite** -- Built-in tests for recall quality and adversarial robustness.
---
## What's New in v3
- **Pin support** -- `remember("critical rule", pin=True)` sets importance to 1.0 with zero decay.
- **Privacy guard** -- Detects API keys, GitHub tokens, JWTs, AWS keys, passwords, and more. Use `redact=True` to auto-redact before storing.
- **Contradiction detection** -- `on_conflict="update"` replaces contradicting memories; `"skip"` discards the new one; `"keep_both"` flags it.
- **Retrieval filters** -- `recall(query, category="decision", min_importance=0.7, time_range=(...), metadata_filter={...})`.
- **Web dashboard** -- `memorymesh ui` launches a local browser-based memory viewer.
- **Evaluation suite** -- 32 tests covering recall quality, adversarial inputs, scope isolation, and importance ranking.
---
## Works with Any LLM
MemoryMesh is not tied to any specific LLM provider. It works as a memory layer alongside whatever model you use:
```python
from memorymesh import MemoryMesh

memory = MemoryMesh()

# Store memories from any source
memory.remember("User is a senior Python developer")
memory.remember("User is building a healthcare startup")
memory.remember("User prefers concise explanations")

# Recall relevant context before calling ANY LLM
context = memory.recall("What do I know about this user?")

# Use with Claude
response = claude_client.messages.create(
    model="claude-sonnet-4-20250514",
    system=f"User context: {context}",
    messages=[{"role": "user", "content": "Help me design an API"}],
)

# Or GPT
response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"User context: {context}"},
        {"role": "user", "content": "Help me design an API"},
    ],
)

# Or Ollama, Gemini, Mistral, Llama, or literally anything else
```
---
## Documentation
**Full documentation:** [**sparkvibe-io.github.io/memorymesh**](https://sparkvibe-io.github.io/memorymesh/)
| Guide | Description |
|---|---|
| **[Configuration](https://sparkvibe-io.github.io/memorymesh/configuration/)** | Embedding providers, Ollama setup, all constructor options |
| **[MCP Server](https://sparkvibe-io.github.io/memorymesh/mcp-server/)** | Setup for Claude Code, Cursor, Windsurf + teaching your AI to use memory |
| **[Multi-Tool Sync](https://sparkvibe-io.github.io/memorymesh/multi-tool-sync/)** | Sync memories across Claude, Codex, and Gemini CLI |
| **[CLI Reference](https://sparkvibe-io.github.io/memorymesh/cli/)** | Terminal commands for inspecting and managing memories |
| **[API Reference](https://sparkvibe-io.github.io/memorymesh/api/)** | Full Python API with all methods and parameters |
| **[Architecture](https://sparkvibe-io.github.io/memorymesh/architecture/)** | System design, dual-store pattern, and schema migrations |
| **[FAQ](https://sparkvibe-io.github.io/memorymesh/faq/)** | Common questions answered |
| **[Benchmarks](https://sparkvibe-io.github.io/memorymesh/benchmarks/)** | Performance numbers and how to run benchmarks |
---
## Roadmap
We are currently on **v3.0 -- Intelligent Memory**. Next up:
### v4.0 -- Adaptive Memory
- Smart sync -- export top-N most relevant memories, not all
- Auto-remember via hooks/triggers -- no system prompt instructions needed
- Graph-based memory relationships
- Plugin system for custom relevance strategies
### v5.0 -- Anticipatory Intelligence
- Question and behavioral learning across sessions
- Proactive anticipation -- AI that knows what you need before you ask
- Multi-device sync
- Cross-session episodic continuity
See the [full roadmap](https://github.com/sparkvibe-io/memorymesh/blob/main/ROADMAP.md) for version history and completed milestones.
---
## Contributing
We welcome contributions from everyone. See [CONTRIBUTING.md](https://github.com/sparkvibe-io/memorymesh/blob/main/CONTRIBUTING.md) for guidelines on how to get started.
---
## License
MIT License. See [LICENSE](https://github.com/sparkvibe-io/memorymesh/blob/main/LICENSE) for the full text.
---
## Built for Humanity
MemoryMesh is part of the [SparkVibe](https://github.com/sparkvibe-io) open-source AI initiative. We believe that foundational AI tools should be free, open, and accessible to everyone -- not locked behind paywalls, cloud subscriptions, or proprietary platforms.
Our mission is to reduce the cost and complexity of building AI applications, so that developers everywhere -- whether at a startup, a research lab, a nonprofit, or learning on their own -- can build intelligent systems without barriers.
If AI is going to shape the future, the tools that power it should belong to all of us.
| text/markdown | null | SparkVibe <hello@sparkvibe.io> | null | null | null | agent, ai, embeddings, llm, memory, rag, semantic-memory, sqlite | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"openai>=1.0; extra == \"all\"",
"sentence-transformers>=2.0; extra == \"all\"",
"torch; extra == \"all\"",
"mypy; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mkdocs-material>=9.0; extra == \"docs\"",
"sentence-transformers>=2.0; extra == \"local\"",
"torch; extra == \"local\"",
"openai>=1.0; extra == \"openai\""
] | [] | [] | [] | [
"Homepage, https://github.com/sparkvibe-io/memorymesh",
"Repository, https://github.com/sparkvibe-io/memorymesh",
"Issues, https://github.com/sparkvibe-io/memorymesh/issues",
"Documentation, https://sparkvibe-io.github.io/memorymesh/"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-20T23:40:08.229497 | memorymesh-3.1.0.tar.gz | 222,855 | ed/ff/8ace240ce687009fb14429339ba1589c6970ebd87a70f5af8d2fb0fa70e3/memorymesh-3.1.0.tar.gz | source | sdist | null | false | 828bf33ef9c234ebd48c234819f93beb | 7a46b59e6624f2a6c6461e3591af069ab5d1f908d246162378b79675f1efec6d | edff8ace240ce687009fb14429339ba1589c6970ebd87a70f5af8d2fb0fa70e3 | MIT | [
"LICENSE"
] | 233 |
2.4 | litellm-proxy-extras | 0.4.45 | Additional files for the LiteLLM Proxy. Reduces the size of the main litellm package. | Additional files for the proxy. Reduces the size of the main litellm package.
Currently, only stores the migration.sql files for litellm-proxy.
To install, run:
```bash
pip install litellm-proxy-extras
```
OR
```bash
pip install litellm[proxy] # installs litellm-proxy-extras and other proxy dependencies
```
To use the migrations, run:
```bash
litellm --use_prisma_migrate
```
| text/markdown | BerriAI | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | !=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://litellm.ai",
"Homepage, https://litellm.ai",
"repository, https://github.com/BerriAI/litellm",
"Repository, https://github.com/BerriAI/litellm",
"documentation, https://docs.litellm.ai",
"Documentation, https://docs.litellm.ai"
] | poetry/2.2.1 CPython/3.12.8 Darwin/25.2.0 | 2026-02-20T23:39:55.978364 | litellm_proxy_extras-0.4.45.tar.gz | 26,746 | fd/ed/2d111d994a1c11b9a55d8856e2e219c0595b0dc2615f42fe3aa627cce146/litellm_proxy_extras-0.4.45.tar.gz | source | sdist | null | false | db6f1933a58795447275a16f2fe71d23 | 7edf9c616b65d3d9700b59ea8fcb48466956c8608b9870119b0e77f84afcd0a0 | fded2d111d994a1c11b9a55d8856e2e219c0595b0dc2615f42fe3aa627cce146 | null | [] | 7,499 |
2.4 | diagrid | 0.1.10 | Diagrid namespace package | # Diagrid
**Durable AI Agents with Diagrid Catalyst**
The `diagrid` package is the primary SDK for building durable, fault-tolerant AI agents using [Diagrid Catalyst](https://www.diagrid.io/catalyst). It integrates seamlessly with popular agent frameworks, wrapping them in Dapr Workflows to ensure your agents can recover from failures, persist state across restarts, and scale effectively.
Get started with [Catalyst for free](https://diagrid.ws/get-catalyst).
## Community
Have questions, hit a bug, or want to share what you're building? Join the [Diagrid Community Discord](https://diagrid.ws/diagrid-community) to connect with the team and other users.
## Features
- **Multi-Framework Support:** Native integrations for LangGraph, CrewAI, Google ADK, Strands, and OpenAI Agents.
- **Durability:** Agent state is automatically persisted. If your process crashes, the agent resumes from the last successful step.
- **Fault Tolerance:** Built-in retries and error handling powered by Dapr.
- **Observability:** Deep insights into agent execution, tool calls, and state transitions.
## Installation
Install the base package along with the extension for your chosen framework:
```bash
# For LangGraph
pip install "diagrid[langgraph]"
# For CrewAI
pip install "diagrid[crewai]"
# For Google ADK
pip install "diagrid[adk]"
# For Strands
pip install "diagrid[strands]"
# For OpenAI Agents
pip install "diagrid[openai_agents]"
```
## Prerequisites
- **Python:** 3.11 or higher
- **Dapr:** [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/) installed and initialized (`dapr init`).
## CLI
The `diagrid` package includes the `diagridpy` CLI — a tool for setting up your local development environment and deploying agents to Kubernetes with a single command.
### `diagridpy init`
Bootstraps a complete local development environment in one step:
1. **Authenticates** with Diagrid Catalyst (browser-based device code flow or API key)
2. **Creates a Catalyst project** to manage your agent's AppID and connection details
3. **Clones a quickstart template** for your chosen framework into a new directory
4. **Provisions a local Kubernetes cluster** using [kind](https://kind.sigs.k8s.io/)
5. **Installs the `catalyst-agents` Helm chart** — Dapr, observability stack, Redis, and LLM backend
6. **Creates a Catalyst AppID** with a provisioned API token
```bash
# Initialize with the default framework (dapr-agents)
diagridpy init my-project
# Initialize with a specific framework
diagridpy init my-project --framework langgraph
# Use an API key instead of browser auth
diagridpy init my-project --framework crewai --api-key <YOUR_KEY>
```
Supported frameworks: `dapr-agents`, `langgraph`, `crewai`, `adk`, `strands`, `openai-agents`
### `diagridpy deploy`
Builds your agent image, loads it into the local cluster, and deploys it with the correct Catalyst connection details automatically injected as environment variables.
```bash
# Build and deploy from the current directory (requires a Dockerfile)
diagridpy deploy
# Deploy and immediately trigger the agent with a prompt
diagridpy deploy --trigger "Plan a trip to Paris"
# Override image name, tag, or target project
diagridpy deploy --image my-agent --tag v1 --project my-project
```
Run `diagridpy --help` or `diagridpy <command> --help` to see all available options.
## Quick Start
### LangGraph
Wrap your LangGraph `StateGraph` with `DaprWorkflowGraphRunner` to make it durable.
```python
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END, MessagesState
from diagrid.agent.langgraph import DaprWorkflowGraphRunner
@tool
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Sunny in {city}, 72F"
tools = [get_weather]
tools_by_name = {t.name: t for t in tools}
model = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
def call_model(state: MessagesState) -> dict:
response = model.invoke(state["messages"])
return {"messages": [response]}
def call_tools(state: MessagesState) -> dict:
last_message = state["messages"][-1]
results = []
for tc in last_message.tool_calls:
result = tools_by_name[tc["name"]].invoke(tc["args"])
results.append(
ToolMessage(content=str(result), tool_call_id=tc["id"])
)
return {"messages": results}
def should_use_tools(state: MessagesState) -> str:
last_message = state["messages"][-1]
if hasattr(last_message, "tool_calls") and last_message.tool_calls:
return "tools"
return "__end__"
graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_node("tools", call_tools)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_use_tools)
graph.add_edge("tools", "agent")
runner = DaprWorkflowGraphRunner(graph=graph.compile())
runner.serve(
port=int(os.environ.get("APP_PORT", "5001")),
input_mapper=lambda req: {"messages": [HumanMessage(content=req["task"])]},
)
```
### CrewAI
Wrap your CrewAI `Agent` with `DaprWorkflowAgentRunner`.
```python
import os
from crewai import Agent
from crewai.tools import tool
from diagrid.agent.crewai import DaprWorkflowAgentRunner
@tool("Get weather")
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Sunny in {city}, 72F"
agent = Agent(
role="Assistant",
goal="Help users",
backstory="Expert assistant",
tools=[get_weather],
llm="openai/gpt-4o-mini",
)
runner = DaprWorkflowAgentRunner(agent=agent)
runner.serve(port=int(os.environ.get("APP_PORT", "5001")))
```
### Google ADK
Use `DaprWorkflowAgentRunner` to execute Google ADK agents as workflows.
```python
import os
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool
from diagrid.agent.adk import DaprWorkflowAgentRunner
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Sunny in {city}, 72F"
agent = LlmAgent(
name="assistant",
model="gemini-2.0-flash",
tools=[FunctionTool(get_weather)],
)
runner = DaprWorkflowAgentRunner(agent=agent)
runner.serve(port=int(os.environ.get("APP_PORT", "5001")))
```
### Strands
Use the `DaprWorkflowAgentRunner` wrapper for Strands.
```python
import os
from strands import Agent, tool
from strands.models.openai import OpenAIModel
from diagrid.agent.strands import DaprWorkflowAgentRunner
@tool
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Weather in {city}: Sunny, 72F"
agent = Agent(
model=OpenAIModel(model_id="gpt-4o-mini"),
tools=[get_weather],
system_prompt="You are a helpful assistant.",
)
runner = DaprWorkflowAgentRunner(agent=agent)
runner.serve(port=int(os.environ.get("APP_PORT", "5001")))
```
### OpenAI Agents
Use the `DaprWorkflowAgentRunner` wrapper for OpenAI Agents.
```python
import os
from agents import Agent, function_tool
from diagrid.agent.openai_agents import DaprWorkflowAgentRunner
@function_tool
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Sunny in {city}, 72F"
agent = Agent(
name="assistant",
instructions="You are a helpful assistant.",
model="gpt-4o-mini",
tools=[get_weather],
)
runner = DaprWorkflowAgentRunner(agent=agent)
runner.serve(port=int(os.environ.get("APP_PORT", "5001")))
```
## How It Works
This SDK leverages [Dapr Workflows](https://docs.dapr.io/developing-applications/building-blocks/workflow/) to orchestrate agent execution.
1. **Orchestration:** The agent's control loop is modeled as a workflow.
2. **Activities:** Each tool execution or LLM call is modeled as a durable activity.
3. **State Store:** Dapr saves the workflow state to a configured state store (e.g., Redis, CosmosDB) after every step.
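The durability model can be illustrated with a toy checkpoint loop. This is a conceptual sketch only: the real SDK delegates persistence and replay to Dapr Workflows, and `agent_state.json` and `durable_step` are hypothetical names invented for this illustration.

```python
import json
import os

CHECKPOINT_FILE = "agent_state.json"  # hypothetical path, for illustration only

def load_checkpoint() -> dict:
    """Return previously persisted step results, if any."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {}

def durable_step(name: str, fn, state: dict):
    """Run fn once; on replay, reuse the persisted result instead of re-executing."""
    if name in state:
        return state[name]          # replay: skip the side effect entirely
    result = fn()
    state[name] = result
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)         # persist after every step, like a workflow state store
    return result

state = load_checkpoint()
weather = durable_step("get_weather", lambda: "Sunny in Paris, 72F", state)
reply = durable_step("llm_call", lambda: f"The forecast: {weather}", state)
print(reply)
```

If the process crashes between the two steps, rerunning the script replays `get_weather` from the checkpoint rather than executing it again, which is the same resume-from-last-successful-step behavior the workflow engine provides.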
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"dapr>=1.16.0",
"dapr-ext-workflow>=1.16.0",
"diagrid-cli>=0.1.0",
"dapr-agents>=0.10.6; extra == \"agent-core\"",
"dapr-ext-langgraph>=1.17.0rc3; extra == \"agent-core\"",
"dapr-ext-strands>=1.17.0rc3; extra == \"agent-core\"",
"fastapi>=0.129.0; extra == \"agent-core\"",
"uvicorn>=0.41.0; extra == \"agent-core\"",
"pydantic>=2.12.5; extra == \"agent-core\"",
"langgraph>=0.3.6; extra == \"agent-core\"",
"langgraph>=1.0.8; extra == \"langgraph\"",
"langchain-core>=0.3.0; extra == \"langgraph\"",
"diagrid[agent-core]; extra == \"langgraph\"",
"crewai>=1.6.1; extra == \"crewai\"",
"litellm>=1.0.0; extra == \"crewai\"",
"diagrid[agent-core]; extra == \"crewai\"",
"google-adk>=1.0.0; extra == \"adk\"",
"diagrid[agent-core]; extra == \"adk\"",
"openai>=2.21.0; extra == \"strands\"",
"strands-agents>=1.26.0; extra == \"strands\"",
"diagrid[agent-core]; extra == \"strands\"",
"openai-agents>=0.1.0; extra == \"openai-agents\"",
"openai>=1.0.0; extra == \"openai-agents\"",
"diagrid[agent-core]; extra == \"openai-agents\"",
"diagrid[adk,crewai,langgraph,openai_agents,strands]; extra == \"all\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:39:45.924052 | diagrid-0.1.10.tar.gz | 122,917 | c5/5f/6c529cf431c5de702e4109537a0925a629fc6aa51ccef45c6582b89ec8d2/diagrid-0.1.10.tar.gz | source | sdist | null | false | 3bb70d5fa2667bc7edc6ce83da68c155 | cc2ba51cb63946d5270fe4b4f8a7b48a4ae0652e29d20913af190d52e46ee264 | c55f6c529cf431c5de702e4109537a0925a629fc6aa51ccef45c6582b89ec8d2 | null | [] | 226 |
2.4 | diagrid-cli | 0.1.10 | Diagrid CLI - Command-line interface for Diagrid Catalyst agents | # Diagrid CLI
The Diagrid CLI (`diagrid-cli`) is a command-line tool for managing Diagrid Catalyst resources, deploying agents, and handling infrastructure tasks.
## Community
Have questions, hit a bug, or want to share what you're building? Join the [Diagrid Community Discord](https://diagrid.ws/diagrid-community) to connect with the team and other users.
## Installation
The CLI is installed automatically when you install the main `diagrid` package. You can also install it standalone:
```bash
pip install diagrid-cli
```
## Usage
The CLI provides several command groups for different tasks. Run `diagrid --help` to see all available commands.
### Common Commands
#### Initialization
Initialize a new Diagrid Catalyst project.
```bash
diagrid init
```
#### Deployment
Deploy your agent to a target environment.
```bash
# Deploy to the currently configured context
diagrid deploy
```
#### Infrastructure
Manage local development infrastructure using Kind (Kubernetes in Docker) and Helm.
```bash
# Check if required tools (Docker, Helm, Kind, Kubectl) are installed
diagrid infra check
# Set up a local development cluster
diagrid infra setup
```
## Configuration
The CLI manages configuration and authentication contexts.
- **Authentication:** Supports API key and device code authentication flows for connecting to Diagrid Catalyst.
- **Contexts:** Switch between different environments (e.g., local, dev, prod).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"diagrid-core>=0.1.0",
"click>=8.1.0",
"rich>=13.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:39:35.610312 | diagrid_cli-0.1.10.tar.gz | 20,942 | 25/9d/f71e5f6770d166dcbb2e6a0880fc58f7789469e46b398b2007d25d41d037/diagrid_cli-0.1.10.tar.gz | source | sdist | null | false | 1c9fa2bfeba39272f2a6df8a3371a03d | dc194791498a8956742f770da0896cf3eef99e777d5a9fe55ed0dbd93db6de92 | 259df71e5f6770d166dcbb2e6a0880fc58f7789469e46b398b2007d25d41d037 | null | [] | 231 |
2.4 | diagrid-core | 0.1.10 | Diagrid Core - Shared auth and Catalyst API client | # Diagrid Core
`diagrid-core` is the foundational library for Diagrid Catalyst Python SDKs. It provides shared utilities for authentication, configuration management, and API client interactions.
**Note:** This package is primarily intended for internal use by `diagrid` and `diagrid-cli`, or for advanced users building custom integrations with the Diagrid Catalyst API.
## Community
Have questions, hit a bug, or want to share what you're building? Join the [Diagrid Community Discord](https://diagrid.ws/diagrid-community) to connect with the team and other users.
## Installation
```bash
pip install diagrid-core
```
## Features
- **Authentication:** Handles Catalyst API authentication, including API Key management and OAuth2 device code flows.
- **API Client:** A robust HTTP client for interacting with Diagrid Catalyst services, built on `httpx`.
- **Configuration:** Manages global configuration settings, environment variables, and context persistence.
- **Type Safety:** Fully typed with modern Python type hints.
## Requirements
- Python 3.11+
- `httpx`
- `pydantic`
- `pyjwt`
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"pyjwt[crypto]>=2.8.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:39:25.988381 | diagrid_core-0.1.10.tar.gz | 11,916 | 66/46/b4302057e26a21c9b1a2be256de93212a5447fdb1df33e3ddd279c27d2f1/diagrid_core-0.1.10.tar.gz | source | sdist | null | false | 4e785707b6370aa139231cb10aa8938e | 03de7c50c70b397ce3740b79421451b641fa0cae17068c39aa315af72ad0487d | 6646b4302057e26a21c9b1a2be256de93212a5447fdb1df33e3ddd279c27d2f1 | null | [] | 237 |
2.4 | lbt-grasshopper | 1.9.53 | Collection of all Ladybug Tools plugins for Grasshopper | [](https://github.com/ladybug-tools/lbt-grasshopper/actions)
[](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# lbt-grasshopper
Collection of all Ladybug Tools plugins for Grasshopper.
Note that this repository and the corresponding Python package do not contain any
code; they simply exist to provide a shortcut for installing all of the Grasshopper
plugin packages together. The repository CI also manages the assignment of version
numbers to the totality of the Grasshopper plugins.
## Included Grasshopper Plugins
Running `pip install lbt-grasshopper` will result in the installation of the
following Grasshopper plugin packages:
* [ladybug-grasshopper](https://github.com/ladybug-tools/ladybug-grasshopper)
* [honeybee-grasshopper-core](https://github.com/ladybug-tools/honeybee-grasshopper-core)
* [honeybee-grasshopper-radiance](https://github.com/ladybug-tools/honeybee-grasshopper-radiance)
* [honeybee-grasshopper-energy](https://github.com/ladybug-tools/honeybee-grasshopper-energy)
* [dragonfly-grasshopper](https://github.com/ladybug-tools/dragonfly-grasshopper)
All of the repositories above contain only Grasshopper components and their
source code.
## Installation
See [the wiki of this repository](https://github.com/ladybug-tools/lbt-grasshopper/wiki)
for a list of instructions to install the Grasshopper plugin for free.
Alternatively, you can use the [Pollination Grasshopper single-click installer](https://www.pollination.cloud/grasshopper-plugin) to install the Ladybug Tools plugin for free.
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/lbt-grasshopper | null | null | [] | [] | [] | [
"ladybug-grasshopper==1.70.6",
"honeybee-grasshopper-core==1.43.5",
"honeybee-grasshopper-radiance==1.36.4",
"honeybee-grasshopper-energy==1.59.8",
"dragonfly-grasshopper==1.64.2",
"fairyfly-grasshopper==0.5.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T23:39:11.210709 | lbt_grasshopper-1.9.53.tar.gz | 780,513 | f0/0b/803a2cb0c19c6229c4a8bcfd48ed0a0b434bb4319ed74c2424ff6f78c7f3/lbt_grasshopper-1.9.53.tar.gz | source | sdist | null | false | 5b0493ad157b92b4088021a22275cba4 | 746d69b96d65b98bcbf7fae07566af44dc1bba2709e372dc3f2c0484e98d73bf | f00b803a2cb0c19c6229c4a8bcfd48ed0a0b434bb4319ed74c2424ff6f78c7f3 | null | [
"LICENSE"
] | 259 |
2.4 | rayobrowse | 0.1.31 | Lightweight client SDK for the Rayobrowse stealth browser platform | <p align="center">
<img src="assets/rayobrowse.png" alt="rayobrowse">
</p>
<p align="center">
<em>Self-hosted Chromium stealth browser for web scraping and automation.</em>
</p>
## Overview
rayobrowse is a Chromium-based stealth browser for web scraping, AI agents, and automation workflows. It runs on headless Linux servers (no GPU required) and works with any tool that speaks CDP: Playwright, Puppeteer, Selenium, OpenClaw, Scrapy, and custom automation scripts.
Standard headless Chromium gets blocked immediately by modern bot detection. rayobrowse fixes this with realistic fingerprints (user agent, screen resolution, WebGL, fonts, timezone, and dozens of other signals) that make each session look like a real device.
It runs inside Docker (x86_64 and ARM64) and is actively used in production on [Rayobyte's scraping API](https://rayobyte.com/products/web-scraping-api) to scrape millions of pages per day across some of the most difficult, high-value websites.
---
## Quick Start
**1. Set up environment**
```bash
cp .env.example .env
```
Open `.env` and set `STEALTH_BROWSER_ACCEPT_TERMS=true` to confirm you agree to the [LICENSE](LICENSE). The daemon will not create browsers until this is set.
**2. Start the container**
```bash
docker compose up -d
```
Docker automatically pulls the correct image for your architecture (x86_64 or ARM64).
**3. Connect and automate**
Any CDP client can connect directly to the `/connect` endpoint. No SDK install required.
```python
# pip install playwright && playwright install
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
browser = p.chromium.connect_over_cdp(
"ws://localhost:9222/connect?headless=false&os=windows"
)
page = browser.new_context().new_page()
page.goto("https://example.com")
print(page.title())
input("Browser open — view at http://localhost:6080/vnc.html. Press Enter to close...")
browser.close()
```
View the browser live at [http://localhost:6080/vnc.html](http://localhost:6080/vnc.html) (noVNC).
For more control (listing, deleting, managing multiple browsers), install the Python SDK:
```bash
pip install -r requirements.txt
python examples/playwright_example.py
```
---
## Upgrading
To upgrade to the latest version of rayobrowse:
```bash
# Pull the latest Docker image and restart the container
docker compose pull && docker compose up -d
# Upgrade the Python SDK
pip install --upgrade -r requirements.txt
```
The Docker image and Python SDK are versioned independently:
- **Docker image** (`rayobyte/rayobrowse:latest`) — contains Chromium binary, fingerprint engine, daemon server
- **Python SDK** (`rayobrowse` on PyPI) — lightweight client for `create_browser()`
Both are updated regularly. The SDK maintains backward compatibility with older daemon versions, but upgrading both together is recommended for the best experience.
---
## Requirements
- **Docker** — the browser runs inside a container
- **Python 3.10+** — for the SDK client and examples
- **2GB+ RAM** available (~300MB per browser instance)
Works on Linux, Windows (native or WSL2), and macOS. Both **x86_64 (amd64)** and **ARM64 (Apple Silicon, AWS Graviton)** are supported — the Docker image is built and tested for both architectures, and Docker automatically pulls the correct one.
### What's in the pip package vs. the Docker image
| Component | Where it lives |
| --- | --- |
| `rayobrowse` Python SDK (`create_browser()`, client) | `pip install rayobrowse` — lightweight, pure-Python |
| Chromium binary, fingerprint engine, daemon server | Docker image (`rayobyte/rayobrowse`) |
The SDK is intentionally minimal — it issues HTTP requests to the daemon and returns CDP WebSocket URLs. All browser-level logic runs inside the container.
---
## Why This Exists
Browser automation is becoming the backbone of web interaction, not just for scraping, but for AI agents, workflow automation, and any tool that needs to navigate the real web. Projects like OpenClaw, Scrapy, Firecrawl, and dozens of others all need a browser to do their job. The problem is that standard headless Chromium gets detected and blocked by most websites. Every one of these tools hits the same wall.
rayobrowse gives them a browser that actually works. It looks like a real device, with a matching fingerprint across user agent, screen resolution, WebGL, fonts, timezone, and every other signal that detection systems check. Any tool that speaks CDP (Chrome DevTools Protocol) can connect and automate without getting blocked.
We needed a browser that:
- Uses **Chromium** (71% browser market share, blending in is key)
- Runs reliably on **headless Linux servers** with no GPU
- Works with **any CDP client** (Playwright, Selenium, Puppeteer, AI agents, custom tools)
- Uses real-world, diverse fingerprints
- Can be deployed and updated at scale
- Is commercially maintained long-term
Since no existing solution met these requirements, we built rayobrowse. It's developed as part of [our scraping platform](https://rayobyte.com/products/web-scraping-api), so it'll be commercially supported and up-to-date with the latest anti-scraping techniques.
---
## Architecture
<p align="center">
<img src="assets/architecture.png" alt="rayobrowse architecture">
</p>
rayobrowse runs as a Docker container that bundles the custom Chromium binary, fingerprint engine, and a daemon server. Your code runs on the host and connects over CDP:
There are two ways to get a browser:
| Method | How it works | Best for |
| --- | --- | --- |
| **`/connect` endpoint** | Connect to `ws://localhost:9222/connect?headless=true&os=windows`. A stealth browser is auto-created on connection and cleaned up on disconnect. | Third-party tools (OpenClaw, Scrapy, Firecrawl), quick scripts, any CDP client |
| **Python SDK** | Call `create_browser()` to get a CDP WebSocket URL, then connect with your automation library. | Fine-grained control, multiple browsers, custom lifecycle management |
The `/connect` endpoint is the simplest path. Point any CDP-capable tool at a single static URL and it just works. The Python SDK gives you more control over browser creation, listing, and deletion.
The noVNC viewer on `:6080` lets you watch browser sessions in real time, useful for debugging and demos.
Zero system dependencies on your host machine beyond Docker. No Xvfb, no font packages, no Chromium install.
---
## How It Works
### Chromium Fork
rayobrowse tracks upstream Chromium releases and applies a focused set of patches (using a [plaster approach similar to Brave](https://github.com/brave/brave-core/blob/master/tools/cr/plaster.py)):
- Normalize and harden exposed browser APIs
- Reduce fingerprint entropy leaks
- Improve automation compatibility
- Preserve native Chromium behavior where possible
Updates are continuously validated against internal test targets before release.
### Fingerprint Injection
At startup, each session is assigned a real-world device profile covering:
- User agent, platform, and OS metadata
- Screen resolution and media features
- Graphics and rendering attributes (Canvas, WebGL)
- Fonts matching the target OS
- Locale, timezone, and WebRTC configuration
Profiles are selected dynamically from a database of thousands of real-world fingerprints collected using the same techniques that major anti-bot companies use.
### Automation Layer
rayobrowse exposes standard Chromium interfaces and avoids non-standard hooks that increase detection risk. Automation connects through native CDP and operates on unmodified page contexts — your existing Playwright, Selenium, and Puppeteer scripts work as-is.
### CI & Validation
Every release passes through automated testing including fingerprint consistency checks, detection regression tests, and stability benchmarks. Releases are only published once they pass all validation stages.
---
## Features
### Fingerprint Spoofing
Use your own static fingerprint or pull from our database of thousands of real-world fingerprints. Vectors emulated include:
- OS (Windows, Android thoroughly tested; macOS and Linux experimental)
- WebRTC and DNS leak protection
- Canvas and WebGL
- Fonts (matched to target OS)
- Screen resolution
- `hardwareConcurrency`
- Timezone matching with proxy geolocation (via MaxMind GeoLite2)
- ...and much more
### Human Mouse
Optional human-like mouse movement and clicking, inspired by [HumanCursor](https://github.com/riflosnake/HumanCursor). Use Playwright's `page.click()` and `page.mouse.move()` as you normally do — our system applies natural mouse curves and realistic click timing automatically.

### Proxy Support
Route traffic through any HTTP proxy, just as you would with standard Playwright.
### Headless or Headful
Run headful mode on headless Linux servers via Xvfb (handled inside the container). Watch sessions live through the built-in noVNC viewer.
---
## Usage
rayobrowse works with **Playwright, Selenium, Puppeteer**, and any tool that speaks CDP. See the [`examples/`](examples/) folder for ready-to-run scripts.
### Using `/connect` (simplest)
Connect any CDP client directly to the `/connect` endpoint. No SDK needed.
```python
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
browser = p.chromium.connect_over_cdp(
"ws://localhost:9222/connect?headless=true&os=windows"
)
page = browser.new_context().new_page()
page.goto("https://example.com")
print(page.title())
browser.close()
```
Customize the browser via query parameters:
```
ws://localhost:9222/connect?headless=true&os=windows&proxy=http://user:pass@host:port
```
All `/connect` parameters:
| Parameter | Default | Description |
|-----------|---------|-------------|
| `headless` | `true` | `true` or `false` |
| `os` | `linux` | Fingerprint OS: `windows`, `linux`, `android`, `macos` |
| `browser_name` | `chrome` | Browser fingerprint type |
| `browser_version_min` | *(latest)* | Minimum Chrome version |
| `browser_version_max` | *(latest)* | Maximum Chrome version |
| `proxy` | *(none)* | Proxy URL, e.g. `http://user:pass@host:port` |
| `browser_language` | *(auto)* | Accept-Language value |
| `ui_language` | *(auto)* | Browser UI locale |
| `screen_width_min` | *(auto)* | Minimum screen width |
| `screen_height_min` | *(auto)* | Minimum screen height |
| `api_key` | *(none)* | Required in remote mode |
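The parameters above compose into a single query string, so a small helper can build `/connect` URLs programmatically. This is a sketch using only the standard library; `connect_url` is a hypothetical helper, not part of the SDK.

```python
from urllib.parse import urlencode

def connect_url(host: str = "localhost:9222", **params) -> str:
    """Build a /connect WebSocket URL from the query parameters documented above.

    Booleans are lowercased to match the `headless=true` style used by the daemon.
    """
    normalized = {
        k: (str(v).lower() if isinstance(v, bool) else str(v))
        for k, v in params.items()
        if v is not None
    }
    query = urlencode(normalized)
    return f"ws://{host}/connect" + (f"?{query}" if query else "")

url = connect_url(headless=True, os="windows", proxy="http://user:pass@host:port")
print(url)
```

Note that `urlencode` percent-encodes the proxy URL's `://`, `:`, and `@` characters, which is the correct way to embed one URL inside another's query string.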
### Using the Python SDK
For more control over the browser lifecycle, use the Python SDK (`pip install -r requirements.txt`).
```python
from rayobrowse import create_browser
from playwright.sync_api import sync_playwright
ws_url = create_browser(headless=False, target_os="windows")
with sync_playwright() as p:
browser = p.chromium.connect_over_cdp(ws_url)
page = browser.contexts[0].pages[0]
page.goto("https://example.com")
browser.close()
```
#### With Proxy
```python
ws_url = create_browser(
headless=False,
target_os="windows",
proxy="http://user:pass@proxy.example.com:8000",
)
```
#### Specific Fingerprint Version
```python
ws_url = create_browser(
headless=False,
target_os="windows",
browser_name="chrome",
browser_version_min=144,
browser_version_max=144,
)
```
#### Multiple Browsers
```python
from rayobrowse import create_browser
from playwright.sync_api import sync_playwright
urls = [create_browser(headless=False, target_os="windows") for _ in range(3)]
with sync_playwright() as p:
for ws_url in urls:
browser = p.chromium.connect_over_cdp(ws_url)
browser.contexts[0].pages[0].goto("https://example.com")
input("Press Enter to close all...")
```
#### Static Fingerprint Files
For deterministic environments, fingerprints can be loaded from disk:
```python
ws_url = create_browser(
fingerprint_file="fingerprints/windows_chrome.json"
)
```
Because anti-bot companies monitor repositories like ours, we don't publish fingerprint templates. Contact us at [support@rayobyte.com](mailto:support@rayobyte.com) and we'll send one over.
---
## Integrations
rayobrowse works with any tool that supports CDP. These guides walk through setup and include working examples:
| Tool | What it does | Guide |
| --- | --- | --- |
| **OpenClaw** | AI agent framework for browser automation | [`integrations/openclaw/`](integrations/openclaw/) |
| **Scrapy** | Web scraping framework with `scrapy-playwright` | [`integrations/scrapy/`](integrations/scrapy/) |
| **Playwright** | Browser automation library (Python, Node, .NET) | [`examples/playwright_example.py`](examples/playwright_example.py) |
| **Selenium** | Browser automation via WebDriver/CDP | [`examples/selenium_example.py`](examples/selenium_example.py) |
| **Puppeteer** | Node.js browser automation | [`examples/puppeteer_example.js`](examples/puppeteer_example.js) |
All integrations use the `/connect` endpoint, so there's nothing extra to install beyond the tool itself and a running rayobrowse container.
More integrations (Firecrawl, LangChain, etc.) are coming. If you have a specific tool you'd like supported, open an [issue](https://github.com/rayobyte-data/rayobrowse/issues).
---
## API Reference
### `create_browser(**kwargs) -> str`
Returns a CDP WebSocket URL. Connect to it with Playwright, Selenium, or Puppeteer.
| Parameter | Type | Default | Description |
| --------------------- | ------ | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `headless` | `bool` | `False` | Run without GUI |
| `target_os` | `str` or `list` | *(auto)* | Fingerprint OS. Tested: `"windows"`, `"android"`; experimental: `"linux"`, `"macos"` |
| `browser_name` | `str` | `"chrome"` | Browser type |
| `browser_version_min` | `int` | `None` | Min Chrome version to emulate; a value that doesn't match the bundled Chromium version (currently 144) can let some websites detect the mismatch |
| `browser_version_max` | `int` | `None` | Max Chrome version to emulate |
| `proxy` | `str` | `None` | Proxy URL (`http://user:pass@host:port`) |
| `browser_language` | `str` | `None` | Language header (e.g., `"ko,en;q=0.9"`) |
| `fingerprint_file` | `str` | `None` | Path to a static fingerprint JSON file |
| `launch_args` | `list` | `None` | Extra Chromium flags |
| `api_key` | `str` | `None` | API key (overrides `STEALTH_BROWSER_API_KEY` env var) |
| `endpoint` | `str` | `None` | Daemon URL (overrides `RAYOBYTE_ENDPOINT` env var, default `http://localhost:9222`) |
---
## Configuration
### Environment Variables
Set in `.env` (next to `docker-compose.yml`):
| Variable | Default | Description |
| ------------------------------ | --------- | ---------------------------------------------------------------------------------- |
| `STEALTH_BROWSER_ACCEPT_TERMS` | `false` | **Required.** Set to `true` to accept the [LICENSE](LICENSE) and enable the daemon |
| `STEALTH_BROWSER_API_KEY` | *(empty)* | API key for paid plans. Also used for remote mode endpoint auth |
| `STEALTH_BROWSER_NOVNC` | `true` | Enable browser viewer at [http://localhost:6080](http://localhost:6080) |
| `STEALTH_BROWSER_DAEMON_MODE` | `local` | `local` or `remote`. Remote enables API key auth on management endpoints |
| `STEALTH_BROWSER_PUBLIC_URL` | *(empty)* | Base URL for CDP endpoints in remote mode. Auto-detects public IP if not set |
| `RAYOBROWSE_PORT` | `9222` | Host port (set in `.env`, used by `docker-compose.yml`). Set to `80` for remote |
Changes require a container restart:
```bash
docker compose up -d
```
### Viewing the Browser
With `STEALTH_BROWSER_NOVNC=true` (the default), open [http://localhost:6080](http://localhost:6080) to watch browsers in real time.
---
## Remote / Cloud Mode (*Beta*)
By default, rayobrowse runs in **local mode** — your SDK connects to the daemon on localhost. For cloud deployments where external clients need direct CDP access, switch to **remote mode**. If you need help setting up, please contact [support@rayobyte.com](mailto:support@rayobyte.com).
### How It Works
```
┌──────────────┐ POST /browser ┌─────────────────────────┐
│ Your Server │ ──────────────────────► │ rayobrowse │
│ (controller) │ ◄────── ws_endpoint ─── │ (remote mode, :80) │
└──────────────┘ └─────────────────────────┘
▲
┌──────────────┐ CDP WebSocket │
│ End User / │ ──────────────────────────────────┘
│ Worker │ (direct connection, no middleman)
└──────────────┘
```
Your server requests a browser via the REST API (authenticated with your API key). The daemon returns a `ws_endpoint` URL using the server's public IP. The end user connects directly to the browser over CDP — no proxy in between.
### Setup
**1. Configure `.env`**
```bash
STEALTH_BROWSER_ACCEPT_TERMS=true
STEALTH_BROWSER_API_KEY=your_api_key_here
STEALTH_BROWSER_DAEMON_MODE=remote
RAYOBROWSE_PORT=80
# Optional: set if you have a domain, otherwise public IP is auto-detected
# STEALTH_BROWSER_PUBLIC_URL=http://browser.example.com
```
**2. Start**
```bash
docker compose up -d
```
**3. Connect (two options)**
**Option A: `/connect` with `api_key` in the URL** (simplest, works with any CDP client)
```python
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(
        "ws://your-server/connect?headless=true&os=windows&api_key=your_api_key_here"
    )
    page = browser.new_context().new_page()
    page.goto("https://example.com")
```
**Option B: REST API** (for managing multiple browsers programmatically)
```bash
curl -X POST http://your-server/browser \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_api_key_here" \
  -d '{"headless": true, "os": "windows"}'
```
Response:
```json
{
  "success": true,
  "data": {
    "browser_id": "br_59245e8658532863",
    "ws_endpoint": "ws://your-server/cdp/br_59245e8658532863"
  }
}
```
Then connect to the returned `ws_endpoint` (no additional authentication is needed; the browser ID itself acts as the token):
```python
browser = p.chromium.connect_over_cdp("ws://your-server/cdp/br_59245e8658532863")
```
### API Authentication (Remote Mode)
In remote mode, management endpoints require your API key:
| Endpoint | Auth Required | How to authenticate |
| ----------------------- | ------------- | --- |
| `WS /connect` | Yes | `api_key` query parameter in the URL |
| `POST /browser` | Yes | `X-API-Key: KEY` or `Authorization: Bearer KEY` header |
| `GET /browsers` | Yes | Same |
| `DELETE /browser/{id}` | Yes | Same |
| `GET /health` | No | |
| `WS /cdp/{browser_id}` | No | Browser ID is the token |
Requests without a valid key receive `401 Unauthorized`.
### Public IP Auto-Detection
When `STEALTH_BROWSER_PUBLIC_URL` is not set, the daemon automatically detects the server's public IP at startup using external services (ipify.org, ifconfig.me, checkip.amazonaws.com). This works well for cloud servers that auto-scale — each instance discovers its own IP without DNS configuration.
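That fallback chain can be mimicked in plain Python. The sketch below is a hypothetical re-implementation for illustration only; it assumes each service returns the caller's IP as plain text, and the `fetch` parameter is injectable so the chain can be exercised offline.

```python
import urllib.request

# Detection order, mirroring the services listed above (illustrative).
SERVICES = (
    "https://api.ipify.org",
    "https://ifconfig.me/ip",
    "https://checkip.amazonaws.com",
)


def detect_public_ip(fetch=None, services=SERVICES, timeout=5.0) -> str:
    """Return the first public IP reported by the fallback chain.

    `fetch` is injectable for offline testing; by default it performs a
    plain HTTP GET and strips whitespace from the response body.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read().decode().strip()
    for url in services:
        try:
            ip = fetch(url)
            if ip:
                return ip
        except OSError:
            continue  # service down or unreachable: try the next one
    raise RuntimeError("public IP detection failed on all services")
```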
### TLS / HTTPS
The daemon serves HTTP. For HTTPS, put a reverse proxy in front (Cloudflare, nginx, Caddy, etc.). If using Cloudflare, just point your domain at the server IP and enable the proxy — no server-side changes needed.
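For nginx, a minimal server block might look like the sketch below. The domain, certificate paths, and upstream port are placeholders; the `Upgrade`/`Connection` headers matter because CDP runs over WebSocket, and a long read timeout keeps long-lived sessions from being cut off.

```nginx
server {
    listen 443 ssl;
    server_name browser.example.com;  # placeholder domain

    ssl_certificate     /etc/ssl/certs/browser.example.com.pem;
    ssl_certificate_key /etc/ssl/private/browser.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:80;          # the daemon (RAYOBROWSE_PORT)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # required for WebSocket/CDP
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;                # keep long-lived CDP sessions alive
    }
}
```

With TLS terminated at the proxy, `STEALTH_BROWSER_PUBLIC_URL` should presumably point at the proxied domain so the daemon hands out endpoints under that host.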
---
## Licensing & Usage
We can't open-source the browser itself. We saw firsthand that major anti-bot companies reverse-engineered the great [camoufox](https://github.com/daijro/camoufox). You can read more about our [reasoning and journey here](https://rayobyte.com/blog/custom-chromium-stealth-browser-web-scraping/).
Our license prohibits companies on [this list](https://cdn.sb.rayobyte.com/list-of-prohibited-companies.txt) from using our software. If you're on this list and have a legitimate scraping use case, please contact [sales@rayobyte.com](mailto:sales@rayobyte.com).
For everyone else, rayobrowse is free to download and run locally:
### Free (Default)
- Install and run immediately — no registration
- Fully self-hosted
- One concurrent browser per machine
- No proxy restrictions
### Free Unlimited (with Rayobyte Proxies)
- Unlimited concurrency when routing traffic through supported Rayobyte rotating proxies
- Fully self-hosted
- Requires [rotating residential, ISP, or data center proxies through Rayobyte](https://rayobyte.com/products/)
### Paid Threads (Bring Your Own Proxy)
For teams running their own proxy infrastructure:
- Fully self-hosted
- Unlimited concurrency
- No proxy requirements
- Pay per active browser session
### Cloud Browser
- Self-host with [remote mode](#remote--cloud-mode) for direct CDP access from external clients
- Auto-scaling friendly — each daemon detects its own public IP
- Managed cloud browser service coming soon (scaling handled by us)
For Paid or Cloud access, fill out this [form](https://share.hsforms.com/1cTZ0E4WMTWGo5QuwrKpQlA3xcyu).
---
## Limitations & Expectations
rayobrowse is currently in **Beta**. We use it to scrape millions of pages per day, but your results may vary.
We offer free browser threads to beta testers who can provide valuable feedback. Contact us through this [form](https://share.hsforms.com/1cTZ0E4WMTWGo5QuwrKpQlA3xcyu) if you're interested.
Specific limitations:
- Fingerprint coverage is optimized for **Windows and Android**. macOS and Linux fingerprints are available but aren't a primary focus.
- For optimal fingerprint matching, set `browser_version_min` and `browser_version_max` to **144** (the current Chromium version). Using a fingerprint from a different version may cause detection on some sites.
- **Canvas and WebGL** fingerprinting is an ongoing research area. Major targets we scrape are unaffected, but some sites can detect our current algorithm. A new major release addressing this is expected by end of February.
---
## Troubleshooting
### Can't connect to daemon
```bash
curl http://localhost:9222/health
# Should return: {"success": true, "data": {"status": "healthy", ...}}
```
### Check daemon logs
```bash
docker compose logs -f
```
### Environment variable changes not taking effect
The container reads `.env` at startup. After editing, recreate the container:
```bash
docker compose up -d
```
### Enable debug logging
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
---
## FAQ
**Why Chromium and not Chrome?**
Chrome is closed-source. Although there are slight differences between Chrome and Chromium, our experiments on the most difficult websites, and real-world scraping of millions of pages per day, show no discernible difference in detection rate: blocking on that distinction would produce too many false positives and affect too many legitimate users. Additionally, Chromium-based browsers (Brave, Edge, Samsung Internet, etc.) make up a significant portion of the browser market.
**Why is it not open-source?**
We've seen great projects like camoufox get undermined by anti-bot companies reverse-engineering the source to detect it. We want to avoid that fate and continue providing a reliable scraping browser for years to come.
---
## Issues & Support
1. **Code-level bugs or feature requests** — open a [GitHub Issue](https://github.com/rayobyte/rayobrowse/issues). We'll track and resolve these publicly.
2. **Anti-scraping issues** ("detected on site X" or "fingerprint applied incorrectly on site Y") — email [support@rayobyte.com](mailto:support@rayobyte.com) with full output after enabling debug logging. We don't engage in public assistance on anti-scraping cases due to watchful eyes.
3. **Sales, partnerships, or closer collaboration** — fill in this [form](https://share.hsforms.com/1cTZ0E4WMTWGo5QuwrKpQlA3xcyu) and we'll be in touch.
---
## Legal & Ethics Notice
This project should be used only for legal and ethical web scraping of publicly available data. Rayobyte is a proud partner of the [EWDCI](https://ethicalwebdata.com/) and places a high importance on ethical web scraping.
| text/markdown | null | Rayobyte <support@rayobyte.com> | null | null | Proprietary | browser, automation, stealth, scraping, playwright, anti-detect, fingerprint | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Internet :: WWW/HTTP :: Browsers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.32.0",
"playwright>=1.48.0"
] | [] | [] | [] | [
"Homepage, https://rayobyte.com",
"Documentation, https://github.com/rayobyte-data/rayobrowse"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T23:38:42.848980 | rayobrowse-0.1.31-py3-none-any.whl | 15,522 | 39/16/92ea3e3c9f6c399e0d070da4ce4cb915760ab3da980e545a12d79972f7a2/rayobrowse-0.1.31-py3-none-any.whl | py3 | bdist_wheel | null | false | 79d7d1620879fa8c9beaf3818161bc7f | 32f0e90a1a0c3f1ddc30698e446b53a7cfe86f9d95211121c250dc0255fe447f | 391692ea3e3c9f6c399e0d070da4ce4cb915760ab3da980e545a12d79972f7a2 | null | [
"LICENSE"
] | 86 |
2.4 | fairyfly-grasshopper | 0.5.0 | Fairyfly plugin for Grasshopper. | [](https://github.com/ladybug-tools/fairyfly-grasshopper/actions)
[](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# fairyfly-grasshopper
:ant: :green_book: fairyfly plugin for Grasshopper.
This repository contains all Grasshopper components for the fairyfly plugin.
The package includes both the user objects (`.ghuser`) and the Python source (`.py`).
The repository also contains a JSON version of the grasshopper component data.
Note that this library contains only the Grasshopper components; to run the
plugin, the core libraries must be installed in a way that they can be
discovered by Rhino (see Dependencies).
## Dependencies
The fairyfly-grasshopper plugin has the following dependencies (other than Rhino/Grasshopper):
* [ladybug-geometry](https://github.com/ladybug-tools/ladybug-geometry)
* [ladybug-core](https://github.com/ladybug-tools/ladybug)
* [ladybug-rhino](https://github.com/ladybug-tools/ladybug-rhino)
* [fairyfly-core](https://github.com/ladybug-tools/fairyfly-core)
* [fairyfly-therm](https://github.com/ladybug-tools/fairyfly-therm)
## Installation
See the [Wiki of the lbt-grasshopper repository](https://github.com/ladybug-tools/lbt-grasshopper/wiki)
for the installation instructions for the entire Ladybug Tools Grasshopper plugin
(including this repository).
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/fairyfly-grasshopper | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T23:37:43.701258 | fairyfly_grasshopper-0.5.0.tar.gz | 129,240 | 57/cb/99a620dc893a2b766f56f6ab5d6ffbf2c2c37abab16df7cf66603eafee31/fairyfly_grasshopper-0.5.0.tar.gz | source | sdist | null | false | 32c3abcd1289557fc464388733d3b880 | 9f507d57586bdcbb5139e0de5332dd0ed71e80f378aef0e324931f07b2c1f3e9 | 57cb99a620dc893a2b766f56f6ab5d6ffbf2c2c37abab16df7cf66603eafee31 | null | [
"LICENSE"
] | 260 |
2.4 | pre-commit-vauxoo | 8.2.25 | pre-commit script to run automatically the configuration and variables custom from Vauxoo | ========
Overview
========
.. image:: https://www.vauxoo.com/logo.png
:alt: Vauxoo
:target: https://www.vauxoo.com/
pre-commit script to run automatically the configuration and variables custom from Vauxoo
* Free software: GNU Lesser General Public License v3 or later (LGPLv3+)
Installation
============
Install it the same way you usually install PyPI packages::

    python3 -m pip install --force-reinstall -U pre-commit-vauxoo

Or using ``sudo``::

    sudo python3 -m pip install --force-reinstall -U pre-commit-vauxoo

Or using ``--user``::

    python3 -m pip install --user --force-reinstall -U pre-commit-vauxoo

Or inside a virtualenv::

    source YOUR_VENV/bin/activate && pip install --force-reinstall -U pre-commit-vauxoo
You can verify the installation by running ``pre-commit-vauxoo --version``.
Usage
=====
Run the ``pre-commit-vauxoo`` command in the git repository where you want to run our lints.

Autofixes are disabled by default; use the following option to enable them::

    pre-commit-vauxoo -t all
::
Usage: pre-commit-vauxoo [OPTIONS]
pre-commit-vauxoo run pre-commit with custom validations and configuration
files
Options:
-p, --paths PATH CSV PATHS are the specific filenames to run
hooks on separated by commas. [env var:
INCLUDE_LINT; default: .]
--no-overwrite Overwrite configuration files.
*If True, existing configuration files into
the project will be overwritten.
*If False, then current files will be used,
if they exist. [env var:
PRECOMMIT_NO_OVERWRITE_CONFIG_FILES]
--fail-optional Change the exit_code for 'optional'
precommit-hooks-type.
*If this flag is enabled so the exit_code
will be -1 (error) if 'optional' fails.
*If it is disabled (by default), exit_code
will be 0 (successful) even if 'optional'
fails. [env var: PRECOMMIT_FAIL_OPTIONAL]
-x, --exclude-autofix PATH CSV Exclude paths on which to run the autofix
pre-commit configuration, separated by
commas [env var: EXCLUDE_AUTOFIX]
-l, --exclude-lint PATH CSV Paths to exclude checks, separated by
commas. [env var: EXCLUDE_LINT]
-d, --pylint-disable-checks TEXT CSV
Pylint checks to disable, separated by
commas. [env var: PYLINT_DISABLE_CHECKS]
--oca-hooks-disable-checks TEXT CSV
OCA Hooks checks to disable, separated by
commas. [env var: OCA_HOOKS_DISABLE_CHECKS]
-S, --skip-string-normalization
If '-t fix' is enabled, don't normalize
string quotes or prefixes '' -> ""
This parameter is related to 'black' hook
[env var: BLACK_SKIP_STRING_NORMALIZATION]
-t, --precommit-hooks-type [mandatory|optional|fix|experimental|all|-mandatory|-optional|-fix|-experimental]
Pre-commit configuration file to run hooks,
separated by commas.
prefix '-' means that the option will be
removed.
*Mandatory: Stable hooks that needs to be
fixed (Affecting build status).
*Optional: Optional hooks that could be
fixed later. (Does not affect build status
unless '--fail-optional' is set).
*Experimental: Experimental hooks that only
to test. (No affects build status).
*Fix: Hooks auto fixing source code (Affects
build status).
*All: All configuration files to run hooks.
[env var: PRECOMMIT_HOOKS_TYPE; default:
all, -fix]
--install Install the pre-commit script
Using this option a '.git/hooks/pre-commit'
will be created
Now your command 'git commit' will run 'pre-
commit-vauxoo' before to commit
--version Show the version of this package
--odoo-version TEXT Odoo version used for the repository. [env
var: VERSION]
--py-version TEXT Python version used for the repository.
[env var: TRAVIS_PYTHON_VERSION]
--is-project-for-apps BOOLEAN It is a project for apps (manifest with
price) enabling special pylint checks [env
var: PRECOMMIT_IS_PROJECT_FOR_APPS]
--only-cp-cfg Only copy configuration files without
running the pre-commit script
--compatibility-version COMPATIBILITY-VERSION
Defines the compatibility and behavior level
for each linter tooling.
This parameter controls how aggressive or
modern the enabled linters, formatters, and
autofixes are. Each position in the version
represents a specific tool and its behavior
level.
Lower values prioritize backward
compatibility and minimal diffs. Higher
values enable newer versions, stricter
rules, and more aggressive autofixes.
Default: 10.10.10.10.10.10.10.10.10.10
Example: * 0.0.0.0.0.0.0 → Using zero 0 or
not defined will use the latest behavior
ever * 10.10.10.10.10.10.10 → Freeze old
behavior <=2025 year (safe, backward-
compatible) * 20.20.20.20.20.20.20 → Enable
new 2026 behaviors and aggressive autofixes
* (future changes may add more values) *
Mixed values (e.g. 10.20.10.20.0.20) allow
fine-grained control per tool
Tool order: 🟢 1. Prettier (20 → Enable XML
aggressive whitespace fixes) 🟢 2. OCA hooks
https://github.com/OCA/odoo-pre-commit-hooks
(20 → rm py headers, rm unused logger,
change xml id position first, change xml
bool/integer to eval, add xml-header-
missing uppercase, mv README.md to
README.rst, change py _('translation')
to self.env._('translation'), rm manifest
superfluous keys, rm field-string-redundant)
🟢 3. ESLint 🟢 4. Black / Autoflake 🟢 5. pre-
commit framework 🟢 6. Pylint/pylint-odoo 🟢
7. flake8
⚠️ Higher values or empty values may
introduce formatting changes, stricter
linting, or non-backward-compatible fixes
(especially for XML, Python, and JS files).
[env var: LINT_COMPATIBILITY_VERSION]
--help Show this message and exit.
.. Documentation
.. =============
.. https://pre-commit-vauxoo.readthedocs.io/
Development
===========
To run all the tests run::
tox
Use extra parameters to change the test behaviour.
e.g. particular python version::
tox -e py310
e.g. particular unittest method::
tox -e py310 -- -k test_basic
e.g. all the tests at the same time in parallel::
tox -p auto
Note: to combine the coverage data from all the tox environments, run:
.. list-table::
:widths: 10 90
:stub-columns: 1
- - Windows
- ::
set PYTEST_ADDOPTS=--cov-append
tox
- - Other
- ::
PYTEST_ADDOPTS=--cov-append tox
CHANGES
=======
v8.2.25
-------
* Bump version: 8.2.24 → 8.2.25
* [FIX] oca\_hooks-autofix: Re-enable PO autofixes (#217)
v8.2.24
-------
* Bump version: 8.2.23 → 8.2.24
* [REF] prettier config: Move xmlWhitespaceSensitivity:preserve to lint compatibility 30 instead of 20 (#216)
v8.2.23
-------
* Bump version: 8.2.22 → 8.2.23
* [REF] test: Add LINT\_COMPATIBILITY\_VERSION matrix testing for tox and gh (#215)
* [REF] \*: Fix lints (#214)
v8.2.22
-------
* Bump version: 8.2.21 → 8.2.22
* [REF] cli: Rename LINT\_COMPATIBILITY\_MATRIX to LINT\_COMPATIBILITY\_VERSION (#213)
* [REF] pre-commit-config: Update OCA hooks (#211)
* [REF] pre-commit-config-autofix: Set old version in order to avoid mixed changes (#212)
* [REF] pre-commit-config: Enable new oca hooks and update pkg (#210)
* [REF] prettier: Enable xmlQuoteAttributes double (#209)
* [REF] prettier: Enable xmlQuoteAttributes double
* [REF] flake8: ignore B024 'AbstractHandler is an abstract base class, but none of the methods it defines are abstract.'
* [REF] pre-commit-config: Update OCA Hooks version
* [REF] pre-commit-config: Update all packages (#206)
* [REF] pre-commit-config-autofix: Enable requirements.txt fixer and upgrade pre-commit/pre-commit-hooks hook (#205)
* [REF] cfg: Use new pylint4 (#203)
* [REF] pre-commit-config: Update OCA hooks version (#204)
* [REF] pre\_commit\_vauxoo: Add logger to show compatibility matrix (#202)
* [REF] README: Update readme with help content (#201)
* [REF] oca-hooks-autofix.cfg: Disable autofix for oca\_hooks\_disable\_checks<=10 (#200)
* [FIX] pylint: Fix small bug (#199)
* [REF] tests: Migrating unittest to pytest (#198)
* [REF] pre\_commit\_vauxoo: Add matrix versioning for tool configurations
* [REF] cli: Add new option to only copy the configuration files
* [FIX] pylintrc: Fix DeprecationWarning ignore-mixin-members has been deprecated
* [REF] requirements.txt: Fix GitWildMatchPattern warning
* [IMP] pre\_commit\_vauxoo: Use copier to create configuration files
v8.2.21
-------
* Bump version: 8.2.20 → 8.2.21
* [REF] pre-commit-config-fix: Update oca-hooks (#196)
* [REF] pre-commit-config-fix: Update oca-hooks (#195)
* [REF] pre-commit-config: Update oca-hooks version (#194)
* [REF] pre-commit-config: Update oca-hooks version (#193)
* [ADD] use-header-comments: New check to remove comments in the py headers (#192)
* [ADD] unused-logger: Enable unused-logger check with autofix (#191)
* [REF] oca\_hooks.cfg: Disable repeated checks for autofixes (#190)
* [ADD] xml-template-prettier-incompatible, xml-id-position-first: Consider 'template' tag for xml-id-poisition-first and enable xml-template-prettier-incompatible (#189)
* [REF] pre-commit-config-autofix: Use prettier configuration file already listed in our gitignore file (#187)
* [FIX] pre-commit-config-autofix: Fix prettier to autofix xml files (#186)
* [ADD] xml-field-bool-without-eval, xml-field-number-without-eval: Check for bool or numeric fields without eval in xml records (#184)
* [REF] github-actions: Enable py3.14 for all OS (#183)
* [ADD] prefer-readme-rst: Enable prefer-readme-rst with autofix (#182)
* [REF] cfg: Using autofix multiline version (#179)
* [REF] cfg: Enable autofix xml and py from oca-pre-commit-hooks (#177)
v8.2.20
-------
* Bump version: 8.2.19 → 8.2.20
* [REF] pylintrc: Removing items using the default value
* [REF] manifest-required-key: Remove installable as required
* [REF] pylint: Enable apps checks only for project with flag enabled from environment variable
v8.2.19
-------
* Bump version: 8.2.18 → 8.2.19
* [REF] experimental: Enable manifest-superfluous-key in experimental configuration file (#175)
v8.2.18
-------
* Bump version: 8.2.17 → 8.2.18
* [REF] cfg: Update pylint\_odoo v9.3.20 (#174)
v8.2.17
-------
* Bump version: 8.2.16 → 8.2.17
* [REF] cfg: Update pylint\_odoo v9.3.18 (#172)
v8.2.16
-------
* Bump version: 8.2.15 → 8.2.16
* [REF] cfg: Update pylint v9.3.17 (#171)
v8.2.15
-------
* Bump version: 8.2.14 → 8.2.15
* [REF] pre-commit-config: Update pylint-odoo v9.3.16 (#169)
v8.2.14
-------
* Bump version: 8.2.13 → 8.2.14
* [REF] pre-commit-config-optional: Update bandit to fix pbr dependency error (#168)
v8.2.13
-------
* Bump version: 8.2.12 → 8.2.13
* [REF] cfg: Use latest pylint-odoo to support odoo 19.0 (#166)
v8.2.12
-------
* Bump version: 8.2.11 → 8.2.12
* [REF] pre-commit-config\*: Update pylint-odoo package (#165)
* [REF] github-actions: Use py314 only for ubuntu and install apk dependencies (#164)
v8.2.11
-------
* Bump version: 8.2.10 → 8.2.11
* [REF] pre-commit-config: Update pylint\_odoo to 9.3.13 (#161)
* [REF] \*: Adapt code to be compatible with pylint4 (early)
* [REF] github-actions: Use py3.14 pre-release (#163)
v8.2.10
-------
* Bump version: 8.2.9 → 8.2.10
* [REF] check\_deactivate\_jinja: Now support neutralize.sql (#160)
* [REF] pre-commit-config: Update pylint\_odoo to 9.3.11 (#159)
* [REF] test-requirements: Fix py3.13 CI red (#157)
v8.2.9
------
* Bump version: 8.2.8 → 8.2.9
* [REF] .pylint: Remove no-search-all for mandatory check (#156)
v8.2.8
------
* Bump version: 8.2.7 → 8.2.8
* [ADD] pylintrc-experimental: Add new experimental for pylint-odoo checks (#155)
v8.2.7
------
* Bump version: 8.2.6 → 8.2.7
* [REF] config: Update odoo-pre-commit-hooks to v0.1.4 (#154)
v8.2.6
------
* Bump version: 8.2.5 → 8.2.6
* [REF] cfg: Upgrade oca-odoo-pre-commit-hooks v0.1.3 (#153)
* [REF] cfg: Upgrade oca-odoo-pre-commit-hooks v0.1.2 (#151)
* [FIX] click: Pinned 'click' version where it is compatibility
* [REF] pylint\_odoo: Update pylint version to v9.3.6
v8.2.5
------
* Bump version: 8.2.4 → 8.2.5
* [REF] pylint\_odoo: Update pylint version to v9.3.3 (#149)
v8.2.4
------
* Bump version: 8.2.3 → 8.2.4
* [REF] pylint\_odoo: Enabling python version 3.13 compatibility (#148)
* [REF] check\_deactivate\_jinja: Add "nginx\_url" variable and better error message (#147)
v8.2.3
------
* Bump version: 8.2.2 → 8.2.3
* [REF] pylint\_odoo: Update pylint version and drop support for py38 (#515)
* [REF] README: Add tox params to run unittest
* [REF] .github-actions: Fix detected dubious ownership in repository
* [REF] pre\_commit\_vauxoo: pylint checks support define python version
* [REF] github-action: Avoid unnecessary time-consuming 'Processing triggers for man-db' installing apt (#143)
v8.2.2
------
* Bump version: 8.2.1 → 8.2.2
* [FIX] eslint: Update .eslintrc.json to use ECMAScript 2022 (#142)
v8.2.1
------
* Bump version: 8.2.0 → 8.2.1
* [REV] pre-commit-config: Revert enable jobs for pylint hook (#141)
v8.2.0
------
* Bump version: 8.1.3 → 8.2.0
* [REF] pre-commit-config: Enable jobs for pylint hook (#140)
v8.1.3
------
* Bump version: 8.1.2 → 8.1.3
* [REF] cfg: Update odoo-pre-commit-hooks to 0.0.35 (#139)
v8.1.2
------
* Bump version: 8.1.1 → 8.1.2
* [REF] tox.ini: Add compatibility with new pyttest
* [REF] .pre-commit-config: Bump OCA/odoo-pre-commit-hooks to 0.0.34
v8.1.1
------
* Bump version: 8.1.0 → 8.1.1
* [REF] optional,autofix: Upgrade odoo-pre-commit-hooks version v0.0.33 (#137)
* [REF] github-actions: Use exclude macosx-latest for py old (#136)
v8.1.0
------
* Bump version: 8.0.2 → 8.1.0
* [ADD] name-non-ascii: Prevents file or directory names with ASCII characters (#134)
* [REF] github-actions: Add arch in cache-key to use macosx m1 and intel compatibility
* [REF] github-actions: Use latest codecov version
* [REF] setup: Add setuptools deps to build
* [REF] github-actions: Use macosx-latest only for py-latest and macosx-14 for older Related to https://github.com/actions/setup-python/issues/825\#issuecomment-2096792396
* [REF] setup: Add py3.12 because we are compatible
* [REF] .github: Add py3.12, update gh action packages and fix pre-commit cache (#133)
v8.0.2
------
* Bump version: 8.0.1 → 8.0.2
* [IMP] cfg: update pylint-odoo
v8.0.1
------
* Bump version: 8.0.0 → 8.0.1
* [CI]: fix wrong path on windows runners
* [IMP] cfg: update black version
v8.0.0
------
* Bump version: 7.0.26 → 8.0.0
* [REF] cfg: bump pylint-odoo to v9.0.4 (#127)
v7.0.26
-------
* Bump version: 7.0.25 → 7.0.26
* [IMP] cfg: bump pylint-odoo to v8.0.21 (#126)
* [REF] Remove redundant autofix checks (#125)
* [REF] CI: Update CI/RTD (#123)
* ci: Update actions/checkout (#122)
v7.0.25
-------
* Bump version: 7.0.24 → 7.0.25
* [REF] .pre-commit-config: pylint-odoo bumpversion v8.0.20 (#120)
* [REF] tox: Build ChangeLog again (#119)
v7.0.24
-------
* Bump version: 7.0.23 → 7.0.24
* [REF] setup: Enable py311 classifier (#117)
* [IMP] cfg: update oca odoo hooks version (#114)
* [REF] .gitignore: Ignore .oca\_hooks\*
v7.0.23
-------
* Bump version: 7.0.22 → 7.0.23
* [IMP] support disabling oca hooks through env var (#116)
v7.0.22
-------
* Bump version: 7.0.21 → 7.0.22
* [REF] use config files for oca-hooks (#112)
v7.0.21
-------
* Bump version: 7.0.20 → 7.0.21
* [REF] Disable xml-oe-structure-missing-id (#110)
v7.0.20
-------
* Bump version: 7.0.19 → 7.0.20
* [REF] Disable xml-oe-structure-id (#109)
v7.0.19
-------
* Bump version: 7.0.18 → 7.0.19
* [REF] cfg: Update bandit version and disable "defusedxml" checks part 2 (#108)
v7.0.18
-------
* Bump version: 7.0.17 → 7.0.18
* [REF] cfg: Update bandit version and disable "defusedxml" checks (#107)
v7.0.17
-------
* Bump version: 7.0.16 → 7.0.17
* [REF] pre-commit-config: Upgrade OCA/odoo-pre-commit-hooks to v0.0.28
v7.0.16
-------
* Bump version: 7.0.15 → 7.0.16
* [FIX] CI: Add ignored installed to Cannot uninstall 'distlib' error
* [IMP] update odoo-pre-commit-hooks, add po-pretty-format, oe\_structure
v7.0.15
-------
* Bump version: 7.0.14 → 7.0.15
* [FIX] non-installable module regex (#103)
* [FIX] CI: Fix typo for windows (#101)
v7.0.14
-------
* Bump version: 7.0.13 → 7.0.14
* [FIX] pre-commit-vauxoo: Fix isort hook - RuntimeError The Poetry configuration is invalid (#100)
v7.0.13
-------
* Bump version: 7.0.12 → 7.0.13
* [REF] pylint.conf: Update partner name as required author
v7.0.12
-------
* Bump version: 7.0.11 → 7.0.12
* [REF] pre-commit-vauxoo: Include migrations script for versions 15 and higher (#98)
* [FIX] ci: Array matrix syntax, rm tox envs and fix src (#96)
v7.0.11
-------
* Bump version: 7.0.10 → 7.0.11
* [REF] pre-commit-vauxoo: Fix missing newline for pyproject.toml (#95)
v7.0.10
-------
* Bump version: 7.0.9 → 7.0.10
* [REF] pre-commit-config-autofix: Update latest version of repos for autofixes (#94)
v7.0.9
------
* Bump version: 7.0.8 → 7.0.9
* [FIX] pre-commit-config\*.yaml: Replace deprecated gitlab URL (#92)
v7.0.8
------
* Bump version: 7.0.7 → 7.0.8
* [ADD] pre-commit-config-optional: Add new bandit security checks experimental (#88)
v7.0.7
------
* Bump version: 7.0.6 → 7.0.7
* [REF] pre-commit-config-optional: Bump hooks version
v7.0.6
------
* Bump version: 7.0.5 → 7.0.6
* [REF] pre-commit-config: bumpversion hooks (#87)
v7.0.5
------
* Bump version: 7.0.4 → 7.0.5
* [REF] tox: More testing for package build and dependencies
* [REF] .pre-commit-config: pylint-odoo bumpversion v8.0.16
v7.0.4
------
* Bump version: 7.0.3 → 7.0.4
* [REF] pre-commit-config-optional: Bump OCA odoo-pre-commit-hooks version (#83)
v7.0.3
------
* Bump version: 7.0.2 → 7.0.3
* [REF] .pylintrc: Disable unsupported-binary-operation check (#82)
v7.0.2
------
* Bump version: 7.0.1 → 7.0.2
* [REF] pre-commit-config: Migrate to new pylint-odoo - #apocalintSYS (#79)
v7.0.1
------
* Bump version: 7.0.0 → 7.0.1
* [FIX] eslint: Fix 'import' sentence error (#80)
* [REF] CI: Remove deprecated MQT build (#78)
v7.0.0
------
* Bump version: 6.0.0 → 7.0.0
* [REF] CI: Add py3.11, update tox, gitignore (#75)
v6.0.0
------
* Bump version: 5.3.2 → 6.0.0
* [REF] tests: Remove git --initial-branch parameter incompatible with old git version (#76)
* [REF] pylintrc: Add 'column' to message-template option and change format (#74)
* [REM] Remove unused "tests" directory (#73)
* [REF] pylintrc: re-enable check bad-super-call (#72)
* [REF] pre\_commit\_vauxoo: Use the same git diff command than original (#71)
* [REF] pylintrc: Disable assignment-from-none and bad-super-call (#70)
v5.3.2
------
* Bump version: 5.3.1 → 5.3.2
* [REF] cfg/.flake8: ignore E203 (whitespace before ':')
v5.3.1
------
* Bump version: 5.3.0 → 5.3.1
* [IMP] pre\_commit\_vauxoo: show diff with changes made in autofixes
* [FIX] pre\_commit\_vauxoo: Removed non autofix checks from autofix cfg #58
* [REF] pre\_commit\_vauxoo: Merge vauxoo hooks into repo
v5.3.0
------
* Bump version: 5.2.3 → 5.3.0
* [REF] tests: Improve unittests to be more deterministic
* [REF] pre\_commit\_vauxoo: Test repo structure set to standards The previous structure was: /tmp\_dir/resources/all\_modules
* [IMP] pre-commit-vauxoo: Uninstallable modules are no longer checked
v5.2.3
------
* Bump version: 5.2.2 → 5.2.3
* [REF] pre-commit-config: Update sha of pylint-odoo from vx (#62)
v5.2.2
------
* Bump version: 5.2.1 → 5.2.2
* [REF] mandatory: Update custom hook (#60)
* [REF] readme: Update from help command and add multiple ways to install it (#57)
v5.2.1
------
* Bump version: 5.2.0 → 5.2.1
* [REF] pre-commit-vauxoo: Better message for CI autofixes and add --version option parameter
v5.2.0
------
* Bump version: 5.1.2 → 5.2.0
* [REF] CI: No install ecpg since MQT must install it
* [REF] tests: Add module\_autofix1 in order to validate it is working well
* [REF] test: Improve the unittest to check if logs were raised
* [REF] tox: No use workers in order to show the full logs
* [REF] autofixes: Better message for CI if autofixes are required
v5.1.2
------
* Bump version: 5.1.1 → 5.1.2
* [REF] cfg: Update custom vx hook to v0.0.2 (#53)
v5.1.1
------
* Bump version: 5.1.0 → 5.1.1
* [REF] README: Update README --help to last version (#52)
* [REF] CI: Trigger pipeline to dockerv if new release (#51)
v5.1.0
------
* Bump version: 5.0.0 → 5.1.0
* [ADD] pre\_commit\_vauxoo: Mandatory - Add vx-check-deactivate hook (#50)
v5.0.0
------
* Bump version: 4.0.0 → 5.0.0
* [REF] pre\_commit\_vauxoo: Enable black's string normalization and add extra parameter to disable it (#38)
v4.0.0
------
* Bump version: 3.5.0 → 4.0.0
* [ADD] pre\_commit\_vauxoo: Add option to install .git/hooks/pre\_commit (#48)
* [REF] pre\_commit\_vauxoo: Mandatory green even if mandatory are red (#47)
* [REF] pre\_commit\_vauxoo: Deprecate PRECOMMIT\_AUTOFIX in pro PRECOMMIT\_HOOKS\_TYPE=all (#46)
* [FIX] pre\_commit\_vauxoo: Fix duplicate '-w' parameter (#45)
* [REF] CI: Faster pypi publish, remove "needs" to run parallel but only trigger for stable branches and PRs and tags (#44)
* [REF] CI: Enable pytest-xdist to run tests with multiple CPUs to speed up test execution (#43)
* [REF] pre\_commit\_vauxoo: Reformat code running black with string-normalizatio
v3.5.0
------
* Bump version: 3.4.0 → 3.5.0
* [REF] cli: fail-optional now is a flag (#36)
v3.4.0
------
* Bump version: 3.3.0 → 3.4.0
* [IMP] pre\_commit\_vauxoo: Support fail if 'optional' hooks type and support "-" prefix to remove hooks type (#35)
v3.3.0
------
* Bump version: 3.2.4 → 3.3.0
* [FIX] click: Match envvar for disable-pylint-checks and use csv string (#34)
v3.2.4
------
* Bump version: 3.2.3 → 3.2.4
* [ADD] requirements.txt: Add requirements.txt file and setup.py read this file (#32)
* [REF] cli: Show env var for INCLUDE\_LINT and add help to path option (#31)
* [REF] docs: Clean dummy files and add docs badge and logo (#30)
v3.2.3
------
* Bump version: 3.2.2 → 3.2.3
* [REF] CI: Generates ChangeLog with pbr installed (#29)
v3.2.2
------
* Bump version: 3.2.1 → 3.2.2
* [REF] setup.py: Autogenerate ChangeLog (#28)
v3.2.1
------
* Bump version: 3.2.0 → 3.2.1
* [REF] cli: Bypassing errors if git repo is not found allow to run --help (#27)
v3.2.0
------
* Bump version: 3.1.0 → 3.2.0
* [REF] README: Better help output with newlines (#26)
* [REF] cli: Small refactoring, typos and py3.5 compatibility (#25)
v3.1.0
------
* Bump version: 3.0.0 → 3.1.0
* [FIX] click: Compatibility with click==8.0.1 used by big image (#24)
v3.0.0
------
* Bump version: 2.1.1 → 3.0.0
* [REF] click: Use standard parameters, envvar and callback transformation and a few refactoring and more (#23)
v2.1.1
------
* Bump version: 2.1.0 → 2.1.1
* [REF] CI: Add test to run with dockerv vauxoo image (#22)
* [REF] click: Remove incompatible parameter for all click versions (#21)
v2.1.0
------
* Bump version: 2.0.0 → 2.1.0
* [FIX] CI: Auto deploy pypi
v2.0.0
------
* Bump version: 1.3.2 → 2.0.0
* [IMP] pre-commit-vauxoo: Add params, help, default and environment variable matches (#20)
* [FIX] prettierrc: Enable only for js and xml files (#19)
* [REF] CI: Order builds by OS and add py3.10 (#17)
* [REF] tests: Create dummy repo in tmp folder
* [REF] CI: Fix covtest
* [REF] tests: Migrating tests to unittest
v1.3.2
------
* Bump version: 1.3.1 → 1.3.2
* [REF] CI: Build package before to publish it (#15)
v1.3.1
------
* Bump version: 1.3.0 → 1.3.1
* [REF] gh-actions: Publish package (#14)
* [FIX] pre\_commit\_vauxoo: typos in log messages (#13)
v1.3.0
------
* Bump version: 1.2.1 → 1.3.0
* [REF] CI: Enable py3.10 (#12)
* [REF] github: Set pre-commit cache
* [REF] tests: Fixing test
* [FIX] pre\_commit\_vauxoo: Fix current path
* [REF] pre\_commit\_vauxoo: Use INCLUDE\_LINT and EXCLUDE\_AUTOFIX
* [REF] pre\_commit\_vauxoo: Add logging colorized and summary result
* [REF] pre\_commit\_vauxoo: Small refactoring
* [REF] config: Add flake8 optional checks includes bugbear (#8)
v1.2.1
------
* Bump version: 1.2.0 → 1.2.1
* [REF] README: Fix installation command and version (#9)
* [FIX] pre\_commit\_vauxoo: Return the same type of object (#7)
* [REF] pre\_commit\_vauxoo: Add verbose subprocess.call wrapper in order to know what command was executed (#6)
v1.2.0
------
* Bump version: 1.1.0 → 1.2.0
* [REF] pre\_commit\_vauxoo: Run pre-commit only in current path (#5)
v1.1.0
------
* Bump version: 1.0.1 → 1.1.0
* [REF] prettierrc.yml: Enable xmlSelfClosingSpace (#3)
v1.0.1
------
* Bump version: 1.0.0 → 1.0.1
* [REF] pre\_commit\_vauxoo: Look for .git dir in parent dirs and allow to run the command in any subfolder (#2)
* [REF] cfg: Update configuration from vx/mqt (remove flake8 bugbear)
* [REF] eslintrc: Support syntax "??="
* [ADD] pre-commit-vauxoo: first code
v1.0.0
------
* Add initial project skeleton
| text/x-rst | Vauxoo | info@vauxoo.com | null | null | LGPL-3.0-or-later | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
"Operating System :: Unix",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Utilities"
] | [] | https://github.com/Vauxoo/pre-commit-vauxoo | null | >=3.10 | [] | [] | [] | [
"click<=8.1.8",
"copier",
"jinja2",
"pathspec<1.0.0",
"pgsanity",
"pre-commit"
] | [] | [] | [] | [
"Documentation, https://pre-commit-vauxoo.readthedocs.io/",
"Changelog, https://pre-commit-vauxoo.readthedocs.io/en/latest/changelog.html",
"Issue Tracker, https://github.com/Vauxoo/pre-commit-vauxoo/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T23:37:26.496166 | pre_commit_vauxoo-8.2.25.tar.gz | 87,712 | ee/8c/651fec23f6a1f95d2d2de6413973fd08a8a128bbaa48e4ddc18f7afad9bb/pre_commit_vauxoo-8.2.25.tar.gz | source | sdist | null | false | 9c28dec3605504caa930106f2ab14405 | edcf63cfb2346f26806a34c1e4f6c8c26e702a36a6321ea4f2e40bf9edff76b9 | ee8c651fec23f6a1f95d2d2de6413973fd08a8a128bbaa48e4ddc18f7afad9bb | null | [
"LICENSE",
"AUTHORS.rst"
] | 245 |
2.1 | fair-trees | 3.1.9 | Fairness-aware decision tree and random forest classifiers | # fair-trees
This package is the implementation of the paper [**"Fair tree classifier using strong demographic parity"**](https://link.springer.com/article/10.1007/s10994-023-06376-z) (Pereira Barata et al., *Machine Learning*, 2023).
It provides fairness-aware decision tree and random forest classifiers built on a modified scikit-learn tree engine. The splitting criterion jointly optimises predictive performance and statistical parity with respect to one or more sensitive (protected) attributes **Z**.
## Installation
```bash
pip install fair-trees
```
Or from source:
```bash
pip install -e . --no-build-isolation
```
## Quick start
```python
import numpy as np
import pandas as pd
from fair_trees import FairDecisionTreeClassifier, FairRandomForestClassifier, load_datasets
datasets = load_datasets()
data = datasets["bank_marketing"] # "adult" is also available
# Preprocessing — the bundled data contains raw DataFrames
X = pd.get_dummies(data["X"]).values.astype(np.float64)
y = pd.factorize(data["y"].iloc[:, 0])[0]
Z = np.column_stack([pd.factorize(data["Z"][col])[0] for col in data["Z"].columns])
```
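The encoding step above can be illustrated on a toy frame. The column names below are made up for demonstration; the only requirement is that `Z` ends up as an integer array of shape `(n_samples, n_attributes)`, with each column factorized independently:

```python
import numpy as np
import pandas as pd

# Toy sensitive-attribute frame: one binary attribute, one with three classes
Z_df = pd.DataFrame({
    "sex": ["F", "M", "F", "M"],
    "region": ["N", "S", "E", "N"],
})

# Encode each column independently, then stack into (n_samples, n_attributes)
Z = np.column_stack([pd.factorize(Z_df[col])[0] for col in Z_df.columns])

print(Z.shape)   # (4, 2)
print(Z[:, 0])   # [0 1 0 1] -- codes follow order of first appearance: "F" -> 0, "M" -> 1
```

`pd.factorize` assigns codes in order of first appearance, so the integer labels are arbitrary but consistent, which is all the splitting criterion needs.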
### Fairness-aware decision tree
```python
clf = FairDecisionTreeClassifier(
theta=0.3, # trade-off: 0 = pure accuracy, 1 = pure fairness
Z_agg="max", # how to aggregate across sensitive attributes / classes
max_depth=5,
)
clf.fit(X, y, Z=Z)
y_prob = clf.predict_proba(X)[:, 1]
```
### Fairness-aware random forest
```python
rf = FairRandomForestClassifier(
n_estimators=100,
theta=0.3,
Z_agg="max",
max_depth=5,
random_state=42,
)
rf.fit(X, y, Z=Z)
y_prob = rf.predict_proba(X)[:, 1]
```
## Evaluation
The package does not ship its own metric functions, but the two scores that
matter—**ROC-AUC** (predictive quality) and **SDP** (statistical
parity)—can be computed from `scipy` in a few lines.
### ROC-AUC via the Mann–Whitney U statistic
```python
from scipy.stats import mannwhitneyu
roc_auc = mannwhitneyu(
y_prob[y == 1],
y_prob[y == 0],
).statistic / (sum(y == 1) * sum(y == 0))
print(f"ROC-AUC: {roc_auc:.4f}")
```
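As a quick sanity check of the Mann–Whitney identity (an illustrative sketch, not part of the package): perfectly separated scores should yield an AUC of exactly 1.0:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
y = np.array([0] * 50 + [1] * 50)
# Positives score strictly above negatives -> every positive/negative pair is ordered correctly
y_prob = np.concatenate([rng.uniform(0.0, 0.4, 50), rng.uniform(0.6, 1.0, 50)])

auc = mannwhitneyu(y_prob[y == 1], y_prob[y == 0]).statistic / (50 * 50)
print(auc)  # 1.0
```

The U statistic counts correctly ordered positive/negative pairs, so dividing by the number of pairs gives the probability that a random positive outranks a random negative, which is exactly ROC-AUC.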
### Statistical Disparity (SDP) score
SDP measures the degree of statistical parity: how *little* the model's
predictions can be separated by a protected attribute (1 = perfect parity,
0 = maximal disparity). It is defined as:
```
SDP = 1 − |AUC_Z − 0.5| × 2
```
where `AUC_Z` is computed the same way as ROC-AUC but treating each
sensitive attribute/class in **Z** as the positive label.
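A few values make the mapping concrete (the helper name below is ours, for illustration only). Note the symmetry: `AUC_Z` values of 0.9 and 0.1 are equally disparate, since the formula only cares how far the scores are from being uninformative about **Z**:

```python
def sdp_from_auc(auc_z):
    # SDP = 1 - |AUC_Z - 0.5| * 2: 1 = perfect parity, 0 = maximal disparity
    return 1 - abs(auc_z - 0.5) * 2

print(sdp_from_auc(0.5))  # 1.0 -- scores carry no information about Z
print(sdp_from_auc(0.9))  # ~0.2
print(sdp_from_auc(0.1))  # ~0.2 -- symmetric: direction of disparity is irrelevant
print(sdp_from_auc(1.0))  # 0.0 -- Z fully recoverable from the scores
```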
When **Z** contains multiple columns (attributes) and/or more than two
classes per attribute, the per-group AUC values must be aggregated. The
`Z_agg` parameter controls this—matching the logic used inside the
splitting criterion:
| `Z_agg` | Behaviour |
|----------|-----------|
| `"mean"` | Average the per-group SDP scores (across classes within an attribute, then across attributes). |
| `"max"` | Take the **worst-case** (lowest) SDP across all groups—i.e. the group with the highest disparity dominates. |
```python
import numpy as np
from scipy.stats import mannwhitneyu
def sdp_score(y_prob, Z, Z_agg="max"):
"""Compute the Statistical Disparity (SDP) score.
Parameters
----------
y_prob : array-like of shape (n_samples,)
Predicted probabilities for the positive class.
Z : array-like of shape (n_samples,) or (n_samples, n_attributes)
Sensitive / protected attribute(s). Each column is treated as a
separate attribute; each unique value within a column is a class.
Z_agg : {"mean", "max"}, default="max"
Aggregation method across attributes and classes.
- "mean": average SDP across all groups.
- "max": return the worst-case (lowest) SDP.
Returns
-------
float
SDP in [0, 1]. 1 = perfect parity, 0 = maximum disparity.
"""
Z = np.atleast_2d(np.asarray(Z).T).T # ensure (n_samples, n_attr)
y_prob = np.asarray(y_prob)
sdp_values = []
for attr_idx in range(Z.shape[1]):
z_col = Z[:, attr_idx]
classes = np.unique(z_col)
attr_sdps = []
for cls in classes:
mask_pos = z_col == cls
mask_neg = ~mask_pos
if mask_pos.sum() == 0 or mask_neg.sum() == 0:
continue
auc_z = mannwhitneyu(
y_prob[mask_pos],
y_prob[mask_neg],
).statistic / (mask_pos.sum() * mask_neg.sum())
attr_sdps.append(1 - abs(auc_z - 0.5) * 2)
if not attr_sdps:
continue
if Z_agg == "mean":
sdp_values.append(np.mean(attr_sdps))
else: # "max" → worst case = minimum SDP
sdp_values.append(np.min(attr_sdps))
if not sdp_values:
return 1.0 # no disparity measurable
if Z_agg == "mean":
return float(np.mean(sdp_values))
else:
return float(np.min(sdp_values))
```
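The two extremes of the score are easy to exercise on synthetic data. The inlined helper below mirrors the per-class computation for the single binary-attribute case (it is a self-contained sketch, not part of the package):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def sdp_binary(y_prob, z):
    """Single binary-attribute SDP, inlined for a self-contained check."""
    pos = z == 1
    auc_z = mannwhitneyu(
        y_prob[pos], y_prob[~pos],
    ).statistic / (pos.sum() * (~pos).sum())
    return 1 - abs(auc_z - 0.5) * 2

rng = np.random.default_rng(42)
z = np.array([0] * 100 + [1] * 100)

# Identical score distributions in both groups -> perfect parity
fair = np.tile(rng.uniform(size=100), 2)
print(sdp_binary(fair, z))             # 1.0

# Scores fully determined by group membership -> maximal disparity
print(sdp_binary(z.astype(float), z))  # 0.0
```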
### Putting it all together
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import mannwhitneyu
from fair_trees import FairRandomForestClassifier, load_datasets
# Load and preprocess
datasets = load_datasets()
data = datasets["bank_marketing"]
X = pd.get_dummies(data["X"]).values.astype(np.float64)
y = pd.factorize(data["y"].iloc[:, 0])[0]
Z = np.column_stack([pd.factorize(data["Z"][col])[0] for col in data["Z"].columns])
# Sweep over theta values
thetas = [0, 0.2, 0.4, 0.6, 0.8, 1.0]
aucs, sdps = [], []
for theta in thetas:
rf = FairRandomForestClassifier(
n_estimators=100, theta=theta, Z_agg="max", max_depth=5, random_state=42,
)
rf.fit(X, y, Z=Z)
y_prob = rf.predict_proba(X)[:, 1]
roc_auc = mannwhitneyu(
y_prob[y == 1], y_prob[y == 0],
).statistic / (sum(y == 1) * sum(y == 0))
sdp = sdp_score(y_prob, Z, Z_agg="max")
aucs.append(roc_auc)
sdps.append(sdp)
print(f"theta={theta:.1f} ROC-AUC={roc_auc:.4f} SDP={sdp:.4f}")
# Plot
fig, (ax1, ax3) = plt.subplots(1, 2, figsize=(14, 5))
# Left — Metrics vs. theta (dual axis)
ax1.set_xlabel("theta")
ax1.set_ylabel("ROC-AUC", color="tab:blue")
ax1.plot(thetas, aucs, "o-", color="tab:blue", label="ROC-AUC")
ax1.tick_params(axis="y", labelcolor="tab:blue")
ax2 = ax1.twinx()
ax2.set_ylabel("SDP", color="tab:orange")
ax2.plot(thetas, sdps, "s--", color="tab:orange", label="SDP")
ax2.tick_params(axis="y", labelcolor="tab:orange")
ax1.set_title("Metrics vs. theta")
lines1, labels1 = ax1.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax1.legend(lines1 + lines2, labels1 + labels2, loc="lower center")
# Right — ROC-AUC vs. SDP frontier
ax3.plot(sdps, aucs, "o-", color="tab:green")
for i, theta in enumerate(thetas):
ax3.annotate(f"θ={theta}", (sdps[i], aucs[i]), textcoords="offset points",
xytext=(8, 4), fontsize=9)
ax3.set_xlabel("SDP (fairness →)")
ax3.set_ylabel("ROC-AUC (performance →)")
ax3.set_title("Performance–Fairness Frontier")
fig.suptitle("Performance-Fairness Trade-off", fontsize=14)
fig.tight_layout()
plt.savefig("tradeoff.png", dpi=150, bbox_inches="tight")
plt.show()
```

## Key parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| `theta` | `0.0` | Trade-off weight in `[0, 1]`. `0` = standard (unfair) tree; `1` = splits optimise only for fairness. |
| `Z_agg` | `"max"` | Aggregation over sensitive groups: `"mean"` (average) or `"max"` (worst-case). |
| `Z` | `None` | Sensitive attributes, passed to `.fit()`. Array of shape `(n_samples,)` or `(n_samples, n_attributes)`. |
All other parameters (`max_depth`, `min_samples_split`, `n_estimators`,
etc.) behave identically to their scikit-learn counterparts.
## Citation
If you use this software, please cite the paper:
> Pereira Barata, A., Takes, F.W., van den Herik, H.J., & Veenman, C. (2023). **Fair tree classifier using strong demographic parity.** *Machine Learning*. [doi:10.1007/s10994-023-06376-z](https://doi.org/10.1007/s10994-023-06376-z)
See [`CITATION.cff`](https://raw.githubusercontent.com/pereirabarataap/fair-trees/refs/heads/main/CITATION.cff) for a machine-readable citation file.
## License
BSD-3-Clause
| text/markdown | null | null | null | null | BSD-3-Clause | null | [
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Programming Language :: C",
"Programming Language :: Python",
"Topic :: Scientific/Engineering",
"Development Status :: 3 - Alpha",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.22.0",
"scipy>=1.8.0",
"joblib>=1.2.0",
"threadpoolctl>=3.1.0",
"pandas>=1.4.0",
"matplotlib>=3.5.0",
"numpy>=1.22.0; extra == \"build\"",
"scipy>=1.8.0; extra == \"build\"",
"cython>=3.0.10; extra == \"build\"",
"meson-python>=0.17.1; extra == \"build\"",
"pytest>=7.1.2; extra == \"tests\"",
"pytest-cov>=2.9.0; extra == \"tests\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:37:03.407187 | fair_trees-3.1.9.tar.gz | 1,286,151 | bc/3a/fc5c2a49f8f263dbac9cc3a50a81f9a1650a49ecd3316bb6f5253b57e3ed/fair_trees-3.1.9.tar.gz | source | sdist | null | false | 3a289c4f4c0fca36f5adaaf21536fbd2 | 2f2ec55e5e1d53ea8ac5cb0b7aea6df304e582e0718aaa55229e8b6abacba23c | bc3afc5c2a49f8f263dbac9cc3a50a81f9a1650a49ecd3316bb6f5253b57e3ed | null | [] | 1,015 |
2.2 | game-audio | 2.1.0 | Python bindings for Game Audio Module - a C++ audio system built on miniaudio | # Game Audio Module
A C++20 audio system built on miniaudio with full Python bindings for game development.
## Features
- **Layered Music**: Multi-track audio with independent layer control and fading
- **Sound Groups**: Categorize and control collections of sounds
- **Volume Control**: Master, group, and individual sound volume
- **Smooth Transitions**: Fade sounds in/out with customizable durations
- **Random Sound Containers**: Randomized playback with pitch variation
- **Spatial Audio**: 3D positional audio with distance attenuation and directional panning
- **High-Level API**: Core primitives designed for user-defined wrappers
- **Python Bindings**: Full pybind11 bindings for Python projects (including [Basilisk engine](https://github.com/BasiliskGroup/BasiliskEngine))
- **Cross-Platform**: Windows, macOS, Linux via miniaudio
## Quick Start
### For Python Users
**Option 1: Install via pip (Recommended for most users)**
#### From PyPI (Recommended - Simplest Version Management)
```bash
# Install latest version
pip install game-audio
# Install specific version
pip install game-audio==2.0.0
# Install version range (e.g., any 1.x version, but not 2.0+)
pip install "game-audio>=2.0.0,<3.0.0"
# Upgrade to latest
pip install --upgrade game-audio
# Downgrade to specific version
pip install game-audio==1.0.0
```
#### From GitHub Releases (Alternative - For Specific Versions)
If you need a specific version or PyPI is unavailable, install directly from GitHub releases:
```bash
# Specify the exact wheel for your platform (pip cannot expand wildcards
# in URLs). Windows example:
pip install https://github.com/hannaharmon/game-audio/releases/download/v2.0.0/game_audio-2.0.0-cp311-cp311-win_amd64.whl
```
**Note**: When installing from GitHub releases, you must uninstall before switching to PyPI (or vice versa), as pip treats them as different sources.
**Option 2: Build from source with CMake**
If you need to build from source or integrate into a CMake project:
**1. Add to your project's CMakeLists.txt:**
```cmake
include(FetchContent)
FetchContent_Declare(
audio_module
GIT_REPOSITORY https://github.com/hannaharmon/game-audio
GIT_TAG v2.1.0 # Use a specific version tag for stability
)
FetchContent_MakeAvailable(audio_module)
```
**Important**: Always use version tags (e.g., `v1.0.0`) rather than `main` branch. Using `main` means your project may break when breaking changes are merged. Version tags provide stability, predictability, and control over when you upgrade. See [RELEASE_MANAGEMENT.md](RELEASE_MANAGEMENT.md) for details.
**2. Use in Python (recommended):**
```python
import game_audio
# Initialize (keep the session alive for the app lifetime)
session = game_audio.AudioSession()
audio = game_audio.AudioManager.get_instance()
# Create groups and play
music_group = audio.create_group()
sfx_group = audio.create_group()
# Cleanup (optional; session destructor will also handle this)
session.close()
```
**Direct Usage (advanced/engine-controlled):**
```python
import game_audio
audio = game_audio.AudioManager.get_instance()
audio.initialize()
music_group = audio.create_group()
sfx_group = audio.create_group()
audio.shutdown()
```
**Full Guide**: [PYTHON_BINDINGS.md](PYTHON_BINDINGS.md)
**Note**: For use with game engines like [Basilisk Engine](https://github.com/BasiliskGroup/BasiliskEngine), you can simply use `pip install game-audio` instead of adding it to your CMakeLists.txt. This makes integration much simpler!
### For C++ Users
**1. Add to your CMakeLists.txt:**
```cmake
include(FetchContent)
FetchContent_Declare(
audio_module
GIT_REPOSITORY https://github.com/hannaharmon/game-audio
GIT_TAG v2.1.0 # Pin to specific version for stability
)
FetchContent_MakeAvailable(audio_module)
target_link_libraries(your_game PRIVATE audio_module)
```
**Important**: Pin to a version tag (e.g. `v2.1.0`) rather than `main`, for the same reasons described in the Python section above. See [RELEASE_MANAGEMENT.md](RELEASE_MANAGEMENT.md) for details.
**2. Use in C++ (recommended):**
```cpp
#include "audio_manager.h"
#include "audio_session.h"
// Initialize (keep the session alive for the app lifetime)
audio::AudioSession session;
auto& audio = audio::AudioManager::GetInstance();
// Create groups
auto music = audio.CreateGroup();
auto sfx = audio.CreateGroup();
// Cleanup handled automatically by AudioSession destructor (or call session.Close())
```
**Direct Usage (advanced/engine-controlled):**
```cpp
#include "audio_manager.h"
auto& audio = audio::AudioManager::GetInstance();
audio.Initialize();
auto music = audio.CreateGroup();
auto sfx = audio.CreateGroup();
audio.Shutdown();
```
**Full API Reference**: [Online Documentation](https://hannaharmon.github.io/game-audio)
## Examples
- **Python Interactive**: [examples/python_interactive.py](examples/python_interactive.py) - Layered music with volume controls
- **Python Spatial Audio**: [examples/spatial_audio_example.py](examples/spatial_audio_example.py) - 3D spatialized audio demo
- **Python Overlapping Spatial Sounds**: [examples/overlapping_spatial_sounds.py](examples/overlapping_spatial_sounds.py) - Multiple overlapping spatialized sounds from the same file
- **C++ Basic**: [examples/test_audio.cpp](examples/test_audio.cpp)
- **C++ Advanced**: [examples/test_audio_2.cpp](examples/test_audio_2.cpp)
## Building Locally
```bash
# Build (cross-platform via PowerShell)
./scripts/build.ps1 -Configurations Debug,Release # Windows (C++ + Python)
./scripts/build.ps1 -Configurations Release # Linux/macOS
# Run all tests (C++ + Python)
./tests/scripts/run_all_tests.ps1
# Run only C++ tests
./tests/scripts/run_cpp_tests.ps1
# Run only Python tests
./tests/scripts/run_python_tests.ps1
```
**Build Options:**
- `-DBUILD_PYTHON_BINDINGS=OFF` - Disable Python bindings
- `-DBUILD_AUDIO_TESTS=OFF` - Disable test builds
- `-DBUILD_AUDIO_EXAMPLES=OFF` - Disable example builds
## Architecture
**Core Components:**
- `AudioManager` - Main API (singleton)
- `AudioSession` - RAII helper for scoped initialization/shutdown
- `AudioTrack` - Multi-layer synchronized audio
- `AudioGroup` - Volume group management
- `Sound` - Individual sound instances
- `AudioSystem` - miniaudio wrapper
**Handles:**
- `TrackHandle`, `GroupHandle`, `SoundHandle` are opaque handle types returned by the API.
**High-Level Utilities:**
- `RandomSoundContainer` - Wwise-style random containers
## Testing
Run the comprehensive test suite:
```bash
# Run all tests (C++ + Python, both source build and installed wheel)
./tests/scripts/run_all_tests.ps1
# Run only C++ tests
./tests/scripts/run_cpp_tests.ps1
# Run only Python tests (source build)
./tests/scripts/run_python_tests.ps1
# Run only Python tests (installed wheel)
./tests/scripts/run_python_tests.ps1 -UseWheel
```
**The test suite covers:**
- **System initialization and lifecycle** - AudioSession, AudioManager initialization/shutdown
- **Logging controls** - Runtime log level configuration and output
- **Volume control** - Master, group, and individual sound volume with proper clamping
- **Group operations** - Creation, destruction, volume control, and management
- **Sound loading and playback** - File loading, playback control, and state management
- **Track and layer management** - Multi-track audio, layer control, and synchronization
- **Input validation** - Error handling for invalid handles, paths, and parameters
- **Thread safety** - Concurrent operations and resource access
- **Resource management** - Proper cleanup, handle validation, and memory management
- **Cross-platform compatibility** - Platform-specific code isolation and portability checks
Tests run automatically on every push via GitHub Actions, validating both source builds and installed Python wheels on Windows, Linux, and macOS.
## Documentation
- **Python**: [PYTHON_BINDINGS.md](PYTHON_BINDINGS.md)
- **C++ API**: [Online Doxygen Docs](https://hannaharmon.github.io/game-audio)
- **Examples**: See `examples/` directory
## Logging
Logging is always available but defaults to `Off`. Control it at runtime:
```cpp
// C++
audio::AudioManager::SetLogLevel(audio::LogLevel::Info); // Enable logging
audio::AudioManager::SetLogLevel(audio::LogLevel::Off); // Disable logging
```
```python
# Python
game_audio.AudioManager.set_log_level(game_audio.LogLevel.Info) # Enable logging
game_audio.AudioManager.set_log_level(game_audio.LogLevel.Off) # Disable logging
```
## License
This project is released under the Unlicense. See `LICENSE` for full terms and third-party notices (including miniaudio).
| text/markdown | null | Hanna Harmon <hanna.marie.harmon@gmail.com> | null | null | Unlicense | audio, game, sound, music, miniaudio, pybind11 | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Multimedia :: Sound/Audio",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: The Unlicense (Unlicense)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: C++"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/hannaharmon/game-audio",
"Documentation, https://hannaharmon.github.io/game-audio",
"Repository, https://github.com/hannaharmon/game-audio",
"Issues, https://github.com/hannaharmon/game-audio/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:36:32.923902 | game_audio-2.1.0-cp313-cp313-win_amd64.whl | 420,801 | 94/9c/e4862f2bd7e59c4769b8f96ff82cdd980699e5f68bc49aef124717fc5894/game_audio-2.1.0-cp313-cp313-win_amd64.whl | cp313 | bdist_wheel | null | false | 07345acf93443e631609ca24c546d36d | 276eb96fa3fe1f17ff7be64912a3bd81d08f2cfc616aaf645bb275598785e417 | 949ce4862f2bd7e59c4769b8f96ff82cdd980699e5f68bc49aef124717fc5894 | null | [] | 854 |
2.4 | veramem-kernel | 1.1.8 | Minimal deterministic cognitive kernel for traceable and auditable reasoning systems. | # Veramem Kernel
**A deterministic cognitive core for recording truth, enforcing invariants, and preserving temporal integrity.**
The Veramem Kernel is a minimal, sovereign foundation designed to make factual systems **auditable, deterministic, and composable by construction**.
It provides a formal substrate for building:
- trustworthy AI
- compliant cognitive systems
- distributed memory infrastructures
- long-term digital identity and knowledge preservation
[](https://github.com/Julien-Lefauconnier/kernel/actions)
[](https://badge.fury.io/py/veramem-kernel)
[](https://pypi.org/project/veramem-kernel/)
[](https://opensource.org/licenses/Apache-2.0)
[](conformance/)
---
## ⚠️ Maintainers Wanted — Project Handover
I am actively looking to **hand over the Veramem Kernel**.
The kernel is stable, deterministic, published on PyPI, and has a strong technical foundation.
However, I no longer have the time and energy to fully grow and maintain it.
### 🎯 Handover goals
- Find **one primary maintainer** (ownership transfer)
- Or **2–3 active co-maintainers**
- Or encourage serious forks aligned with the core philosophy:
- append-only truth layer
- strict invariants
- deterministic behavior
- safety-first design
📌 More details about governance, vision, and expectations:
→ See [MAINTAINERS.md](MAINTAINERS.md)
📩 Interested?
Open an issue with the label **maintainership** or contact me directly.
The project remains under **Apache 2.0** — anyone is free to fork, adapt, and build on it.
The long-term goal is for Veramem to become a **community-driven open standard**.
---
## Why Veramem?
Modern software and AI systems suffer from fundamental weaknesses:
- mutable state
- hidden side effects
- temporal ambiguity
- unverifiable reasoning
- weak auditability
- opaque decision pipelines
These limitations make systems fragile, unsafe, and difficult to trust.
The Veramem Kernel addresses these problems by enforcing:
- immutable factual recording
- strict temporal ordering
- invariant validation at write time
- deterministic replay and verification
- traceable signal lineage
It does not try to interpret the world.
It guarantees that **what is recorded is stable, ordered, and verifiable**.
---
## Core Capabilities
The kernel provides a small and strictly defined set of primitives:
- **Append-only journals** — Immutable recording of facts across domains (observations, knowledge, signals, audits, constraints)
- **Monotonic timeline** — Single irreversible ordering with fork/reconciliation support
- **Signal lineage** — Provenance tracking, signal evolution, conflict resolution
- **Invariant enforcement** — Every write validated against formal invariants
- **Deterministic behavior** — Same inputs always produce the same outputs (no hidden randomness or side effects)
All operations are pure, auditable, and reproducible.
---
## What the Veramem Kernel is NOT
The kernel is intentionally minimal and incomplete. It does **NOT**:
- interpret signals or infer meaning
- apply business or policy logic
- resolve priorities or optimize outcomes
- provide orchestration or workflow engines
- expose user-facing APIs
- manage databases or storage
- trigger external side effects
These responsibilities belong outside the kernel. This strict separation is essential for **safety**, **auditability**, and **long-term reliability**.
---
## Architecture Boundaries
Veramem enforces a strong separation between layers:
- **Kernel (truth layer)** — Factual recording, temporal ordering, invariant enforcement, historical integrity
- **Application stack** — Projects facts, applies policies, orchestrates workflows, manages storage
- **Reflexive layer** — Governed explanations, compliance narratives (never influences kernel state)
Violating these boundaries compromises determinism and trust.
---
## Intended Usage
The Veramem Kernel is designed to be embedded in systems requiring strong guarantees.
Typical use cases:
- AI memory and cognitive architectures
- Compliance and governance systems
- Digital identity and long-term knowledge preservation
- Distributed coordination and consensus
- Reproducible research environments
- Regulated or high-trust infrastructures
---
## Installation
The kernel is published on [PyPI](https://pypi.org/project/veramem-kernel/):
```bash
# Core kernel (minimal, no crypto dependencies)
pip install veramem-kernel
# With Ed25519 support for distributed trust & attestation
pip install veramem-kernel[crypto]
```
Requires Python 3.10+.
---
## Quick Start
Run a minimal deterministic timeline:
```bash
python -m examples.basic_timeline
```
Explore more examples in the examples/ directory:
- distributed_timeline.py — Fork, divergence, deterministic merge
- explainable_ai_backbone.py — Governed explanations & audit trails
- long_term_memory.py — Epochs, snapshots, long-horizon reconstruction
- etc... (more than 15 examples)
---
### Core Guarantees
The kernel provides non-negotiable guarantees enforced by construction:
- Append-only truth
- Temporal integrity
- Determinism
- Invariant safety
- Reproducibility
- Auditability
- Strict separation of concerns
- Cryptographic integrity (HMAC-SHA256 baseline + Ed25519 via [crypto])
These properties are verified through extensive tests and conformance fixtures (included in the package).
---
## Conformance & Interoperability
Golden fixtures (deterministic test vectors) are included:
- Attestation (HMAC + Ed25519)
- Timeline delta, fork, merge, reconcile
Regenerate and verify in CI:
```bash
python conformance/generate_fixtures.py
git diff --exit-code conformance/fixtures/
```
See conformance/ for the full suite.
---
## Open Source Scope
This repository contains only the Veramem Kernel:
- deterministic core
- invariant enforcement
- signal lineage
- timeline integrity
- cryptographic primitives
- formal specifications and models
It does not include storage backends, orchestration layers, deployment systems, or hosted services.
License: Apache 2.0
---
## Research & Formal Foundations
Grounded in:
- formal invariant systems
- deterministic computation
- temporal consistency models
- distributed trust architectures
- zero-knowledge cognitive design
See protocol/, docs/, and formal/ directories.
---
## Cognitive Foundations
The Veramem Kernel is grounded in the following cognitive and architectural frameworks:
- **ARVIS — Adaptive Resilient Vigilant Intelligence System**
- **ZKCS — Zero-Knowledge Cognitive Systems**
These define the principles of:
- reasoning under constraints
- explicit uncertainty
- abstention as a valid outcome
- traceable cognition
- zero-knowledge architecture
See:
- docs/research/ARVIS.md
- docs/research/ZKCS.md
---
## Contributing
We welcome contributions from:
- distributed systems engineers
- formal methods & cryptography researchers
- AI safety & governance experts
Please read:
CONTRIBUTING.md
MAINTAINERS.md (we welcome new maintainers!)
SECURITY.md
GOVERNANCE.md
Start with good first issues or help improve conformance tests!
The Veramem Kernel is built to outlive any single contributor.
Join us in creating a durable foundation for verifiable, trustworthy systems.
| text/markdown | Julien Lefauconnier | null | Julien Lefauconnier | null | null | ai, cognitive, deterministic, auditability, traceability, reasoning, kernel | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pynacl>=1.5; extra == \"crypto\""
] | [] | [] | [] | [
"Repository, https://github.com/Julien-Lefauconnier/kernel",
"Issues, https://github.com/Julien-Lefauconnier/kernel/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:35:21.940331 | veramem_kernel-1.1.8.tar.gz | 160,917 | 7e/8a/56577287d6d146a60cc99ad0c52f5906f601bcb2b1aa42151e5f1dc77154/veramem_kernel-1.1.8.tar.gz | source | sdist | null | false | 344b54c32bf5c7dd8429e4c5274617da | 07e002663830510d755feb02d3d2101e58477811b4c5c20e2362cc3e8ee3c6f2 | 7e8a56577287d6d146a60cc99ad0c52f5906f601bcb2b1aa42151e5f1dc77154 | Apache-2.0 | [
"LICENSE",
"NOTICE"
] | 242 |
2.4 | henchman-ai | 0.3.7 | A model-agnostic AI agent CLI - your AI henchman for the terminal | # Henchman-AI
> Your AI Henchman for the Terminal - A Model-Agnostic AI Agent CLI
[](https://pypi.org/project/henchman-ai/)
[](https://pypi.org/project/henchman-ai/)
[](https://opensource.org/licenses/MIT)
Henchman-AI is a powerful, terminal-based AI agent that supports multiple LLM providers (DeepSeek, OpenAI, Anthropic, Ollama, and more) through a unified interface. Inspired by gemini-cli, built for extensibility and production use.
## ✨ Features
- 🤝 **Multi-Agent Dev Team**: Orchestrate a team of specialists (Architect, Coder, Reviewer, Tester, etc.) to solve complex engineering tasks.
- 🔄 **Model-Agnostic**: Support any LLM provider through a unified abstraction layer
- 🐍 **Pythonic**: Leverages Python's async ecosystem and rich libraries for optimal performance
- 🔌 **Extensible**: Plugin system for tools, providers, and custom commands
- 🚀 **Production-Ready**: Proper error handling, comprehensive testing, and semantic versioning
- 🛠️ **Tool Integration**: Built-in support for file operations, web search, code execution, and more
- ⚡ **Fast & Efficient**: Async-first design with intelligent caching and rate limiting
- 🔒 **Secure**: Environment-based configuration and safe execution sandboxing
## 📦 Installation
### From PyPI (Recommended)
```bash
pip install henchman-ai
```
### From Source
```bash
git clone https://github.com/MGPowerlytics/henchman-ai.git
cd henchman-ai
pip install -e ".[dev]"
```
### With uv (Fastest)
```bash
uv pip install henchman-ai
```
## 🚀 Quick Start
1. **Set your API key** (choose your preferred provider):
```bash
export DEEPSEEK_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"
# or
export ANTHROPIC_API_KEY="your-api-key-here"
```
2. **Start the CLI**:
```bash
henchman
```
3. **Or run with a prompt directly**:
```bash
henchman --prompt "Explain this Python code" < example.py
```
## 🏗️ Architecture
Henchman-AI features a modular, component-based architecture designed for maintainability and extensibility. The core interactive REPL (Read-Eval-Print Loop) has been refactored into specialized components:
### REPL Component Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ REPL (Orchestrator) │
│ ┌──────────┐ ┌───────────┐ ┌─────────────┐ ┌─────────┐ │
│ │ Input │ │ Output │ │ Command │ │ Tool │ │
│ │ Handler │◄─┤ Handler │◄─┤ Processor │◄─┤Executor │ │
│ └──────────┘ └───────────┘ └─────────────┘ └─────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Multi-Agent Orchestrator │ │
│ └───────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
### Component Responsibilities
1. **REPL (Orchestrator)**: Main coordination class (406 lines, down from 559)
- Initializes and connects all components
- Manages the main interaction loop
- Delegates work to specialized components
- Maintains backward compatibility
2. **InputHandler**: User input processing
- Manages prompt sessions with history
- Handles @file expansion and shell command detection
- Processes keyboard interrupts and EOF
- Validates and sanitizes user input
3. **OutputHandler**: Console output and status display
- Manages rich console output and formatting
- Displays status bars and tool information
- Shows welcome/goodbye messages
- Handles event streaming and turn status
4. **CommandProcessor**: Slash command execution
- Processes /quit, /clear, /help, and other commands
- Manages command registry and argument parsing
- Delegates to specialized command handlers
- Provides command completion and validation
5. **ToolExecutor**: Tool execution and agent coordination
- Executes tool calls from agents
- Manages tool confirmation requests
- Processes agent event streams
- Handles tool iteration limits and cancellation
### Benefits of Component Architecture
- **Single Responsibility**: Each component has a clear, focused purpose
- **Testability**: Components can be tested independently (100% test coverage for core components)
- **Maintainability**: Smaller, focused classes are easier to understand and modify
- **Extensibility**: New components can be added without modifying the REPL
- **Performance**: Business logic moved out of REPL, leaving only orchestration
## 📖 Usage Examples
### Basic Commands
```bash
# Show version
henchman --version
# Show help
henchman --help
# Interactive mode (default)
henchman
# Headless mode with prompt
henchman -p "Summarize the key points from README.md"
# Specify a provider
henchman --provider openai -p "Write a Python function to calculate fibonacci"
# Use a specific model
henchman --model gpt-4-turbo -p "Analyze this code for security issues"
```
### File Operations
```bash
# Read and analyze a file
henchman -p "Review this code for bugs" < script.py
# Process multiple files
cat *.py | henchman -p "Find common patterns in these files"
# Generate documentation
henchman -p "Create API documentation for this module" < module.py > docs.md
```
## ⚙️ Configuration
Henchman-AI uses hierarchical configuration (later settings override earlier ones):
1. **Default settings** (built-in sensible defaults)
2. **User settings**: `~/.henchman/settings.yaml`
3. **Workspace settings**: `.henchman/settings.yaml` (project-specific)
4. **Environment variables** (highest priority)
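The layered precedence above amounts to a deep merge of dictionaries, where later layers win on conflicts. A minimal sketch of that idea (the `deep_merge` helper is illustrative, not Henchman's actual implementation):

```python
def deep_merge(base, override):
    """Recursively merge override into base; override wins on conflicting keys."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical layers, mirroring the precedence order above
defaults = {"providers": {"default": "deepseek", "deepseek": {"temperature": 0.7}}}
user = {"providers": {"default": "openai"}}
workspace = {"providers": {"deepseek": {"temperature": 0.2}}}

config = defaults
for layer in (user, workspace):
    config = deep_merge(config, layer)
# config now has default "openai" (from user) and temperature 0.2 (from workspace)
```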
### Example `settings.yaml`
```yaml
# Provider configuration
providers:
default: deepseek # or openai, anthropic, ollama, openrouter
deepseek:
model: deepseek-chat
base_url: "https://api.deepseek.com"
temperature: 0.7
openai:
model: gpt-4-turbo-preview
organization: "org-xxx"
# Tool settings
tools:
auto_accept_read: true
shell_timeout: 60
web_search_max_results: 5
# UI settings
ui:
theme: "monokai"
show_tokens: true
streaming: true
# System settings
system:
cache_enabled: true
cache_ttl: 3600
max_tokens: 4096
```
### Environment Variables
```bash
# Provider API keys
export DEEPSEEK_API_KEY="sk-xxx"
export OPENAI_API_KEY="sk-xxx"
export ANTHROPIC_API_KEY="sk-xxx"
# Configuration overrides
export HENCHMAN_DEFAULT_PROVIDER="openai"
export HENCHMAN_DEFAULT_MODEL="gpt-4"
export HENCHMAN_TEMPERATURE="0.5"
```
## 🔌 Supported Providers
| Provider | Models | Features |
|----------|--------|----------|
| **DeepSeek** | deepseek-chat, deepseek-coder | Free tier, Code completion |
| **OpenAI** | gpt-4, gpt-3.5-turbo, etc. | Function calling, JSON mode |
| **Anthropic** | claude-3-opus, claude-3-sonnet | Long context, Constitutional AI |
| **Ollama** | llama2, mistral, codellama | Local models, Custom models |
| **Custom** | Any OpenAI-compatible API | Self-hosted, Local inference |
## 🛠️ Development
### Setup Development Environment
```bash
# Clone and install
git clone https://github.com/MGPowerlytics/henchman-ai.git
cd henchman-ai
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e ".[dev]"
```
### Running Tests
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=henchman --cov-report=html
# Run specific test categories
pytest tests/unit/ -v
pytest tests/integration/ -v
```
### Code Quality
```bash
# Linting
ruff check src/ tests/
ruff format src/ tests/
# Type checking
mypy src/
# Security scanning
bandit -r src/
```
### Building and Publishing
```bash
# Build package
hatch build
# Test build
hatch run test
# Publish to PyPI (requires credentials)
hatch publish
```
## 📚 Documentation
### Online Documentation
For detailed documentation, see the [docs directory](docs/) in this repository:
- [Getting Started](docs/getting-started.md)
- [Configuration Guide](docs/configuration.md)
- [API Reference](docs/api.md)
- [Tool Development](docs/tools.md)
- [Provider Integration](docs/providers.md)
- [MCP Integration](docs/mcp.md)
- [Extensions](docs/extensions.md)
### Building Documentation Locally
You can build and view the documentation locally:
```bash
# Install documentation dependencies
pip install mkdocs mkdocs-material mkdocstrings[python]
# Build static HTML documentation
python scripts/build_docs.py
# Or serve documentation locally (live preview)
mkdocs serve
```
The documentation will be available at `http://localhost:8000` when served locally.
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details.
## 🐛 Reporting Issues
Found a bug or have a feature request? Please [open an issue](https://github.com/MGPowerlytics/henchman-ai/issues) on GitHub.
## 📄 License
Henchman-AI is released under the MIT License. See the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Inspired by [gemini-cli](https://github.com/google/gemini-cli)
- Built with [Rich](https://github.com/Textualize/rich) for beautiful terminal output
- Uses [Pydantic](https://docs.pydantic.dev/) for data validation
- Powered by the Python async ecosystem
---
**Happy coding with your AI Henchman!** 🦸♂️🤖 | text/markdown | null | Matthew <matthew@example.com> | null | null | null | agent, ai, anthropic, assistant, cli, deepseek, henchman, llm, openai, terminal | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Linguistic",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.9",
"anthropic>=0.40",
"anyio>=4.0",
"beautifulsoup4>=4.12",
"chromadb>=0.4",
"click>=8.0",
"fastembed>=0.3",
"httpx>=0.27",
"mcp>=1.0",
"networkx>=3.0",
"openai>=1.40",
"prompt-toolkit>=3.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"rich>=13.0",
"textual>=0.75",
"tiktoken>=0.5",
"mypy>=1.10; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-mock>=3.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"types-networkx>=3.0; extra == \"dev\"",
"types-pyyaml>=6.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/MGPowerlytics/henchman-ai",
"Repository, https://github.com/MGPowerlytics/henchman-ai",
"Documentation, https://github.com/MGPowerlytics/henchman-ai/tree/main/docs",
"Changelog, https://github.com/MGPowerlytics/henchman-ai/blob/main/CHANGELOG.md",
"Bug Tracker, https://github.com/MGPowerlytics/henchman-ai/issues",
"Discussions, https://github.com/MGPowerlytics/henchman-ai/discussions"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-20T23:35:05.687954 | henchman_ai-0.3.7.tar.gz | 1,133,033 | 79/cc/24870acb35b59f79b3a6c16df42ad23e3c926102b95a362710ad686d5504/henchman_ai-0.3.7.tar.gz | source | sdist | null | false | 160ec563de336d7b08f86160271017df | d5c55028d4947cea8be83aea6a8714b0ce115a5087fb0f69eb6f4714cd0f78ed | 79cc24870acb35b59f79b3a6c16df42ad23e3c926102b95a362710ad686d5504 | MIT | [
"LICENSE"
] | 234 |
2.3 | sndls | 0.1.13 | An audio-friendly ls, with a little something extra | # `sndls`: An audio-friendly `ls`, with a little something extra
`sndls` (sound `ls`) is a command-line tool designed for quick and efficient inspection of audio data. It provides functionalities such as:
- Saving search results to a `.csv` file for later analysis.
- Detecting clipped, silent, or anomalous files that may impact machine learning pipelines.
- Computing and verifying SHA-256 hashes to detect file modifications or corruption.
- Filtering files using `python` expressions to identify those matching specific criteria.
- Performing fast, metadata-based file inspection.
- Executing post-processing actions, such as removing clipped files, copying files that meet certain conditions, and more.
`sndls` currently supports the following extensions:
`.aif`, `.aiff`, `.mp3`, `.flac`, `.ogg`, `.wav`, `.wave`.
# Table of contents
- [Installation](#installation)
- [Install through pip](#install-through-pip)
- [Install in developer mode](#install-in-developer-mode)
- [Install through uv](#install-through-uv)
- [Tutorial](#tutorial)
- [Quickstart](#quickstart)
- [Help](#help)
- [Recursive search](#recursive-search)
- [Generating SHA-256 hash](#generating-sha-256-hash)
- [Fast metadata search](#fast-metadata-search)
- [Saving output to csv file](#saving-output-to-csv-file)
- [Filtering by extension](#filtering-by-extension)
- [Filtering by python expressions](#filtering-by-python-expressions)
- [Filtering by using preloaded files](#filtering-by-using-preloaded-files)
- [Post-actions](#post-actions)
- [Random data sampling and splitting](#random-data-sampling-and-splitting)
- [Cite](#cite)
- [License](#license)
# Installation
## Install through pip
To install `sndls`, run:
```bash
pip install sndls
```
Verify the installation with:
```bash
sndls --version
```
This should output:
```
sndls version x.y.z yyyy-zzzz developed by Esteban Gómez
```
Where:
- `x.y.z` represents the major, minor, and patch version
- `yyyy-zzzz` indicates the development start year and the current year
## Install in developer mode
Developer mode installation is intended for those developing new features for the tool. To set it up:
1. Clone the repository to your desired folder using:
```bash
git clone <repository_url>
```
2. Navigate to the root directory (where `pyproject.toml` is located):
```bash
cd <repository_folder>
```
3. Install in developer mode with:
```bash
python -m flit install -s
```
This will allow immediate reflection of any code modifications when the tool is executed in the terminal.
Before proceeding, ensure that Flit is installed. If not, install it with:
```bash
python -m pip install flit
```
For more information on `flit`, refer to the [Flit Command Line Interface documentation](https://flit.pypa.io/en/stable/).
## Install through `uv`
Alternatively, you can install the tool using `uv`. This is convenient when you want to keep it isolated from your `python`
environment and just run it to analyze a certain data collection.
1. Install `uv` and `uvx` following the instructions for your operating system in [`uv` website](https://docs.astral.sh/uv/getting-started/installation/).
2. Run:
```bash
uv tool install sndls
```
3. Verify the installation with:
```bash
uv tool run sndls --version
```
or you can use the shortcut version `uvx`:
```bash
uvx sndls --version
```
This should output:
```
sndls version x.y.z yyyy-zzzz developed by Esteban Gómez
```
Where:
- `x.y.z` represents the major, minor, and patch version
- `yyyy-zzzz` indicates the development start year and the current year
# Tutorial
This quick tutorial is structured into multiple sections, each focusing on a
fundamental aspect of `sndls` and its core functionalities.
## Quickstart
To inspect the audio data in a certain folder, run:
```bash
sndls /path/to/folder
```
If no path is provided, the current directory will be used as the default input.
If your folder contains audio files, you should see output similar to the
following in your terminal (the information will vary based on your folder's contents):
```bash
/path/to/audio/dir/000_audio.wav 120.0K WAV PCM_16 50000x1@16000hz -18.5dBrms:0 -5.0dBpeak:0
/path/to/audio/dir/001_audio.wav 115.0K WAV PCM_16 52000x1@16000hz -19.0dBrms:0 -5.5dBpeak:0
/path/to/audio/dir/002_audio.wav 95.0K WAV PCM_16 48000x1@16000hz -17.0dBrms:0 -4.5dBpeak:0
/path/to/audio/dir/003_audio.wav 130.0K WAV PCM_16 65000x1@16000hz -18.0dBrms:0 -3.0dBpeak:0
Total file(s): 4
Mono file(s): 4
Stereo file(s): 0
Multichannel file(s): 0
Sample rate(s): 16000hz
Skipped files: 0
Clipped files: 0
Anomalous files: 0
Silent files: 0
Total duration: 14.5 second(s)
Minimum duration: 3.0 second(s)
Maximum duration: 4.0 second(s)
Average duration: 3.6 second(s)
Total size: 460.0K
Elapsed time: 5.0 ms
```
## Help
For a detailed description of all available options, run:
```bash
sndls --help
```
This will display all parameters along with their descriptions.
## Recursive search
By default, `sndls` searches for audio files only within the specified input folder.
To include audio files from nested directories, enable recursive search using `--recursive` or `-r`:
```bash
sndls /path/to/root/dir --recursive
```
## Generating SHA-256 hash
In addition to retrieving audio metadata and data for each file, you can generate the corresponding SHA-256 hash. To visualize the full SHA-256, use the `--sha256` option. If you'd prefer to see only the last 8 characters of the SHA-256, use the `--sha256-short` option instead:
```bash
sndls /path/to/audio/dir --sha256
```
This will make your output appear as follows:
```bash
/path/to/audio/dir/000_audio.wav d4f72a9b8cfd7e33ab32e4f24cfdb7f8a28f85a4b7f29de96b0b2b74369b48e5 106.3K WAV PCM_16 52782x1@16000hz -18.3dBrms:0 -2.5dBpeak:0
/path/to/audio/dir/001_audio.wav a6d1a0c02a5e55d531b29c6cf97c09cb68fe9b0f758bdf45c1ec8f7d915e9b63 111.7K WAV PCM_16 61425x1@16000hz -21.0dBrms:0 -4.2dBpeak:0
/path/to/audio/dir/002_audio.wav 0f2a4d6b19b6f9cf5d8f7d47d088dc9be7b964f017028d7389f1acb46a18c8b9 90.6K WAV PCM_16 49200x1@16000hz -16.8dBrms:0 -3.2dBpeak:0
/path/to/audio/dir/004_audio.wav 6a55cfef36e1a8937d66b9082f74c19bc82cdbf4db7a1c98a3f1b0883c1a7456 127.9K WAV PCM_16 68042x1@16000hz -19.1dBrms:0 -1.9dBpeak:0
...
```
If `--sha256-short` is used instead, you should see:
```bash
/path/to/audio/dir/000_audio.wav 369b48e5 106.3K WAV PCM_16 52782x1@16000hz -18.3dBrms:0 -2.5dBpeak:0
/path/to/audio/dir/001_audio.wav 915e9b63 111.7K WAV PCM_16 61425x1@16000hz -21.0dBrms:0 -4.2dBpeak:0
/path/to/audio/dir/002_audio.wav 6a18c8b9 90.6K WAV PCM_16 49200x1@16000hz -16.8dBrms:0 -3.2dBpeak:0
/path/to/audio/dir/004_audio.wav 3c1a7456 127.9K WAV PCM_16 68042x1@16000hz -19.1dBrms:0 -1.9dBpeak:0
...
```
## Fast metadata search
Inspecting large folders or those containing long audio files can take considerable time.
In some cases, it's preferable to extract only metadata without reading the actual audio samples.
For such cases, the `--meta` or `-m` option is available. In this case, only metadata-based
information will be printed to the terminal; information such as `peak_db` and `rms_db` will
not be calculated.
```bash
sndls /path/to/audio/dir --meta
```
For small folders, the difference in runtime may be negligible, but for larger datasets, it can be
substantial.
## Saving output to `.csv` file
The results of a given search can also be saved to a `.csv` file as tabular data for later inspection.
To do this, simply provide the `--csv` argument followed by the name of your desired output file:
```bash
sndls /path/to/audio/dir --csv output.csv
```
Please note that the `.csv` file will include the full file path and full SHA-256 (if `--sha256`
or `--sha256-short` is enabled). The results included in the `.csv` will be the exact results that match your search.
## Filtering by extension
Listed files can be filtered in many ways, including by their extension. Only certain audio file extensions
that can be parsed by `soundfile` are currently supported. Use the `--extension` or `-e` option if you want
to restrict your results to a certain extension or extensions:
```bash
sndls /path/to/audio/dir --extension .wav .flac
```
In this case, the search will include only `.wav` and `.flac` files, ignoring all other extensions.
## Filtering by `python` expressions
In addition to filtering by extension using the `--extension` or `-e` option, you can create custom
filters to find files with specific traits. This can be useful for tasks like:
- Finding clipped, silent, or anomalous files
- Finding files within a specific duration range
- Finding files with a particular sample rate
For these cases, the `--select` or `-s` option allows you to select files that meet certain criteria, while
the `--filter` or `-f` option lets you select all files except those that match the filter. Both options
accept `python` expressions for greater flexibility in your search.
Note that these options are mutually exclusive, meaning only one can be used at a time.
For example, to search for only clipped mono files, run:
```bash
sndls /path/to/audio/dir --select "is_clipped and num_channels == 1"
```
To filter out files shorter than 3.0 seconds, run:
```bash
sndls /path/to/audio/dir --filter "duration_seconds < 3.0"
```
Please note that some fields contain lists of values whose length depends on the
number of channels in the file, such as `peak_db` or `rms_db`. In such cases, built-in
functions like `any()` or `all()` can be useful.
For example, to find all files where all channels have peak values in decibels (`peak_db`)
greater than -3.0 dB, you can do the following:
```bash
sndls /path/to/audio/dir --select "all(db > -3.0 for db in peak_db)"
```
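Conceptually, each file's fields form a namespace against which the expression is evaluated. A simplified sketch of that idea (not the actual `sndls` implementation, which may evaluate and sandbox expressions differently):

```python
def matches(expression, fields):
    """Evaluate a Python filter expression against one file's metadata fields."""
    # Expose the fields as variables; strip builtins and re-expose only
    # any/all, which are useful for list-valued fields like peak_db.
    namespace = dict(fields, any=any, all=all)
    return bool(eval(expression, {"__builtins__": {}}, namespace))

# Hypothetical metadata for a single file
info = {
    "duration_seconds": 2.4,
    "num_channels": 1,
    "peak_db": [-2.1, -4.0],
    "is_clipped": True,
}

matches("is_clipped and num_channels == 1", info)        # True
matches("all(db > -3.0 for db in peak_db)", info)        # False: -4.0 dB fails
```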
Here is a list of all fields that can be used to refine your search:
| Field | Description | Data type |
|----------------------------|--------------------------------------------------------------------------------------------------------------|---------------|
| `file` | Audio file path | `str` |
| `filename` | Audio filename | `str` |
| `fs` | Audio sample rate in hertz (e.g. 16000, 48000) | `int` |
| `num_channels` | Number of channels in the file | `int` |
| `num_samples_per_channels` | Number of samples per channel                                                                                | `int`         |
| `duration_seconds` | Duration of the file in seconds | `float` |
| `size_bytes` | Size of the file in bytes | `int` |
| `fmt` | File format (`WAV`, `RF64`, etc) | `str` |
| `subtype` | File subtype (`PCM_16`, `PCM_24`, `FLOAT`, etc) | `str` |
| `peak_db` | Per-channel peak value in decibels | `List[float]` |
| `rms_db` | Per-channel root mean square value in decibels | `List[float]` |
| `spectral_rolloff` | Average spectral-rolloff in hertz (only available with `--spectral-rolloff`) | `List[float]` |
| `spectral_rolloff_min` | Minimum spectral-rolloff in hertz (only available with `--spectral-rolloff` and `--spectral-rolloff-detail`) | `List[float]` |
| `spectral_rolloff_max` | Maximum spectral-rolloff in hertz (only available with `--spectral-rolloff` and `--spectral-rolloff-detail`) | `List[float]` |
| `is_silent` | `True` if all channels have less than `--silent-thresh` dB RMS | `bool` |
| `is_clipped` | `True` if any channel contains values outside the `-1.0` to `1.0` range | `bool` |
| `is_anomalous` | `True` if any sample is `NaN`, `inf` or `-inf` | `bool` |
| `is_invalid` | `True` if the file could not be read. Only valid with `--skip-invalid-files` | `bool` |
| `sha256`                   | SHA-256 hash (only available if `--sha256` or `--sha256-short` is enabled)                                   | `str`         |
| `preload` | Preloaded `DataFrame` (only available with `--preload`) | `DataFrame` |
## Filtering by using preloaded files
`sndls` provides a `--preload` option to load a `.csv`, `.tsv`, or `.txt` file that can be used with the `--filter` and `--select` options. This feature allows you to expand your search and filtering capabilities, such as matching files from a specific file or finding a particular set of SHA-256 hashes, etc. To preload a file, you can do the following:
```bash
sndls /path/to/audio/dir --preload /path/to/preload/file
```
In all cases, your preloaded file will be interpreted as tabular data. To exclude the first row when it contains header information, use the `--preload-has-header` option; otherwise, every row will be treated as data. All data from your preloaded file will be available
under the `preload` variable when writing `--filter` or `--select` expressions, and you can use it as a regular `DataFrame`. If there is no header
information, the columns will be automatically numbered `column_1`, `column_2`, etc.
```bash
sndls /path/to/audio/dir --preload /path/to/preload/file --select "((preload['column_1'].str.contains(filename)) & (preload['column_2'] == 'TARGET')).any()"
```
This expression matches all files whose filename appears in `column_1` and whose `column_2` value is `TARGET`. Keep in mind that every file must be matched against your entire preload file, so using the `--preload` option for selection or filtering is expected to take longer than regular search expressions. However, it can be much more powerful in certain cases.
## Post-actions
In some cases, we want not just to see files matching certain criteria, but also to perform actions on them (e.g., remove clipped or silent files from a dataset). For such cases, the `--post-action` option exists. It has seven available values: `cp`, `mv`, `rm`, `cp+sp`, `mv+sp`, `dump`, and `dump+sp`, where:
- `cp` will copy the files to `--post-action-output`.
- `mv` will move the files to `--post-action-output`.
- `rm` will delete the files (this action cannot be undone).
- `cp+sp` will first copy the files to `--post-action-output` and then create `--post-action-num-splits` splits of the data.
- `mv+sp` will first move the files to `--post-action-output` and then create `--post-action-num-splits` splits of the data.
- `dump` will create a file with all the file paths. This can be useful for using `rsync` with `--files-from` option.
- `dump+sp` will create `--post-action-num-splits` files, each one containing a subset of all the file paths.
In all cases, you will be asked to confirm the action through the command line. Here is an example:
```bash
sndls /path/to/audio/dir --post-action cp --post-action-output /post/action/output
...
N file(s) will be copied to '/post/action/output'
Do you want to continue? [y/n]:
```
Write `y` or `n` and then press enter. The action will then be executed.
If you are using this tool as part of an automated pipeline, you may want to skip user input. In such cases, there is the `--unattended` or `-u` option. When used, it will skip the confirmation prompt, but ensure that your action is correctly set up beforehand:
```bash
sndls /path/to/audio/dir --post-action cp --post-action-output /post/action/output --unattended
...
N file(s) will be copied to '/post/action/output'
Creating post action output folder '/post/action/output'
N/N file(s) copied to '/post/action/output'
```
The additional output lines show whether all your files were correctly copied, moved, or deleted. Please note that moving or copying files will not overwrite already existing files.
## Random data sampling and splitting
`sndls` can be useful for sampling files that meet certain conditions from a large dataset, especially when copying everything or manually filtering the files might be time-consuming. The `--sample` option allows you to achieve this. In summary, this option can randomly sample a given number of files from your search results as follows:
```bash
sndls /path/to/audio/dir --sample 20
```
This command randomly samples 20 audio files from `/path/to/audio/dir`. These files can be used with the `--post-action` option to copy them to another folder for later inspection:
```bash
sndls /path/to/audio/dir --sample 20 --post-action cp --post-action-output /path/to/output/dir
```
This allows you to randomly sample data based on specific conditions, as it can be combined with the `--filter`, `--select`, or any other available options. To change the random seed used for selecting the files, you can do so as follows:
```bash
sndls /path/to/audio/dir --sample 20 --post-action cp --post-action-output /path/to/output/dir --random-seed 3673
```
Where `3673` can be any integer to be used as the random seed.
Additionally, if a `float` between `0.0` and `1.0` is provided with the `--sample` option, it will be interpreted as a percentage of the total number of files.
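The count-versus-fraction behavior of `--sample` can be sketched as follows (`resolve_sample_size` and `sample_files` are hypothetical helpers for illustration, not part of `sndls`):

```python
import random

def resolve_sample_size(sample, total):
    # A float in (0.0, 1.0] is treated as a fraction of the result set;
    # anything else is taken as an absolute file count.
    if isinstance(sample, float) and 0.0 < sample <= 1.0:
        return max(1, round(total * sample))
    return int(sample)

def sample_files(files, sample, seed=None):
    # A fixed seed (cf. --random-seed) makes the selection reproducible
    rng = random.Random(seed)
    k = min(resolve_sample_size(sample, len(files)), len(files))
    return rng.sample(files, k)
```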
# Cite
If this tool contributed to your work, please consider citing it:
```
@misc{sndls,
author = {Esteban Gómez},
title = {sndls},
year = 2024,
url = {https://github.com/eagomez2/sndls}
}
```
This tool was developed by <a href="https://estebangomez.me/" target="_blank">Esteban Gómez</a>, member of the <a href="https://www.aalto.fi/en/department-of-information-and-communications-engineering/speech-interaction-technology" target="_blank">Speech Interaction Technology group from Aalto University</a>.
# License
For further details about the license of this tool, please see [LICENSE](LICENSE). | text/markdown | null | Esteban Gómez <esteban.gomezmellado@aalto.fi> | null | null | null | audio, dataset, dsp | [] | [] | null | null | >=3.9 | [] | [
"sndls"
] | [] | [
"numpy>=2.0.2",
"polars>=1.23.0",
"scipy>=1.13.1",
"soundfile>=0.13.1",
"tqdm>=4.67.1",
"ruff>=0.9.7; extra == \"lint\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/eagomez2/sndls/issues",
"Changelog, https://github.com/eagomez2/sndls/releases",
"Documentation, https://github.com/eagomez2/sndls",
"Home, https://github.com/eagomez2/sndls",
"Repository, https://github.com/eagomez2/sndls"
] | python-requests/2.32.5 | 2026-02-20T23:34:49.129652 | sndls-0.1.13.tar.gz | 37,726 | 08/b6/4b3b4664f6390b64569cc8f9eed092bcb9968f8de767b8d0beff5f56ec6d/sndls-0.1.13.tar.gz | source | sdist | null | false | 3d6d551f68e8bf47c3cd99c22e5640bd | f7e672632ed9ddafece56e1f6cfeae787e7f4de1922e0ba9c76ca3961901102d | 08b64b3b4664f6390b64569cc8f9eed092bcb9968f8de767b8d0beff5f56ec6d | null | [] | 238 |
2.4 | onair-monitor | 0.1.2 | Linux daemon that detects camera usage and notifies Home Assistant via webhooks. | # onair-monitor
A tiny Linux daemon that detects camera usage and notifies
[Home Assistant](https://www.home-assistant.io/) via webhooks.
Optionally shows a system-tray icon that turns red when a camera is active.
## Install
```bash
uv tool install onair-monitor
```
### Tray icon support
To enable the system-tray icon, install the `tray` extra and the required
system libraries:
**1. System libraries (needed to build PyGObject):**
Debian / Ubuntu:
```bash
sudo apt install libgirepository-2.0-dev libcairo2-dev
```
Fedora:
```bash
sudo dnf install gobject-introspection-devel cairo-devel
```
Arch Linux:
```bash
sudo pacman -S gobject-introspection cairo
```
**2. Install with the tray extra:**
```bash
uv tool install "onair-monitor[tray]"
```
> **GNOME users:** the tray icon requires the
> [AppIndicator](https://extensions.gnome.org/extension/615/appindicator-support/)
> extension.
## Configure
On first run, a default config is created at
`~/.config/onair-monitor/config.json`. Edit it to point at your
Home Assistant instance:
```json
{
"ha_url": "http://homeassistant.local:8123",
"webhook_on": "camera_on",
"webhook_off": "camera_off",
"poll_interval": 2
}
```
The monitor POSTs to `{ha_url}/api/webhook/{webhook_on|off}`.
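As a rough illustration of that request (the helpers below are hypothetical, not the daemon's actual code), the webhook URL is built from the config and POSTed to:

```python
import urllib.request

CONFIG = {
    "ha_url": "http://homeassistant.local:8123",
    "webhook_on": "camera_on",
    "webhook_off": "camera_off",
}

def build_webhook_url(config, camera_active):
    # Pick the webhook ID matching the camera state and join it to the HA base URL
    hook = config["webhook_on"] if camera_active else config["webhook_off"]
    return f"{config['ha_url'].rstrip('/')}/api/webhook/{hook}"

def notify(config, camera_active):
    # Home Assistant webhooks accept a plain POST; an empty JSON body suffices
    req = urllib.request.Request(
        build_webhook_url(config, camera_active),
        data=b"{}",
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on network/HTTP errors
```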
### Setting up webhooks in Home Assistant
1. Go to **Settings > Automations & Scenes > Create Automation**.
2. Add a **Webhook** trigger and note the webhook ID.
3. Create one automation for `camera_on` and one for `camera_off`.
4. Use the webhook IDs in your config file.
## Run
```bash
# run directly (tray icon if available, otherwise headless)
onair-monitor
# force headless mode
onair-monitor --headless
```
### Autostart (desktop session)
```bash
onair-monitor --install-autostart
```
### Systemd user service
```bash
onair-monitor --install-service
systemctl --user start onair-monitor.service
```
## Uninstall
```bash
onair-monitor --uninstall
uv tool uninstall onair-monitor
```
The config file at `~/.config/onair-monitor/config.json` is kept — remove
it manually if you no longer need it.
## License
MIT
| text/markdown | null | Marius Helf <marius@happyyeti.tech> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Environment :: No Input/Output (Daemon)",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=7.0; extra == \"dev\"",
"pillow>=9.0; extra == \"tray\"",
"pygobject>=3.54.5; extra == \"tray\"",
"pystray>=0.19; extra == \"tray\""
] | [] | [] | [] | [
"Homepage, https://github.com/mariushelf/onair_monitor",
"Repository, https://github.com/mariushelf/onair_monitor"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:34:36.196185 | onair_monitor-0.1.2.tar.gz | 54,791 | 7a/f6/aac2265c19b65133124197ad73bad68dcf07b93e5a92354f59d7956787e6/onair_monitor-0.1.2.tar.gz | source | sdist | null | false | a2543690821a48b02a07c43fec1c9273 | f50687e913dd725d2e2898e0651ddd0b8b83dc19c6df4822ce33c4c5753e0d25 | 7af6aac2265c19b65133124197ad73bad68dcf07b93e5a92354f59d7956787e6 | MIT | [
"LICENSE"
] | 224 |
2.4 | quantconnect-stubs | 17541 | Type stubs for QuantConnect's Lean | # QuantConnect Stubs
This package contains type stubs for QuantConnect's [Lean](https://github.com/QuantConnect/Lean) algorithmic trading engine and for parts of the .NET library that are used by Lean.
These stubs can be used by editors to provide type-aware features like autocomplete and auto-imports in QuantConnect strategies written in Python.
After installing the stubs, you can copy the following line to the top of every Python file to have the same imports as the ones that are added by default in the cloud:
```py
from AlgorithmImports import *
```
This line imports [all common QuantConnect members](https://github.com/QuantConnect/Lean/blob/master/Common/AlgorithmImports.py) and provides autocomplete for them.
| text/markdown | QuantConnect | support@quantconnect.com | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3"
] | [] | https://github.com/QuantConnect/quantconnect-stubs-generator | null | null | [] | [] | [] | [
"pandas",
"matplotlib"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T23:31:55.598404 | quantconnect_stubs-17541.tar.gz | 1,136,435 | 89/64/312fe0ce110e3bd9a88d2fe28046c90a27e96cd4420f880fdec6c4cf0bf9/quantconnect_stubs-17541.tar.gz | source | sdist | null | false | 760c68a1a80e0ffd6b5d2cf06ed891d9 | fad18f839b2f060322e419ac68f91895e77c920994c71b839c3d8a66be00b0e2 | 8964312fe0ce110e3bd9a88d2fe28046c90a27e96cd4420f880fdec6c4cf0bf9 | null | [] | 690 |
2.1 | metatrader5 | 5.0.5640 | API Connector to MetaTrader 5 Terminal | `MetaTrader <https://www.metatrader5.com>`_ is a multi-asset platform for trading in the Forex market and stock exchanges.
In addition to the basic trading functionality which enables access to financial markets, the platform provides powerful
tools for analyzing vast amounts of price data. These include charts, technical indicators, graphical objects and the
built-in C-like MQL5 programming language.
The platform architecture enables the compact storage and efficient management of price data related to hundreds and
thousands of financial instruments with dozens of years of historical data. With the MetaTrader 5 for Python package,
you can analyze this information in your preferred environment.
Install the package and request arrays of bars and ticks with ease. Type the desired financial security name and date
in a command, and receive a complete data array. All the necessary related operations, such as the platform launch,
data synchronization with the broker's server and data transfer to the Python environment will be performed automatically.
For full documentation, see https://www.mql5.com/en/docs/integration/python_metatrader5
| text/x-rst | MetaQuotes Ltd. | plugins@metaquotes.net | MetaQuotes Ltd. | plugins@metaquotes.net | MIT | metatrader mt5 metaquotes mql5 forex currency exchange | [
"Development Status :: 5 - Production/Stable",
"Topic :: Office/Business :: Financial",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [
"Windows"
] | https://www.metatrader5.com | null | <4,>=3.6 | [] | [] | [] | [
"numpy>=1.7"
] | [] | [] | [] | [
"Documentation, https://www.mql5.com/en/docs/integration/python_metatrader5",
"Forum, https://www.mql5.com/en/forum"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T23:31:21.247331 | metatrader5-5.0.5640-cp39-cp39-win_amd64.whl | 58,834 | 03/c8/a74181a90cebb22d35efd0dc6187fd46c5bf454ec8fe8b2fd398da2db16a/metatrader5-5.0.5640-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | d15a1920542ab9ce560a52c6d2b69b98 | 6eb177c22e08665f45253c2689a8a04616556128293fa85dee0b87528a50c6ca | 03c8a74181a90cebb22d35efd0dc6187fd46c5bf454ec8fe8b2fd398da2db16a | null | [] | 2,999 |
2.4 | orca-mlips | 1.1.0 | MLIP plugins for ORCA ExtTool (UMA, ORB, MACE, AIMNet2) | # orca-mlips
[](https://zenodo.org/badge/latestdoi/1160316090)
MLIP (Machine Learning Interatomic Potential) plugins for ORCA `ExtTool` (`ProgExt`) interface.
Four model families are currently supported:
- **UMA** ([fairchem](https://github.com/facebookresearch/fairchem)) — default model: `uma-s-1p1`
- **ORB** ([orb-models](https://github.com/orbital-materials/orb-models)) — default model: `orb_v3_conservative_omol`
- **MACE** ([mace](https://github.com/ACEsuit/mace)) — default model: `MACE-OMOL-0`
- **AIMNet2** ([aimnetcentral](https://github.com/isayevlab/aimnetcentral)) — default model: `aimnet2`
All backends provide energy and gradient, and can output an **analytical Hessian** in **ORCA** `.hess` format via `--dump-hessian`.
An optional implicit-solvent correction (`xTB`) is also available via `--solvent`.
> The model server starts automatically and stays resident in memory, so repeated calls during optimization are fast.
Requires **Python 3.9** or later.
## Quick Start (Default = UMA)
1. Install PyTorch suitable for your environment (CUDA/CPU).
```bash
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
```
2. Install the package with the UMA profile. If you need ORB/MACE/AIMNet2, use `orca-mlips[orb]`/`orca-mlips[mace]`/`orca-mlips[aimnet2]`.
```bash
pip install "orca-mlips[uma]"
```
3. Log in to Hugging Face for UMA model access. (Not required for ORB/MACE/AIMNet2)
```bash
huggingface-cli login
```
4. Use in an ORCA input file. If you use ORB/MACE/AIMNet2, use `ProgExt "orb"`/`ProgExt "mace"`/`ProgExt "aimnet2"`.
For detailed ORCA External Tool / `ExtOpt` usage, see https://www.faccts.de/docs/orca/6.1/tutorials/workflows/extopt.html
```text
! ExtOpt Opt
%pal
nprocs 8
end
%method
ProgExt "uma"
end
* xyz 0 1
O 0.000000 0.000000 0.000000
H 0.758602 0.000000 0.504284
H -0.758602 0.000000 0.504284
*
```
Other backends:
```text
%method
ProgExt "orb"
end
%method
ProgExt "mace"
end
%method
ProgExt "aimnet2"
end
```
## Implicit Solvent Correction (xTB)
You can use an implicit-solvent correction via xTB. To use it, install xTB and pass the `--solvent` option.
Install xTB in your conda environment (easy path):
```bash
conda install xtb
```
Use `--solvent <name>` through `Ext_Params` (examples: `water`, `thf`):
```text
%method
ProgExt "uma"
Ext_Params "--solvent water"
end
%method
ProgExt "uma"
Ext_Params "--solvent thf"
end
```
This implementation follows the solvent-correction approach described in:
Zhang, C., Leforestier, B., Besnard, C., & Mazet, C. (2025). Pd-catalyzed regiodivergent arylation of cyclic allylboronates. Chemical Science, 16, 22656-22665. https://doi.org/10.1039/d5sc07577g
When you describe this correction in a paper, you can use:
`Implicit solvent effects were accounted for by integrating the ALPB [or CPCM-X] solvation model from the xtb package as an additional correction to UMA-generated energies, gradients, and Hessians.`
**Note:** `--solvent-model cpcmx` (CPCM-X) requires xTB built from source with `-DWITH_CPCMX=ON`. The conda-forge `xtb` package does not include CPCM-X support. See `SOLVENT_EFFECTS.md` for build instructions.
For details, see `SOLVENT_EFFECTS.md`.
## Using Analytical Hessian (optional two-step workflow)
Optimization and TS searches can run without providing an initial Hessian — ORCA builds one internally. Providing an analytical Hessian from the MLIP via `--dump-hessian` + `InHessName` improves convergence, especially for TS searches.
> **Why two steps?** ORCA has no API to receive Hessian data directly through `ExtTool`. The only supported path is:
> 1) dump Hessian with `--dump-hessian <file>` in step 1,
> 2) read it in step 2 with `InHessName <file>`.
Generate a `.hess` file first, then load it via `InHessName`.
### TS Search
**Step 1: Generate analytical Hessian via `--dump-hessian`**
```text
! ExtOpt Opt
%geom
MaxIter 1
end
%method
ProgExt "uma"
Ext_Params "--dump-hessian cla.hess"
end
* xyz 0 1
...
*
```
This runs a single-iteration optimization that triggers the ExtTool call and writes the analytical Hessian in ORCA `.hess` format. `! ExtOpt` is required to make ORCA use the external tool instead of its own internal methods. The job may exit with a non-zero status (not converged), but the `.hess` file is created.
**Step 2: TS optimization reading Hessian**
```text
! ExtOpt OptTS
%method
ProgExt "uma"
end
%geom
InHessName "cla.hess"
end
* xyz 0 1
...
*
```
ORCA reads the initial Hessian from the `.hess` file. The model server keeps the MLIP loaded so repeated calls during optimization are fast.
### Geometry Optimization (with analytical Hessian)
Same two-step workflow with `! ExtOpt Opt` instead of `! ExtOpt OptTS`:
```text
! ExtOpt Opt
%geom
MaxIter 1
end
%method
ProgExt "mace"
Ext_Params "--dump-hessian water.hess"
end
* xyz 0 1
...
*
```
then:
```text
! ExtOpt Opt
%method
ProgExt "mace"
end
%geom
InHessName "water.hess"
end
* xyz 0 1
...
*
```
## Installing Model Families
```bash
pip install "orca-mlips[uma]" # UMA (default)
pip install "orca-mlips[orb]" # ORB
pip install "orca-mlips[mace]" # MACE
pip install "orca-mlips[orb,mace]" # ORB + MACE
pip install "orca-mlips[aimnet2]" # AIMNet2
pip install "orca-mlips[orb,mace,aimnet2]" # ORB + MACE + AIMNet2
pip install orca-mlips # core only
```
> **Note:** UMA and MACE have a dependency conflict (`e3nn`). Use separate environments.
Local install:
```bash
git clone https://github.com/t-0hmura/orca-mlips.git
cd orca-mlips
pip install ".[uma]"
```
Model download notes:
- **UMA**: Hosted on Hugging Face Hub. Run `huggingface-cli login` once.
- **ORB / MACE / AIMNet2**: Downloaded automatically on first use.
## Upstream Model Sources
- UMA / FAIR-Chem: https://github.com/facebookresearch/fairchem
- ORB / orb-models: https://github.com/orbital-materials/orb-models
- MACE: https://github.com/ACEsuit/mace
- AIMNet2: https://github.com/isayevlab/aimnetcentral
## Advanced Options
See `OPTIONS.md` for backend-specific tuning parameters.
For solvent correction options, see `SOLVENT_EFFECTS.md`.
Command aliases:
- Short: `uma`, `orb`, `mace`, `aimnet2`
- Prefixed: `orca-mlips-uma`, `orca-mlips-orb`, `orca-mlips-mace`, `orca-mlips-aimnet2`
## Troubleshooting
- **`ProgExt "uma"` runs the wrong plugin** — Use `ProgExt "orca-mlips-uma"` to avoid alias conflicts.
- **`ProgExt "aimnet2"` runs the wrong plugin** — Use `ProgExt "orca-mlips-aimnet2"` to avoid alias conflicts.
- **`uma` command not found** — Activate the conda environment where the package is installed.
- **UMA model download fails (401/403)** — Run `huggingface-cli login`. Some models require access approval on Hugging Face.
- **Works interactively but fails in PBS jobs** — Use absolute path from `which uma` in the ORCA input.
## Citation
If you use this package, please cite:
```bibtex
@software{ohmura2026orcamlips,
author = {Ohmura, Takuto},
title = {orca-mlips},
year = {2026},
month = {2},
version = {1.1.0},
url = {https://github.com/t-0hmura/orca-mlips},
license = {MIT},
doi = {10.5281/zenodo.18695270}
}
```
## References
- ORCA ExtTool official tutorial (ExtOpt workflow): https://www.faccts.de/docs/orca/6.1/tutorials/workflows/extopt.html
- ORCA ExtTool: https://www.faccts.de/docs/orca/6.1/manual/contents/essentialelements/externaloptimizer.html
- ORCA external tools: https://github.com/faccts/orca-external-tools
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"ase",
"torch>=2.8.0; extra == \"uma\"",
"fairchem-core>=2.13; extra == \"uma\"",
"torch>=2.8.0; extra == \"orb\"",
"orb-models; extra == \"orb\"",
"torch>=2.8.0; extra == \"mace\"",
"mace-torch; extra == \"mace\"",
"torch>=2.8.0; extra == \"aimnet2\"",
"aimnet; extra == \"aimnet2\""
] | [] | [] | [] | [
"Homepage, https://github.com/t-0hmura/orca-mlips",
"Repository, https://github.com/t-0hmura/orca-mlips"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:31:17.803796 | orca_mlips-1.1.0.tar.gz | 39,037 | 77/d2/19c0c33ab89e0697c964f059fcaa195e6d1d7c21d8bf444f4f6beda76e56/orca_mlips-1.1.0.tar.gz | source | sdist | null | false | 652dd7880056f5eeb3766d55d6fd9701 | 3eff7d66bce105ae1905a43437876b740f9413d426480b6dfd270336c33bd18a | 77d219c0c33ab89e0697c964f059fcaa195e6d1d7c21d8bf444f4f6beda76e56 | MIT | [
"LICENSE"
] | 217 |
2.4 | django-geoaddress | 0.2.0 | Django integration for address verification and geocoding. Provides Django fields, widgets, and admin integration for the geoaddress library. | # django-geoaddress
Django integration for address verification and geocoding. Provides Django fields, widgets, and admin integration for the [geoaddress](https://github.com/hicinformatic/python-geoaddress) library.
## Installation
```bash
pip install django-geoaddress
```
## Features
- **Django Model Fields**: `GeoaddressField` for storing address data in Django models
- **Autocomplete Widget**: Interactive address autocomplete widget with real-time suggestions
- **Admin Integration**: Django admin interface for managing addresses and providers
- **Multiple Providers**: Support for multiple geocoding providers (Google Maps, Mapbox, Nominatim, etc.)
- **Virtual Models**: Uses `django-virtualqueryset` for dynamic provider and address models
- **Address Management**: View, search, and manage addresses through Django admin
## Quick Start
### 1. Add to INSTALLED_APPS
```python
INSTALLED_APPS = [
# ...
'djgeoaddress',
]
```
### 2. Include URLs
```python
# urls.py
from django.urls import path, include
urlpatterns = [
# ...
path('geoaddress/', include('djgeoaddress.urls')),
]
```
### 3. Use in Models
```python
from django.db import models
from djgeoaddress.fields import GeoaddressField
class MyModel(models.Model):
address = GeoaddressField()
# ... other fields
```
### 4. Use in Forms
The `GeoaddressField` automatically uses the `GeoaddressAutocompleteWidget` which provides:
- Real-time address autocomplete
- Address search across multiple providers
- Structured address data storage
## Address Field
The `GeoaddressField` stores address data as JSON with the following structure:
```python
{
"text": "Full formatted address string",
"reference": "Backend reference ID",
"address_line1": "Street number and name",
"address_line2": "Building, apartment (optional)",
"city": "City name",
"postal_code": "Postal/ZIP code",
"state": "State/region/province",
"country": "Country name",
"country_code": "ISO country code (e.g., FR, US, GB)",
"latitude": 48.8566,
"longitude": 2.3522,
"backend_name": "nominatim",
"geoaddress_id": "nominatim-123456"
}
```
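Since the field stores plain JSON, downstream code can treat the saved value as a dict. A minimal sketch using the shape documented above (the sample payload is illustrative, not real geocoder output):

```python
# Illustrative payload in the shape documented above (values are made up).
address = {
    "text": "Tour Eiffel, Paris 75007, France",
    "city": "Paris",
    "postal_code": "75007",
    "country": "France",
    "country_code": "FR",
    "latitude": 48.8584,
    "longitude": 2.2945,
    "backend_name": "nominatim",
}

# Build a short display label and a (lat, lon) pair for mapping.
label = f'{address["city"]}, {address["country_code"]}'
coords = (address["latitude"], address["longitude"])
print(label)   # Paris, FR
print(coords)  # (48.8584, 2.2945)
```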
## Admin Interface
Django-geoaddress provides admin interfaces for:
- **Address Management**: View and manage addresses with search and filtering
- **Provider Management**: View available geocoding providers and their capabilities
- **Address Autocomplete**: Interactive autocomplete in admin forms
Access the admin at:
- Addresses: `/admin/djgeoaddress/address/`
- Providers: `/admin/djgeoaddress/provider/`
## Supported Providers
The library supports multiple geocoding providers through the `geoaddress` library:
**Free providers** (no API key required):
- Nominatim (OpenStreetMap)
- Photon (Komoot/OSM)
**Paid/API key providers**:
- Google Maps
- Mapbox
- LocationIQ
- OpenCage
- Geocode Earth
- Geoapify
- Maps.co
- HERE
## Configuration
### Provider Configuration
Configure geocoding providers in your Django settings or through environment variables. Each provider may require API keys or specific configuration.
Example:
```python
# settings.py
GEOADDRESS_PROVIDERS = {
'google_maps': {
'api_key': 'your-api-key',
},
'mapbox': {
'api_key': 'your-api-key',
},
}
```
## Requirements
- Django >= 3.2
- Python >= 3.10
- geoaddress (automatically installed as dependency)
- django-virtualqueryset (for virtual models)
## Development
```bash
# Clone the repository
git clone https://github.com/hicinformatic/django-geoaddress.git
cd django-geoaddress
# Install in development mode
pip install -e .
pip install -e ".[dev]"
```
## License
MIT License - see LICENSE file for details.
## Links
- **Homepage**: https://github.com/hicinformatic/django-geoaddress
- **Repository**: https://github.com/hicinformatic/django-geoaddress
- **Issues**: https://github.com/hicinformatic/django-geoaddress/issues
- **geoaddress library**: https://github.com/hicinformatic/python-geoaddress
| text/markdown | null | Hicinformatic <hicinformatic@gmail.com> | null | null | MIT | django, geocoding, reverse-geocoding, address, geolocation, maps, django-fields, django-widgets, address-autocomplete, python | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Framework :: Django",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Scientific/Engineering :: GIS"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=3.2",
"geoaddress>=0.2.2",
"django-boosted>=0.1.0",
"django-providerkit>=0.1.0",
"django-virtualqueryset>=0.1.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\"",
"ruff>=0.5; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"django-stubs>=5.0.0; extra == \"dev\"",
"python-dotenv>=1.0; extra == \"dev\"",
"ruff>=0.5.0; extra == \"lint\"",
"mypy>=1.0.0; extra == \"lint\"",
"semgrep>=1.0.0; extra == \"lint\"",
"pylint>=3.0.0; extra == \"lint\"",
"radon>=6.0.0; extra == \"quality\"",
"vulture>=2.0.0; extra == \"quality\"",
"autoflake>=2.0.0; extra == \"quality\"",
"bandit>=1.7.0; extra == \"security\"",
"safety>=3.0.0; extra == \"security\"",
"pip-audit>=2.7.0; extra == \"security\"",
"semgrep>=1.0.0; extra == \"security\""
] | [] | [] | [] | [
"Homepage, https://github.com/hicinformatic/django-geoaddress",
"Repository, https://github.com/hicinformatic/django-geoaddress",
"Documentation, https://github.com/hicinformatic/django-geoaddress#readme",
"Issues, https://github.com/hicinformatic/django-geoaddress/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T23:29:51.754319 | django_geoaddress-0.2.0.tar.gz | 13,826 | 0c/1f/4e24b60b56cf84e96e0f33f846722ca3ddcb33983a53ba7f8940e73152c0/django_geoaddress-0.2.0.tar.gz | source | sdist | null | false | 31a42ae61ac6ef52956ee164538ed988 | e63aebb877c9433b4cca059628e786ab007291bc92c57c62abefaae5313a7131 | 0c1f4e24b60b56cf84e96e0f33f846722ca3ddcb33983a53ba7f8940e73152c0 | null | [
"LICENSE"
] | 221 |
2.4 | aivora | 0.1.0 | Feature interaction utilities for ML workflows | # Aivora
A Python package for generating interaction features.
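The package's API is not documented in this README. As a generic illustration of what an interaction feature is (this is not aivora's actual interface), it is most commonly the element-wise product of two input features:

```python
# Generic illustration only -- this is not aivora's API.
# An interaction feature captures the joint effect of two features,
# most commonly as their element-wise product.
x1 = [1.0, 2.0, 3.0]
x2 = [0.5, 0.5, 2.0]
interaction = [a * b for a, b in zip(x1, x2)]
print(interaction)  # [0.5, 1.0, 6.0]
```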
## Installation
```bash
pip install aivora
```
| text/markdown | Your Name | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.7 | 2026-02-20T23:28:03.449587 | aivora-0.1.0.tar.gz | 2,218 | 2f/af/26795a29a2877f88be2b7cad86f57deacf809757664bd709111442586b02/aivora-0.1.0.tar.gz | source | sdist | null | false | 4f457efe69c7d7e3085a6d1dfe98355b | dc590eebc162151f1550676722de57f3f8f5e916f9cf3e8f319f0104f096f1ac | 2faf26795a29a2877f88be2b7cad86f57deacf809757664bd709111442586b02 | null | [] | 247 |
2.4 | g16-mlips | 1.1.0 | MLIP plugins for Gaussian16 External (UMA, ORB, MACE, AIMNet2) | # g16-mlips
[](https://zenodo.org/badge/latestdoi/1160316483)
MLIP (Machine Learning Interatomic Potential) plugins for Gaussian 16 `External` interface.
Four model families are currently supported:
- **UMA** ([fairchem](https://github.com/facebookresearch/fairchem)) — default model: `uma-s-1p1`
- **ORB** ([orb-models](https://github.com/orbital-materials/orb-models)) — default model: `orb_v3_conservative_omol`
- **MACE** ([mace](https://github.com/ACEsuit/mace)) — default model: `MACE-OMOL-0`
- **AIMNet2** ([aimnetcentral](https://github.com/isayevlab/aimnetcentral)) — default model: `aimnet2`
All backends provide energy, gradient, and **analytical Hessian** for **Gaussian 16**.
An optional implicit-solvent correction (`xTB`) is also available via `--solvent`.
> The model server starts automatically and stays resident, so repeated calls during optimization are fast.
Requires **Python 3.9** or later.
## Quick Start (Default = UMA)
1. Install PyTorch suitable for your environment (CUDA/CPU).
```bash
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
```
2. Install the package with the UMA profile. If you need ORB/MACE/AIMNet2, use `g16-mlips[orb]`/`g16-mlips[mace]`/`g16-mlips[aimnet2]`.
```bash
pip install "g16-mlips[uma]"
```
3. Log in to Hugging Face for UMA model access. (Not required for ORB/MACE/AIMNet2)
```bash
huggingface-cli login
```
4. Use in a Gaussian input file (**`nomicro` is required**). If you use ORB/MACE/AIMNet2, use `external="orb"`/`external="mace"`/`external="aimnet2"`.
For detailed Gaussian `External` usage, see https://gaussian.com/external/
```text
%nprocshared=8
%mem=32GB
%chk=water_ext.chk
#p external="uma" opt(nomicro)
Water external UMA example
0 1
O 0.000000 0.000000 0.000000
H 0.758602 0.000000 0.504284
H -0.758602 0.000000 0.504284
```
Other backends:
```text
#p external="orb" opt(nomicro)
#p external="mace" opt(nomicro)
#p external="aimnet2" opt(nomicro)
```
> **Important:** For Gaussian `External` geometry optimization, always include `nomicro` in `opt(...)`.
> Without it, Gaussian uses micro-iterations that assume an internal gradient routine, which is incompatible with the external interface.
### Analytical Hessian (optional)
Optimization and IRC can run without providing an initial Hessian — Gaussian builds one internally using estimated force constants. Providing an MLIP analytical Hessian via `freq` + `readfc` improves convergence, especially for TS searches.
Gaussian `freq` (with `external=...`) is the only path that requests the plugin's analytical Hessian directly.
**Frequency calculation**
```text
%nprocshared=8
%mem=32GB
%chk=cla_ext.chk
#p external="uma" freq
CLA freq UMA
0 1
...
```
Gaussian sends `igrd=2` and stores the result in the `.chk` file.
### Using the analytical Hessian in optimization jobs
To use the MLIP analytical Hessian in `opt`/`irc`, read the Hessian from an existing checkpoint using Gaussian `%oldchk` + `readfc`.
```text
%nprocshared=8
%mem=32GB
%chk=cla_ext.chk
%oldchk=cla_ext.chk
#p external="uma" opt(readfc,nomicro)
CLA opt UMA
0 1
...
```
`readfc` reads the force constants from `%oldchk`. This applies to `opt` and `irc` runs.
Note that `freq` is the only job type that requests the analytical Hessian (`igrd=2`) from the plugin. `opt` and `irc` themselves never request it directly.
## Implicit Solvent Correction (xTB)
You can use an implicit-solvent correction via xTB. To use it, install xTB and pass the `--solvent` option.
Install xTB in your conda environment (easy path):
```bash
conda install xtb
```
Use `--solvent <name>` in `external="..."` (examples: `water`, `thf`):
```text
#p external="uma --solvent water" opt(nomicro)
#p external="uma --solvent thf" freq
```
This implementation follows the solvent-correction approach described in:
Zhang, C., Leforestier, B., Besnard, C., & Mazet, C. (2025). Pd-catalyzed regiodivergent arylation of cyclic allylboronates. Chemical Science, 16, 22656-22665. https://doi.org/10.1039/d5sc07577g
When you describe this correction in a paper, you can use:
`Implicit solvent effects were accounted for by integrating the ALPB [or CPCM-X] solvation model from the xtb package as an additional correction to UMA-generated energies, gradients, and Hessians.`
**Note:** `--solvent-model cpcmx` (CPCM-X) requires xTB built from source with `-DWITH_CPCMX=ON`. The conda-forge `xtb` package does not include CPCM-X support. See `SOLVENT_EFFECTS.md` for build instructions.
For details, see `SOLVENT_EFFECTS.md`.
## Installing Model Families
```bash
pip install "g16-mlips[uma]" # UMA (default)
pip install "g16-mlips[orb]" # ORB
pip install "g16-mlips[mace]" # MACE
pip install "g16-mlips[orb,mace]" # ORB + MACE
pip install "g16-mlips[aimnet2]" # AIMNet2
pip install "g16-mlips[orb,mace,aimnet2]" # ORB + MACE + AIMNet2
pip install g16-mlips # core only
```
> **Note:** UMA and MACE have a dependency conflict (`e3nn`). Use separate environments.
Local install:
```bash
git clone https://github.com/t-0hmura/g16-mlips.git
cd g16-mlips
pip install ".[uma]"
```
Model download notes:
- **UMA**: Hosted on Hugging Face Hub. Run `huggingface-cli login` once.
- **ORB / MACE / AIMNet2**: Downloaded automatically on first use.
## Upstream Model Sources
- UMA / FAIR-Chem: https://github.com/facebookresearch/fairchem
- ORB / orb-models: https://github.com/orbital-materials/orb-models
- MACE: https://github.com/ACEsuit/mace
- AIMNet2: https://github.com/isayevlab/aimnetcentral
## Advanced Options
See `OPTIONS.md` for backend-specific tuning parameters.
For solvent correction options, see `SOLVENT_EFFECTS.md`.
Command aliases:
- Short: `uma`, `orb`, `mace`, `aimnet2`
- Prefixed: `g16-mlips-uma`, `g16-mlips-orb`, `g16-mlips-mace`, `g16-mlips-aimnet2`
## Troubleshooting
- **`external="uma"` runs the wrong plugin** — Use `external="g16-mlips-uma"` to avoid alias conflicts.
- **`external="aimnet2"` runs the wrong plugin** — Use `external="g16-mlips-aimnet2"` to avoid alias conflicts.
- **`uma` command not found** — Activate the conda environment where the package is installed.
- **UMA model download fails (401/403)** — Run `huggingface-cli login`. Some models require access approval on Hugging Face.
- **Works interactively but fails in PBS jobs** — Use absolute path from `which uma` in the Gaussian input.
## Citation
If you use this package, please cite:
```bibtex
@software{ohmura2026g16mlips,
author = {Ohmura, Takuto},
title = {g16-mlips},
year = {2026},
month = {2},
version = {1.1.0},
url = {https://github.com/t-0hmura/g16-mlips},
license = {MIT},
doi = {10.5281/zenodo.18695243}
}
```
## References
- Gaussian External interface (official): https://gaussian.com/external/
- Gaussian External: `$g16root/g16/doc/extern.txt`, `$g16root/g16/doc/extgau`
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"ase",
"torch>=2.8.0; extra == \"uma\"",
"fairchem-core>=2.13; extra == \"uma\"",
"torch>=2.8.0; extra == \"orb\"",
"orb-models; extra == \"orb\"",
"torch>=2.8.0; extra == \"mace\"",
"mace-torch; extra == \"mace\"",
"torch>=2.8.0; extra == \"aimnet2\"",
"aimnet; extra == \"aimnet2\""
] | [] | [] | [] | [
"Homepage, https://github.com/t-0hmura/g16-mlips",
"Repository, https://github.com/t-0hmura/g16-mlips"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:27:51.695170 | g16_mlips-1.1.0.tar.gz | 38,895 | 35/24/ee5c11540755774f6f9b8a2bcc246e6e6b0900a98946258e50dc6c990f22/g16_mlips-1.1.0.tar.gz | source | sdist | null | false | a3c12c2015e12cf5d34009984e8b5d4e | 93da3b27af01a1b7ffb51f3fb29bbb1898dd5c1140d8df633154f5360db9a2a6 | 3524ee5c11540755774f6f9b8a2bcc246e6e6b0900a98946258e50dc6c990f22 | MIT | [
"LICENSE"
] | 232 |
2.4 | npc_lims | 0.1.191 | Tools to fetch and update paths, metadata and state for Mindscope Neuropixels sessions, in the cloud. | # npc_lims
**n**euro**p**ixels **c**loud **l**ab **i**nformation **m**anagement **s**ystem
Tools to fetch and update paths, metadata and state for Mindscope Neuropixels sessions, in the cloud.
[](https://pypi.org/project/npc-lims/)
[](https://pypi.org/project/npc-lims/)
[](https://app.codecov.io/github/AllenInstitute/npc_lims)
[](https://github.com/alleninstitute/npc_lims/actions/workflows/publish.yml)
[](https://github.com/alleninstitute/npc_lims/issues)
## quickstart
- make a new Python >=3.9 virtual environment with conda or venv (lighter option, since this package does not require pandas, numpy etc.):
```bash
python -m venv .venv
```
- activate the virtual environment:
- Windows
```cmd
.venv\Scripts\activate
```
- Unix
```bash
source .venv/bin/activate
```
- install the package:
```bash
python -m pip install npc_lims
```
- setup credentials
- required environment variables:
- AWS S3
- `AWS_DEFAULT_REGION`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- to find and read files on S3
- must have read access on relevant aind buckets
- can be in a standard `~/.aws` location, as used by AWS CLI or boto3
- CodeOcean API
- `CODE_OCEAN_API_TOKEN`
- `CODE_OCEAN_DOMAIN`
- to find processed data in "data assets" via the Codeocean API
- generated in CodeOcean:
- right click on `Account` (bottom left, person icon)
- click `User Secrets` - these are secrets that can be made available as environment variables in CodeOcean capsules
- go to `Access Tokens` and click `Generate new token` - this is for programmatically querying CodeOcean's databases
- in `Token Name` enter `Codeocean API (read)` and check `read` on capsules and datasets
- a token will be generated: click copy (storing it in a password manager, if you use one)
- head back to `User Secrets`, where we'll paste it into a new secret via `Add secret > API credentials`: in `description` enter `Codeocean API (read)`; in `API key` enter `CODE_OCEAN_API_KEY`; in `API secret` paste the copied secret from before (should start with `cop_`...)
`CODE_OCEAN_DOMAIN` is the CodeOcean https address, up to and including `.org`
- environment variables can also be specified in a file named `.env` in the current working directory
- example: https://www.dotenv.org/docs/security/env.html
- be very careful that this file does not get pushed to public locations, e.g. github
- if using git, add it to a `.gitignore` file in your project's root directory:
```gitignore
.env*
```
- now in Python we can find sessions that are available to work with:
```python
>>> import npc_lims;
# get a sequence of `SessionInfo` dataclass instances, one per session:
>>> tracked_sessions: tuple[npc_lims.SessionInfo, ...] = npc_lims.get_session_info()
# each `SessionInfo` instance has minimal metadata about its session:
>>> tracked_sessions[0] # doctest: +SKIP
npc_lims.SessionInfo(id='626791_2022-08-15', subject=626791, date='2022-08-15', idx=0, project='DRPilotSession', is_ephys=True, is_sync=True, allen_path=PosixUPath('//allen/programs/mindscope/workgroups/dynamicrouting/PilotEphys/Task 2 pilot/DRpilot_626791_20220815'))
>>> tracked_sessions[0].is_ephys # doctest: +SKIP
True
# currently, we're only tracking behavior and ephys sessions that use variants of https://github.com/samgale/DynamicRoutingTask/blob/main/TaskControl.py:
>>> all(s.date.year >= 2022 for s in tracked_sessions)
True
```
- "tracked sessions" are discovered via 3 routes:
- https://github.com/AllenInstitute/npc_lims/blob/main/tracked_sessions.yaml
- `\\allen\programs\mindscope\workgroups\dynamicrouting\DynamicRoutingTask\DynamicRoutingTraining.xlsx`
- `\\allen\programs\mindscope\workgroups\dynamicrouting\DynamicRoutingTask\DynamicRoutingTrainingNSB.xlsx`
| text/markdown | null | Arjun Sridhar <arjun.sridhar@alleninstitute.org>, Ben Hardcastle <ben.hardcastle@alleninstitute.org> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"npc-session>=0.1.36",
"redis>=4.1.4",
"pydbhub-bjh>=0.0.8",
"pyyaml>=6.0.1",
"pyopenssl>=23.2.0",
"openpyxl>=3.1.2",
"packaging>=23.2",
"types-pyYAML>=6.0.12.12",
"types-requests>=2.31.0.6",
"npc-io>=0.1.24",
"codeocean>=0.3.1",
"aind-session>=0.3.4",
"polars>1.0; extra == \"polars\""
] | [] | [] | [] | [
"Repository, https://github.com/AllenInstitute/npc_lims",
"Issues, https://github.com/AllenInstitute/npc_lims/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:27:37.760782 | npc_lims-0.1.191-py3-none-any.whl | 46,840 | 48/02/391819c38464ba45620287df20e01008f46564a49dd7c1c40fb5c60e6682/npc_lims-0.1.191-py3-none-any.whl | py3 | bdist_wheel | null | false | 751deba466b39e9a9746113025b4c258 | 3fd84d87dfb74521b1d19f74733c01de9b89b8b6485c7ef3523e3b713d7b7aaa | 4802391819c38464ba45620287df20e01008f46564a49dd7c1c40fb5c60e6682 | null | [
"LICENSE"
] | 0 |
2.4 | clusteval | 2.2.7 | clusteval is a python package for unsupervised cluster validation. | # clusteval
<p align="center">
<a href="https://erdogant.github.io/clusteval">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/logo_large_2.png" width="300" />
</a>
</p>
[](https://img.shields.io/pypi/pyversions/clusteval)
[](https://pypi.org/project/clusteval/)
[](https://github.com/erdogant/clusteval/blob/master/LICENSE)
[](https://www.buymeacoffee.com/erdogant)
[](https://github.com/erdogant/clusteval/network)
[](https://github.com/erdogant/clusteval/issues)
[](http://www.repostatus.org/#active)
[](https://pepy.tech/project/clusteval)
[](https://pepy.tech/project/clusteval)
[](https://zenodo.org/badge/latestdoi/232915924)
[](https://erdogant.github.io/clusteval/)
[](https://erdogant.github.io/clusteval/pages/html/Documentation.html#colab-notebook)
<!---[](https://erdogant.github.io/donate/?currency=USD&amount=5)-->
``clusteval`` is a Python package developed to evaluate detected clusters and return the cluster labels with the best **clustering tendency**, **number of clusters**, and **clustering quality**. Multiple evaluation strategies are implemented: **silhouette**, **dbindex**, and **derivative**. Four clustering methods can be used: **agglomerative**, **kmeans**, **dbscan**, and **hdbscan**.
## Blogs
1. [A step-by-step guide for clustering images](https://medium.com/data-science-collective/a-step-by-step-guide-for-clustering-images-82b4a83b36a9)
2. [Detection of Duplicate Images Using Image Hash Functions](https://medium.com/data-science-collective/detection-of-near-identical-images-using-image-hash-functions-c61e133a0958)
3. [From Data to Clusters: When is Your Clustering Good Enough?](Soon)
4. [From Clusters To Insights; The Next Step](Soon)
---
## Documentation
Full documentation is available at [erdogant.github.io/clusteval](https://erdogant.github.io/clusteval/), including examples and API references.
---
## Installation
It is advisable to use a virtual environment:
```bash
conda create -n env_clusteval python=3.12
conda activate env_clusteval
```
Install via PyPI:
```bash
pip install clusteval
```
To upgrade to the latest version:
```bash
pip install --upgrade clusteval
```
Import the library:
```python
from clusteval import clusteval
```
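Conceptually, the silhouette strategy amounts to sweeping candidate cluster counts and scoring each labeling. A minimal stand-in for that idea using scikit-learn directly (an illustration, not the clusteval API):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Three well-separated blobs; the sweep should recover k=3.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

# The k with the highest silhouette score is the suggested number of clusters.
best_k = max(scores, key=scores.get)
print(best_k)
```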
---
## Examples
A structured overview is available in the [documentation](https://erdogant.github.io/clusteval/pages/html/Examples.html).
<table>
<tr>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Examples.html#cluster-evaluation">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/fig1b_sil.png" width="300"/>
<br>Silhouette Score
</a>
</td>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Plots.html#plot">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/fig1a_sil.png" width="300"/>
<br>Optimal Clusters
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Plots.html#dendrogram">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/dendrogram.png" width="300"/>
<br>Dendrogram
</a>
</td>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Examples.html#dbindex-method">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/fig2_dbindex.png" width="300"/>
<br>Davies-Bouldin Index
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Examples.html#derivative-method">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/fig3_der.png" width="300"/>
<br>Derivative Method
</a>
</td>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Examples.html#dbscan">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/fig5_dbscan.png" width="300"/>
<br>DBSCAN
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Examples.html#hdbscan">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/fig4a_hdbscan.png" width="300"/>
<br>HDBSCAN A
</a>
</td>
<td align="center">
<a href="https://erdogant.github.io/clusteval/pages/html/Examples.html#hdbscan">
<img src="https://github.com/erdogant/clusteval/blob/master/docs/figs/fig4b_hdbscan.png" width="300"/>
<br>HDBSCAN B
</a>
</td>
</tr>
</table>
---
## Citation
Please cite `clusteval` in your publications if it has been helpful in your research. Citation information is available at the top right of the [GitHub page](https://github.com/erdogant/clusteval).
---
## Related Tools & Blogs
- Use **ARI** when clustering contains large equal-sized clusters
- Use **AMI** for unbalanced clusters with small components
- [Adjusted Rand Score — scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html)
- [Adjusted for Chance Measures — scikit-learn](https://scikit-learn.org/stable/auto_examples/cluster/plot_adjusted_for_chance_measures.html)
- [imagededup GitHub repo](https://github.com/idealo/imagededup)
- [Clustering images by visual similarity](https://towardsdatascience.com/how-to-cluster-images-based-on-visual-similarity-cd6e7209fe34)
- [Facebook DeepCluster](https://github.com/facebookresearch/deepcluster)
- [PCA on Hyperspectral Data](https://towardsdatascience.com/pca-on-hyperspectral-data-99c9c5178385)
- [Face Recognition with PCA](https://machinelearningmastery.com/face-recognition-using-principal-component-analysis/)
---
### Star history
[](https://www.star-history.com/#erdogant/clusteval&Date)
### Contributors
Thanks to all contributors!
<p align="left">
<a href="https://github.com/erdogant/clusteval/graphs/contributors">
<img src="https://contrib.rocks/image?repo=erdogant/clusteval" />
</a>
</p>
### Maintainer
* Erdogan Taskesen, github: [erdogant](https://github.com/erdogant)
* Contributions are welcome.
* Yes! This library is entirely **free** but it runs on coffee! :) Feel free to support with a <a href="https://erdogant.github.io/donate/?currency=USD&amount=5">Coffee</a>.
[](https://www.buymeacoffee.com/erdogant)
| text/markdown | null | Erdogan Taskesen <erdogant@gmail.com> | null | null | null | Python, machine-learning, unsupervised, clustering, dbindex, silhouette score, density based clustering, validation | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Operating System :: Unix",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3 | [] | [] | [] | [
"scatterd>=1.3.5",
"pypickle",
"matplotlib",
"numpy",
"pandas",
"tqdm",
"seaborn",
"scikit-learn",
"colourmap>=1.1.14",
"datazets>=1.1.0",
"df2onehot"
] | [] | [] | [] | [
"Homepage, https://erdogant.github.io/clusteval",
"Download, https://github.com/erdogant/clusteval/archive/{version}.tar.gz"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T23:26:13.121966 | clusteval-2.2.7.tar.gz | 36,979 | a7/d4/6826ec5d2a85d33eaf32acb1fd04c750d4a7afd918399e56fa30c5553c68/clusteval-2.2.7.tar.gz | source | sdist | null | false | c533ac5aa079ae019ba78b16f9bdc496 | 439ab17de89d6f9a6d1e9e9d7f7058b24b8520dd9590135414b8297e7ec569d2 | a7d46826ec5d2a85d33eaf32acb1fd04c750d4a7afd918399e56fa30c5553c68 | MIT | [
"LICENSE"
] | 315 |
2.4 | vastai-sdk | 0.5.0 | SDK for Vast.ai GPU Cloud Service | # Vast.ai Python SDK
[](https://badge.fury.io/py/vastai-sdk)
The official Vast.ai SDK pip package.
## Install
```bash
pip install vastai-sdk
```
## Examples
NOTE: Ensure your Vast.ai API key is set in your working environment as `VAST_API_KEY`. Alternatively, you may pass the API key in as a parameter to either client.
### Using the VastAI CLI client
1. Create the client
```python
from vastai import VastAI
vastai = VastAI() # or, VastAI("YOUR_API_KEY")
```
2. Run commands
```python
vastai.search_offers()
```
3. Get help
```python
help(vastai.create_instances)
```
### Using the Serverless client
1. Create the client
```python
from vastai import Serverless
serverless = Serverless() # or, Serverless("YOUR_API_KEY")
```
2. Get an endpoint
```python
endpoint = await serverless.get_endpoint("my-endpoint")
```
3. Make a request
```python
request_body = {
"model": "Qwen/Qwen3-8B",
"prompt" : "Who are you?",
"max_tokens" : 100,
"temperature" : 0.7
}
response = await serverless.request("/v1/completions", request_body)
```
4. Read the response
```python
text = response["response"]["choices"][0]["text"]
print(text)
```
Find more examples in the `examples` directory.
| text/markdown | Chris McKenzie | chris@vast.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiodns>=3.6.0",
"aiohttp>=3.9.1",
"anyio<4.5,>=4.4",
"borb<2.2.0,>=2.1.25",
"fastapi<1.0,>=0.110",
"hf_transfer>=0.1.9",
"jsonschema>=3.2",
"nltk<3.10,>=3.9",
"psutil<6.1,>=6.0",
"pycares==4.11.0",
"pycryptodome<3.21,>=3.20",
"pyparsing<4.0,>=3.1",
"python-dateutil>=2.8.2",
"pytz>=2023.3",
"requests>=2.32.3",
"transformers<4.53,>=4.52",
"urllib3<3.0,>=2.0",
"uvicorn[standard]<0.32,>=0.24",
"xdg>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://vast.ai",
"Repository, https://github.com/vast-ai/vast-sdk",
"Source, https://github.com/vast-ai/vast-sdk"
] | poetry/2.3.2 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T23:26:07.111787 | vastai_sdk-0.5.0-py3-none-any.whl | 125,163 | 6e/e9/1092205a172d207426152904622eef208ec372063e41307c9ddfc8e89221/vastai_sdk-0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | ebf100cb1fb093fbe8a91707031f451c | 5ad073bc8734d014bad4aa048e93f01c9477f9d5aada3aab5ffeabdba130830a | 6ee91092205a172d207426152904622eef208ec372063e41307c9ddfc8e89221 | null | [
"LICENSE"
] | 12,564 |
2.4 | tvminer | 0.22.1 | Thin wrapper around inference: load_model and predict_batch for batch frame results. | # tvminer
Thin PyPI package that exposes `load_model` and `predict_batch` by delegating to the bundled `inference` extension (`.so`). The wheel includes the `.so`, so it works wherever it is installed (local, venv, Chutes, PyPI).
## Build (include .so in wheel)
From the **repository root** (parent of `pypi_package/`):
```bash
./pypi_package/build_with_so.sh
```
This copies `inference.cpython-*.so` from the repo root into `pypi_package/src/tvminer/`, then runs `python -m build`. The resulting wheel in `pypi_package/dist/` contains the `.so`; any `pip install` of that wheel will have the extension available.
You must build the `.so` first (e.g. `python setup_turbo_v2_py.py build_ext --inplace` in the repo root).
## Install
```bash
pip install /path/to/pypi_package/dist/tvminer-*.whl
# or from PyPI after publishing
pip install tvminer
```
## Publish to PyPI
After building with `build_with_so.sh`:
```bash
pip install twine
twine upload pypi_package/dist/*
```
## Usage
```python
from pathlib import Path
from tvminer import load_model, Miner, TVFrameResult, BoundingBox
miner = load_model(Path("/path/to/hf_repo"))
# or
miner = Miner(Path("/path/to/hf_repo"))
results = miner.predict_batch(batch_images, offset=0, n_keypoints=32)
```
`predict_batch` runs `self.inference.predict_batch(batch_images, offset, n_keypoints)` internally.
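The delegation can be pictured like this. A sketch of the thin-wrapper pattern with a stand-in `inference` object (hypothetical, not the bundled extension itself):

```python
# Sketch: Miner holds the compiled inference module and forwards
# predict_batch to it, as tvminer does with the bundled .so.
class FakeInference:
    """Stand-in for the compiled extension (illustration only)."""
    def predict_batch(self, batch_images, offset, n_keypoints):
        return [{"frame": offset + i, "n_keypoints": n_keypoints}
                for i, _ in enumerate(batch_images)]

class Miner:
    def __init__(self, inference):
        self.inference = inference

    def predict_batch(self, batch_images, offset=0, n_keypoints=32):
        # Delegate straight to the extension module.
        return self.inference.predict_batch(batch_images, offset, n_keypoints)

miner = Miner(FakeInference())
results = miner.predict_batch(["img0", "img1"], offset=10, n_keypoints=4)
print(results)
```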
| text/markdown | null | null | null | null | null | inference, miner, predict_batch | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"pydantic>=2",
"pytest; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T23:25:59.935021 | tvminer-0.22.1.tar.gz | 5,475,957 | ff/3d/a40de9bb9b7610291f3975d3f9e4f21acd9b4d73984d36b9535c68294d05/tvminer-0.22.1.tar.gz | source | sdist | null | false | ed57fec71036a853c8d7cfd16b8143c8 | 3ad65e7173a08cc127c956d18121741c8257f03241e6dd3947646c9033429752 | ff3da40de9bb9b7610291f3975d3f9e4f21acd9b4d73984d36b9535c68294d05 | MIT | [] | 241 |
2.4 | flaskpp | 0.4.9 | A Flask based framework for fast and easy app creation. Experience the real power of Flask without boilerplate, but therefore a well balanced mix of magic and the default Flask framework. | # 🧪 Flask++
> ⚠️ This framework will no longer be maintained! We recommend using [WebFluid](https://github.com/GrowVolution/WebFluid) instead.
Tired of setting up Flask from scratch every single time? 🤯 With **Flask++**, you can spin up and manage multiple apps in **under two minutes**. ⚡
And most importantly: this is **still Flask**. You won't miss the feeling of developing with Flask. You've got **full control** over how much magic you want to use and how much this framework should just feel like plain Flask. Not only that: if you run into something that doesn't feel like Flask anymore, please feel free to raise an issue and we'll fix it for you asap. ✌🏼️
It comes with the most common Flask extensions pre-wired and ready to go. Configuration is dead simple – extensions can be bound or unbound with ease. On top of that, it features a plug-&-play style **module system**, so you can just enable or disable functionality as needed. 🎚️
---
## 💡 Getting Started
If not already done, just install Python 3.10 or higher on your system. Then install Flask++ like every other python package:
```bash
pip install flaskpp
````
After that you can simply set up your app with the Flask++ CLI:
```bash
mkdir myproject
cd myproject
fpp init
# If you want to use modules, we recommend to create / install them before the setup.
# This will make life even easier, because you won't need to add them to your app config manually.
fpp modules create [name]
fpp modules install [id] [-s/--src] path/to/module
# You can also install from remote repositories (e.g. our I18n Manager):
fpp modules install i18n_module --src https://github.com/GrowVolution/FPP_i18n_module
fpp setup
# You can run your app(s) interactively:
fpp run [-i/--interactive]
# Or straight up:
fpp run [-a/--app] myapp [-p/--port] 5000 [-d/--debug]
# For further assistance use:
fpp --help
```
The setup wizard will guide you through the configuration step by step. 🎯 Once finished, your first app will be running – in less than the time it takes to make coffee. ☕🔥
**Tip:** We recommend installing Flask++ globally. If your OS does not support installing PyPI packages outside virtual environments, you can create a workaround like this:
```bash
sudo su
cd /opt
mkdir flaskpp
cd flaskpp
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install flaskpp
cat > cli <<'EOF'
#!/usr/bin/env bash
exec /opt/flaskpp/.venv/bin/python -m flaskpp "$@"
EOF
chmod +x cli
ln -s /opt/flaskpp/cli /usr/local/bin/fpp
cd ..
groupadd shared
find /home -mindepth 1 -maxdepth 1 -type d -print0 |
while IFS= read -r -d '' dir; do
user=$(basename "$dir")
usermod -aG shared "$user"
done
chown -R root:shared flaskpp
chmod -R 2775 flaskpp
exit
newgrp shared
```
---
## 🧩 Modules
To get started with modules, you can generate basic modules using the Flask++ CLI: `fpp modules create [module_name]`. Use it as a starting point for your own modules. 😉
---
## 🌐 Proxy Example (nginx)
If you’re deploying on a server, you can bind your app to a domain via nginx:
```nginx
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name myapp.example.org;
ssl_certificate /path/to/your/cert.pem;
ssl_certificate_key /path/to/your/key.pem;
location / {
proxy_pass http://127.0.0.1:5000;
include proxy_params; # default at /etc/nginx/
# optional tweaks:
# client_max_body_size 16M;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
# better: put these lines in /etc/nginx/upgrade_params
# and simply use: include upgrade_params;
}
}
```
---
## 📝 Documentation
For further information about this framework and how to use it, you may like to read our [documentation](DOCS.md). 🫶🏼
> ⚠️ Note: The documentation is intended as an architectural and reference guide; it does not provide a step-by-step tutorial.
> This is especially because Flask++ is a CLI first framework that provides a zero-code bootstrapping experience.
---
### 🌱 Let it grow
If you like this project, feel free to **fork it, open issues, or contribute ideas**. Every improvement makes life easier for the next developer. 💚
---
### 📜 License
Released under the [MIT License](LICENSE). Do whatever you want with it – open-source, commercial, or both. Follow your heart. 💯
---
**© GrowVolution e.V. 2025 – Release the brakes! 🚀**
| text/markdown | null | Pierre <pierre@growv-mail.org> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flask",
"flask-sqlalchemy",
"flask-migrate",
"flask-limiter",
"flask-babelplus",
"flask-mailman",
"flask-security-too",
"flask-wtf",
"flask-caching",
"flask-smorest",
"flask-jwt-extended",
"flask-authlib-client",
"pymysql",
"python-dotenv",
"python-socketio[asgi]",
"argon2_cffi",
"authlib",
"uvicorn",
"asgiref",
"immutables",
"requests",
"redis",
"pytz",
"gitpython",
"tqdm",
"typer",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-mock; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T23:25:21.573233 | flaskpp-0.4.9.tar.gz | 66,571 | d0/12/aece66f9fd96ca3b32efa378396639c162877cf5c9ef83c53848cef20290/flaskpp-0.4.9.tar.gz | source | sdist | null | false | 3db496ea8a88e0cc806491f12f6442af | 1f028461fdbaaf53265e6a4f68e159d458eaf0cff46d31907c05bc7f3eafbae9 | d012aece66f9fd96ca3b32efa378396639c162877cf5c9ef83c53848cef20290 | null | [
"LICENSE"
] | 227 |
2.4 | agentcage | 0.3.4 | Defense-in-depth proxy sandbox for AI agents | <p align="center">
<img src="docs/agentcage.png" alt="agentcage logo" width="250">
</p>
# agentcage
*Defense-in-depth proxy sandbox for AI agents.*
Because "the agent would never do that" is not a security policy.
> :warning: **Warning:** This is an experimental project. It has not been audited by security professionals. Use it at your own risk. See [Security & Threat Model](docs/security.md) for details and known limitations.
> **Setting up OpenClaw?** See the [OpenClaw guide](docs/openclaw.md) and [`openclaw/config.yaml`](examples/openclaw/).
## What is it?
agentcage is a CLI that generates hardened, sandboxed environments for AI agents. In the default **container mode**, it produces [systemd quadlet](https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html) files that deploy three containers on a rootless [Podman](https://podman.io/) network -- no root privileges required. In **[Firecracker](https://firecracker-microvm.github.io/) mode**, the same container topology runs inside a dedicated microVM with its own Linux kernel, providing hardware-level isolation via KVM. In both modes, your agent runs on an internal-only network with no internet gateway; the only way out is through an inspecting [mitmproxy](https://mitmproxy.org/) that scans every HTTP request before forwarding it.
## Features
- :mag: **Pluggable inspector chain** -- domain filtering, secret detection, payload analysis, and custom Python inspectors
- :key: **Bidirectional secret injection** -- agent gets placeholders, proxy injects outbound, redacts inbound
- :detective: **Regex-based secret scanning** -- automatic provider-to-domain mapping, extensible via config
- :bar_chart: **Payload analysis** -- Shannon entropy, content-type mismatch detection, base64 blob scanning, body-size limits
- :globe_with_meridians: **WebSocket frame inspection** -- same inspector chain applied to every frame post-handshake
- :satellite: **DNS filtering** -- dnsmasq sidecar, RFC 5737 placeholder IPs for non-allowlisted domains, query logging
- :stopwatch: **Per-host rate limiting** -- token-bucket with configurable burst
- :pencil: **Structured audit logging** -- JSON lines for all inspection decisions (block, flag, allow)
- :lock: **Container hardening** -- read-only rootfs, all capabilities dropped, `no-new-privileges`
- :package: **Supply chain hardening** -- pinned base image digests, lockfile integrity, SHA-256 patch verification
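The Shannon-entropy check in the payload analyzer can be sketched in a few lines (an illustration of the idea, not agentcage's actual implementation): a high-entropy request body suggests encrypted or compressed data being smuggled out.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 0 for repetitive text, near 8 for random/encrypted data."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A repetitive plain-text body scores low; a byte-uniform body scores the maximum.
print(shannon_entropy(b"hello hello hello hello"))
print(shannon_entropy(bytes(range(256))))
```

An inspector would flag or block bodies whose score exceeds a configured threshold.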
## Design Principles
1. :no_entry: **Fail-closed** -- if any component fails, traffic stops, not bypasses.
2. :shield: **Secure by default** -- all hardening is on out of the box; security is opt-out, not opt-in.
3. :mag: **Inspect, don't just isolate** -- every request, frame, and query is analyzed before forwarding.
4. :closed_lock_with_key: **Agent never holds real secrets** -- placeholders in, real values injected in transit only.
5. :scroll: **Audit everything** -- all decisions logged as structured JSON by default.
## Why is it needed?
Most AI agent deployments hand the agent a [**lethal trifecta**](https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/):
1. :globe_with_meridians: **Internet access** -- the agent can reach any server on the internet.
2. :key: **Secrets** -- tokens and other secrets are passed as environment variables or mounted files.
3. :computer: **Arbitrary code execution** -- the agent runs code it writes itself, or code suggested by a model.
Any one of these alone is manageable. Combined, they create an exfiltration risk: if the agent is compromised, misaligned, or simply makes a mistake, it can send your secrets, source code, or private data to any endpoint on the internet. Most current setups have zero defense against this -- the agent has the same network access as any other process on the machine.
agentcage breaks the trifecta by placing the agent behind a defense-in-depth proxy sandbox: network isolation, domain filtering, secret injection, secret scanning, payload analysis, and container hardening -- all fail-closed. See [Security & Threat Model](docs/security.md) for the full breakdown of each layer and known limitations.
## How is it different?
Most agent sandboxes stop at network-level isolation: put the agent in a VM or container and control which hosts it can reach. agentcage adds a full inspection layer on top -- every HTTP request, WebSocket frame, and DNS query passes through a pluggable inspector chain before reaching the internet.
The agent never holds real secrets. Secret injection gives the agent placeholder tokens (`{{ANTHROPIC_API_KEY}}`); the proxy swaps in real values on outbound requests and redacts them from inbound responses. If a placeholder is sent to an unauthorized domain, the request is blocked. The secrets inspector provides a second line of defense with regex-based secret scanning that detects common key formats, each with automatic provider-to-domain mapping so legitimate API calls pass through without manual configuration.
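A toy version of that outbound swap (hypothetical names and data structures; the real logic runs inside the mitmproxy inspector chain):

```python
# Toy placeholder injection: swap {{NAME}} for the real secret only when
# the request targets an allowed domain; otherwise block the request.
SECRETS = {"ANTHROPIC_API_KEY": "sk-real-value"}
ALLOWED = {"ANTHROPIC_API_KEY": {"api.anthropic.com"}}

def inject(host: str, body: str):
    for name, value in SECRETS.items():
        placeholder = "{{" + name + "}}"
        if placeholder in body:
            if host not in ALLOWED[name]:
                return None  # block: placeholder bound for an unauthorized domain
            body = body.replace(placeholder, value)
    return body

print(inject("api.anthropic.com", "Bearer {{ANTHROPIC_API_KEY}}"))
print(inject("evil.example.com", "Bearer {{ANTHROPIC_API_KEY}}"))
```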
On top of domain filtering and secret detection, the inspector chain analyzes payloads for anomalies -- Shannon entropy (catching encrypted/compressed exfiltration), content-type mismatches, base64 blobs -- and inspects WebSocket frames with the same chain. All decisions are written as structured JSON audit logs.
agentcage runs natively on headless Linux using rootless Podman -- fully self-hosted, single-binary CLI, open source.
## How does it work?
A cage is three containers on an internal Podman network: your agent (no internet gateway), a dual-homed DNS sidecar, and a dual-homed mitmproxy that inspects and forwards all traffic.
```
podman network: <name>-net (--internal, no internet gateway)
┌──────────────────────────────────────────────────────────────────┐
│ │
│ ┌──────────────┐ ┌───────────────┐ ┌──────────────────┐ │
│ │ Agent │ │ DNS sidecar │ │ mitmproxy │ │
│ │ │ │ (dnsmasq) │ │ + inspector chain│ │
│ │ HTTP_PROXY= ─┼────┼───────────────┼───►│ │ │
│ │ 10.89.0.11 ─┼────┼───────────────┼──►│ scans + forwards─┼──┼─► Internet
│ │ │ │ │ │ │ │
│ │ resolv.conf ─┼───►│ resolves via │ │ │ │
│ │ │ │ external net ─┼────┼──────────────────┼──┼─► Upstream DNS
│ │ │ │ │ │ │ │
│ │ ONLY on │ │ internal + │ │ internal + │ │
│ │ internal net │ │ external net │ │ external net │ │
│ └──────────────┘ └───────────────┘ └──────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────┘
```
All HTTP traffic is routed via `HTTP_PROXY` / `HTTPS_PROXY` to the mitmproxy container. A pluggable inspector chain evaluates every request -- enforcing domain allowlists, scanning for secret leaks, analyzing payloads -- before forwarding or blocking with a 403. The chain short-circuits on the first hard block.
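The short-circuiting behavior is essentially a fold over inspectors. A sketch of the pattern (not agentcage's code; inspector names are invented for illustration):

```python
# Each inspector returns "allow", "flag", or "block"; the chain stops at
# the first hard block, mirroring the fail-closed design.
def run_chain(request, inspectors):
    decisions = []
    for inspector in inspectors:
        verdict = inspector(request)
        decisions.append((inspector.__name__, verdict))
        if verdict == "block":
            break  # short-circuit: later inspectors never run
    return decisions

def domain_filter(req):
    return "allow" if req["host"].endswith(".example.com") else "block"

def secret_scan(req):
    return "flag" if "sk-" in req["body"] else "allow"

req = {"host": "exfil.evil.net", "body": "sk-abc123"}
print(run_chain(req, [domain_filter, secret_scan]))
```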
See [Architecture](docs/architecture.md) for the full inspector chain, startup order, and certificate sharing.
## Isolation Modes
agentcage supports two isolation modes. Both share the same three-container topology and inspector chain -- the difference is what provides the outer isolation boundary.
| | Container mode (default) | Firecracker mode |
|---|---|---|
| **Isolation** | Linux namespaces (rootless Podman) | Hardware virtualization (KVM) |
| **Kernel** | Shared with host | Dedicated guest kernel per cage |
| **Container escape risk** | Mitigated by hardening, not eliminated | Eliminated -- escape lands in VM, not on host |
| **Root required** | No | Yes (for TAP device and bridge setup) |
| **macOS support** | Yes (via Podman machine) | No (requires `/dev/kvm`) |
| **Boot overhead** | ~1s | ~7s |
| **Best for** | Development, CI, low-risk workloads | Production, untrusted agents, high-security |
Set `isolation: firecracker` in your config to use Firecracker mode. See [Firecracker MicroVM Isolation](docs/firecracker.md) for setup and details.
## Prerequisites
- [Podman](https://podman.io/) (rootless)
- Python 3.12+
- [uv](https://docs.astral.sh/uv/) (Python package manager)
### Linux
**Arch Linux:**
```bash
sudo pacman -S podman python uv
```
**Debian / Ubuntu (24.04+):**
```bash
sudo apt install podman python3
curl -LsSf https://astral.sh/uv/install.sh | sh
```
**Fedora:**
```bash
sudo dnf install podman python3 uv
```
### macOS
```bash
brew install podman python uv
podman machine init
podman machine start
```
> **Note:** On macOS, Podman runs containers inside a Linux VM. `podman machine init` creates and `podman machine start` starts it.
### Firecracker Mode (optional)
Firecracker mode requires Linux with `/dev/kvm` access. See [Firecracker setup](docs/firecracker.md#setup) for full prerequisites. macOS is not supported for Firecracker mode.
## Install
```bash
uv tool install agentcage # from PyPI (when published)
uv tool install git+https://github.com/agentcage/agentcage.git # from GitHub
```
Or for development:
```bash
git clone https://github.com/agentcage/agentcage.git
cd agentcage
uv run agentcage --help
```
## Updating Dependencies
All dependencies are pinned (lock files, image digests, binary checksums). To check for updates:
```bash
./scripts/update-deps.py # check all, report only
./scripts/update-deps.py --update # check all, apply updates
./scripts/update-deps.py containers # check a single category
```
Categories: `python`, `containers`, `firecracker`, `kernel`, `node`, `pip`.
Requires `skopeo` for container image checks (`sudo pacman -S skopeo` on Arch).
## Usage
```bash
# 1. Write your config
cp examples/basic/config.yaml config.yaml
vim config.yaml
# 2. Store secrets (before creating the cage)
agentcage secret set myapp ANTHROPIC_API_KEY
agentcage secret set myapp GITHUB_TOKEN
# 3. Create the cage (builds images, generates quadlets, starts containers)
agentcage cage create -c config.yaml
# 4. Verify it's healthy
agentcage cage verify myapp
# 5. View logs
agentcage cage logs myapp # agent logs
agentcage cage logs myapp -s proxy # proxy inspection logs
agentcage cage logs myapp -s dns # DNS query logs
# 6. Audit inspection decisions
agentcage cage audit myapp # last 100 entries
agentcage cage audit myapp --summary --since 24h # daily summary
agentcage cage audit myapp -f --json -d blocked # alerting pipeline
# 7. Update after code/config changes
agentcage cage update myapp -c config.yaml
# 8. Rotate a secret (auto-reloads the cage)
agentcage secret set myapp ANTHROPIC_API_KEY
# 9. Restart without rebuild (config-only change)
agentcage cage reload myapp
# 10. Tear it all down
agentcage cage destroy myapp
```
## CLI Overview
| Group | Commands |
|---|---|
| `cage` | `create`, `update`, `list`, `destroy`, `verify`, `reload`, `audit` |
| `secret` | `set`, `list`, `rm` |
| `domain` | `list`, `add`, `rm` |
See [CLI Reference](docs/cli.md) for full documentation of all commands and options.
## Deployment State
agentcage tracks each cage in `~/.config/agentcage/deployments/<name>/config.yaml`. This stored config copy allows commands like `cage update` (without `-c`) and `cage reload` to operate without requiring the original config file. The state is removed when a cage is destroyed.
## Architecture
See [Architecture](docs/architecture.md) for the full container topology, inspector chain, startup order, and certificate sharing.
## Configuration
See the [Configuration Reference](docs/configuration.md) for all settings, defaults, and examples. Example configs: [`basic/config.yaml`](examples/basic/) | [`openclaw/config.yaml`](examples/openclaw/)
## Security
The agent has no internet gateway -- all traffic must pass through the proxy, which applies domain filtering, secret detection, payload inspection, and custom inspectors. For workloads requiring hardware-level isolation, Firecracker mode adds a dedicated guest kernel per cage, eliminating container escape as an attack vector. See [Security & Threat Model](docs/security.md) for the full threat model, defense layers, and known limitations.
## License
MIT
| text/markdown | Luca Martinetti | null | null | null | MIT | agent, ai, container, mitmproxy, proxy, sandbox, security | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security",
"Topic :: System :: Networking :: Monitoring"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.1",
"jinja2>=3.1",
"pyyaml>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/agentcage/agentcage",
"Repository, https://github.com/agentcage/agentcage",
"Documentation, https://github.com/agentcage/agentcage/tree/master/docs",
"Issues, https://github.com/agentcage/agentcage/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:24:43.301354 | agentcage-0.3.4.tar.gz | 574,973 | c9/c7/3d7384cb1246518a6374cf3fd8e5078fa98b5f870d1f126a20c13e223742/agentcage-0.3.4.tar.gz | source | sdist | null | false | 5761d3e0c36f47dc6c1a22ca40ce56bf | 37b9608f705b4e3f8dc536720080b673c5a73c8f42968be82c956fb23ced331c | c9c73d7384cb1246518a6374cf3fd8e5078fa98b5f870d1f126a20c13e223742 | null | [
"LICENSE"
] | 239 |
2.4 | hnet | 1.3.1 | Graphical Hypergeometric Networks | [](https://img.shields.io/pypi/pyversions/hnet)
[](https://pypi.org/project/hnet/)
[](https://erdogant.github.io/hnet/)
[](https://github.com/erdogant/hnet/)
[](https://pepy.tech/project/hnet)
[](https://pepy.tech/project/hnet)
[](https://github.com/erdogant/hnet/blob/master/LICENSE)
[](https://github.com/erdogant/hnet/network)
[](https://github.com/erdogant/hnet/issues)
[](http://www.repostatus.org/#active)
[](https://zenodo.org/badge/latestdoi/231843440)
[](https://erdogant.github.io/hnet/pages/html/Documentation.html#medium-blog)
[](https://erdogant.github.io/hnet/pages/html/Documentation.html#colab-notebook)
[](https://erdogant.github.io/hnet/pages/html/Documentation.html#)
<!---[](https://www.buymeacoffee.com/erdogant)-->
<!---[](https://erdogant.github.io/donate/?currency=USD&amount=5)-->
<div>
<a href="https://erdogant.github.io/hnet/"><img src="https://github.com/erdogant/hnet/blob/master/docs/figs/logo.png" width="200" align="left" /></a>
hnet is a Python package for association learning on mixed-type datasets using graphical hypergeometric networks.
The hnet library tests associations across variables for statistical significance and builds a network containing only the significant associations, shedding light on the complex relationships across variables.
It works with continuous, discrete, categorical, and nested variables without heavy preprocessing. ⭐️Star it if you like it⭐️
</div>
---
### Key Features
| Feature | Description | Docs | Medium | Gumroad+Podcast|
|---------|-------------|------|------|-------|
| **Association Learning** | Discover significant associations across variables using statistical inference. | [Link](https://erdogant.github.io/hnet/pages/html/Examples.html#titanic-dataset) | [Link](https://erdogant.medium.com/uncover-hidden-patterns-in-your-tabular-datasets-all-you-need-is-the-right-statistics-6de38f6a8aa7) | [Link](https://erdogant.gumroad.com/l/uncover-hidden-patterns-in-your-tabular-datasets-all-you-need-is-the-right-statistics-6de38f6a8aa7) |
| **Mixed Data Handling** | Works with continuous, discrete, categorical, and nested variables without heavy preprocessing. | [Link](https://erdogant.github.io/hnet/pages/html/index.html) | - | - |
| **Summarization** | Summarize complex networks into interpretable structures. | [Link](https://erdogant.github.io/hnet/pages/html/Use%20Cases.html#summarize-results) | - | - |
| **Feature Importance** | Rank variables by importance within associations. | [Link](https://erdogant.github.io/hnet/pages/html/Use%20Cases.html#feature-importance) | - | - |
| **Interactive Visualizations** | Explore results with dynamic dashboards and d3-based visualizations. | [Dashboard](https://erdogant.github.io/hnet/pages/html/Documentation.html#online-web-interface) | - | [Titanic Example](https://erdogant.github.io/docs/d3graph/titanic_example/index.html) |
| **Performance Evaluation** | Compare accuracy with Bayesian association learning and benchmarks. | [Link](https://arxiv.org/abs/2005.04679) | - | - |
| **Interactive Dashboard** | No data leaves your machine. All computations are performed locally. | [Link](https://erdogant.github.io/hnet/pages/html/Documentation.html#online-web-interface) | - | - |
---
### Resources and Links
- **Example Notebooks:** [Examples](https://erdogant.github.io/hnet/pages/html/Documentation.html)
- **Medium Blogs:** [Medium](https://erdogant.github.io/hnet/pages/html/Documentation.html#medium-blogs)
- **Gumroad Blogs with podcast:** [GumRoad](https://erdogant.github.io/hnet/pages/html/Documentation.html#gumroad-products-with-podcasts)
- **Documentation:** [Website](https://erdogant.github.io/hnet)
- **Bug Reports and Feature Requests:** [GitHub Issues](https://github.com/erdogant/hnet/issues)
- Article: [arXiv](https://arxiv.org/abs/2005.04679)
- Article: [PDF](https://arxiv.org/pdf/2005.04679)
---
### Background
* HNet stands for graphical Hypergeometric Networks, a method in which associations across variables are tested for significance by statistical inference.
The aim is to determine a network with significant associations that can shed light on the complex relationships across variables.
Input datasets can range from generic dataframes to nested data structures with lists, missing values and enumerations.
* Real-world data often contain measurements with both continuous and discrete values.
Despite the availability of many libraries, data sets with mixed data types require intensive pre-processing steps,
and it remains a challenge to describe the relationships between variables.
The data understanding phase is crucial to the data-mining process; however, without making any assumptions on the data,
the search space is super-exponential in the number of variables. A thorough data understanding phase is therefore not common practice.
* Graphical hypergeometric networks (``HNet``) is a method to test associations across variables for significance using statistical inference. The aim is to determine a network using only the significant associations in order to shed light on the complex relationships across variables. HNet processes raw unstructured data sets and outputs a network that consists of (partially) directed or undirected edges between the nodes (i.e., variables). To evaluate the accuracy of HNet, we used well-known data sets and generated data sets with known ground truth. In addition, the performance of HNet is compared to Bayesian association learning.
* HNet showed high accuracy and performance in the detection of node links. In the case of the Alarm data set we demonstrate an average MCC score of 0.33 ± 0.0002 (*P*<1x10-6), whereas Bayesian association learning resulted in an average MCC score of 0.52 ± 0.006 (*P*<1x10-11), and randomly assigning edges resulted in an MCC score of 0.004 ± 0.0003 (*P*=0.49). HNet processes raw unstructured data sets, allows analysis of mixed data types, easily scales up in the number of variables, and allows detailed examination of the detected associations.
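The significance test underlying HNet is hypergeometric. As a minimal, self-contained sketch of such a right-tailed test, using only the standard library (an illustration of the statistic only, not hnet's internal code; the numbers below are made up):

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) when drawing n items without replacement from a
    population of N items containing K 'successes' — the kind of
    significance test graphical hypergeometric networks rely on."""
    denom = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / denom

# Two binary variables co-occur 9 times out of 10 draws, in a
# population of 20 samples where each occurs 10 times:
p = hypergeom_pvalue(N=20, K=10, n=10, k=9)
print(p)  # small p-value -> association unlikely under independence
```

A small p-value here suggests the co-occurrence is unlikely under independence, which is the criterion for keeping an edge in the network.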
<p align="left">
<a href="https://erdogant.github.io/hnet/pages/html/index.html">
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/fig1.png" width="600" />
</a>
</p>
---
### Installation
##### Install hnet from PyPI
```bash
pip install hnet
```
##### Install from GitHub source
```bash
pip install git+https://github.com/erdogant/hnet
```
##### Import Library
```python
import hnet
print(hnet.__version__)

# Import the main class
from hnet import hnet
```
<hr>
## Examples
- Simple example for the Titanic data set
```python
# Initialize hnet with default settings
from hnet import hnet
# Load example dataset
df = hnet.import_example('titanic')
# Print to screen
print(df)
#      PassengerId  Survived  Pclass  ...     Fare Cabin Embarked
# 0              1         0       3  ...   7.2500   NaN        S
# 1              2         1       1  ...  71.2833   C85        C
# 2              3         1       3  ...   7.9250   NaN        S
# 3              4         1       1  ...  53.1000  C123        S
# 4              5         0       3  ...   8.0500   NaN        S
# ..           ...       ...     ...  ...      ...   ...      ...
# 886          887         0       2  ...  13.0000   NaN        S
# 887          888         1       1  ...  30.0000   B42        S
# 888          889         0       3  ...  23.4500   NaN        S
# 889          890         1       1  ...  30.0000  C148        C
# 890          891         0       3  ...   7.7500   NaN        Q
```
##### <a href="https://erdogant.github.io/docs/d3graph/titanic_example/index.html">Play with the interactive Titanic results.</a>
##### [Example: Learn association learning on the titanic dataset](https://erdogant.github.io/hnet/pages/html/Examples.html#titanic-dataset)
<p align="left">
<a href="https://erdogant.github.io/hnet/pages/html/Examples.html#titanic-dataset">
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/fig4.png" width="900" />
</a>
</p>
##### [Example: Summarize results](https://erdogant.github.io/hnet/pages/html/Use%20Cases.html#summarize-results)
Networks can become giant hairballs and heatmaps unreadable. You may want to see the general associations between the categories, instead of the label-associations.
With the summarize functionality, the results will be summarized towards categories.
<p align="left">
<a href="https://erdogant.github.io/hnet/pages/html/Use%20Cases.html#summarize-results">
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/other/titanic_summarize_static_heatmap.png" width="300" />
<a href="https://erdogant.github.io/docs/d3heatmap/d3heatmap.html">
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/other/titanic_summarize_dynamic_heatmap.png" width="400" />
</a>
</p>
<p align="left">
<a href="https://erdogant.github.io/hnet/pages/html/Examples.html#titanic-dataset">
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/other/titanic_summarize_static_graph.png" width="400" />
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/other/titanic_summarize_dynamic_graph.png" width="400" />
</a>
</p>
##### [Example: Feature importance](https://erdogant.github.io/hnet/pages/html/Use%20Cases.html#feature-importance)
<p align="left">
<a href="https://erdogant.github.io/hnet/pages/html/Use%20Cases.html#feature-importance">
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/other/feat_imp_1.png" width="600" />
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/other/feat_imp_2.png" width="600" />
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/other/feat_imp_3.png" width="600" />
</a>
</p>
#### Performance
<p align="left">
<a href="https://erdogant.github.io/hnet/pages/html/index.html">
<img src="https://github.com/erdogant/hnet/blob/master/docs/figs/fig3.png" width="600" />
</a>
</p>
<hr>
### Contributors
<p align="left">
<a href="https://github.com/erdogant/hnet/graphs/contributors">
<img src="https://contrib.rocks/image?repo=erdogant/hnet" />
</a>
</p>
### Maintainer
* Erdogan Taskesen, github: [erdogant](https://github.com/erdogant)
* Contributions are welcome.
* This library is free. But powered by caffeine! Like it? Chip in what it's worth, and keep me creating new functionalities!🙂
[](https://www.buymeacoffee.com/erdogant)
| text/markdown | null | Erdogan Taskesen <erdogant@gmail.com> | null | null | null | Python, hnet, network analysis, tabular data, data analysis, graph, machine learning, AI | [
"Programming Language :: Python :: 3",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Operating System :: Unix",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | >=3 | [] | [] | [] | [
"colourmap",
"pypickle",
"classeval",
"matplotlib",
"numpy",
"pandas",
"statsmodels",
"networkx",
"python-louvain",
"tqdm",
"scikit-learn",
"ismember",
"imagesc",
"df2onehot",
"fsspec",
"datazets"
] | [] | [] | [] | [
"Homepage, https://erdogant.github.io/hnet",
"Download, https://github.com/erdogant/hnet/archive/{version}.tar.gz"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T23:24:02.713196 | hnet-1.3.1.tar.gz | 54,413 | 11/24/f7513acc13aeec1d230e8bed097512733e2e3d80097545962dc43fe3da40/hnet-1.3.1.tar.gz | source | sdist | null | false | 6cc5e18f56f054f94d0c08675b0b877f | 7fe63db441895739f0cfcdda1c9459b0dadaf34830a0e2bb4c026523d2a2d512 | 1124f7513acc13aeec1d230e8bed097512733e2e3d80097545962dc43fe3da40 | MIT | [
"LICENCE.txt",
"LICENSE"
] | 250 |
2.4 | reuseify | 0.1.2 | Automate REUSE license annotation from git history. | <!--
SPDX-FileCopyrightText: 2026 Sahil Jhawar
SPDX-FileContributor: Sahil Jhawar
SPDX-License-Identifier: GPL-3.0-or-later
-->
# reuseify
[](https://badge.fury.io/py/reuseify)
[](https://badge.fury.io/py/reuseify)
[](https://api.reuse.software/info/github.com/sahiljhawar/reuseify)
Automate [REUSE](https://reuse.software/) license annotation from git history.
`reuseify` inspects which files are missing license headers (via `reuse lint`),
looks up their git commit authors, and applies `reuse annotate`, all from a single CLI.
## Installation
```bash
pip install reuseify  # from PyPI
# or, from a source checkout:
uv pip install .
```
## Usage
The workflow is two steps: collect authors → annotate files.
### Step 1: collect authors
```bash
reuseify get-authors [OPTIONS]
```
Runs `reuse lint`, finds every file missing a license header, looks up its git
commit authors, and writes a JSON file:
```json
{
"src/foo.py": ["Alice", "Bob"],
"src/bar.c": ["Alice"],
"src/new.py": [] #NOT_IN_GIT
}
```
| Option | Short | Default | Description |
| ---------------------- | ----- | ----------------------------- | ---------------------------------------------------------------------- |
| `--output` | `-o` | `reuse_annotate_authors.json` | Output JSON file |
| `--include-not-in-git` | `-i` | off | Include files with no git history (empty author list) |
| `--exclude PATTERN` | `-e` | | Extra glob pattern to exclude (matched per path component, repeatable) |
Files matching built-in patterns are always excluded:
`__pycache__`, `.venv`, `venv`, `.env`, `env`, `.git`, `.vscode`, `.idea`,
`*.egg-info`, `*.pyc`, `dist`, `build`, `node_modules`, `.tox`,
`.mypy_cache`, `.pytest_cache`, `.ruff_cache`.
Files ignored by `.gitignore` are also excluded
automatically.
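Per-component matching means a pattern such as `reports` excludes any path containing a `reports` directory, not just a top-level one. A sketch of those semantics using only the standard library (an illustration of the matching rule, not reuseify's actual code):

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

def is_excluded(path, patterns):
    # A file is skipped if ANY component of its path matches ANY pattern.
    return any(fnmatch(part, pattern)
               for part in PurePosixPath(path).parts
               for pattern in patterns)

print(is_excluded("src/__pycache__/mod.pyc", ["__pycache__"]))  # True
print(is_excluded("reports/out.tmp", ["reports", "*.tmp"]))     # True
print(is_excluded("src/main.py", ["__pycache__", "*.pyc"]))     # False
```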
**Examples**
```bash
# defaults
reuseify get-authors
# custom output path + include untracked files
reuseify get-authors --output authors.json --include-not-in-git
# add an extra exclusion pattern
reuseify get-authors --exclude reports --exclude "*.tmp"
```
---
### Step 2: annotate files
```bash
reuseify annotate [OPTIONS] [REUSE ANNOTATE FLAGS...]
```
Reads the JSON file from [Step 1](#step-1-collect-authors) and calls `reuse annotate` for every file.
`--contributor` flags are injected automatically from the JSON data.
All unrecognised flags are forwarded verbatim to `reuse annotate`, giving you
full control over `--copyright`, `--license`, `--year`, `--style`,
`--fallback-dot-license`, `--force-dot-license`, `--skip-unrecognised`, etc.
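This "consume known options, forward the rest verbatim" pattern can be sketched with argparse's `parse_known_args` (an illustration of the idea, not reuseify's code; the argument list below is made up):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--input", "-i", default="reuse_annotate_authors.json")
parser.add_argument("--default-contributor", "-d", action="append", default=[])

# Known options are consumed; everything else would be passed on
# unchanged to `reuse annotate`.
opts, passthrough = parser.parse_known_args(
    ["-i", "authors.json", "--copyright", "2025 X-Men", "--license", "MIT"]
)
print(opts.input)   # authors.json
print(passthrough)  # ['--copyright', '2025 X-Men', '--license', 'MIT']
```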
| Option | Short | Default | Description |
| ---------------------------- | ----- | ----------------------------- | -------------------------------------------------------- |
| `--input` | `-i` | `reuse_annotate_authors.json` | JSON file from `get-authors` |
| `--default-contributor NAME` | `-d` | — | Fallback contributor for `NOT_IN_GIT` files (repeatable) |
Output is grouped: all successes first, then skips, then failures, then finally a summary.
### Examples
```bash
# basic
reuseify annotate \
--copyright "2025 X-Men" \
--license Apache-2.0 \
--fallback-dot-license
# custom input + fallback contributor for untracked files
reuseify annotate \
--input authors.json \
--default-contributor "Charles Xavier" \
--copyright "2025 X-Men" \
--license Apache-2.0
# multiple default contributors
reuseify annotate \
--default-contributor "Professor X" \
--default-contributor "Cyclops" \
--copyright "2025 X-Men" \
--license MIT
```
## Disclaimer
> [!CAUTION]
> Use at your own risk. `reuse annotate` modifies files in place. Review the available flags before running:

```bash
reuse annotate --help
```
This project is not affiliated with the REUSE project or its maintainers in any way.
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"typer>=0.12",
"rich>=13.0",
"reuse>=6.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/sahiljhawar/reuseify",
"Tracker, https://github.com/sahiljhawar/reuseify/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:22:53.715050 | reuseify-0.1.2.tar.gz | 19,654 | fd/be/3026c79aaa8b0d463bd96362e708b8b424f62bfd9791be7c760028f51059/reuseify-0.1.2.tar.gz | source | sdist | null | false | 0c1b26b3bc23a75963f3c6a64968586b | 05dd68e0fc68e1233504d17cb6812e07cb61bb140973b1a16eb80b0b34691e62 | fdbe3026c79aaa8b0d463bd96362e708b8b424f62bfd9791be7c760028f51059 | GPL-3.0-or-later | [
"LICENSES/GPL-3.0-or-later.txt"
] | 231 |
2.4 | dsff | 1.2.2 | DataSet File Format (DSFF) | <p align="center" id="top"><img src="https://github.com/packing-box/python-dsff/raw/main/docs/pages/imgs/logo.png"></p>
<h1 align="center">DataSet File Format <a href="https://twitter.com/intent/tweet?text=DataSet%20File%20Format%20-%20XSLX-based%20format%20for%20handling%20datasets.%0D%0ATiny%20library%20for%20handling%20a%20dataset%20as%20an%20XSLX%20and%20for%20converting%20it%20to%20ARFF,%20CSV%20or%20a%20FilelessDataset%20structure%20as%20for%20the%20Packing%20Box.%0D%0Ahttps%3a%2f%2fgithub%2ecom%2fpacking-box%2fpython-dsff%0D%0A&hashtags=python,dsff,machinelearning"><img src="https://img.shields.io/badge/Tweet--lightgrey?logo=twitter&style=social" alt="Tweet" height="20"/></a></h1>
<h3 align="center">Store a dataset in XSLX-like format.</h3>
[](https://pypi.python.org/pypi/dsff/)
[](https://python-dsff.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/dhondta/python-dsff/actions/workflows/python-package.yml)
[](#)
[](https://pypi.python.org/pypi/dsff/)
[](https://snyk.io/test/github/packing-box/python-dsff?targetFile=requirements.txt)
[](https://pypi.python.org/pypi/dsff/)
This library contains code for handling the DataSet File Format (DSFF) based on the XSLX format and for converting it to [ARFF](https://www.cs.waikato.ac.nz/ml/weka/arff.html) (for use with the [Weka](https://www.cs.waikato.ac.nz/ml/weka) framework), [CSV](https://www.rfc-editor.org/rfc/rfc4180) or a [FilelessDataset structure](https://docker-packing-box.readthedocs.io/en/latest/usage/datasets.html) (from the [Packing Box](https://github.com/packing-box/docker-packing-box)).
```sh
pip install --user dsff
```
## Usage
**Creating a DSFF from a FilelessDataset**
```python
import dsff

with dsff.DSFF() as f:
    # folder of a FilelessDataset (containing data.csv, features.json and metadata.json)
    f.write("/path/to/my-dataset")
    f.to_arff()  # creates ./my-dataset.arff
    f.to_csv()   # creates ./my-dataset.csv
    f.to_db()    # creates ./my-dataset.db (SQLite DB)
# on leaving the context, ./my-dataset.dsff is created
```
**Creating a FilelessDataset from a DSFF**
```python
import dsff

with dsff.DSFF("/path/to/my-dataset.dsff") as f:
    f.to_dataset()  # creates ./my-dataset with data.csv, features.json and metadata.json
```
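For reference, ARFF (the Weka format linked above) is a plain-text header followed by CSV-style rows. A toy writer showing that target layout (an independent sketch of the format, not dsff's implementation; the attribute names below are made up):

```python
def write_arff(relation, attributes, rows):
    """Render a minimal ARFF document: @RELATION, @ATTRIBUTE lines, @DATA."""
    lines = [f"@RELATION {relation}", ""]
    lines += [f"@ATTRIBUTE {name} {atype}" for name, atype in attributes]
    lines += ["", "@DATA"]
    lines += [",".join(str(v) for v in row) for row in rows]
    return "\n".join(lines)

print(write_arff("my-dataset",
                 [("entropy", "NUMERIC"), ("label", "{not-packed,packed}")],
                 [(7.92, "packed"), (5.13, "not-packed")]))
```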
## Extensions
**Install all available extensions**
```sh
pip install --user dsff[all]
```
**Dealing with [Apache Arrow](https://arrow.apache.org/) formats**
```sh
pip install --user dsff[arrow]
```
```python
import dsff

with dsff.DSFF("/path/to/my-dataset.dsff") as f:
    f.to_feather()  # creates ./my-dataset.feather
    f.to_orc()      # creates ./my-dataset.orc
    f.to_parquet()  # creates ./my-dataset.parquet
```
## Related Projects
You may also like these:
- [Awesome Executable Packing](https://github.com/packing-box/awesome-executable-packing): A curated list of awesome resources related to executable packing.
- [Bintropy](https://github.com/packing-box/bintropy): Analysis tool for estimating the likelihood that a binary contains compressed or encrypted bytes (inspired from [this paper](https://ieeexplore.ieee.org/document/4140989)).
- [Dataset of packed ELF files](https://github.com/packing-box/dataset-packed-elf): Dataset of ELF samples packed with many different packers.
- [Dataset of packed PE files](https://github.com/packing-box/dataset-packed-pe): Dataset of PE samples packed with many different packers (fork of [this repository](https://github.com/chesvectain/PackingData)).
- [Docker Packing Box](https://github.com/packing-box/docker-packing-box): Docker image gathering packers and tools for making datasets of packed executables.
- [PEiD](https://github.com/packing-box/peid): Python implementation of the well-known Packed Executable iDentifier ([PEiD](https://www.aldeid.com/wiki/PEiD)).
- [PyPackerDetect](https://github.com/packing-box/pypackerdetect): Packing detection tool for PE files (fork of [this repository](https://github.com/cylance/PyPackerDetect)).
- [REMINDer](https://github.com/packing-box/reminder): Packing detector using a simple heuristic (inspired from [this paper](https://ieeexplore.ieee.org/document/5404211)).
| text/markdown | null | Alexandre D'Hondt <alexandre.dhondt@gmail.com> | null | null | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| python, programming, dataset-file-format, dsff | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Other Audience",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"openpyxl",
"dsff[arrow]; extra == \"all\"",
"pyarrow; extra == \"arrow\""
] | [] | [] | [] | [
"documentation, https://python-dsff.readthedocs.io/en/latest/?badge=latest",
"homepage, https://github.com/packing-box/python-dsff",
"issues, https://github.com/packing-box/python-dsff/issues",
"repository, https://github.com/packing-box/python-dsff"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:22:28.405110 | dsff-1.2.2.tar.gz | 184,700 | 16/72/7cdbf8195065b85b97379eecd67286a840e812a8623b10cd21780e501353/dsff-1.2.2.tar.gz | source | sdist | null | false | 2c52539f3c0149c30baaeb5a5708b855 | 510feafbf11f325c4b4b18da69ba1209d91ae1b2745570cecbe69e396dfc8a4c | 16727cdbf8195065b85b97379eecd67286a840e812a8623b10cd21780e501353 | null | [
"LICENSE"
] | 251 |
2.4 | generic-robot-env | 0.0.3 | A generic MuJoCo-based robot environment generator for LeRobot or Gymnasium-style experiments. | # generic_robot_env
A generic MuJoCo-based robot environment generator for LeRobot or Gymnasium-style experiments.
This library is heavily inspired by and depends on the [gym_hil](https://github.com/huggingface/gym-hil) project which implemented the first variant of these configurations.
This package provides:
* `RobotConfig`: a version of the gym_hil `MujocoGymEnv` that can extract its configuration from a well-formed MuJoCo `scene.xml` file.
* `GenericRobotEnv`: a configurable robot-control base environment around a MuJoCo XML model, with reusable robot methods (`apply_action`, `get_robot_state`, `reset_robot`, `render`, `get_gripper_pose`).
* `GenericTaskEnv`: a task-oriented layer on top of `GenericRobotEnv` that adds Panda-pick-style task behavior (task reset, environment-state observation, reward, success/termination).
## Features
- Auto-detects joints, actuators, end-effector site, cameras and (optionally) a `home` keyframe from a MuJoCo XML file.
- Two control modes: `osc` (end-effector operational-space control) and `joint` (direct actuator control).
- Separation of concerns: robot-control APIs in `GenericRobotEnv`, task APIs in `GenericTaskEnv`.
- Optional image observations from model cameras.
- Returns structured observations compatible with Gymnasium `spaces.Dict`.
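The `home`-keyframe detection mentioned above can be sketched with stdlib XML parsing. This is an illustration only, not the library's actual parser, and the MJCF snippet is a made-up minimal example:

```python
import xml.etree.ElementTree as ET

# Made-up minimal MJCF containing a "home" keyframe (illustration only).
mjcf = """
<mujoco>
  <keyframe>
    <key name="home" qpos="0.0 0.5 -0.5"/>
  </keyframe>
</mujoco>
"""

root = ET.fromstring(mjcf)
key = root.find(".//keyframe/key[@name='home']")
home_qpos = [float(v) for v in key.get("qpos").split()] if key is not None else None
print(home_qpos)  # [0.0, 0.5, -0.5]
```

If no `home` keyframe exists, a detector like this simply returns `None` and the environment would fall back to some other default pose.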
## Installation
This project depends on MuJoCo and Gymnasium. Install the package in your Python environment (example using pip):
```bash
pip install generic_robot_env
```
Note: The repository includes typed stubs under `typings/` for local development, and the project expects a working MuJoCo installation accessible to the `mujoco` Python package.
## Usage examples
- **Simple loop:** Create a configuration from a scene XML and run a quick random-action loop. [mujoco_menagerie](https://github.com/google-deepmind/mujoco_menagerie) is a good repository of scenes.
```python
from pathlib import Path
import numpy as np
from generic_robot_env.generic_robot_env import RobotConfig, GenericTaskEnv
xml = Path("mujoco_menagerie/aloha/scene.xml") # point to a model in the repo
config = RobotConfig.from_xml(xml, robot_name="aloha")
env = GenericTaskEnv(config, control_mode="osc", image_obs=False)
obs, _ = env.reset()
for _ in range(200):
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
break
env.close()
```
- **Image observations:** Enable `image_obs=True` to receive pixel arrays under `obs['pixels']` keyed by camera name (when cameras exist in the MJCF).
```python
config = RobotConfig.from_xml(Path("mujoco_menagerie/aloha/aloha.xml"), robot_name="aloha")
env = GenericTaskEnv(config, control_mode="osc", image_obs=True)
obs, _ = env.reset()
# obs['pixels'] -> dict(camera_name -> HxWx3 uint8 array)
```
Notes:
- If the MuJoCo model contains a keyframe named `home`, the environment will try to use that as the default joint pose.
- If cameras are present and `image_obs=True`, pixel observations are returned under `obs['pixels']` keyed by camera name.
## Files
- `src/generic_robot_env/generic_robot_env.py` — main environment implementation (this module).
See [src/generic_robot_env/generic_robot_env.py](src/generic_robot_env/generic_robot_env.py) for the full implementation.
## Observation and action spaces
`GenericRobotEnv` observations are returned as a Gymnasium `spaces.Dict` with primary keys under `agent_pos`:
- `joint_pos`: Joint positions for the detected robot joints (array)
- `joint_vel`: Joint velocities (array)
- `tcp_pose`: End-effector pose as [x, y, z, qx, qy, qz, qw]
- `tcp_vel`: End-effector linear and angular velocity (6D)
- `gripper_pose` (optional): Single-value gripper state when a gripper actuator is detected
When `image_obs=True`, `pixels` is included and contains a `spaces.Dict` of camera-name -> RGB image arrays.
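As a small illustration of the documented `tcp_pose` layout (scalar-last quaternion; the pose values here are hypothetical):

```python
import numpy as np

# Hypothetical end-effector pose in the documented [x, y, z, qx, qy, qz, qw] layout.
tcp_pose = np.array([0.3, 0.0, 0.5, 0.0, 0.0, 0.0, 1.0])
position = tcp_pose[:3]    # translation
quat_xyzw = tcp_pose[3:]   # scalar-last quaternion; [0, 0, 0, 1] is identity
assert quat_xyzw[-1] == 1.0
```

Note the scalar-last (`xyzw`) convention: libraries that expect scalar-first (`wxyz`) quaternions need the components reordered.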
`GenericTaskEnv` uses Panda-pick-style observations:
- State mode: `{"agent_pos": <vector>, "environment_state": <object position>}`
- Image mode: `{"pixels": <camera dict>, "agent_pos": <vector>}`
Action spaces depend on `control_mode`:
- `osc`: Continuous Box controlling end-effector delta in position (x,y,z) and rotation (rx,ry,rz). If a gripper is available, an extra dimension for gripper command is appended.
- `joint`: Continuous Box mapped directly to actuator control values. When a gripper actuator exists, it is appended to the action vector.
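As a sketch of assembling an `osc` action of the shape described above (the 3 + 3 + 1 layout comes from the text; the specific values are made up):

```python
import numpy as np

delta_pos = np.array([0.01, 0.0, -0.005])  # x, y, z end-effector deltas
delta_rot = np.zeros(3)                    # rx, ry, rz deltas (no rotation here)
gripper = np.array([1.0])                  # appended only when a gripper exists
action = np.concatenate([delta_pos, delta_rot, gripper])
assert action.shape == (7,)
```

Without a gripper the same model would yield a 6-dimensional action.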
## Implementation notes
- The environment auto-resolves joint and actuator ids using the MuJoCo model and maps qpos/qvel indices for direct data access.
- For OSC control, a simple opspace solver (from `gym_hil.controllers.opspace`) is used each simulation substep to compute actuator torques that track a desired end-effector target.
- Gripper mapping tries to respect actuator control ranges defined in the model (`actuator_ctrlrange`) when present.
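A plausible sketch of that gripper mapping, assuming a normalized command in [-1, 1] (the function name and input convention are assumptions, not the library's actual code):

```python
def map_gripper(cmd: float, ctrlrange: tuple[float, float]) -> float:
    """Linearly map a command in [-1, 1] onto the actuator's [lo, hi] ctrlrange."""
    lo, hi = ctrlrange
    cmd = max(-1.0, min(1.0, cmd))  # clamp out-of-range commands
    return lo + (cmd + 1.0) * 0.5 * (hi - lo)

print(map_gripper(1.0, (0.0, 0.04)))   # fully open  -> 0.04
print(map_gripper(-1.0, (0.0, 0.04)))  # fully closed -> 0.0
```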
## Running tests
Run the test suite with pytest from the repository root:
```bash
pytest tests
```
## Local presubmit hooks
Install local git hooks so linting/formatting runs before commit and tests run before push:
```bash
uv run pre-commit install --hook-type pre-commit --hook-type pre-push
```
Run hooks manually (optional):
```bash
uv run pre-commit run --all-files
uv run pre-commit run --hook-stage pre-push
```
- **Faster experiments:** Many models include camera/site names that the environment will auto-detect (end-effector sites, cameras, and optional `home` keyframes). If your chosen model provides a `home` keyframe the environment will attempt to use it as the default reset pose.
- **Example with a bundled task:** Some pre-made gym-style wrappers (for example in `gym_hil`) subclass the same base utilities; you can switch between `GenericRobotEnv` and those wrappers by pointing both at the same XML and configuration.
## Tips and troubleshooting
- **Missing end-effector/site detection:** If the end-effector isn't found automatically, open the XML and add a site with a common name like `ee`, `end_effector`, `tcp` or `attachment_site` so auto-detection can find it.
- **Gripper mapping:** If the model exposes a gripper actuator with a control range, the environment will append a gripper command dimension to the action space and will respect `actuator_ctrlrange` when possible.
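The name-based detection the first tip relies on could look something like this (the alias list and helper are illustrative, not the library's actual heuristic):

```python
COMMON_EE_NAMES = ("ee", "end_effector", "tcp", "attachment_site")

def find_ee_site(site_names):
    """Return the first site whose name matches a common end-effector alias."""
    for name in site_names:
        if name.lower() in COMMON_EE_NAMES:
            return name
    return None

print(find_ee_site(["base", "attachment_site", "cam_mount"]))  # attachment_site
print(find_ee_site(["base_link"]))                             # None
```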
## Contributing
Contributions, bug reports and improvements are welcome. When adding new robots or tasks, prefer adding descriptive camera and site names in the MJCF so the auto-detection heuristics can find the end-effector and camera frames.
## License
This project inherits the license of the repository. Ensure you follow the licensing terms of MuJoCo and any third-party dependencies.
| text/markdown | null | Bernie Telles <btelles@gmail.com> | null | null | MIT | gymnasium, mujoco, reinforcement-learning, robotics | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Games/Entertainment :: Simulation",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"gym-hil>=0.1.13",
"gymnasium",
"mujoco>=3.5.0",
"numpy",
"pillow",
"pytest>=9.0.2",
"ruff>=0.15.2",
"mypy; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/btelles/generic-robot-env",
"Bug Tracker, https://github.com/btelles/generic-robot-env/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:21:43.366134 | generic_robot_env-0.0.3.tar.gz | 243,193 | 98/b3/618f9c43d429b75265428ece1cc6ce1b13b9d07594c7dffbb6b1e1cfbb28/generic_robot_env-0.0.3.tar.gz | source | sdist | null | false | 96e392910026ea03a246fe7462eb36d7 | b0b7ff0d9d8127b1d0231c3ac39dc36df3f4130c20816602da5aadce5bfa4574 | 98b3618f9c43d429b75265428ece1cc6ce1b13b9d07594c7dffbb6b1e1cfbb28 | null | [
"LICENSE"
] | 233 |
2.4 | NMSpy | 161436.0 | No Man's Sky python modding API | # NMS.py
NMS.py is a Python library that exposes internal game functions of No Man's Sky.
**NOTE:** This library is missing a lot of info which will gradually get added over time.
It should also be noted that game updates can easily break mods built with NMS.py, so take care when using mods.
Any responsibility for broken saves lies entirely with the users of this library.
Also note that this library will never contain functions relating to online functionality to avoid any abuse.
The author of this library does not condone any use of this code for any purpose that is directly detrimental to other players.
## Installation
**Note:**
It is recommended that you download Python from the [official site](https://www.python.org/downloads), as the Windows Store version may have issues, as may the managed Python that [uv](https://docs.astral.sh/uv/) installs.
**Note:**
Python 3.14 is not yet supported. This will come in time but requires some changes to a dependency in pyMHF.
The recommended way to install NMS.py is to simply run `python -m pip install nmspy`. This will install NMS.py and its dependency [`pyMHF`](https://github.com/monkeyman192/pyMHF) into your system Python. You can of course install it in a venv or as a dependency using uv if you prefer.
## Usage
To run NMS.py, enter the following command into a terminal:
```
pymhf run nmspy
```
This will display some config options to complete. The main option to consider is the location of the mods folder. It is recommended that you specify the `MODS` folder inside the `GAMEDATA` folder as your mod directory (i.e., the same one you put normal mods in).
All mods will be placed either in this folder or in a child folder of it. Any mod using NMS.py can essentially be "installed" in the same way as any other normal mod.
If NMS.py starts up successfully you should see two extra windows; an auto-created GUI from pyMHF, and a terminal window which will show the logs for pyMHF.
If you want to stop NMS.py, press `ctrl + C` in the window where you started the process to kill it.
## Writing mods
Currently the best way to see how to write a mod is to look at the `example_mods` folder, as well as looking at the [pyMHF docs](https://monkeyman192.github.io/pyMHF/) which has comprehensive details on how to use pyMHF.
## Contributing
NMS.py can always be better! If you are interested in improving it, the best way to do so is to either make mods using it, or, if you have reverse engineering skills, you can contribute either new functions or new field offsets.
### Adding new functions to hook
To add a new function to hook you need to add the details to `tools/data.json` as well as the `nmspy/data/types.py` file.
The data for these can generally be found by comparing the structure of the function in the latest exe to either the mac binary or the 4.13 binary.
It is recommended to use either [IDA Fusion](https://github.com/senator715/IDA-Fusion) or [SigmakerEX](https://github.com/kweatherman/sigmakerex) for IDA or some other plugin for any other disassembler to get the unique byte pattern for the function you are providing.
### Adding new struct fields
This is a bit trickier and will often involve a good amount of reverse engineering of the exe, comparing the data to the 4.13 binary as well as potentially the mac binary.
It is best to add a comment above the new field so that it can be found again when the game updates, making it possible to check whether the offset has changed.
## Credits
Thanks to the developers of minhook, cyminhook and pymem, all of which are instrumental in making this framework possible.
Big thanks to [vitalised](https://github.com/VITALISED) for their constant RE discussions, and [gurren3](https://github.com/gurrenm3) for the same as well as the initial work done on NMS.API which heavily inspired the creation of this.
Thanks also to the many people I have discussed various NMS details with, both big and small.
Thanks to [RaYRoD](https://github.com/RaYRoD-TV) for initially discovering the pdb as well as regular insightful discussions regarding all things reverse engineering NMS.
Thanks also to anyone who has contributed function definitions or patterns. Any and all help is always appreciated!
| text/markdown | monkeyman192 | null | monkeyman192 | null | null | hooking, games, hacking, modding | [
"Development Status :: 3 - Alpha",
"Environment :: Win32 (MS Windows)",
"Operating System :: Microsoft",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"pymhf[gui]==0.2.2"
] | [] | [] | [] | [
"Homepage, https://github.com/monkeyman192/NMS.py",
"Repository, https://github.com/monkeyman192/NMS.py.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:21:15.351211 | nmspy-161436.0.tar.gz | 669,295 | 6d/8e/c8ddb15255cdb3cc9cb231254375d2b130e2d0fadf00b4c6043a45d1804d/nmspy-161436.0.tar.gz | source | sdist | null | false | 380864d95009ec437f86f2eb2b3a1c36 | 1ea5153a085085b6ef83e8e55609c3c42f24b4bb10bdc7c25d960af1608d2794 | 6d8ec8ddb15255cdb3cc9cb231254375d2b130e2d0fadf00b4c6043a45d1804d | MIT | [
"LICENSE"
] | 0 |
2.4 | dimos | 0.0.10 | Powering agentive generalist robotics |
<div align="center">


<h2>The Agentive Operating System for Generalist Robotics</h2>
[](https://discord.gg/dimos)
[](https://github.com/dimensionalOS/dimos/stargazers)
[](https://github.com/dimensionalOS/dimos/fork)
[](https://github.com/dimensionalOS/dimos/graphs/contributors)



[](https://www.docker.com/)
<big><big>
[Hardware](#hardware) •
[Installation](#installation) •
[Development](#development) •
[Multi Language](#multi-language-support) •
[ROS](#ros-interop)
⚠️ **Alpha Pre-Release: Expect Breaking Changes** ⚠️
</big></big>
</div>
# Intro
Dimensional is the modern operating system for generalist robotics. We are setting the next-generation SDK standard, integrating with the majority of robot manufacturers.
With a simple install and no ROS required, build physical applications entirely in Python that run on any humanoid, quadruped, or drone.
Dimensional is agent native: "vibecode" your robots in natural language and build (local & hosted) multi-agent systems that work seamlessly with your hardware. Agents run as native modules, subscribing to any embedded stream, from perception (lidar, camera) and spatial memory down to control loops and motor drivers.
<table>
<tr>
<td align="center" width="50%">
<a href="docs/capabilities/navigation/readme.md"><img src="assets/readme/navigation.gif" alt="Navigation" width="100%"></a>
</td>
<td align="center" width="50%">
<a href="docs/capabilities/perception/readme.md"><img src="assets/readme/perception.png" alt="Perception" width="100%"></a>
</td>
</tr>
<tr>
<td align="center" width="50%">
<h3><a href="docs/capabilities/navigation/readme.md">Navigation and Mapping</a></h3>
SLAM, dynamic obstacle avoidance, route planning, and autonomous exploration — via both DimOS native and ROS<br><a href="https://x.com/stash_pomichter/status/2010471593806545367">Watch video</a>
</td>
<td align="center" width="50%">
<h3><a href="docs/capabilities/perception/readme.md">Perception</a></h3>
Detectors, 3d projections, VLMs, Audio processing
</td>
</tr>
<tr>
<td align="center" width="50%">
<a href="docs/capabilities/agents/readme.md"><img src="assets/readme/agentic_control.gif" alt="Agents" width="100%"></a>
</td>
<td align="center" width="50%">
<img src="assets/readme/spatial_memory.gif" alt="Spatial Memory" width="100%">
</td>
</tr>
<tr>
<td align="center" width="50%">
<h3><a href="docs/capabilities/agents/readme.md">Agentive Control, MCP</a></h3>
"hey Robot, go find the kitchen"<br><a href="https://x.com/stash_pomichter/status/2015912688854200322">Watch video</a>
</td>
<td align="center" width="50%">
<h3>Spatial Memory</h3>
Spatio-temporal RAG, Dynamic memory, Object localization and permanence<br><a href="https://x.com/stash_pomichter/status/1980741077205414328">Watch video</a>
</td>
</tr>
</table>
# Hardware
<table>
<tr>
<td align="center" width="20%">
<h3>Quadruped</h3>
<img width="245" height="1" src="assets/readme/spacer.png">
</td>
<td align="center" width="20%">
<h3>Humanoid</h3>
<img width="245" height="1" src="assets/readme/spacer.png">
</td>
<td align="center" width="20%">
<h3>Arm</h3>
<img width="245" height="1" src="assets/readme/spacer.png">
</td>
<td align="center" width="20%">
<h3>Drone</h3>
<img width="245" height="1" src="assets/readme/spacer.png">
</td>
<td align="center" width="20%">
<h3>Misc</h3>
<img width="245" height="1" src="assets/readme/spacer.png">
</td>
</tr>
<tr>
<td align="center" width="20%">
🟩 <a href="docs/platforms/quadruped/go2/index.md">Unitree Go2 pro/air</a><br>
🟥 <a href="dimos/robot/unitree/b1">Unitree B1</a><br>
</td>
<td align="center" width="20%">
🟨 <a href="docs/todo.md">Unitree G1</a><br>
</td>
<td align="center" width="20%">
🟥 <a href="docs/todo.md">Xarm</a><br>
🟥 <a href="docs/todo.md">AgileX Piper</a><br>
</td>
<td align="center" width="20%">
🟥 <a href="dimos/robot/drone">Mavlink</a><br>
🟥 <a href="dimos/robot/drone">DJI SDK</a><br>
</td>
<td align="center" width="20%">
🟥 <a href="https://github.com/dimensionalOS/openFT-sensor">Force Torque Sensor</a><br>
</td>
</tr>
</table>
<br>
<div align="right">
🟩 stable 🟨 beta 🟧 alpha 🟥 experimental
</div>
# Installation
## System Install
To set up your system dependencies, follow one of these guides:
- 🟩 [Ubuntu 22.04 / 24.04](docs/installation/ubuntu.md)
- 🟩 [NixOS / General Linux](docs/installation/nix.md)
- 🟧 [macOS](docs/installation/osx.md)
## Python Installs
### Quickstart
```bash
uv venv --python "3.12"
source .venv/bin/activate
uv pip install dimos[base,unitree]
# Replay a recorded Go2 session (no hardware needed)
# NOTE: First run will show a black rerun window while ~2.4 GB downloads from LFS
dimos --replay run unitree-go2
```
```bash
# Install with simulation support
uv pip install dimos[base,unitree,sim]
# Run Go2 in MuJoCo simulation
dimos --simulation run unitree-go2
# Run G1 humanoid in simulation
dimos --simulation run unitree-g1-sim
```
```bash
# Control a real robot (Unitree Go2 over WebRTC)
export ROBOT_IP=<YOUR_ROBOT_IP>
dimos run unitree-go2
```
### Use DimOS as a Library
Below is a simple robot connection module that receives a continuous stream of `cmd_vel` commands and publishes `color_image` frames to a simple `Listener` module. DimOS Modules are subsystems on a robot that communicate with other modules using standardized messages.
```py
import threading, time, numpy as np
from dimos.core import In, Module, Out, rpc, autoconnect
from dimos.msgs.geometry_msgs import Twist
from dimos.msgs.sensor_msgs import Image, ImageFormat
class RobotConnection(Module):
cmd_vel: In[Twist]
color_image: Out[Image]
@rpc
def start(self):
threading.Thread(target=self._image_loop, daemon=True).start()
def _image_loop(self):
while True:
img = Image.from_numpy(
np.zeros((120, 160, 3), np.uint8),
format=ImageFormat.RGB,
frame_id="camera_optical",
)
self.color_image.publish(img)
time.sleep(0.2)
class Listener(Module):
color_image: In[Image]
@rpc
def start(self):
self.color_image.subscribe(lambda img: print(f"image {img.width}x{img.height}"))
if __name__ == "__main__":
autoconnect(
RobotConnection.blueprint(),
Listener.blueprint(),
).build().loop()
```
### Blueprints
Blueprints are instructions for how to construct and wire modules. We compose them with
`autoconnect(...)`, which connects streams by `(name, type)` and returns a `Blueprint`.
Blueprints can be composed, remapped, and have their transports overridden if `autoconnect()` fails due to conflicting variable names or mismatched `In[]`/`Out[]` message types.
Below is a blueprint example that connects the image stream from a robot to an LLM agent for reasoning and action execution.
```py
from dimos.core import autoconnect, LCMTransport
from dimos.msgs.sensor_msgs import Image
from dimos.robot.unitree.go2.connection import go2_connection
from dimos.agents.agent import agent
blueprint = autoconnect(
go2_connection(),
agent(),
).transports({("color_image", Image): LCMTransport("/color_image", Image)})
# Run the blueprint
if __name__ == "__main__":
blueprint.build().loop()
```
## Library API
- [Modules](docs/usage/modules.md)
- [LCM](docs/usage/lcm.md)
- [Blueprints](docs/usage/blueprints.md)
- [Transports](docs/usage/transports/index.md)
- [Data Streams](docs/usage/data_streams/README.md)
- [Configuration](docs/usage/configuration.md)
- [Visualization](docs/usage/visualization.md)
### Develop on DimOS
```sh
export GIT_LFS_SKIP_SMUDGE=1
git clone -b dev https://github.com/dimensionalOS/dimos.git
cd dimos
uv sync --all-extras --no-extra dds
# Run fast test suite
uv run pytest dimos
```
### Demos
<img src="assets/readme/dimos_demo.gif" alt="DimOS Demo" width="100%">
# Development
## Multi Language Support
Python is our glue and prototyping language, but we support many languages via LCM interop.
Check our language interop examples:
- [C++](examples/language-interop/cpp/)
- [Lua](examples/language-interop/lua/)
- [TypeScript](examples/language-interop/ts/)
## ROS interop
For researchers, DimOS can talk to ROS directly via [ROS Transports](docs/usage/transports/index.md), or host dockerized ROS deployments as first-class DimOS modules, giving you easy installation and portability.
| text/markdown | null | Dimensional Team <build@dimensionalOS.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"dimos-lcm",
"PyTurboJPEG==1.8.2",
"numpy>=1.26.4",
"scipy>=1.15.1",
"pin>=3.3.0",
"reactivex",
"asyncio==3.4.3",
"sortedcontainers==2.4.0",
"pydantic",
"python-dotenv",
"annotation-protocol>=1.4.0",
"lazy_loader",
"dask[complete]==2025.5.1",
"plum-dispatch==2.5.7",
"structlog<26,>=25.5.0",
"colorlog==6.9.0",
"opencv-python",
"open3d-unofficial-arm; platform_system == \"Linux\" and platform_machine == \"aarch64\"",
"open3d>=0.18.0; platform_system != \"Linux\" or platform_machine != \"aarch64\"",
"pydantic-settings<3,>=2.11.0",
"textual==3.7.1",
"terminaltexteffects==0.12.2",
"typer<1,>=0.19.2",
"plotext==5.3.2",
"numba>=0.60.0",
"llvmlite>=0.42.0",
"rerun-sdk>=0.20.0",
"toolz>=1.1.0",
"cerebras-cloud-sdk; extra == \"misc\"",
"yapf==0.40.2; extra == \"misc\"",
"typeguard; extra == \"misc\"",
"empy==3.3.4; extra == \"misc\"",
"catkin_pkg; extra == \"misc\"",
"lark; extra == \"misc\"",
"tiktoken>=0.8.0; extra == \"misc\"",
"python-multipart==0.0.20; extra == \"misc\"",
"tensorzero==2025.7.5; extra == \"misc\"",
"ipykernel; extra == \"misc\"",
"sentence_transformers; extra == \"misc\"",
"scikit-learn; extra == \"misc\"",
"timm>=1.0.15; extra == \"misc\"",
"edgetam-dimos; extra == \"misc\"",
"opencv-contrib-python==4.10.0.84; extra == \"misc\"",
"open_clip_torch==3.2.0; extra == \"misc\"",
"torchreid==0.2.5; extra == \"misc\"",
"gdown==5.2.0; extra == \"misc\"",
"tensorboard==2.20.0; extra == \"misc\"",
"googlemaps>=4.10.0; extra == \"misc\"",
"onnx; extra == \"misc\"",
"einops==0.8.1; extra == \"misc\"",
"xarm-python-sdk>=1.17.0; extra == \"misc\"",
"rerun-sdk>=0.20.0; extra == \"visualization\"",
"langchain==1.2.3; extra == \"agents\"",
"langchain-chroma<2,>=1; extra == \"agents\"",
"langchain-core==1.2.3; extra == \"agents\"",
"langchain-openai<2,>=1; extra == \"agents\"",
"langchain-text-splitters<2,>=1; extra == \"agents\"",
"langchain-huggingface<2,>=1; extra == \"agents\"",
"langchain-ollama<2,>=1; extra == \"agents\"",
"bitsandbytes<1.0,>=0.48.2; sys_platform == \"linux\" and extra == \"agents\"",
"ollama>=0.6.0; extra == \"agents\"",
"anthropic>=0.19.0; extra == \"agents\"",
"openai; extra == \"agents\"",
"openai-whisper; extra == \"agents\"",
"sounddevice; extra == \"agents\"",
"mcp>=1.0.0; extra == \"agents\"",
"fastapi>=0.115.6; extra == \"web\"",
"sse-starlette>=2.2.1; extra == \"web\"",
"uvicorn>=0.34.0; extra == \"web\"",
"ffmpeg-python; extra == \"web\"",
"soundfile; extra == \"web\"",
"ultralytics>=8.3.70; extra == \"perception\"",
"filterpy>=1.4.5; extra == \"perception\"",
"Pillow; extra == \"perception\"",
"lap>=0.5.12; extra == \"perception\"",
"transformers[torch]==4.49.0; extra == \"perception\"",
"moondream; extra == \"perception\"",
"omegaconf>=2.3.0; extra == \"perception\"",
"hydra-core>=1.3.0; extra == \"perception\"",
"dimos[base]; extra == \"unitree\"",
"unitree-webrtc-connect-leshy>=2.0.7; extra == \"unitree\"",
"drake==1.45.0; (sys_platform == \"darwin\" and platform_machine != \"aarch64\") and extra == \"manipulation\"",
"drake>=1.40.0; (sys_platform != \"darwin\" and platform_machine != \"aarch64\") and extra == \"manipulation\"",
"piper-sdk; extra == \"manipulation\"",
"xarm-python-sdk>=1.17.0; extra == \"manipulation\"",
"kaleido>=0.2.1; extra == \"manipulation\"",
"plotly>=5.9.0; extra == \"manipulation\"",
"xacro; extra == \"manipulation\"",
"matplotlib>=3.7.1; extra == \"manipulation\"",
"pyyaml>=6.0; extra == \"manipulation\"",
"onnxruntime; extra == \"cpu\"",
"ctransformers==0.2.27; extra == \"cpu\"",
"cupy-cuda12x==13.6.0; platform_machine == \"x86_64\" and extra == \"cuda\"",
"nvidia-nvimgcodec-cu12[all]; platform_machine == \"x86_64\" and extra == \"cuda\"",
"onnxruntime-gpu>=1.17.1; platform_machine == \"x86_64\" and extra == \"cuda\"",
"ctransformers[cuda]==0.2.27; extra == \"cuda\"",
"xformers>=0.0.20; platform_machine == \"x86_64\" and extra == \"cuda\"",
"ruff==0.14.3; extra == \"dev\"",
"mypy==1.19.0; extra == \"dev\"",
"pre_commit==4.2.0; extra == \"dev\"",
"pytest==8.3.5; extra == \"dev\"",
"pytest-asyncio==0.26.0; extra == \"dev\"",
"pytest-mock==3.15.0; extra == \"dev\"",
"pytest-env==1.1.5; extra == \"dev\"",
"pytest-timeout==2.4.0; extra == \"dev\"",
"coverage>=7.0; extra == \"dev\"",
"requests-mock==1.12.1; extra == \"dev\"",
"terminaltexteffects==0.12.2; extra == \"dev\"",
"watchdog>=3.0.0; extra == \"dev\"",
"md-babel-py==1.1.1; extra == \"dev\"",
"python-lsp-server[all]==1.14.0; extra == \"dev\"",
"python-lsp-ruff==2.3.0; extra == \"dev\"",
"lxml-stubs<1,>=0.5.1; extra == \"dev\"",
"pandas-stubs<3,>=2.3.2.250926; extra == \"dev\"",
"types-PySocks<2,>=1.7.1.20251001; extra == \"dev\"",
"types-PyYAML<7,>=6.0.12.20250915; extra == \"dev\"",
"types-colorama<1,>=0.4.15.20250801; extra == \"dev\"",
"types-defusedxml<1,>=0.7.0.20250822; extra == \"dev\"",
"types-gevent<26,>=25.4.0.20250915; extra == \"dev\"",
"types-greenlet<4,>=3.2.0.20250915; extra == \"dev\"",
"types-jmespath<2,>=1.0.2.20250809; extra == \"dev\"",
"types-jsonschema<5,>=4.25.1.20251009; extra == \"dev\"",
"types-networkx<4,>=3.5.0.20251001; extra == \"dev\"",
"types-protobuf<7,>=6.32.1.20250918; extra == \"dev\"",
"types-psutil<8,>=7.0.0.20251001; extra == \"dev\"",
"types-pytz<2026,>=2025.2.0.20250809; extra == \"dev\"",
"types-simplejson<4,>=3.20.0.20250822; extra == \"dev\"",
"types-tabulate<1,>=0.9.0.20241207; extra == \"dev\"",
"types-tensorflow<3,>=2.18.0.20251008; extra == \"dev\"",
"types-tqdm<5,>=4.67.0.20250809; extra == \"dev\"",
"types-psycopg2>=2.9.21.20251012; extra == \"dev\"",
"py-spy; extra == \"dev\"",
"psycopg2-binary>=2.9.11; extra == \"psql\"",
"mujoco>=3.3.4; extra == \"sim\"",
"playground>=0.0.5; extra == \"sim\"",
"pygame>=2.6.1; extra == \"sim\"",
"pymavlink; extra == \"drone\"",
"dimos[dev]; extra == \"dds\"",
"cyclonedds>=0.10.5; extra == \"dds\"",
"dimos-lcm; extra == \"docker\"",
"numpy>=1.26.4; extra == \"docker\"",
"scipy>=1.15.1; extra == \"docker\"",
"reactivex; extra == \"docker\"",
"dask[distributed]==2025.5.1; extra == \"docker\"",
"plum-dispatch==2.5.7; extra == \"docker\"",
"structlog<26,>=25.5.0; extra == \"docker\"",
"pydantic; extra == \"docker\"",
"pydantic-settings<3,>=2.11.0; extra == \"docker\"",
"typer<1,>=0.19.2; extra == \"docker\"",
"opencv-python-headless; extra == \"docker\"",
"lcm; extra == \"docker\"",
"sortedcontainers; extra == \"docker\"",
"PyTurboJPEG; extra == \"docker\"",
"rerun-sdk; extra == \"docker\"",
"open3d-unofficial-arm; (platform_system == \"Linux\" and platform_machine == \"aarch64\") and extra == \"docker\"",
"open3d>=0.18.0; (platform_system != \"Linux\" or platform_machine != \"aarch64\") and extra == \"docker\"",
"dimos[agents,perception,sim,visualization,web]; extra == \"base\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T23:21:12.540963 | dimos-0.0.10.tar.gz | 969,266 | fa/40/8a76c36f1c39834f191dccb29963becb96d2693702b4bf0051bcc02f3cb4/dimos-0.0.10.tar.gz | source | sdist | null | false | 5a5f59d610eee91163147ea97536d97e | 0c9c356d56defc225051c6f7711152bbf68a40756f7f0f71317452494f6cf315 | fa408a76c36f1c39834f191dccb29963becb96d2693702b4bf0051bcc02f3cb4 | null | [
"LICENSE"
] | 183 |
2.4 | opencode-memory | 0.2 | A Python MCP stdio instance for adding memory to AI coding agents | # OpenCode Memory
Give your AI coding agent persistent memory and conversation archiving.
## What It Does
OpenCode Memory provides two capabilities to AI coding agents like opencode:
1. **Memory** - Store and retrieve facts, preferences, and context across sessions
2. **Conversation Archiving** - Save full conversations as readable markdown files
Your AI agent remembers who you are, what you're working on, and can browse past discussions.
## Why It Matters
Without memory:
- You repeat your preferences every session
- Project context is lost between conversations
- Important decisions get forgotten
- Past discussions are inaccessible
OpenCode Memory solves this by providing persistent storage your AI agent can access in every conversation.
## Installation
```bash
uv tool install opencode-memory
```
Or run directly without installing:
```bash
uvx opencode-memory --stdio
```
## Configure for OpenCode
Add to your MCP configuration file (typically `~/.config/opencode/config.json`):
```json
{
"mcpServers": {
"memory": {
"command": "uvx",
"args": ["opencode-memory", "--stdio"],
"env": {
"OPENAI_API_KEY": "sk-your-api-key",
"OPENCODE_MEM_FAISS_DIRECTORY": "$HOME/.opencode/memory/faiss"
}
}
}
}
```
### Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `OPENAI_API_KEY` | Yes | For embedding generation |
| `OPENCODE_MEM_FAISS_DIRECTORY` | Yes | Where vector index is stored |
| `OPENCODE_MEM_EMBEDDING_MODEL` | No | Default: `text-embedding-3-large` |
## Usage Recommendation: Dedicated Chat Agent
**Recommended approach:** Create a dedicated opencode agent that acts as a ChatGPT-like interface.
### Why a Dedicated Agent?
When you use opencode primarily for conversation rather than code editing:
1. **All conversations are archived** - Every discussion saved as markdown
2. **Memories accumulate** - Preferences, context, and knowledge build over time
3. **Historical search** - Browse and search past conversations by topic
4. **Session continuity** - Pick up where you left off in previous sessions
### How to Create a Dedicated Agent
Create the agent file at `~/.config/opencode/agent/Chat.md`:
```markdown
# Chat Agent
A conversational agent with memory and conversation archiving capabilities.
## Behavior
You are a helpful conversational assistant, similar to ChatGPT. Your role is to engage in natural conversation while maintaining persistent memory and archiving all interactions.
## Memory
You have access to memory tools. Use them to:
- Store user preferences (coding style, tools, workflows)
- Remember important information the user shares
- Track ongoing topics and interests
- Save useful context for future sessions
Before responding, search your memory for relevant context. Store meaningful facts after conversations.
## Conversation Archiving
**CRITICAL: Archive every interaction using opencode's native file tools.**
After each user request followed by your response (one interaction):
1. Use the `write` or `edit` tool to append the interaction to a markdown file named `YYYY-MM-DD.md` in a directory of your choice (e.g., `~/.opencode/memory/history/2026-02-20.md`)
2. The file must start with a keywords header:
```markdown
keywords: python, async, asyncio, error-handling
---
## User
[timestamp] Your question here...
## Assistant
[timestamp] Your response here...
```
3. Keywords act as hashtags for finding conversations. They should:
- Be broad enough to categorize the topic
- Include technologies, concepts, or themes discussed
- Be updated after each interaction if new topics emerge
4. Append new interactions to today's file using the `edit` tool. Update the keywords line if the conversation covers new topics.
## Example
After discussing Python async functions:
```markdown
keywords: python, async, asyncio, error-handling, concurrency
## User
[2026-02-20 14:30] How do I handle exceptions in asyncio.gather?
## Assistant
[2026-02-20 14:30] You can use return_exceptions=True parameter...
## User
[2026-02-20 14:35] What about timeout handling?
## Assistant
[2026-02-20 14:35] For timeouts, use asyncio.wait_for()...
```
## Workflow
For each message:
1. Search memory for relevant context
2. Respond naturally to the user
3. Use `write` or `edit` tools to archive the interaction to today's markdown file
4. Update keywords if new topics emerged
5. Store any important facts as memories
```
### Using the Agent
```bash
mkdir -p ~/chat
cd ~/chat
opencode --agent Chat
```
The agent will automatically:
- Archive every interaction to daily markdown files
- Build a searchable conversation history
- Remember your preferences and context
- Update keywords as topics evolve
### What You Get
With a dedicated memory-enabled agent:
- **Preferences remembered** - "I use dark theme" is stored and recalled
- **Project context** - "Working on FastAPI backend" persists across sessions
- **Conversation history** - Browse `~/.opencode/memory/history/` for past discussions
- **Topic search** - Use opencode's grep to find conversations about specific topics
## How Memory Works
### Two Storage Layers
1. **Memory Layer** - Semantic storage for facts and knowledge
- Vector-based similarity search
- Filter by categories and metadata
- Automatic expiration for time-sensitive info
2. **Conversation Layer** - Markdown files for complete discussion history
- Human-readable format
- Searchable with standard tools (grep, opencode search)
- Version control friendly
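The memory layer's similarity search can be illustrated with a toy sketch. The vectors below are hard-coded stand-ins for real embeddings (in the actual system these come from the configured OpenAI embedding model); only the ranking mechanics are shown:

```python
import math

# Toy "embeddings": hard-coded 3-d vectors purely to illustrate the
# search mechanics; real embeddings are high-dimensional and come from
# the model named by OPENCODE_MEM_EMBEDDING_MODEL.
MEMORIES = {
    "User prefers 2-space indentation for Python": [0.9, 0.1, 0.0],
    "Currently implementing OAuth2 authentication": [0.1, 0.9, 0.1],
    "User works on a FastAPI backend": [0.2, 0.7, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search_memory(query_vec, top_k=2):
    """Return the top_k memories ranked by cosine similarity to the query."""
    ranked = sorted(
        MEMORIES.items(),
        key=lambda kv: cosine(query_vec, kv[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]

# A query "near" the work-related memories ranks those first.
results = search_memory([0.15, 0.85, 0.2])
```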
### Available Tools
Your AI agent has access to the following MCP tools:
| Tool | Purpose |
|------|---------|
| `add_memory` | Store a fact or preference |
| `search_memory` | Find relevant memories by meaning |
| `get_all_memories` | Retrieve all stored memories |
| `update_memory` | Modify an existing memory |
| `delete_memory` | Remove a memory |
See [API.md](API.md) for detailed tool documentation.
## Example Usage Patterns
### Remembering Preferences
```
You: "I always use 2-space indentation for Python"
AI: [Stores: "User prefers 2-space indentation for Python"
Categories: preferences, python]
```
Future sessions recall this automatically.
### Tracking Current Work
```
AI: [Stores: "Currently implementing OAuth2 authentication"
Categories: current-focus
Expires: end of sprint]
```
### Archiving Discussions
Use opencode's `write` or `edit` tools to save conversations:
```
AI: [Uses write tool to create: 2026-02-20.md]
Content: keywords, timestamps, and full conversation
```
Later, opencode can grep through archived conversations:
```bash
# Find conversations about database design
opencode grep "database" ~/.opencode/memory/history/
```
## Documentation
- **[API.md](API.md)** - Complete MCP tool reference with examples
- **[DESIGN.md](DESIGN.md)** - Architecture decisions and rationale
## Requirements
- Python 3.10+
- OpenAI API key (for embeddings)
## License
MIT
| text/markdown | Anomalo | null | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"faiss-cpu>=1.7.0",
"mcp>=1.6.0",
"mem0ai>=1.0.4",
"pydantic>=2.0.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:21:10.137354 | opencode_memory-0.2.tar.gz | 133,766 | 47/76/2ac050c45863e8aab06e1b04b46a0b5f621159da4505fa7e52057853facf/opencode_memory-0.2.tar.gz | source | sdist | null | false | 237d52c2e0c50215751cc03eb297f9f8 | b7b0c372c582c62571c5ea9bb5e6efc84b77fd2110d66e20d562653679be9b27 | 47762ac050c45863e8aab06e1b04b46a0b5f621159da4505fa7e52057853facf | null | [] | 229 |
2.4 | iplotx | 1.7.1 | Universal network and tree visualisation library. | [](https://github.com/fabilab/iplotx/actions/workflows/test.yml)
[](https://pypi.org/project/iplotx/)
[](https://iplotx.readthedocs.io/en/latest/)
[](https://coveralls.io/github/fabilab/iplotx?branch=main)

[DOI](https://f1000research.com/articles/14-1377)
# iplotx
[](https://iplotx.readthedocs.io/en/latest/gallery/index.html).
Visualise networks and trees in Python, with style.
Supports:
- **networks**:
- [networkx](https://networkx.org/)
  - [igraph](https://igraph.readthedocs.io/)
- [graph-tool](https://graph-tool.skewed.de/)
- [zero-dependency](https://iplotx.readthedocs.io/en/latest/gallery/plot_simplenetworkdataprovider.html#sphx-glr-gallery-plot-simplenetworkdataprovider-py)
- **trees**:
- [ETE4](https://etetoolkit.github.io/ete/)
- [cogent3](https://cogent3.org/)
- [Biopython](https://biopython.org/)
- [scikit-bio](https://scikit.bio)
- [dendropy](https://jeetsukumaran.github.io/DendroPy/index.html)
- [zero-dependency](https://iplotx.readthedocs.io/en/latest/gallery/tree/plot_simpletreedataprovider.html#sphx-glr-gallery-tree-plot-simpletreedataprovider-py)
In addition to the above, *any* network or tree analysis library can register an [entry point](https://iplotx.readthedocs.io/en/latest/providers.html#creating-a-custom-data-provider) to gain compatibility with `iplotx` with no intervention from our side.
## Installation
```bash
pip install iplotx
```
## Quick Start
```python
import networkx as nx
import matplotlib.pyplot as plt
import iplotx as ipx
g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
layout = nx.layout.circular_layout(g)
ipx.plot(g, layout)
```

## Documentation
See [readthedocs](https://iplotx.readthedocs.io/en/latest/) for the full documentation.
## Gallery
See [gallery](https://iplotx.readthedocs.io/en/latest/gallery/index.html).
## Citation
If you use `iplotx` for publication figures, please cite:
```
F. Zanini. A universal tool for visualisation of networks and trees in Python. F1000Research 2025, 14:1377. https://doi.org/10.12688/f1000research.173131.1
```
## Features
- Plot networks from multiple libraries including networkx, igraph and graph-tool, using Matplotlib. ✅
- Plot trees from multiple libraries such as cogent3, ETE4, skbio, biopython, and dendropy. ✅
- Flexible yet easy styling, including an internal library of styles. ✅
- Interactive plotting, e.g. zooming and panning after the plot is created. ✅
- Store the plot to disk in many formats (SVG, PNG, PDF, GIF, etc.). ✅
- 3D network visualisation with depth shading. ✅
- Efficient plotting of large graphs (up to ~1 million nodes on a laptop). ✅
- Edit plotting elements after the plot is created, e.g. changing node colors, labels, etc. ✅
- Animations, e.g. showing the evolution of a network over time. ✅
- Mouse and keyboard interaction, e.g. hovering over nodes/edges to get information about them. ✅
- Node clustering and covers, e.g. showing communities in a network. ✅
- Edge tension, edge waypoints, and edge ports. ✅
- Choice of tree layouts and orientations. ✅
- Tree-specific options: cascades, subtree styling, split edges, etc. ✅
- (WIP) Support uni- and bi-directional communication between the graph object and the plot object. 🏗️
## Authors
Fabio Zanini (https://fabilab.org)
| text/markdown | null | Fabio Zanini <fabio.zanini@unsw.edu.au> | null | Fabio Zanini <fabio.zanini@unsw.edu.au> | MIT | graph, network, phylogeny, plotting, tree, visualisation | [
"Development Status :: 5 - Production/Stable",
"Framework :: Matplotlib",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: System :: Networking",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"matplotlib>=3.10.0",
"numpy>=2.0.0",
"pandas>=2.0.0",
"igraph>=0.11.0; extra == \"igraph\"",
"networkx>=2.0.0; extra == \"networkx\""
] | [] | [] | [] | [
"Homepage, https://github.com/fabilab/iplotx",
"Documentation, https://readthedocs.org/iplotx",
"Repository, https://github.com/fabilab/iplotx.git",
"Bug Tracker, https://github.com/fabilab/iplotx/issues",
"Changelog, https://github.com/fabilab/iplotx/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:21:01.630325 | iplotx-1.7.1.tar.gz | 2,002,736 | 8a/5a/43d702a7d265cdd5883e34e41df1e4febb1aa3d00c85214c49d44bb38a2d/iplotx-1.7.1.tar.gz | source | sdist | null | false | 1cd89d30c76040eed0eca034b6825372 | 4d4abdfa692ecb398726527ec23903ead0d2931aca114d93645fd7e561214dde | 8a5a43d702a7d265cdd5883e34e41df1e4febb1aa3d00c85214c49d44bb38a2d | null | [] | 858 |
2.4 | ismember | 1.2.0 | Python package ismember returns array elements that are members of set array. | # ismember
[Python versions](https://img.shields.io/pypi/pyversions/ismember)
[PyPI](https://pypi.org/project/ismember/)
[License](https://github.com/erdogant/ismember/blob/master/LICENSE)
[Downloads](https://pepy.tech/project/ismember)
[Documentation](https://erdogant.github.io/ismember/)
<!---[](https://www.buymeacoffee.com/erdogant)-->
<!---[](https://erdogant.github.io/donate/?currency=USD&amount=5)-->
``ismember`` is a Python library that checks whether the elements of X are present in Y.
#
**⭐️ Star this repo if you like it ⭐️**
#
### [Documentation pages](https://erdogant.github.io/ismember/)
On the [documentation pages](https://erdogant.github.io/ismember/) you can find more information about ``ismember`` with examples.
#
##### Install ismember from PyPI
```bash
pip install ismember # normal install
pip install -U ismember # update if needed
```
### Import ismember package
```python
from ismember import ismember
```
<hr>
#### Quick example
Use the documentation pages for more detailed usage. Some of the most used functionalities are linked below.
```python
from ismember import ismember
# Example with lists
I, idx = ismember([1,2,3,None], [4,1,2])
I, idx = ismember(["1","2","3"], ["4","1","2"])
```
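The call above returns a boolean mask over X and the matching positions in Y (MATLAB-style `ismember` semantics). As an illustration of those semantics only — this is a hypothetical pure-Python sketch, not the package's actual numpy-based implementation — a minimal reimplementation might look like:

```python
def ismember_sketch(x, y):
    """For each element of x, report membership in y and, for the
    matched elements, the corresponding index in y."""
    lookup = {}
    for i, v in enumerate(y):
        lookup.setdefault(v, i)  # first occurrence in y wins
    mask = [v in lookup for v in x]
    idx = [lookup[v] for v in x if v in lookup]
    return mask, idx

I, idx = ismember_sketch([1, 2, 3, None], [4, 1, 2])
# I   -> [True, True, False, False]
# idx -> [1, 2]  (positions of 1 and 2 in the second list)
```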
#
#### [Example: Check whether the elements of X are present in Y](https://erdogant.github.io/ismember/pages/html/Examples.html#)
#
#### [Example: Determine the corresponding location of the values that are present in Y array](https://erdogant.github.io/ismember/pages/html/Examples.html#determine-the-corresponding-location-of-the-values-that-are-present-in-y-array)
#
#### [Example: Row wise comparison](https://erdogant.github.io/ismember/pages/html/Examples.html#row-wise-comparison-1)
#
#### [Example: Elementwise comparison](https://erdogant.github.io/ismember/pages/html/Examples.html#elementwise-comparison)
<hr>
#### ☕ Support
If you find this project useful, consider supporting me:
<a href="https://www.buymeacoffee.com/erdogant">
<img src="https://img.buymeacoffee.com/button-api/?text=Buy me a coffee&emoji=&slug=erdogant&button_colour=FFDD00&font_colour=000000&font_family=Cookie&outline_colour=000000&coffee_colour=ffffff" />
</a>
| text/markdown | null | Erdogan Taskesen <erdogant@gmail.com> | null | null | null | Python, ismember, set membership, numpy, array, set, utilities | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Operating System :: Unix",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [
"Homepage, https://erdogant.github.io/ismember",
"Download, https://github.com/erdogant/ismember/archive/{version}.tar.gz"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T23:20:39.394459 | ismember-1.2.0.tar.gz | 6,638 | 58/5f/ff61ea61280a9099968b715d2f27cd48b7c358157a20146e831a308617b6/ismember-1.2.0.tar.gz | source | sdist | null | false | 33ea052461fb03316dc04f9bb5d1cd87 | 2263aaaad010e29b7a71bd6a6740e6c925d66574051a49ecaa7d1d213414c9b2 | 585fff61ea61280a9099968b715d2f27cd48b7c358157a20146e831a308617b6 | MIT | [
"LICENSE"
] | 674 |