metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | aegra-cli | 0.7.0 | Aegra CLI - Manage your self-hosted agent deployments | # aegra-cli
Aegra CLI - Command-line interface for managing self-hosted agent deployments.
Aegra is an open-source, self-hosted alternative to LangGraph Platform. Use this CLI to initialize projects, run development servers, and manage Docker services.
## Installation
### From PyPI
```bash
pip install aegra-cli
```
### From Source
```bash
# Clone the repository
git clone https://github.com/ibbybuilds/aegra.git
cd aegra
# Install all workspace packages
uv sync --all-packages
```
## Quick Start
```bash
# Initialize a new Aegra project (prompts for location, template, and name)
aegra init
# Follow the printed next steps
cd <your-project>
cp .env.example .env # Add your OPENAI_API_KEY
uv sync # Install dependencies
uv run aegra dev # Start PostgreSQL + dev server
```
## Commands
### `aegra version`
Show version information for aegra-cli and aegra-api.
```bash
aegra version
```
Output displays a table with versions for both packages.
---
### `aegra init`
Initialize a new Aegra project with configuration files and directory structure.
```bash
aegra init [PATH] [OPTIONS]
```
**Arguments:**
| Argument | Default | Description |
|----------|---------|-------------|
| `PATH` | `.` | Project directory to initialize |
**Options:**
| Option | Description |
|--------|-------------|
| `-t, --template INT` | Template number (1 = "New Aegra Project", 2 = "ReAct Agent") |
| `-n, --name STR` | Project name |
| `--force` | Overwrite existing files if they exist |
**Templates:**
| # | Name | Description |
|---|------|-------------|
| 1 | New Aegra Project | Simple chatbot with basic graph structure |
| 2 | ReAct Agent | Tool-calling agent with a tools module |
**Examples:**
```bash
# Initialize in current directory
aegra init
# Initialize in a specific directory
aegra init my-project
# Initialize with a specific template and name
aegra init my-project --template 2 --name "My ReAct Agent"
# Initialize in current directory, overwriting existing files
aegra init --force
```
**Created Files:**
- `aegra.json` - Graph configuration
- `pyproject.toml` - Python project configuration
- `.env.example` - Environment variable template
- `.gitignore` - Git ignore rules
- `README.md` - Project readme
- `src/<slug>/__init__.py` - Package init file
- `src/<slug>/graph.py` - Graph implementation
- `src/<slug>/state.py` - State schema definition
- `src/<slug>/prompts.py` - Prompt templates
- `src/<slug>/context.py` - Context configuration
- `src/<slug>/utils.py` - Utility functions
- `src/<slug>/tools.py` - Tool definitions (ReAct template only)
- `docker-compose.yml` - Docker Compose for PostgreSQL and API services
- `Dockerfile` - Container build
---
### `aegra dev`
Run the development server with hot reload.
```bash
aegra dev [OPTIONS]
```
**Options:**
| Option | Default | Description |
|--------|---------|-------------|
| `--host HOST` | `127.0.0.1` | Host to bind the server to |
| `--port PORT` | `8000` | Port to bind the server to |
| `--app APP` | `aegra_api.main:app` | Application import path |
| `-c, --config PATH` | | Path to aegra.json config file |
| `-e, --env-file PATH` | | Path to .env file |
| `-f, --file PATH` | | Path to docker-compose.yml file |
| `--no-db-check` | | Skip the automatic database check |
**Examples:**
```bash
# Start with defaults (localhost:8000)
aegra dev
# Start on all interfaces, port 3000
aegra dev --host 0.0.0.0 --port 3000
# Start with a custom app
aegra dev --app myapp.main:app
# Start with a specific config and env file
aegra dev --config ./aegra.json --env-file ./.env
# Start without automatic database check
aegra dev --no-db-check
```
The server automatically restarts when code changes are detected.
---
### `aegra serve`
Run the production server (no hot reload).
```bash
aegra serve [OPTIONS]
```
**Options:**
| Option | Default | Description |
|--------|---------|-------------|
| `--host HOST` | `0.0.0.0` | Host to bind the server to |
| `--port PORT` | `8000` | Port to bind the server to |
| `--app APP` | `aegra_api.main:app` | Application import path |
| `-c, --config PATH` | | Path to aegra.json config file |
**Examples:**
```bash
# Start with defaults (0.0.0.0:8000)
aegra serve
# Start with a custom config
aegra serve --config ./aegra.json
```
---
### `aegra up`
Start services with Docker Compose.
```bash
aegra up [SERVICES...] [OPTIONS]
```
**Arguments:**
| Argument | Description |
|----------|-------------|
| `SERVICES` | Optional list of specific services to start |
**Options:**
| Option | Default | Description |
|--------|---------|-------------|
| `-f, --file FILE` | | Path to docker-compose.yml file |
| `--build / --no-build` | `--build` | Build images before starting containers (on by default) |
**Examples:**
```bash
# Start all services (builds by default)
aegra up
# Start only postgres
aegra up postgres
# Start without building
aegra up --no-build
# Start with a custom compose file
aegra up -f ./docker-compose.yml
```
---
### `aegra down`
Stop services with Docker Compose.
```bash
aegra down [OPTIONS]
```
**Options:**
| Option | Description |
|--------|-------------|
| `-f, --file FILE` | Path to docker-compose.yml file |
| `-v, --volumes` | Remove named volumes declared in the compose file |
**Examples:**
```bash
# Stop all services
aegra down
# Stop and remove volumes (WARNING: data will be lost)
aegra down -v
# Stop with a custom compose file
aegra down -f ./docker-compose.yml
```
---
## Environment Variables
The CLI respects the following environment variables (typically set via `.env` file):
```bash
# Database
POSTGRES_USER=aegra
POSTGRES_PASSWORD=aegra_secret
POSTGRES_HOST=localhost
POSTGRES_DB=aegra
# Authentication
AUTH_TYPE=noop # Options: noop, custom
# Server (for aegra dev)
HOST=0.0.0.0
PORT=8000
# Configuration
AEGRA_CONFIG=aegra.json
```
## Requirements
- Python 3.11+
- Docker (for `aegra up` and `aegra down` commands)
- PostgreSQL (or use Docker)
## Related Packages
- **aegra-api**: Core API package providing the Agent Protocol server
- **aegra**: Meta-package that installs both aegra-cli and aegra-api
| text/markdown | null | Muhammad Ibrahim <mibrahim37612@gmail.com> | null | null | null | agents, cli, deployment, docker, langgraph | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aegra-api~=0.7.0",
"click>=8.1.7",
"jinja2>=3.1",
"python-dotenv>=1.1.1",
"rich>=13.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:13:59.981867 | aegra_cli-0.7.0.tar.gz | 31,489 | 73/6e/0b36a7269fd7c84c4231634ef59e84dc5bdbda5c15587d7756f37216795e/aegra_cli-0.7.0.tar.gz | source | sdist | null | false | 6b9c6d5a83774c1522d6395e767b2e50 | c2445b1d260e7632999118c2f27e24d86b11751962b1701c19b130e2cf41a00e | 736e0b36a7269fd7c84c4231634ef59e84dc5bdbda5c15587d7756f37216795e | Apache-2.0 | [] | 381 |
2.1 | catboost | 1.2.10 | CatBoost Python Package | CatBoost is a fast, scalable, high performance gradient boosting on decision trees library. Used for ranking, classification, regression and other ML tasks.
| null | CatBoost Developers | null | null | null | Apache License, Version 2.0 | catboost | [
"Development Status :: 5 - Production/Stable",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [
"Linux"
] | https://catboost.ai | null | null | [] | [] | [] | [
"graphviz",
"matplotlib",
"numpy<3.0,>=1.16.0",
"pandas<4.0,>=0.24",
"scipy",
"plotly",
"six",
"traitlets; extra == \"widget\"",
"ipython; extra == \"widget\"",
"ipywidgets<9.0,>=7.0; extra == \"widget\""
] | [] | [] | [] | [
"GitHub, https://github.com/catboost/catboost",
"Bug Tracker, https://github.com/catboost/catboost/issues",
"Documentation, https://catboost.ai/docs/",
"Benchmarks, https://catboost.ai/#benchmark"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:13:29.092923 | catboost-1.2.10.tar.gz | 39,925,863 | e9/0e/09e8fa0858570fda88090bc3f441b69c18ea3d6f4a02fd41aa5426c157bf/catboost-1.2.10.tar.gz | source | sdist | null | false | 7d0be82bd7eb4e067a991402c4752504 | 26ae6d423acaf0e9d8160f2477a990431057ed04522d993c2f42dac62743b4f7 | e90e09e8fa0858570fda88090bc3f441b69c18ea3d6f4a02fd41aa5426c157bf | null | [] | 256,462 |
2.4 | aegra-api | 0.7.0 | Aegra core API - Self-hosted Agent Protocol server | # aegra-api
Aegra API - Self-hosted Agent Protocol server.
Aegra is an open-source, self-hosted alternative to LangGraph Platform. This package provides the core API server that implements the Agent Protocol, allowing you to run AI agents on your own infrastructure without vendor lock-in.
## Features
- **Agent Protocol Compliant**: Works with Agent Chat UI, LangGraph Studio, CopilotKit
- **Drop-in Replacement**: Compatible with the LangGraph SDK
- **Self-Hosted**: Run on your own PostgreSQL database
- **Streaming Support**: Real-time streaming of agent responses
- **Human-in-the-Loop**: Built-in support for human approval workflows
- **Vector Store**: Semantic search capabilities with PostgreSQL
## Installation
```bash
pip install aegra-api
```
## Quick Start
The easiest way to get started is with the [aegra-cli](../aegra-cli/README.md):
```bash
# Install the CLI
pip install aegra-cli
# Initialize a new project (interactive)
aegra init
cd <your-project>
# Configure environment
cp .env.example .env
# Add your OPENAI_API_KEY to .env
# Install dependencies and start developing
uv sync
uv run aegra dev
```
### Manual Setup
If you prefer manual setup:
```bash
# Install dependencies
pip install aegra-api
# Set environment variables
export POSTGRES_USER=aegra
export POSTGRES_PASSWORD=aegra_secret
export POSTGRES_HOST=localhost
export POSTGRES_DB=aegra
# Run migrations
alembic upgrade head
# Start server
uvicorn aegra_api.main:app --reload
```
## Configuration
### aegra.json
Define your agent graphs in `aegra.json`:
```json
{
"graphs": {
"agent": "./graphs/my_agent/graph.py:graph",
"assistant": "./graphs/assistant/graph.py:graph"
},
"http": {
"app": "./custom_routes.py:app"
}
}
```
### Environment Variables
```bash
# Database
POSTGRES_USER=aegra
POSTGRES_PASSWORD=aegra_secret
POSTGRES_HOST=localhost
POSTGRES_DB=aegra
# Authentication
AUTH_TYPE=noop # Options: noop, custom
# Server
HOST=0.0.0.0
PORT=8000
# Configuration
AEGRA_CONFIG=aegra.json
# LLM (for example agents)
OPENAI_API_KEY=sk-...
# Observability (optional)
OTEL_TARGETS=LANGFUSE,PHOENIX
```
## API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/assistants` | POST | Create assistant from graph_id |
| `/assistants` | GET | List user's assistants |
| `/assistants/{id}` | GET | Get assistant details |
| `/threads` | POST | Create conversation thread |
| `/threads/{id}/state` | GET | Get thread state |
| `/threads/{id}/runs` | POST | Execute graph (streaming/background) |
| `/runs/{id}/stream` | POST | Stream run events |
| `/store` | PUT | Save to vector store |
| `/store/search` | POST | Semantic search |
| `/health` | GET | Health check |
## Creating Graphs
Agents are Python modules exporting a compiled `graph` variable:
```python
# graphs/my_agent/graph.py
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class State(TypedDict):
messages: list[str]
def process_node(state: State) -> State:
messages = state.get("messages", [])
messages.append("Processed!")
return {"messages": messages}
# Build the graph
builder = StateGraph(State)
builder.add_node("process", process_node)
builder.add_edge(START, "process")
builder.add_edge("process", END)
# Export as 'graph'
graph = builder.compile()
```
## Architecture
```
+---------------------------------------------------------+
| FastAPI HTTP Layer (Agent Protocol API) |
| - /assistants, /threads, /runs, /store endpoints |
+---------------------------------------------------------+
| Middleware Stack |
| - Auth, CORS, Structured Logging, Correlation ID |
+---------------------------------------------------------+
| Service Layer (Business Logic) |
| - LangGraphService, AssistantService, StreamingService |
+---------------------------------------------------------+
| LangGraph Runtime |
| - Graph execution, state management, tool execution |
+---------------------------------------------------------+
| Database Layer (PostgreSQL) |
| - AsyncPostgresSaver (checkpoints), AsyncPostgresStore |
+---------------------------------------------------------+
```
## Package Structure
```
libs/aegra-api/
├── src/aegra_api/
│ ├── api/ # Agent Protocol endpoints
│ │ ├── assistants.py # /assistants CRUD
│ │ ├── threads.py # /threads and state management
│ │ ├── runs.py # /runs execution and streaming
│ │ └── store.py # /store vector storage
│ ├── services/ # Business logic layer
│ ├── core/ # Infrastructure (database, auth, orm)
│ ├── models/ # Pydantic request/response schemas
│ ├── middleware/ # ASGI middleware
│ ├── observability/ # OpenTelemetry tracing
│ ├── utils/ # Helper functions
│ ├── main.py # FastAPI app entry point
│ ├── config.py # HTTP/store config loading
│ └── settings.py # Environment settings
├── tests/ # Test suite
├── alembic/ # Database migrations
└── pyproject.toml
```
## Related Packages
- **aegra-cli**: Command-line interface for project management
## Documentation
For full documentation, see the [docs/](../../docs/) directory.
| text/markdown | null | Muhammad Ibrahim <mibrahim37612@gmail.com> | null | null | null | agent-protocol, agents, fastapi, langgraph, llm | [
"Development Status :: 4 - Beta",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"alembic>=1.16.4",
"asgi-correlation-id>=4.3.4",
"asyncpg>=0.30.0",
"fastapi>=0.116.1",
"langgraph-checkpoint-postgres>=2.0.23",
"langgraph>=1.0.3",
"openinference-instrumentation-langchain>=0.1.58",
"opentelemetry-api>=1.39.1",
"opentelemetry-exporter-otlp>=1.39.1",
"opentelemetry-sdk>=1.39.1",
... | [] | [] | [] | [
"Homepage, https://github.com/ibbybuilds/aegra",
"Documentation, https://github.com/ibbybuilds/aegra#readme",
"Repository, https://github.com/ibbybuilds/aegra",
"Issues, https://github.com/ibbybuilds/aegra/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:13:14.627520 | aegra_api-0.7.0.tar.gz | 211,400 | 62/f3/587a6e86d1b6df6240400659ffb1d7ee6208ef09825e60d278570860dbae/aegra_api-0.7.0.tar.gz | source | sdist | null | false | 505f3daa17ebd9e7414685a06987d6d9 | 2dda8f2cbb2de361235594d81ddee301d5c60c9fe8c817a8b1e200b4887c18ed | 62f3587a6e86d1b6df6240400659ffb1d7ee6208ef09825e60d278570860dbae | Apache-2.0 | [] | 391 |
2.3 | mh_structlog | 0.0.49 | Some Structlog configuration and wrappers to easily use structlog. | # MH-Structlog
This package is used to setup the python logging system in combination with structlog. It configures both structlog and the standard library logging module, so your code can either use a structlog logger (which is recommended) or keep working with the standard logging library. This way all third-party packages that are producing logs (which use the stdlib logging module) will follow your logging setup and you will always output structured logging.
It is a fairly opinionated setup but has some configuration options to influence the behaviour. The two output log formats are either pretty-printing (for interactive views) or json. It includes optional reporting to Sentry, and can also log to a file.
## Usage
This library should behave mostly as a drop-in import instead of the logging library import.
So instead of
```python
import logging
logger = logging.getLogger(__name__)
logger.info('hey')
```
you can do
```python
import mh_structlog as logging
logging.setup() # necessary once at program startup, see readme further below
logger = logging.getLogger(__name__)
logger.info('hey')
```
One big advantage of using the structlog logger over de stdlib logging one, is that you can pass arbitrary keyword arguments to our loggers when producing logs. E.g.
```python
import mh_structlog as logging
logger = logging.getLogger(__name__)
logger.info('some message', hey='ho', a_list=[1,2,3])
```
These extra key-value pairs will be included in the produced logs; either pretty-printed to the console or as data in the json entries.
## Configuration via `setup()`
To configure your logging, call the `setup` function, which should be called once as early as possible in your program execution. This function configures all loggers.
```python
import mh_structlog as logging
logging.setup()
```
This will work out of the box with sane defaults: it logs to stdout in a pretty colored output when running in an interactive terminal, else it defaults to producing json output. See the next section for information on the arguments to this method.
### Configuration options
For a setup which logs everything to the console in a pretty (colored) output, simply do:
```python
from mh_structlog import *
setup(
log_format='console',
)
getLogger().info('hey')
```
To log as json:
```python
from mh_structlog import *
setup(
log_format='json',
)
getLogger().info('hey')
```
To filter everything out up to a certain level:
```python
from mh_structlog import *
setup(
log_format='console',
global_filter_level=WARNING,
)
getLogger().info('hey') # this does not get printed
getLogger().error('hey') # this does get printed
```
To write logs to a file additionally (next to stdout):
```python
from mh_structlog import *
setup(
log_format='console',
log_file='myfile.log',
)
getLogger().info('hey')
```
To silence specific named loggers specifically (instead of setting the log level globally, it can be done per named logger):
```python
from mh_structlog import *
setup(
log_format='console',
logging_configs=[
filter_named_logger('some_named_logger', WARNING),
],
)
getLogger('some_named_logger').info('hey') # does not get logged
getLogger('some_named_logger').warning('hey') # does get logged
getLogger('some_other_named_logger').info('hey') # does get logged
getLogger('some_other_named_logger').warning('hey') # does get logged
```
To include the source information about where a log was produced:
```python
from mh_structlog import *
setup(
include_source_location=True
)
getLogger().info('hey')
```
To choose how many frames you want to include in stacktraces on logging exceptions:
```python
from mh_structlog import *
setup(
log_format='json',
max_frames=3,
)
try:
5 / 0
except Exception as e:
getLogger().exception(e)
```
To enable Sentry integration, pass a dict with a config according to the arguments which [structlog-sentry](https://github.com/kiwicom/structlog-sentry?tab=readme-ov-file#usage) allows to the setup function:
```python
from mh_structlog import *
import sentry_sdk
config = {'dsn': '1234'}
sentry_sdk.init(dsn=config['dsn'])
setup(
sentry_config={'event_level': WARNING} # pass everything starting from WARNING level to Sentry
)
```
## Development
Install the environment:
```shell
uv sync --python-preference only-managed --frozen --all-extras --all-groups
```
Run the unittests:
```shell
uv run pytest -s --pdb --pdbcls=IPython.terminal.debugger:Pdb
```
| text/markdown | Mathieu Hinderyckx | Mathieu Hinderyckx <mathieu.hinderyckx@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"orjson>=3.11.5",
"rich>=14.2.0",
"structlog>=25.5.0",
"aws-lambda-powertools>=3.23.0; extra == \"aws\"",
"django>=5.0; extra == \"django\"",
"structlog-sentry>=2.2.1; extra == \"sentry\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T16:13:12.192757 | mh_structlog-0.0.49-py3-none-any.whl | 11,976 | 2d/2f/8a7f600a6c13a8b6540292898b7ff669305baec1018314aaa5071a34ad15/mh_structlog-0.0.49-py3-none-any.whl | py3 | bdist_wheel | null | false | bd88be6bfe3802d67e6674cf3c88422c | cd2bb04ff4d59c3186faf84a4da4feb939cf11c10875af918fbbb8ef2da7e565 | 2d2f8a7f600a6c13a8b6540292898b7ff669305baec1018314aaa5071a34ad15 | null | [] | 0 |
2.4 | surfmon | 0.5.0 | Monitor Windsurf and Windsurf Next resource usage and diagnose problems | # Surfmon
<p align="center">
<img src="docs/screenshots/header.gif" alt="surfmon header" width="800">
</p>
[](https://github.com/detailobsessed/surfmon/actions?query=workflow%3Aci)
[](https://github.com/detailobsessed/surfmon/actions?query=workflow%3Arelease)
[](https://detailobsessed.github.io/surfmon/)
[](https://pypi.org/project/surfmon/)
[](https://www.python.org/downloads/)
[](https://github.com/detailobsessed/surfmon/actions?query=workflow%3Aci)
**Surf**ace **Mon**itor for Windsurf IDE — a performance monitoring and diagnostics tool for [Windsurf](https://codeium.com/windsurf) (Stable, Next, and Insiders).
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Why Use Surfmon?](#why-use-surfmon)
- [Commands](#commands)
- [check — Quick Performance Snapshot](#check--quick-performance-snapshot)
- [watch — Live Monitoring Dashboard](#watch--live-monitoring-dashboard)
- [analyze — Historical Trend Analysis](#analyze--historical-trend-analysis)
- [compare — Before/After Diff](#compare--beforeafter-diff)
- [cleanup — Remove Orphaned Processes](#cleanup--remove-orphaned-processes)
- [prune — Deduplicate Watch Reports](#prune--deduplicate-watch-reports)
- [What It Monitors](#what-it-monitors)
- [Target Selection](#target-selection)
- [Exit Codes](#exit-codes)
- [Common Issues](#common-issues)
- [Development](#development)
- [Package Structure](#package-structure)
- [Running Tests](#running-tests)
- [Dependencies](#dependencies)
- [Requirements](#requirements)
- [Creating Screenshots](#creating-screenshots)
## Installation
```bash
pip install surfmon
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv tool install surfmon
```
Or run directly without installing:
```bash
uvx surfmon check -t stable # Using uvx
pipx run surfmon check -t stable # Using pipx
```
For development:
```bash
git clone https://github.com/detailobsessed/surfmon.git
cd surfmon
uv sync
```
## Quick Start
```bash
# One-shot health check (--target is required)
surfmon check -t stable
# Verbose output with all process details
surfmon check -t stable -v
# Save reports (auto-named with timestamp, enables verbose output)
surfmon check -t next -s
# Target Windsurf Insiders
surfmon check -t insiders
```

## Why Use Surfmon?
- 🔍 **Debug Performance Issues** — Identify memory leaks, CPU spikes, and resource bottlenecks
- 📊 **Monitor Over Time** — Track resource usage trends with watch sessions and historical analysis
- 🧹 **Clean Up Resources** — Remove orphaned processes and duplicate reports
- 🔧 **Troubleshoot Crashes** — Detect extension host crashes, language server issues, and PTY leaks
- 📈 **Visualize Trends** — Generate matplotlib plots showing resource usage over time
## Commands
### `check` — Quick Performance Snapshot
The main command. Shows system resources, Windsurf memory/CPU, active workspaces, top processes, and language servers in consistent fixed-width tables.
```bash
surfmon check -t stable # Basic check
surfmon check -t stable -v # Verbose (all processes)
surfmon check -t next -s # Auto-save JSON + Markdown reports (enables verbose)
surfmon check -t stable --json report.json # Save JSON to specific path
surfmon check -t stable --md report.md # Save Markdown to specific path
surfmon check -t stable --json r.json --md r.md # Save both formats with custom names
```
### `watch` — Live Monitoring Dashboard
Continuously monitors Windsurf with a live-updating terminal dashboard. Saves periodic JSON snapshots for historical analysis.
```bash
surfmon watch -t stable # Default: 5s interval, save every 5min
surfmon watch -t next -i 10 -s 600 # Check every 10s, save every 10min
surfmon watch -t insiders -i 10 -n 720 # 720 checks = 2 hours
surfmon watch -t stable -o ~/reports # Custom output directory
```

### `analyze` — Historical Trend Analysis
Analyzes JSON reports from `watch` sessions (or any directory containing JSON reports) to detect memory leaks, process growth, and performance degradation. Optionally generates a 9-panel matplotlib visualization.
```bash
surfmon analyze reports/watch/20260204-134518/
surfmon analyze reports/watch/20260204-134518/ --plot
surfmon analyze reports/watch/20260204-134518/ --plot --output analysis.png
```
**Terminal Output:**

**Generated Matplotlib Visualization:**

### `compare` — Before/After Diff
```bash
surfmon check -t stable --json before.json
# ... make changes ...
surfmon check -t stable --json after.json
surfmon compare before.json after.json
```

### `cleanup` — Remove Orphaned Processes
Detects and kills orphaned `chrome_crashpad_handler` processes left behind after Windsurf exits. Windsurf must be closed for this command to work.
```bash
surfmon cleanup -t stable # Interactive (asks for confirmation)
surfmon cleanup -t next --force # No confirmation
```
### `prune` — Deduplicate Watch Reports
Removes duplicate/identical JSON reports that accumulate during `watch` sessions when nothing changes.
```bash
surfmon prune reports/watch/20260204-134518/ --dry-run
surfmon prune reports/watch/20260204-134518/
```
## What It Monitors
**System** — Total/available memory, memory %, swap, CPU cores
**Windsurf Processes** — Process count, total memory & CPU, top 10 by memory, thread counts
**Language Servers** — Detects and tracks basedpyright, JDT.LS, Codeium language servers, YAML/JSON servers
**MCP Servers** — Lists enabled MCP servers from Codeium config
**Workspaces** — Active workspace paths and load times
**PTY Usage** — Windsurf PTY allocation vs system limits
**Issues** — Orphaned crash handlers, extension host crashes, update service timeouts, telemetry failures, `logs` directory in extensions folder
## Target Selection
Surfmon requires you to specify which Windsurf installation to monitor. Use `--target` (`-t`) with one of `stable`, `next`, or `insiders`:
```bash
surfmon check -t stable # Windsurf Stable
surfmon check -t next # Windsurf Next
surfmon check -t insiders # Windsurf Insiders
```
Alternatively, set `SURFMON_TARGET` in your environment to avoid passing `-t` every time:
```bash
export SURFMON_TARGET=insiders
surfmon check
```
The `--target` flag is required for `check`, `watch`, and `cleanup`. Commands that operate on saved files (`compare`, `prune`, `analyze`) do not require it.
## Exit Codes
- `0` — No issues detected
- `1` — Issues detected (see output)
- `130` — Interrupted (Ctrl+C)
## Common Issues
| Issue | Cause | Fix |
| ----- | ----- | --- |
| Orphaned crash handlers | Crash reporters not cleaned up on exit | `surfmon cleanup -t stable --force` |
| `logs` directory error | Marimo extension creates logs in wrong place | Move `~/.windsurf/extensions/logs` |
| Update service timeouts | DNS or firewall blocking update checks | Check DNS/firewall settings |
| High memory usage | Too many language servers or extensions | Disable unused extensions |
## Development
### Package Structure
```
src/surfmon/
__init__.py # Version
cli.py # Typer CLI — check, watch, compare, cleanup, prune, analyze
config.py # Target detection, paths, environment config
monitor.py # Core data collection — processes, language servers, MCP, PTYs
output.py # Rich terminal display and Markdown export
compare.py # Report comparison with colored diffs
tests/
conftest.py # Shared fixtures
test_bugfixes.py # Regression tests
test_cli.py # CLI command tests
test_compare.py # Report comparison tests
test_config.py # Configuration and target detection tests
test_monitor.py # Core monitoring logic tests
test_output.py # Display and formatting tests
```
### Running Tests
```bash
poe test # Run tests
poe test-cov # Run with coverage
poe lint # Ruff check
poe typecheck # ty check
```
### Dependencies
- **[psutil](https://github.com/giampaolo/psutil)** — Cross-platform process and system monitoring
- **[typer](https://github.com/fastapi/typer)** — CLI framework
- **[rich](https://github.com/Textualize/rich)** — Terminal output with tables and colors
- **[python-decouple](https://github.com/HBNetwork/python-decouple)** — Environment configuration
- **[matplotlib](https://matplotlib.org/)** — Visualization for `analyze` plots
### Requirements
- Python 3.14+
- macOS (tested), Linux (untested), Windows (untested) though it should work
- Windsurf IDE installed
### Creating Screenshots
Screenshots in this README were created using:
- **Static images** ([termshot](https://github.com/homeport/termshot)) - Captures terminal output as PNG
- **Animated GIF** ([vhs](https://github.com/charmbracelet/vhs)) - Records terminal sessions as GIF
To recreate the watch GIF:
```bash
brew install vhs gifsicle
# Create tape file
cat > watch-demo.tape << 'EOF'
Output docs/screenshots/watch.gif
Set FontSize 13
Set Width 900
Set Height 400
Set Theme "Catppuccin Mocha"
Set BorderRadius 10
Set WindowBar Colorful
Set WindowBarSize 30
Type "uvx surfmon watch --interval 2 --max 15"
Enter
Sleep 32s
EOF
# Generate and optimize
vhs watch-demo.tape
gifsicle -O3 --colors 256 docs/screenshots/watch.gif -o docs/screenshots/watch.gif
```
| text/markdown | Ismar Iljazovic | Ismar Iljazovic <ismar@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: ... | [] | null | null | >=3.14 | [] | [] | [] | [
"typer>=0.23",
"psutil>=7.2",
"rich>=14.3",
"python-decouple>=3.8",
"matplotlib>=3.10"
] | [] | [] | [] | [
"Homepage, https://detailobsessed.github.io/surfmon",
"Documentation, https://detailobsessed.github.io/surfmon",
"Changelog, https://detailobsessed.github.io/surfmon/changelog",
"Repository, https://github.com/detailobsessed/surfmon",
"Issues, https://github.com/detailobsessed/surfmon/issues",
"Discussion... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T16:11:37.320062 | surfmon-0.5.0.tar.gz | 33,284 | 61/d6/9f0fd0f31820dde4d38de48af347cf95ded80fbb418d80ba7c1dabf2cd9e/surfmon-0.5.0.tar.gz | source | sdist | null | false | 15bb139fa89143c1dce54491893ca14f | 2ec595b8c61c39bb2c3a74f9a9cfdff074b6133f1cebae8ed1e420b0ce75c3a7 | 61d69f0fd0f31820dde4d38de48af347cf95ded80fbb418d80ba7c1dabf2cd9e | MIT | [
"LICENSE"
] | 232 |
2.1 | questdb-rest | 5.0.2 | QuestDB REST API Python client library and CLI | # QuestDB REST API Python Client, CLI and REPL Shell
> QuestDB comes with a very nice web console, but there's no CLI, so I wrote one (can't live without the terminal!).
The REST API is very well defined: https://questdb.com/docs/reference/api/rest/, only 3 documented endpoints. One undocumented endpoints I also implemented are `/chk` to check for if a table exists, I found the route when trying to ingest CSV via the web console.
## A short tour
```
# << a short tour of questdb-cli >>
# querying the public demo instance, print the data in psql table format
$ qdb-cli --port 443 --host https://demo.questdb.io exec --psql -q 'trades limit 20'
+----------+--------+----------+------------+-----------------------------+
| symbol | side | price | amount | timestamp |
|----------+--------+----------+------------+-----------------------------|
| ETH-USD | sell | 2615.54 | 0.00044 | 2022-03-08T18:03:57.609765Z |
| BTC-USD | sell | 39270 | 0.001 | 2022-03-08T18:03:57.710419Z |
| ETH-USD | buy | 2615.4 | 0.002 | 2022-03-08T18:03:57.764098Z |
| ETH-USD | buy | 2615.4 | 0.001 | 2022-03-08T18:03:57.764098Z |
| ETH-USD | buy | 2615.4 | 0.00042698 | 2022-03-08T18:03:57.764098Z |
| ETH-USD | buy | 2615.36 | 0.025936 | 2022-03-08T18:03:58.194582Z |
| ETH-USD | buy | 2615.37 | 0.0350084 | 2022-03-08T18:03:58.194582Z |
| ETH-USD | buy | 2615.46 | 0.172602 | 2022-03-08T18:03:58.194582Z |
| ETH-USD | buy | 2615.47 | 0.14811 | 2022-03-08T18:03:58.194582Z |
| BTC-USD | sell | 39265.3 | 0.000127 | 2022-03-08T18:03:58.357448Z |
| BTC-USD | sell | 39265.3 | 0.000245 | 2022-03-08T18:03:58.357448Z |
| BTC-USD | sell | 39265.3 | 7.3e-05 | 2022-03-08T18:03:58.357448Z |
| BTC-USD | sell | 39263.3 | 0.00392897 | 2022-03-08T18:03:58.357448Z |
| ETH-USD | buy | 2615.35 | 0.0224587 | 2022-03-08T18:03:58.612275Z |
| ETH-USD | buy | 2615.36 | 0.0324461 | 2022-03-08T18:03:58.612275Z |
| BTC-USD | sell | 39265.3 | 6.847e-05 | 2022-03-08T18:03:58.660121Z |
| BTC-USD | sell | 39262.4 | 0.00046562 | 2022-03-08T18:03:58.660121Z |
| ETH-USD | buy | 2615.62 | 0.00044 | 2022-03-08T18:03:58.682070Z |
| ETH-USD | buy | 2615.62 | 0.00044 | 2022-03-08T18:03:58.682070Z |
| ETH-USD | buy | 2615.62 | 0.00044 | 2022-03-08T18:03:58.682070Z |
+----------+--------+----------+------------+-----------------------------+
# export the whole table (180 MB, be careful)
$ qdb-cli --port 443 --host https://demo.questdb.io exp 'trips' > trips.csv
# import the copy in your local instance
# let's configure the CLI to use your local instance first
$ qdb-cli gen-config
# edit the config file to set your local instance
# lightning fast local import!
# the imp command can infer table name using different rules, install it and run --help to see
$ qdb-cli imp --name trips trips.csv --partitionBy WEEK --timestamp pickup_datetime
# you can also pipe data directly from stdin using the qdb-imp-from-stdin helper script
$ cat trips.csv | qdb-imp-from-stdin --name trips --partitionBy WEEK --timestamp pickup_datetime
+-----------------------------------------------------------------------------------------------------------------+
| Location: | trips | Pattern | Locale | Errors |
| Partition by | WEEK | | | |
| Timestamp | pickup_datetime | | | |
+-----------------------------------------------------------------------------------------------------------------+
| Rows handled | 1000000 | | | |
| Rows imported | 1000000 | | | |
+-----------------------------------------------------------------------------------------------------------------+
| 0 | cab_type | VARCHAR | 0 |
| 1 | vendor_id | VARCHAR | 0 |
| 2 | pickup_datetime | TIMESTAMP | 0 |
| 3 | dropoff_datetime | TIMESTAMP | 0 |
| 4 | rate_code_id | VARCHAR | 0 |
| 5 | pickup_latitude | DOUBLE | 0 |
| 6 | pickup_longitude | DOUBLE | 0 |
| 7 | dropoff_latitude | DOUBLE | 0 |
| 8 | dropoff_longitude | DOUBLE | 0 |
| 9 | passenger_count | INT | 0 |
| 10 | trip_distance | DOUBLE | 0 |
| 11 | fare_amount | DOUBLE | 0 |
| 12 | extra | DOUBLE | 0 |
| 13 | mta_tax | DOUBLE | 0 |
| 14 | tip_amount | DOUBLE | 0 |
| 15 | tolls_amount | DOUBLE | 0 |
| 16 | ehail_fee | DOUBLE | 0 |
| 17 | improvement_surcharge | DOUBLE | 0 |
| 18 | congestion_surcharge | DOUBLE | 0 |
| 19 | total_amount | DOUBLE | 0 |
| 20 | payment_type | VARCHAR | 0 |
| 21 | trip_type | VARCHAR | 0 |
| 22 | pickup_location_id | INT | 0 |
| 23 | dropoff_location_id | INT | 0 |
+-----------------------------------------------------------------------------------------------------------------+
# check schema to confirm the import
$ qdb-cli schema trips
CREATE TABLE 'trips' (
cab_type VARCHAR,
vendor_id VARCHAR,
pickup_datetime TIMESTAMP,
dropoff_datetime TIMESTAMP,
rate_code_id VARCHAR,
pickup_latitude DOUBLE,
pickup_longitude DOUBLE,
dropoff_latitude DOUBLE,
dropoff_longitude DOUBLE,
passenger_count INT,
trip_distance DOUBLE,
fare_amount DOUBLE,
extra DOUBLE,
mta_tax DOUBLE,
tip_amount DOUBLE,
tolls_amount DOUBLE,
ehail_fee DOUBLE,
improvement_surcharge DOUBLE,
congestion_surcharge DOUBLE,
total_amount DOUBLE,
payment_type VARCHAR,
trip_type VARCHAR,
pickup_location_id INT,
dropoff_location_id INT
) timestamp(pickup_datetime) PARTITION BY WEEK WAL
WITH maxUncommittedRows=500000, o3MaxLag=600000000us;
# rename commands for your convenience (run something like `RENAME TABLE 'test.csv' TO 'myTable'`; under the hood)
$ qdb-cli rename trips taxi_trips_feb_2018
{
"status": "OK",
"message": "Table 'trips' renamed to 'taxi_trips_feb_2018'"
}
```
## Table of Contents
- [QuestDB REST API Python Client, CLI and REPL Shell](#questdb-rest-api-python-client-cli-and-repl-shell)
- [A short tour](#a-short-tour)
- [Table of Contents](#table-of-contents)
- [How's this different from the official `py-questdb-client` and `py-questdb-query` packages?](#hows-this-different-from-the-official-py-questdb-client-and-py-questdb-query-packages)
- [Features beyond what the vanilla REST API provides](#features-beyond-what-the-vanilla-rest-api-provides)
- [Docs, screenshots and video demos](#docs-screenshots-and-video-demos)
- [`imp` programmatically derives table name from filename when uploading CSVs](#imp-programmatically-derives-table-name-from-filename-when-uploading-csvs)
- [`exec` supports multiple queries in one go](#exec-supports-multiple-queries-in-one-go)
- [Query output parsing and formatting](#query-output-parsing-and-formatting)
- [`schema`](#schema)
- [`chk`](#chk)
- [Usage](#usage)
- [Global options to fine tune log levels](#global-options-to-fine-tune-log-levels)
- [Configuring CLI - DB connection options](#configuring-cli---db-connection-options)
- [Accompanying Bash Scripts](#accompanying-bash-scripts)
- [Subcommands that run complex workflows](#subcommands-that-run-complex-workflows)
- [`create-or-replace-table-from-query` or `cor`](#create-or-replace-table-from-query-or-cor)
- [`rename` with table exists checks](#rename-with-table-exists-checks)
- [`dedupe` check, enable, disable](#dedupe-check-enable-disable)
- [Examples](#examples)
- [Advanced Scripting](#advanced-scripting)
- [Drop all backup tables with UUID4 in the name](#drop-all-backup-tables-with-uuid4-in-the-name)
- [Piping query or table names from stdin](#piping-query-or-table-names-from-stdin)
- [Change partitioning strategy to YEAR for existing table](#change-partitioning-strategy-to-year-for-existing-table)
- [Batch change partitioning strategy and enable deduplication with `xargs`](#batch-change-partitioning-strategy-and-enable-deduplication-with-xargs)
- [PyPI packages and installation](#pypi-packages-and-installation)
- [The Python API](#the-python-api)
- [Screenshots](#screenshots)
- [Code Stats](#code-stats)
- [LOC by file](#loc-by-file)
- [Token count by function](#token-count-by-function)
- [Function LOC Sunburst Chart](#function-loc-sunburst-chart)
## How's this different from the official `py-questdb-client` and `py-questdb-query` packages?
- `py-questdb-client`: Focuses on ingestion from Python data structures and / or DataFrames, I don't think it does anything else
- `py-questdb-query`: Cython based library to get numpy arrays or dataframes from the REST API
- This python client: Gets raw JSON from REST API, doesn't depend on numpy or pandas, making the CLI lightweight and fast to start
## Features beyond what the vanilla REST API provides
### Docs, screenshots and video demos
Originally I just wrote the CLI (`cli.py`), then it becomes really complicated that I had to split the code and put the REST API interfacing part into a module (`__init__.py`).
- Write-up and demo: https://teddysc.me/blog/questdb-rest
- 6 min demo: https://www.youtube.com/watch?v=l_1HBbAHeBM
- https://teddysc.me/blog/rlwrap-questdb-shell
- GitHub: https://github.com/tddschn/questdb-rest
- PyPI: https://pypi.org/project/questdb-rest/
- QuestDB-Shell: https://github.com/tddschn/questdb-shell
### `imp` programmatically derives table name from filename when uploading CSVs
`questdb-cli imp` options that are not part of the REST API spec:
```
--name-func {stem,add_prefix}
Function to generate table name from filename (ignored if --name set). Available: stem, add_prefix (default: None)
--name-func-prefix NAME_FUNC_PREFIX
Prefix string for 'add_prefix' name function. (default: )
-D, --dash-to-underscore
If table name is derived from filename (i.e., --name not set), convert dashes (-) to underscores (_). Compatible with --name-func. (default: False)
```
Global flag `--stop-on-error` controls if it should stop talking to the API on first CSV import error or not.
### `exec` supports multiple queries in one go
The API and web console will only take your last query if you attempt to give it more than 1, while this project uses `sqlparser` to split the queries and send them one by one for you for convenience. Global flag `--stop-on-error` controls if it should stop talking to the API on first error or not. Since the API doesn't always return a status code other than 200 on error, I dived in to the Dev Tools to see what exactly tells me if a request is successful or not.
The queries can be piped in from stdin, or read from a file, or you can supply it from the command line.
### Query output parsing and formatting
The `/exec` endpoints only speaks JSON, this tool gives you options to format the output table to as markdown with `--markdown` or a psql-style ASCII table with `--psql` (default is JSON).
For CSV output, use `questdb-cli exp` instead.
### `schema`
Convenience command to fetch schema for 1 or more tables. Hard to do without reading good chunk of the QuestDB doc. The web console supports copying schemas from the tables list.
```
qdb-cli schema equities_1d
CREATE TABLE 'equities_1d' (
timestamp TIMESTAMP,
open DOUBLE,
high DOUBLE,
low DOUBLE,
close DOUBLE,
volume LONG,
ticker SYMBOL CAPACITY 1024 CACHE
) timestamp(timestamp) PARTITION BY YEAR WAL
WITH maxUncommittedRows=500000, o3MaxLag=600000000us
DEDUP UPSERT KEYS(timestamp,ticker);
```
### `chk`
The `chk` command to talk to `/chk` endpoint, which is used by the web console's CSV upload UI.
## Usage
### Global options to fine tune log levels
```
qdb-cli -h
usage: questdb-cli [-h] [-H HOST] [--port PORT] [-u USER] [-p PASSWORD]
[--timeout TIMEOUT] [--scheme {http,https}] [-i | -D] [-R]
[--config CONFIG] [--stop-on-error | --no-stop-on-error]
{imp,exec,exp,chk,schema,rename,create-or-replace-table-from-query,cor,drop,drop-table,dedupe,gen-config,mcp}
...
QuestDB REST API Command Line Interface.
Logs to stderr, outputs data to stdout.
Uses QuestDB REST API via questdb_rest library.
positional arguments:
{imp,exec,exp,chk,schema,rename,create-or-replace-table-from-query,cor,drop,drop-table,dedupe,gen-config,mcp}
Available sub-commands
imp Import data from file(s) using /imp.
exec Execute SQL statement(s) using /exec (returns JSON).
Reads SQL from --query, --file, --get-query-from-python-module, or stdin.
exp Export data using /exp (returns CSV to stdout or file).
chk Check if a table exists using /chk (returns JSON). Exit code 0 if exists, 3 if not.
schema Fetch CREATE TABLE statement(s) for one or more tables.
rename Rename a table using RENAME TABLE. Backs up target name by default if it exists.
create-or-replace-table-from-query (cor)
Atomically replace a table with the result of a query, with optional backup.
drop (drop-table) Drop one or more tables using DROP TABLE.
dedupe Enable, disable, or check data deduplication settings for a WAL table.
gen-config Generate a default config file at ~/.questdb-rest/config.json
mcp Start MCP server for LLM integration (e.g., Claude). Requires questdb-rest[mcp].
options:
-h, --help Show this help message and exit.
-H HOST, --host HOST QuestDB server host.
--port PORT QuestDB REST API port.
-u USER, --user USER Username for basic authentication.
-p PASSWORD, --password PASSWORD
Password for basic authentication. If -u is given but -p is not, will prompt securely unless password is in config.
--timeout TIMEOUT Request timeout in seconds.
--scheme {http,https}
Connection scheme (http or https).
-i, --info Use info level logging (default is WARNING).
-D, --debug Enable debug level logging to stderr.
-R, --dry-run Simulate API calls without sending them. Logs intended actions.
--config CONFIG Path to a specific config JSON file (overrides default ~/.questdb-rest/config.json).
--stop-on-error, --no-stop-on-error
Stop execution immediately if any item (file/statement/table) fails (where applicable).
This CLI can also be used as a Python library.
```
### Configuring CLI - DB connection options
Run `qdb-cli gen-config` and edit the generated config file to specify your DB's port, host, and auth info.
All options are optional and will use the default `localhost:9000` if not specified.
### Accompanying Bash Scripts
```plain
# check next section too
$ qdb-drop-tables-by-regex
Usage: ~/.local/bin/qdb-drop-tables-by-regex [-n] [-c] -p PATTERN
Options:
-p PATTERN Regex pattern to match table names (required)
-n Dry run; show what would be dropped but do not execute
-c Confirm each drop interactively
-h Show this help message
```
## Subcommands that run complex workflows
### `create-or-replace-table-from-query` or `cor`
https://stackoverflow.com/a/79601299/11133602
QuestDB doesn't have `DELETE FROM` to delete rows, you can only create a new table and drop the old one. This command does that for you, and optionally backs up the old table.
It does complex checks to ensure the queries are correctly constructed and are run in the correct order.
One of the query that will be executed is `CREATE TABLE IF NOT EXISTS <table> AS <query>`.
```plain
qdb-cli cor --help
usage: questdb-cli create-or-replace-table-from-query [-h] [-q QUERY | -f FILE | -G GET_QUERY_FROM_PYTHON_MODULE] [-B BACKUP_TABLE_NAME | --no-backup-original-table] [-P {NONE,YEAR,MONTH,DAY,HOUR,WEEK}] [-t TIMESTAMP]
[--statement-timeout STATEMENT_TIMEOUT]
table
positional arguments:
table Name of the target table to create or replace.
options:
-h, --help Show this help message and exit.
-q QUERY, --query QUERY
SQL query string defining the new table content.
-f FILE, --file FILE Path to file containing the SQL query.
-G GET_QUERY_FROM_PYTHON_MODULE, --get-query-from-python-module GET_QUERY_FROM_PYTHON_MODULE
Get query from a Python module (format 'module_path:variable_name').
--statement-timeout STATEMENT_TIMEOUT
Query timeout in milliseconds for underlying operations.
Backup Options (if target table exists):
-B BACKUP_TABLE_NAME, --backup-table-name BACKUP_TABLE_NAME, --rename-original-table-to BACKUP_TABLE_NAME
Specify a name for the backup table (if target exists). Default: generated name.
--no-backup-original-table
DROP the original table directly instead of renaming it to a backup.
New Table Creation Options:
-P {NONE,YEAR,MONTH,DAY,HOUR,WEEK}, --partitionBy {NONE,YEAR,MONTH,DAY,HOUR,WEEK}
Partitioning strategy for the new table.
-t TIMESTAMP, --timestamp TIMESTAMP
Designated timestamp column name for the new table.
-k COLUMN [COLUMN ...], --upsert-keys COLUMN [COLUMN ...]
List of column names to use as UPSERT KEYS when creating the new table. Must include the designated timestamp (if specified via -t). Requires WAL.
```
```plain
# oh snap! I inserted wrong PLTR data to the equities_1 table, the timestamp col is messed up
# let's fix it by creating a new table with the correct data
qdb-cli --info cor equities_1 -q "equities_1 where ticker != 'PLTR'" -t timestamp -P WEEK
INFO: Log level set to INFO
INFO: Connecting to http://localhost:9000
INFO: Starting create-or-replace operation for table 'equities_1' using temp table '__qdb_cli_temp_equities_1_26b1ac1a_5853_4215_b9b0_aa9b872c1f7b'...
WARNING: Input query from query string does not start with SELECT. Assuming it's valid QuestDB shorthand.
WARNING: Query: equities_1 where ticker != 'PLTR'
INFO: Using query from query string for table creation.
INFO: Creating temporary table '__qdb_cli_temp_equities_1_26b1ac1a_5853_4215_b9b0_aa9b872c1f7b' from query...
INFO: Successfully created temporary table '__qdb_cli_temp_equities_1_26b1ac1a_5853_4215_b9b0_aa9b872c1f7b'.
INFO: Checking if target table 'equities_1' exists...
INFO: Generated backup name: 'qdb_cli_backup_equities_1_bc345051_9157_4e3c_83ec_70e8430a3f64'
INFO: Checking if backup table 'qdb_cli_backup_equities_1_bc345051_9157_4e3c_83ec_70e8430a3f64' exists...
INFO: Backup table 'qdb_cli_backup_equities_1_bc345051_9157_4e3c_83ec_70e8430a3f64' does not exist. Proceeding with rename.
INFO: Renaming original table 'equities_1' to backup table 'qdb_cli_backup_equities_1_bc345051_9157_4e3c_83ec_70e8430a3f64'...
INFO: Successfully renamed 'equities_1' to 'qdb_cli_backup_equities_1_bc345051_9157_4e3c_83ec_70e8430a3f64'.
INFO: Renaming temporary table '__qdb_cli_temp_equities_1_26b1ac1a_5853_4215_b9b0_aa9b872c1f7b' to target table 'equities_1'...
INFO: Successfully renamed temporary table '__qdb_cli_temp_equities_1_26b1ac1a_5853_4215_b9b0_aa9b872c1f7b' to 'equities_1'.
{
"status": "OK",
"message": "Successfully created/replaced table 'equities_1'. Original table backed up as 'qdb_cli_backup_equities_1_bc345051_9157_4e3c_83ec_70e8430a3f64'.",
"target_table": "equities_1",
"backup_table": "qdb_cli_backup_equities_1_bc345051_9157_4e3c_83ec_70e8430a3f64",
"original_dropped_no_backup": false
}
```
### `rename` with table exists checks
```plain
qdb-cli rename --help
usage: questdb-cli rename [-h] [--no-backup-if-new-table-exists] [--statement-timeout STATEMENT_TIMEOUT] old_table_name new_table_name
positional arguments:
old_table_name Current name of the table.
new_table_name New name for the table.
options:
-h, --help Show this help message and exit.
--no-backup-if-new-table-exists
If the new table name already exists, do not back it up first. Rename might fail. (default: False)
--statement-timeout STATEMENT_TIMEOUT
Query timeout in milliseconds (per RENAME statement). (default: None)
```
Example:
```plain
qdb chk trades2
{
"tableName": "trades2",
"status": "Exists"
}
❯ qdb chk trades3
{
"tableName": "trades3",
"status": "Exists"
}
❯ qdb rename trades2 trades3
WARNING: Target table name 'trades3' already exists.
{
"status": "OK",
"message": "Table 'trades2' successfully renamed to 'trades3'. Existing table at 'trades3' was backed up as 'qdb_cli_backup_trades3_f652d5ac_b9dd_4561_a835_eae947866e4f'.",
"old_name": "trades2",
"new_name": "trades3",
"backup_of_new_name": "qdb_cli_backup_trades3_f652d5ac_b9dd_4561_a835_eae947866e4f"
}
# ok let's drop it now
qdb drop qdb_cli_backup_trades3_f652d5ac_b9dd_4561_a835_eae947866e4f
{
"status": "OK",
"table_dropped": "qdb_cli_backup_trades3_f652d5ac_b9dd_4561_a835_eae947866e4f",
"message": "Table 'qdb_cli_backup_trades3_f652d5ac_b9dd_4561_a835_eae947866e4f' dropped successfully.",
"ddl_response": "OK"
}
qdb chk qdb_cli_backup_trades3_f652d5ac_b9dd_4561_a835_eae947866e4f
{
"tableName": "qdb_cli_backup_trades3_f652d5ac_b9dd_4561_a835_eae947866e4f",
"status": "Does not exist"
}
```
### `dedupe` check, enable, disable
Usage:
Default is `--check`.
This command parses the `CREATE TABLE` statement to get the `UPSERT KEYS` and `DESIGNATED TIMESTAMP` columns for you.
```plain
❯ qdb-cli dedupe trades --help
usage: questdb-cli dedupe [-h] [--enable | --disable | --check] [-k COLUMN [COLUMN ...]] [--statement-timeout STATEMENT_TIMEOUT] table_name
positional arguments:
table_name Name of the target WAL table.
options:
-h, --help Show this help message and exit.
--enable Enable deduplication. Requires --upsert-keys. (default: False)
--disable Disable deduplication. (default: False)
--check Check current deduplication status and keys (default action). (default: False)
-k COLUMN [COLUMN ...], --upsert-keys COLUMN [COLUMN ...]
List of column names to use as UPSERT KEYS when enabling. Must include the designated timestamp. (default: None)
--statement-timeout STATEMENT_TIMEOUT
Query timeout in milliseconds for the ALTER TABLE statement. (default: None)
```
Example:
```plain
# trades table is the same as the one in the demo instance
qdb-cli dedupe trades
{
"status": "OK",
"table_name": "trades",
"action": "check",
"deduplication_enabled": true,
"designated_timestamp": "timestamp",
"upsert_keys": [
"timestamp",
"symbol"
]
}
❯ qdb-cli dedupe trades --disable
{
"status": "OK",
"table_name": "trades",
"action": "disable",
"deduplication_enabled": false,
"ddl": "OK"
}
❯ qdb-cli dedupe trades --enable -k timestamp,symbol
ERROR: Error: Designated timestamp column 'timestamp' must be included in --upsert-keys.
{
"status": "Error",
"table_name": "trades",
"action": "enable",
"message": "Designated timestamp column 'timestamp' must be included in upsert keys.",
"provided_keys": [
"timestamp,symbol"
]
}
[1] 47734 exit 1 questdb-cli dedupe trades --enable -k timestamp,symbol
❯ qdb-cli dedupe trades --enable -k timestamp symbol
{
"status": "OK",
"table_name": "trades",
"action": "enable",
"deduplication_enabled": true,
"upsert_keys": [
"timestamp",
"symbol"
],
"ddl": "OK"
}
```
## MCP Server (LLM Integration)
`questdb-rest` includes an [MCP (Model Context Protocol)](https://modelcontextprotocol.io) server that lets LLMs like Claude interact with QuestDB directly.
### Installation
```bash
# Install with MCP support
uv tool install "questdb-rest[mcp]"
# or
pip install "questdb-rest[mcp]"
```
### Usage
```bash
# Start MCP server via CLI subcommand
qdb-cli mcp
# Or via the dedicated entry point
qdb-mcp
```
### Claude Desktop / Claude Code config
```json
{
"mcpServers": {
"questdb": {
"command": "qdb-cli",
"args": ["mcp"]
}
}
}
```
### Exposed MCP tools
| Tool | Description |
|------|-------------|
| `execute_sql` | Execute any SQL query with output format options (json/csv/psql/markdown) |
| `list_tables` | List tables with regex filtering, UUID filtering, and limit |
| `describe_table` | Get column info for a table |
| `get_table_schema` | Get the CREATE TABLE statement for a table |
| `check_table_exists` | Check if a table exists |
| `export_csv` | Export query results as CSV |
#### Tool Parameters
**`execute_sql`**
- `query` (required): SQL query to execute
- `limit`: Result limit (default: "100", use "0" for unlimited)
- `statement_timeout`: Query timeout in milliseconds
- `output_format`: "json" (default), "csv", "psql" (ASCII table), or "markdown"
**`list_tables`**
- `pattern`: Regex to match table names (e.g., "trades", "cme_.*")
- `exclude_pattern`: Regex to exclude tables (e.g., "backup", "test_.*")
- `has_uuid`: `true` for tables with UUID-4 in name, `false` for without
- `limit`: Max tables to return (default: 100, `null` for unlimited)
**`export_csv`**
- `query` (required): SQL query to execute
- `limit`: Result limit (default: "100", use "0" for unlimited)
### Exposed MCP resources
| URI | Description |
|-----|-------------|
| `questdb://tables` | Dynamic list of all tables |
| `questdb://table/{name}/schema` | CREATE TABLE statement for a specific table |
Connection config is loaded automatically from `~/.questdb-rest/config.json` (same as the CLI).
## Examples
Check the `Short Tour` section above for a quick overview of the CLI.
### Advanced Scripting
```bash
# drop all tables with name regex matching 'test_table_'
# exp exports as CSV, so we use tail to skip the header
qdb-cli exp "select table_name from tables where table_name ~ 'test_table_'" | tail -n +2 | xargs -I{} bash -c 'echo Dropping table {}; qdb-cli exec -q "drop table {}"'
```
For convenience, I included a bash script `qdb-drop-tables-by-regex` and `qdb-imp-from-stdin` (see below) that does exactly this - it will be installed if you install the `questdb-rest` PyPI package.
```bash
curl 'https://raw.githubusercontent.com/your/test.csv' | qdb-imp-from-stdin -n your_table_name
```
Or use the more general purpose version:
```bash
qdb-table-names test_table_ | qdb-cli drop
```
### Drop all backup tables with UUID4 in the name
```plain
# dry run first:
qdb-table-names backup --uuid | qdb-cli --dry-run drop
{
"dry_run": true,
"table_dropped": "qdb_cli_backup_cme_liq_ba_LE_0ae696bb_076e_4c0e_b7ba_3999e8939c89",
"ddl": "OK (Simulated)"
}
{
"dry_run": true,
"table_dropped": "qdb_cli_backup_cme_liq_ba_LE_96042ea7_d2eb_4455_a8d3_250ab75f347a",
"ddl": "OK (Simulated)"
}
# destructive command, be careful!
qdb-table-names backup --uuid | qdb-cli drop
{
"status": "OK",
"table_dropped": "qdb_cli_backup_cme_liq_ba_LE_0ae696bb_076e_4c0e_b7ba_3999e8939c89",
"message": "Table 'qdb_cli_backup_cme_liq_ba_LE_0ae696bb_076e_4c0e_b7ba_3999e8939c89' dropped successfully.",
"ddl_response": "OK"
}
{
"status": "OK",
"table_dropped": "qdb_cli_backup_cme_liq_ba_LE_96042ea7_d2eb_4455_a8d3_250ab75f347a",
"message": "Table 'qdb_cli_backup_cme_liq_ba_LE_96042ea7_d2eb_4455_a8d3_250ab75f347a' dropped successfully.",
"ddl_response": "OK"
}
```
```plain
# yes, this command is installed if you install the Python package
$ qdb-table-names --help
Usage: qdb-table-names [-u|--uuid] [-U|--no-uuid] [regex]
Get a list of table names from QuestDB.
If you provide a regex, only tables whose name matches will be returned.
-u, --uuid Only tables containing a UUID-4 in their name
-U, --no-uuid Only tables NOT containing a UUID-4 in their name
You may combine either UUID flag with an additional regex, but -u and -U are mutually exclusive.
OPTIONS:
-h, --help Show this help message and exit
-u, --uuid Match only tables containing a UUID-4 in their name
-U, --no-uuid Match only tables NOT containing a UUID-4 in their name
EXAMPLES:
# list all table names
qdb-table-names
# list only tables containing a UUID-4
qdb-table-names -u
# list only tables NOT containing a UUID-4
qdb-table-names -U
# list only tables starting with "equities_"
qdb-table-names equities_
# combine regex and UUID-flag
qdb-table-names -u equities_
qdb-table-names -U equities_
```
### Piping query or table names from stdin
`qdb-cli exec` supports reading multiple queries (delimited by `;`) from stdin, or from a file.
Besides `qdb-cli drop` (see example right above), these subcommands also support reading table names (1 per line) from stdin: `chk`, `dedupe`, `schema`.
Examples:
```plain
qdb-table-names cme_liq | qdb-cli chk
{
"tableName": "cme_liq_ba_LE",
"status": "Exists"
}
{
"tableName": "cme_liq_ba_HG",
"status": "Exists"
}
{
"tableName": "cme_liq_ba_SI",
"status": "Exists"
}
{
"tableName": "cme_liq_ba_GC",
"status": "Exists"
}
```
```sql
-- run this:
-- qdb-table-names cme_liq | qdb-cli schema
CREATE TABLE 'cme_liq_ba_LE' (
CT VARCHAR,
MP DOUBLE,
LVL1A DOUBLE,
LVL2A DOUBLE,
LVL3A DOUBLE,
LVL4A DOUBLE,
LVL5A DOUBLE,
WT LONG,
timestamp TIMESTAMP
) timestamp(timestamp) PARTITION BY YEAR WAL
WITH maxUncommittedRows=500000, o3MaxLag=600000000us
DEDUP UPSERT KEYS(timestamp);
CREATE TABLE 'cme_liq_ba_HG' (
MP DOUBLE,
LVL1B DOUBLE,
LVL1A DOUBLE,
LVL2B DOUBLE,
LVL2A DOUBLE,
LVL3B DOUBLE,
LVL3A DOUBLE,
LVL4B DOUBLE,
LVL10B DOUBLE,
LVL10A DOUBLE,
CT VARCHAR,
LVL4A DOUBLE,
LVL5B DOUBLE,
LVL5A DOUBLE,
LVL6B DOUBLE,
LVL6A DOUBLE,
LVL7B DOUBLE,
LVL7A DOUBLE,
LVL8B DOUBLE,
LVL8A DOUBLE,
LVL9B DOUBLE,
WT LONG,
LVL9A DOUBLE,
timestamp TIMESTAMP
) timestamp(timestamp) PARTITION BY DAY WAL
WITH maxUncommittedRows=500000, o3MaxLag=600000000us
DEDUP UPSERT KEYS(timestamp);
-- ...
```
### Change partitioning strategy to YEAR for existing table
```plain
# let check original schema before we make big changes
qdb-cli schema cme_liq_ba_6S
CREATE TABLE 'cme_liq_ba_6S' (
MP DOUBLE,
LVL1B DOUBLE,
LVL1A DOUBLE,
LVL2B DOUBLE,
LVL2A DOUBLE,
LVL3B DOUBLE,
LVL3A DOUBLE,
LVL4B DOUBLE,
LVL10B DOUBLE,
LVL10A DOUBLE,
CT VARCHAR,
LVL4A DOUBLE,
LVL5B DOUBLE,
LVL5A DOUBLE,
LVL6B DOUBLE,
LVL6A DOUBLE,
LVL7B DOUBLE,
LVL7A DOUBLE,
LVL8B DOUBLE,
LVL8A DOUBLE,
LVL9B DOUBLE,
WT LONG,
LVL9A DOUBLE,
timestamp TIMESTAMP
) timestamp(timestamp) PARTITION BY DAY WAL
WITH maxUncommittedRows=500000, o3MaxLag=600000000us
DEDUP UPSERT KEYS(timestamp);
# forgot to specify the designated timestamp column
❯ qdb-cli cor cme_liq_ba_6S -q cme_liq_ba_6S -k timestamp -P YEAR
WARNING: Input query from query string does not start with SELECT. Assuming it's valid QuestDB shorthand.
WARNING: Query: cme_liq_ba_6S
WARNING: QuestDB API Error: HTTP 400: partitioning is possible only on tables with designated timestamps
WARNING: Response Body: {"query": "CREATE TABLE __qdb_cli_temp_cme_liq_ba_6S_683ce6ae_9c45_4bd1_836a_b1184075dea2 AS (cme_liq_ba_6S) PARTITION BY YEAR DEDUP UPSERT KEYS(timestamp);", "error": "partitioning is possible only on tables with designated timestamps", "position": 111}
ERROR: Error creating temporary table '__qdb_cli_temp_cme_liq_ba_6S_683ce6ae_9c45_4bd1_836a_b1184075dea2': HTTP 400: HTTP 400: partitioning is possible only on tables with designated timestamps
[1] 64741 exit 1 questdb-cli cor cme_liq_ba_6S -q cme_liq_ba_6S -k timestamp -P YEAR
❯ qdb-cli cor cme_liq_ba_6S -q cme_liq_ba_6S -k timestamp -P YEAR -t timestamp
WARNING: Input query from query string does not start with SELECT. Assuming it's valid QuestDB shorthand.
WARNING: Query: cme_liq_ba_6S
{
"status": "OK",
"message": "Successfully created/replaced table 'cme_liq_ba_6S'. DEDUP enabled with keys: ['timestamp']. Original table backed up as 'qdb_cli_backup_cme_liq_ba_6S_dd70f217_4931_428f_8d84_3fa6003fbe4c'.",
"target_table": "cme_liq_ba_6S",
"upsert_keys_set": [
"timestamp"
],
"backup_table": "qdb_cli_backup_cme_liq_ba_6S_dd70f217_4931_428f_8d84_3fa6003fbe4c",
"original_dropped_no_backup": false
}
# check the schema again
❯ qdb-cli schema cme_liq_ba_6S
CREATE TABLE 'cme_liq_ba_6S' (
MP DOUBLE,
LVL1B DOUBLE,
LVL1A DOUBLE,
LVL2B DOUBLE,
LVL2A DOUBLE,
LVL3B DOUBLE,
LVL3A DOUBLE,
LVL4B DOUBLE,
LVL10B DOUBLE,
LVL10A DOUBLE,
CT VARCHAR,
LVL4A DOUBLE,
LVL5B DOUBLE,
LVL5A DOUBLE,
LVL6B DOUBLE,
LVL6A DOUBLE,
LVL7B DOUBLE,
LVL7A DOUBLE,
LVL8B DOUBLE,
LVL8A DOUBLE,
LVL9B DOUBLE,
WT LONG,
LVL9A DOUBLE,
timestamp TIMESTAMP
) timestamp(timestamp) PARTITION BY YEAR WAL
WITH maxUncommittedRows=500000, o3MaxLag=600000000us
DEDUP UPSERT KEYS(timestamp);
# original table is backed up
❯ qdb-table-names --uuid
qdb_cli_backup_cme_liq_ba_6S_dd70f217_4931_428f_8d84_3fa6003fbe4c
```
### Batch change partitioning strategy and enable deduplication with `xargs`
Change partition to `BY YEAR`:
```plain
$ qdb-table-names cme_liq | xargs -I{} qdb-cli --info cor -q {} {} -t timestamp -P YEAR --no-backup-original-table
INFO: Log level set to INFO
INFO: Connecting to http://localhost:9000
INFO: Starting create-or-replace operation for table 'cme_liq_ba_ZF' using temp table '__qdb_cli_temp_cme_liq_ba_ZF_b802072f_3d4b_40bb_9661_beae1838e3f5'...
WARNING: Input query from query string does not start with SELECT. Assuming it's valid QuestDB shorthand.
WARNING: Query: cme_liq_ba_ZF
INFO: Using query from query string for table creation.
INFO: Creating temporary table '__qdb_cli_temp_cme_liq_ba_ZF_b802072f_3d4b_40bb_9661_beae1838e3f5' from query...
INFO: Successfully created temporary table '__qdb_cli_temp_cme_liq_ba_ZF_b802072f_3d4b_40bb_9661_beae1838e3f5'.
INFO: Checking if target table 'cme_liq_ba_ZF' exists...
INFO: --no-backup-original-table specified. Dropping original table 'cme_liq_ba_ZF'...
INFO: Successfully dropped original table 'cme_liq_ba_ZF'.
INFO: Renaming temporary table '__qdb_cli_temp_cme_liq_ba_ZF_b802072f_3d4b_40bb_9661_beae1838e3f5' to target table 'cme_liq_ba_ZF'...
INFO: Successfully renamed temporary table '__qdb_cli_temp_cme_liq_ba_ZF_b802072f_3d4b_40bb_9661_beae1838e3f5' to 'cme_liq_ba_ZF'.
{
"status": "OK",
"message": "Successfully created/replaced table 'cme_liq_ba_ZF'. Original table was dropped (no backup).",
"target_table": "cme_liq_ba_ZF",
"upsert_keys_set": null,
"backup_table": null,
"original_dropped_no_backup": true
}
INFO: Log level set to INFO
INFO: Connecting to http://localhost:9000
INFO: Starting create-or-replace operation for table 'cme_liq_ba_ZT' using temp table '__qdb_cli_temp_cme_liq_ba_ZT_e1827495_381a_4029_a744_aa3982a85fe6'...
WARNING: Input query from query string does not start with SELECT. Assuming it's valid QuestDB shorthand.
WARNING: Query: cme_liq_ba_ZT
INFO: Using query from query string for table creation.
INFO: Creating temporary table '__qdb_cli_temp_cme_liq_ba_ZT_e1827495_381a_4029_a744_aa3982a85fe6' from query...
INFO: Successfully created temporary table '__qdb_cli_temp_cme_liq_ba_ZT_e1827495_381a_4029_a744_aa3982a85fe6'.
INFO: Checking if target table 'cme_liq_ba_ZT' exists...
INFO: --no-backup-original-table specified. Dropping original table 'cme_liq_ba_ZT'...
INFO: Successfully dropped original table 'cme_liq_ba_ZT'.
INFO: Renaming temporary table '__qdb_cli_temp_cme_liq_ba_ZT_e1827495_381a_4029_a744_aa3982a85fe6' to target table 'cme_liq_ba_ZT'...
INFO: Successfully renamed temporary table '__qdb_cli_temp_cme_liq_ba_ZT_e1827495_381a_4029_a744_aa3982a85fe6' to 'cme_liq_ba_ZT'.
{
"status": "OK",
"message": "Successfully created/replaced table 'cme_liq_ba_ZT'. Original table was dropped (no backup).",
"target_table": "cme_liq_ba_ZT",
"upsert_keys_set": null,
"backup_table": null,
"original_dropped_no_backup": true
}
```
## PyPI packages and installation
`questdb-cli`, `questdb-rest` and `questdb-api` are the same package (just aliases), with `questdb-rest` guaranteed to be the most updated.
Installing any of them will give you the `questdb-cli` and `qdb-cli` commands (same thing).
Install (Python >=3.11 required):
```bash
uv tool install questdb-rest
```
```bash
pipx install questdb-rest
```
```bash
# not recommended, but if you really want to:
pip install questdb-rest
```
To include MCP server support for LLM integration:
```bash
uv tool install "questdb-rest[mcp]"
# or
pip install "questdb-rest[mcp]"
```
## The Python API
These classes are provided with extensive methods to interact with the REST API (it's all in `__init__.py`).
```plain
QuestDBError
QuestDBConnectionError
QuestDBAPIError
QuestDBClient
```
## Screenshots




## Code Stats
Below are updated for version 3.0.3.
See also https://teddysc.me/blog/code-stats-visualization
### LOC by file

- automatic graph layout (`diag:graph`, `diag:node`, `diag:edge`)
- reusable components (`diag:template`, `diag:instance`)
- compile-time composition (`diag:include`)
- connection helpers (`diag:arrow`, `diag:anchor`)
- wrapped text on native `<text>` (`diag:wrap="true"`)
## Why svg++
LLMs already have strong SVG muscle memory: tags, attributes, groups, styles, and transforms.
svg++ is intentionally LLM-first: keep the familiar SVG surface area, then add a few high-leverage primitives for the parts models usually get wrong in raw SVG:
- layout without manual coordinate math
- wrapped text that measures correctly
- graph placement/routing from node+edge intent
- reusable templates and compile-time includes
Result: prompts stay short, edits stay local, and the generated output remains portable plain SVG.
## Installation
```bash
pip install diagramagic
```
**Note**: This package includes a Rust extension for accurate SVG measurement. During installation, the extension will be compiled from source, which requires the Rust toolchain:
```bash
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Then install diagramagic
pip install diagramagic
```
Installation typically takes 30-60 seconds while the Rust extension compiles.
Advanced graph layouts (`<diag:graph layout="circular|radial">`) also require a system Graphviz install (`dot` on PATH):
```bash
brew install graphviz
which dot
dot -V
```
Note: this cannot be installed automatically via `pyproject.toml`/`pip` because `dot` is a system executable dependency.
## Quick Start
- **Compile**: `diagramagic compile input.svg++`
- **Render PNG**: `diagramagic render input.svg++`
- **Library**: `from diagramagic import diagramagic`
- **Cheat sheet**: `diagramagic cheatsheet` (or see `AGENTS.md`)
- **Full spec**: `PROJECTSPEC.md`
- **Tests**: `python tests/run_tests.py`
svg++ basics: wrap your document in `<diag:diagram>` with the `diag:` namespace, use `<diag:flex>` for layout, and use `diag:wrap="true"` on `<text>` for multi-line text. Everything compiles to pure SVG 1.1.
Need reusable pieces? Define `<diag:template name="card">…</diag:template>` once, then drop `<diag:instance template="card">` wherever you need consistent cards or packets.
Output defaults to a white canvas; set `diag:background="none"` (or any color) on `<diag:diagram>` to change it.
## Workflow Loop
Typical authoring loop:
1. Write or edit `.svg++`
2. Compile to `.svg` with `diagramagic compile ...`
3. Render to `.png` with `diagramagic render ...`
4. Inspect output (human or agent)
5. Adjust source and repeat
## Claude Skill Install
If you use Claude Code skills, install this repo's `SKILL.md` like this:
```bash
mkdir -p ~/.claude/skills/diagramagic
cp SKILL.md ~/.claude/skills/diagramagic/SKILL.md
```
## svg++ Tags
New `diag:` elements currently supported:
- `<diag:diagram>` (root)
- `<diag:flex>` (row/column layout)
- `<diag:graph>` (auto node/edge layout)
- `<diag:node>` (graph node)
- `<diag:edge>` (graph edge)
- `<diag:arrow>` (general connector by id)
- `<diag:anchor>` (named connection point)
- `<diag:template>`, `<diag:instance>`, `<diag:slot>`, `<diag:param>` (templating)
- `<diag:include>` (compile-time sub-diagram include)
Text wrapping stays on standard SVG `<text>`:
- use `diag:wrap="true"` (and optional `diag:max-width`) on `<text>` for multi-line layout
- `diag:node` is a graph container, not a text primitive
Example:
```xml
<diag:diagram xmlns="http://www.w3.org/2000/svg"
xmlns:diag="https://diagramagic.ai/ns"
width="300" height="160">
<style>
.card { fill:#fff; stroke:#999; rx:10; ry:10; }
.title { font-size:16; font-weight:600; }
.body { font-size:12; }
</style>
<diag:flex x="20" y="20" width="260" padding="14" gap="8" background-class="card">
<text class="title" diag:wrap="false">Hello svg++</text>
<text class="body" diag:wrap="true">
This paragraph wraps to the flex width automatically.
</text>
</diag:flex>
</diag:diagram>
```
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pillow>=10.0.0",
"pillow>=10.0.0; extra == \"dev\"",
"build>=1.3.0; extra == \"dev\"",
"twine>=5.0.0; extra == \"dev\"",
"maturin>=1.4; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.5 | 2026-02-18T16:10:57.967434 | diagramagic-0.1.17.tar.gz | 331,781 | 8b/bd/d93833aa6a1bf4b204bb12596704a9bb03a0362dda945bb3c6f4fa697811/diagramagic-0.1.17.tar.gz | source | sdist | null | false | ca2c110c654eaefd3664571aa9e1dbc7 | f2cb1814756e02921eecbfe9f3cf07aa5ea57cafbabeaec9286a4bf381799a38 | 8bbdd93833aa6a1bf4b204bb12596704a9bb03a0362dda945bb3c6f4fa697811 | null | [
"LICENSE"
] | 162 |
2.4 | esptool | 5.2.0 | A serial utility for flashing, provisioning, and interacting with Espressif SoCs. | # esptool
A Python-based, open-source, platform-independent serial utility for flashing, provisioning, and interacting with Espressif SoCs.
[](https://github.com/espressif/esptool/actions/workflows/test_esptool.yml) [](https://github.com/espressif/esptool/actions/workflows/build_esptool.yml)
[](https://results.pre-commit.ci/latest/github/espressif/esptool/master)
## Documentation
Visit the [documentation](https://docs.espressif.com/projects/esptool/) or run `esptool -h`.
## Contribute
If you're interested in contributing to esptool, please check the [contributions guide](https://docs.espressif.com/projects/esptool/en/latest/contributing.html).
## About
esptool was initially created by Fredrik Ahlberg (@[themadinventor](https://github.com/themadinventor/)), and later maintained by Angus Gratton (@[projectgus](https://github.com/projectgus/)). It is now supported by Espressif Systems. It has also received improvements from many members of the community.
## License
This document and the attached source code are released as Free Software under GNU General Public License Version 2 or later. See the accompanying [LICENSE file](https://github.com/espressif/esptool/blob/master/LICENSE) for a copy.
| text/markdown | Fredrik Ahlberg (themadinventor), Angus Gratton (projectgus), Espressif Systems | null | null | null | GPLv2+ | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS :: MacOS X",
"Topic :: Software Development :: Embedded Systems",
"Environment :: Console... | [] | null | null | >=3.10 | [] | [] | [] | [
"bitstring!=4.2.0,>=3.1.6",
"cryptography>=43.0.0",
"pyserial>=3.3",
"reedsolo<1.8,>=1.5.3",
"PyYAML>=5.1",
"intelhex",
"rich_click<2",
"click<9",
"pyelftools; extra == \"dev\"",
"coverage~=6.0; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-rerunfailu... | [] | [] | [] | [
"Homepage, https://github.com/espressif/esptool/",
"Documentation, https://docs.espressif.com/projects/esptool/",
"Source, https://github.com/espressif/esptool/",
"Tracker, https://github.com/espressif/esptool/issues/",
"Changelog, https://github.com/espressif/esptool/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:10:52.641322 | esptool-5.2.0.tar.gz | 463,000 | 77/25/7b50d81a66f600a60f23258fa134201e97e854271b478ca4e21e9f694355/esptool-5.2.0.tar.gz | source | sdist | null | false | bf7d2d8aa95fce03b49db4e87ce003a9 | 9c355b7d6331cc92979cc710ae5c41f59830d1ea29ec24c467c6005a092c06d6 | 77257b50d81a66f600a60f23258fa134201e97e854271b478ca4e21e9f694355 | null | [
"LICENSE"
] | 108,480 |
2.4 | pytest-gcppubsub | 0.2.0 | A Pytest fixture for managing Google Cloud Platform PubSub emulator | # pytest-gcppubsub
A pytest plugin that manages the [GCP Pub/Sub emulator](https://cloud.google.com/pubsub/docs/emulator) lifecycle. Start the emulator automatically when your tests run — no manual setup required.
## Features
- **Automatic emulator management** — starts `gcloud beta emulators pubsub start` before tests and stops it after
- **pytest-xdist support** — parallel workers share a single emulator instance via file-lock coordination
- **Environment configuration** — sets `PUBSUB_EMULATOR_HOST` and `PUBSUB_PROJECT_ID` so `google-cloud-pubsub` clients connect automatically
- **Auto port assignment** — use `--pubsub-port=0` to pick a free port, avoiding conflicts
- **Async compatible** — session-scoped fixture works with `pytest-asyncio` out of the box
## Prerequisites
The [Google Cloud SDK](https://cloud.google.com/sdk/docs/install) must be installed with the Pub/Sub emulator component:
```bash
gcloud components install pubsub-emulator
```
## Installation
```bash
pip install pytest-gcppubsub
```
To also install the `google-cloud-pubsub` client library (for the optional client fixtures):
```bash
pip install pytest-gcppubsub[client]
```
## Quick Start
Request the `pubsub_emulator` fixture in your tests:
```python
def test_publish_message(pubsub_emulator):
from google.cloud import pubsub_v1
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(pubsub_emulator.project, "my-topic")
publisher.create_topic(request={"name": topic_path})
future = publisher.publish(topic_path, b"hello world")
future.result()
```
The plugin starts the emulator once per session and sets the environment variables so all `google-cloud-pubsub` clients route to it automatically.
## Fixtures
### `pubsub_emulator` (session-scoped)
Starts the Pub/Sub emulator and yields an `EmulatorInfo` object:
| Attribute | Type | Description |
|-----------|------|-------------|
| `host` | `str` | Emulator host (e.g. `localhost`) |
| `port` | `int` | Emulator port |
| `project` | `str` | GCP project ID |
| `host_port` | `str` | Combined `host:port` string |
Sets `PUBSUB_EMULATOR_HOST` and `PUBSUB_PROJECT_ID` environment variables for the session and restores them on teardown.
### `pubsub_publisher_client` (function-scoped)
Returns a `pubsub_v1.PublisherClient` connected to the emulator. Skips the test if `google-cloud-pubsub` is not installed.
### `pubsub_subscriber_client` (function-scoped)
Returns a `pubsub_v1.SubscriberClient` connected to the emulator. Skips the test if `google-cloud-pubsub` is not installed.
## Configuration
Settings can be provided via CLI flags or `pyproject.toml` / `pytest.ini`. CLI flags take precedence.
| CLI Flag | ini Option | Default | Description |
|----------|-----------|---------|-------------|
| `--pubsub-host` | `pubsub_emulator_host` | `localhost` | Emulator bind host |
| `--pubsub-port` | `pubsub_emulator_port` | `8085` | Emulator port (`0` for auto) |
| `--pubsub-project` | `pubsub_project_id` | `test-project` | GCP project ID |
| `--pubsub-timeout` | `pubsub_emulator_timeout` | `15` | Startup timeout (seconds) |
Example `pyproject.toml`:
```toml
[tool.pytest.ini_options]
pubsub_project_id = "my-test-project"
pubsub_emulator_port = "0"
```
## pytest-xdist Support
When running with `pytest-xdist`, the plugin coordinates workers so that only the first worker starts the emulator. Subsequent workers attach to the running instance. The last worker to finish tears it down. This uses file-lock based coordination and handles stale processes from crashed runs.
```bash
pytest -n auto # all workers share one emulator
```
## Async Tests
The `pubsub_emulator` fixture is session-scoped and synchronous, which is compatible with async test functions. Since `PUBSUB_EMULATOR_HOST` is set in the environment, async clients like `PublisherAsyncClient` connect to the emulator automatically:
```python
import pytest
from google.cloud.pubsub_v1 import PublisherAsyncClient
@pytest.fixture
async def async_publisher(pubsub_emulator):
return PublisherAsyncClient()
async def test_async_publish(async_publisher, pubsub_emulator):
topic = f"projects/{pubsub_emulator.project}/topics/my-topic"
await async_publisher.create_topic(request={"name": topic})
```
## License
MIT
| text/markdown | Neale Petrillo | neale.a.petrillo@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"filelock>=3.0",
"google-cloud-pubsub>=2.0; extra == \"client\"",
"pytest>=7.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:10:28.791357 | pytest_gcppubsub-0.2.0.tar.gz | 6,151 | 17/4a/b8bcd31b361c3c1bd0b4b931ad12f6f29ed473fdfb159f3f27a1ba0f6847/pytest_gcppubsub-0.2.0.tar.gz | source | sdist | null | false | d65b1ecc7d629337285a9bb96e093176 | 661a4d81fd0b3021e8f89d07ff7d5485ddf39532f63b4532652c5561929a47ad | 174ab8bcd31b361c3c1bd0b4b931ad12f6f29ed473fdfb159f3f27a1ba0f6847 | null | [
"LICENSE"
] | 225 |
2.4 | SRinputs | 0.2.0 | A Python suite for safe user inputs (SR) and automated fake data simulation (FR) using Faker. | # SRinputs - Smart & Reliable Inputs Suite

A comprehensive Python suite designed to supercharge the built-in `input()` function. Whether you need to validate real user data or simulate fake information for testing, **SRinputs** has you covered.
## Installation
```bash
pip install SRinputs
```
## Modules
The package is now divided into two specialized modules:
1. **SRinputs** (Static & Repetitive)Focuses on safe data collection from real users. It handles validation, language detection, and prevents common crashes.
* Type Validation: Supports int, float, and str.
* Safety: Catch KeyboardInterrupt and EOFError gracefully.
* Persistence: Mandatory entries by default (no more empty inputs by mistake).
* Bulk Collection: multiInput to gather lists of data easily.
2. **FRinputs** (Fake & Random) Designed for developers and students. It "mocks" inputs using the Faker library, allowing you to test your classes and functions without typing a single word in the terminal.
* Zero-Typing Testing: Simulates input automatically.
* Realistic Data: Names (by gender), Emails, URLs, IDs, and Social Media Handles.
* Customizable: Supports custom prompts and formats.
## Usage Examples
Safe User Input (SR)
```
from SRinputs.SRinputs import IntInput, multiInput
# Validates an integer
age = IntInput("Please enter your age: ")
# Collects exactly 5 names in a list
names = multiInput(5, "Enter a student name: ")
```
## Automated Testing / Mocking (FR)
Ideal for testing your classes without manual input:
```
from SRinputs.FRinputs import FRNameInput, FRHandleInput, FREmailInput
class User:
def __init__(self, name, handle, email):
self.name = name
self.handle = handle
self.email = email
# No terminal typing required! Data is generated and "injected" automatically.
user1 = User(
name = FRNameInput(),
handle = FRHandleInput(),
email = FREmailInput()
)
print(f"Created: {user1.name} ({user1.handle})")
```
## Supported Types (FR Module)
| Function | Description |
|---|---|
| FRNameInput | Generates names (supports 'male', 'female', 'nonbinary'). |
| FRDateInput | enerates dates with multiple format options (0-3). |
| FRHandleInput | Generates social media handles like @name_123. |
| FRIdInput | Generates IDs (8-10 digits). |
| FREmailInput | Generates random email addresses. |
| FRUrlInput | Generates fake website links. |
| text/markdown | null | Keiner Mendoza <keynerismo@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Education",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"fast_langdetect",
"faker"
] | [] | [] | [] | [
"Homepage, https://github.com/keyles-Py/SRinputs",
"Bug_Tracker, https://github.com/keyles-Py/SRinputs/issues"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-18T16:10:09.672635 | srinputs-0.2.0.tar.gz | 6,047 | f1/9d/296cbcf9fd1efccedbe59ec2ac0966260d89ae6504e2ff5a5cb0d6289271/srinputs-0.2.0.tar.gz | source | sdist | null | false | 4ab89c7466d3b01319334bb97cbac991 | 6f791ef120d37ce41318638b0b557aeb3234358fa430aefd9b786b0362692786 | f19d296cbcf9fd1efccedbe59ec2ac0966260d89ae6504e2ff5a5cb0d6289271 | null | [
"LICENSE.txt"
] | 0 |
2.4 | pyneuphonic | 3.0.0 | A python SDK for the Neuphonic TTS Engine. | # PyNeuphonic
The official Neuphonic Python library providing simple, convenient access to the Neuphonic text-to-speech websocket
API from any Python 3.10+ application.
For comprehensive guides and official documentation, check out [https://docs.neuphonic.com](https://docs.neuphonic.com).
If you need support or want to join the community, visit our [Discord](https://discord.gg/G258vva7gZ)!
- [Example Applications](#example-applications)
- [Documentation](#documentation)
- [Installation](#installation)
- [API Key](#api-key)
- [Audio Generation](#audio-generation)
- [Configure the Text-to-Speech Synthesis](#configure-the-text-to-speech-synthesis)
- [SSE (Server Side Events)](#sse-server-side-events)
- [Asynchronous SSE](#asynchronous-sse)
- [Asynchronous Websocket](#asynchronous-websocket)
- [Voices](#voices)
- [Get Voices](#get-voices)
- [Get Voice](#get-voice)
- [Clone Voice](#clone-voice)
- [Update Voice](#update-voice)
- [Delete Voice](#delete-voice)
- [Saving Audio](#saving-audio)
- [Agents](#agents)
- [Connecting MCP Servers](#connecting-mcp-servers)
- [List agents](#list-agents)
- [Get agent](#get-agent)
- [Multilingual Agents](#multilingual-agents)
- [Interruption handling](#interruption-handling)
## Example Applications
Check out the [examples](./examples/) folder for some example applications.
## Documentation
See [https://docs.neuphonic.com](https://docs.neuphonic.com) for the complete API documentation.
## Installation
Install this package into your environment using your chosen package manager:
```bash
pip install pyneuphonic
```
In most cases, you will be playing the audio returned from our servers directly on your device.
We offer utilities to play audio through your device's speakers using `pyaudio`.
To use these utilities, please also `pip install pyaudio`.
> :warning: Mac users encountering a `'portaudio.h' file not found` error can resolve it by running
> `brew install portaudio`.
### API Key
Get your API key from the [Neuphonic website](https://beta.neuphonic.com) and set it in your
environment, for example:
```bash
export NEUPHONIC_API_KEY=<YOUR API KEY HERE>
```
## Speech Generation
### Configure the Text-to-Speech Synthesis
To configure the TTS settings, modify the TTSConfig model.
The following parameters are examples of parameters which can be adjusted. Ensure that the selected combination of model, language, and voice is valid. For details on supported combinations, refer to the [Models](https://docs.neuphonic.com/resources/models) and [Voices](https://docs.neuphonic.com/resources/voices) pages.
- **`lang_code`**
Language code for the desired language.
**Default**: `'en'` **Examples**: `'en'`, `'es'`, `'de'`, `'nl'`
- **`voice`**
The voice ID for the desired voice. Ensure this voice ID is available for the selected model and language.
**Default**: `None` **Examples**: `'8e9c4bc8-3979-48ab-8626-df53befc2090'`
- **`speed`**
Playback speed of the audio.
**Default**: `1.0`
**Examples**: `0.7`, `1.0`, `1.5`
View the [TTSConfig](https://github.com/neuphonic/pyneuphonic/blob/main/pyneuphonic/models.py) object to see all valid options.
### SSE (Server Side Events)
```python
from pyneuphonic import Neuphonic, TTSConfig
from pyneuphonic.player import AudioPlayer
import os
client = Neuphonic(api_key=os.environ.get('NEUPHONIC_API_KEY'))
sse = client.tts.SSEClient()
# View the TTSConfig object to see all valid options
tts_config = TTSConfig(
speed=1.05,
lang_code='en',
voice_id='e564ba7e-aa8d-46a2-96a8-8dffedade48f' # use client.voices.list() to view all voice ids
)
# Create an audio player with `pyaudio`
with AudioPlayer() as player:
response = sse.send('Hello, world!', tts_config=tts_config)
player.play(response)
player.save_audio('output.wav') # save the audio to a .wav file from the player
```
### Asynchronous SSE
```python
from pyneuphonic import Neuphonic, TTSConfig
from pyneuphonic.player import AsyncAudioPlayer
import os
import asyncio
async def main():
client = Neuphonic(api_key=os.environ.get('NEUPHONIC_API_KEY'))
sse = client.tts.AsyncSSEClient()
# Set the desired configurations: playback speed and voice
tts_config = TTSConfig(speed=1.05, lang_code='en', voice_id=None)
async with AsyncAudioPlayer() as player:
response = sse.send('Hello, world!', tts_config=tts_config)
await player.play(response)
player.save_audio('output.wav') # save the audio to a .wav file
asyncio.run(main())
```
### Asynchronous Websocket
```python
from pyneuphonic import Neuphonic, TTSConfig, WebsocketEvents
from pyneuphonic.models import APIResponse, TTSResponse
from pyneuphonic.player import AsyncAudioPlayer
import os
import asyncio
async def main():
client = Neuphonic(api_key=os.environ.get('NEUPHONIC_API_KEY'))
ws = client.tts.AsyncWebsocketClient()
# Set the desired voice
tts_config = TTSConfig(voice_id=None) # will default to the default voice_id, please refer to the Neuphonic Docs
player = AsyncAudioPlayer()
await player.open()
# Attach event handlers. Check WebsocketEvents enum for all valid events.
async def on_message(message: APIResponse[TTSResponse]):
await player.play(message.data.audio)
async def on_close():
await player.close()
ws.on(WebsocketEvents.MESSAGE, on_message)
ws.on(WebsocketEvents.CLOSE, on_close)
await ws.open(tts_config=tts_config)
# A special symbol ' <STOP>' must be sent to the server, otherwise the server will wait for
# more text to be sent before generating the last few snippets of audio
await ws.send('Hello, world!', autocomplete=True)
await ws.send('Hello, world! <STOP>') # Both the above line, and this line, are equivalent
await asyncio.sleep(3) # let the audio play
player.save_audio('output.wav') # save the audio to a .wav file
await ws.close() # close the websocket and terminate the audio resources
asyncio.run(main())
```
## Saving Audio
To save the audio to a file, you can use the `save_audio` function from the `pyneuphonic` package to save the audio from responses from the synchronous SSE client.
```python
from pyneuphonic import save_audio
...
response = sse.send('Hello, world!', tts_config=tts_config)
save_audio(response, 'output.wav')
```
The `save_audio` function takes in two arguments: the response from the TTS service (as well as audio bytes) and the file path to save the audio to.
For async responses, you can use the `async_save_audio` function.
```python
from pyneuphonic.player import async_save_audio
...
response = sse.send('Hello, world!', tts_config=tts_config)
await async_save_audio(response, 'output.wav')
```
## Voices
### Get Voices
To get all available voices you can run the following snippet.
```python
from pyneuphonic import Neuphonic
import os
client = Neuphonic(api_key=os.environ.get('NEUPHONIC_API_KEY'))
response = client.voices.list() # get's all available voices
voices = response.data['voices']
voices
```
### Get Voice
To get information about an existing voice please call.
```python
response = client.voices.get(voice_id='<VOICE_ID>') # gets information about the selected voice id
response.data # response contains all information about this voice
```
### Clone Voice
To clone a voice based on a audio file, you can run the following snippet.
```python
from pyneuphonic import Neuphonic
import os
client = Neuphonic(api_key=os.environ.get('NEUPHONIC_API_KEY'))
response = client.voices.clone(
voice_name='<VOICE_NAME>',
voice_tags=['tag1', 'tag2'], # optional, add descriptive tags of what your voice sounds like
voice_file_path='<FILE_PATH>.wav' # replace with file path to a sample of the voice to clone
)
response.data # this will contain a success message with the voice_id of the cloned voice
```
If you have successfully cloned a voice, the following message will be displayed: "Voice has
successfully been cloned with ID `<VOICE_ID>`." Once cloned, you can use this voice just like any of
the standard voices when calling the TTS (Text-to-Speech) service.
To see a list of all available voices, including cloned ones, use `client.voices.list()`.
**Note:** Your voice reference clip must meet the following criteria: it should be at least 6
seconds long, in .mp3 or .wav format, and no larger than 10 MB in size.
### Update Voice
You can update any of the attributes of a voice: name, tags and the reference audio file the voice
was cloned on.
You can select which voice to update using either it's `voice_id` or it's name.
```python
# Updating using the original voice's name
response = client.voices.update(
voice_name='<ORIGINAL_VOICE_NAME>', # this is the name of voice we want to update
# Provide any, or all of the following, to update the voice
new_voice_name='<NEW_VOICE_NAME>',
new_voice_tags=['new_tag_1', 'new_tag_2'], # overwrite all previous tags
new_voice_file_path='<NEW_FILE_PATH>.wav',
)
response.data
```
```python
# Updating using the original voice's `voice_id`
response = client.voices.update(
voice_id ='<VOICE_ID>', # this is the id of voice we want to update
# Provide any, or all of the following, to update the voice
new_voice_name='<NEW_VOICE_NAME>',
new_voice_tags=['new_tag_1', 'new_tag_2'], # overwrite all previous tags
new_voice_file_path='<NEW_FILE_PATH>.wav',
)
response.data
```
**Note:** Your voice reference clip must meet the following criteria: it should be at least 6 seconds long, in .mp3 or .wav format, and no larger than 10 MB in size.
### Delete Voice
To delete a cloned voice:
```python
# Delete using the voice's name
response = client.voices.delete(voice_name='<VOICE_NAME>')
response.data
```
```python
# Delete using the voices `voice_id`
response = client.voices.delete(voice_id='<VOICE_ID>')
response.data
```
## Agents
With Agents, you can create, manage, and interact with intelligent AI assistants. You can create an
agent easily using the example here:
```python
import os
import asyncio
# View the AgentConfig object for a full list of parameters to configure the agent
from pyneuphonic import Neuphonic, Agent, AgentConfig
async def main():
client = Neuphonic(api_key=os.environ.get('NEUPHONIC_API_KEY'))
agent_id = client.agents.create(
name='Agent 1',
prompt='You are a helpful agent. Answer in 10 words or less.',
greeting='Hi, how can I help you today?'
).data['agent_id']
# All additional keyword arguments (such as `agent_id`) are passed as
# parameters to the model. See AgentConfig model for full list of parameters.
agent = Agent(client, agent_id=agent_id)
try:
await agent.start()
while True:
await asyncio.sleep(1)
except KeyboardInterrupt:
await agent.stop()
asyncio.run(main())
```
### Connecting MCP Servers
Connect your custom MCP servers to enhance your agent with unlimited capabilities.
You can connect MCP servers to your Agent to provide it with any functionality you need. The Agent will automatically utilize these tools throughout the conversation as appropriate. For an introduction to MCP, refer to the [official documentation](https://modelcontextprotocol.io/introduction).
```python
client = Neuphonic(api_key=os.environ.get('NEUPHONIC_API_KEY'))
agent = Agent(
client,
agent_id='<AGENT_ID>',
mcp_servers=['https://1234-56-789-123-4.ngrok-free.app/sse']
)
```
### List agents
To list all your agents:
```python
response = client.agents.list()
response.data
```
### Get agent
To get information about a specific agent:
```python
response = client.agents.get(agent_id='<AGENT_ID>')
response.data
```
### Multilingual Agents
Neuphonic agents support multiple languages, allowing you to create conversational AI in your preferred language:
- **Available Languages**: For a comprehensive list of supported languages, visit our [Official Documentation - Languages](https://docs.neuphonic.com/resources/languages)
- **Example Implementation**: Check out the [Spanish agent example](./examples/agents/multilingual_agent.py) to see multilingual capabilities in action
Creating a multilingual agent is as simple as specifying the `lang_code` and appropriate `voice_id` when instantiating your `Agent`.
| text/markdown | Neuphonic | support@neuphonic.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"aioconsole<0.8.0,>=0.7.1",
"certifi>=2025.6.15",
"httpx<0.28.0,>=0.27.2",
"pydantic<3.0.0,>=2.9.2",
"websockets<16.0,>=14.0"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.10.19 Linux/6.14.0-1017-azure | 2026-02-18T16:08:47.450675 | pyneuphonic-3.0.0-py3-none-any.whl | 23,153 | 2e/4c/7e97e4e35bcf26a39ce9647fedfa8e91492b7803ac71e3722fc529a3b958/pyneuphonic-3.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 7aa54d47f6631a73894e11c99970071c | ca77b1827695d09733be62e9d195c0678e0d9698aaed50a7e72f674713c62133 | 2e4c7e97e4e35bcf26a39ce9647fedfa8e91492b7803ac71e3722fc529a3b958 | null | [
"LICENSE.txt"
] | 257 |
2.4 | langchain-tests | 1.1.5 | Standard tests for LangChain implementations | # 🦜️🔗 langchain-tests
[](https://pypi.org/project/langchain-tests/#history)
[](https://opensource.org/licenses/MIT)
[](https://pypistats.org/packages/langchain-tests)
[](https://x.com/langchain)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
## Quick Install
```bash
pip install langchain-tests
```
## 🤔 What is this?
This is a testing library for LangChain integrations. It contains the base classes for a standard set of tests.
## 📖 Documentation
For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_tests/).
## 📕 Releases & Versioning
See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.
We encourage pinning your version to a specific version in order to avoid breaking your CI when we publish new tests. We recommend upgrading to the latest version periodically to make sure you have the latest tests.
Not pinning your version will ensure you always have the latest tests, but it may also break your CI if we introduce tests that your integration doesn't pass.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
## Usage
To add standard tests to an integration package (e.g., for a chat model), you need to create
1. A unit test class that inherits from `ChatModelUnitTests`
2. An integration test class that inherits from `ChatModelIntegrationTests`
`tests/unit_tests/test_standard.py`:
```python
"""Standard LangChain interface tests"""
from typing import Type
import pytest
from langchain_core.language_models import BaseChatModel
from langchain_tests.unit_tests import ChatModelUnitTests
from langchain_parrot_chain import ChatParrotChain
class TestParrotChainStandard(ChatModelUnitTests):
@pytest.fixture
def chat_model_class(self) -> Type[BaseChatModel]:
return ChatParrotChain
```
`tests/integration_tests/test_standard.py`:
```python
"""Standard LangChain interface tests"""
from typing import Type
import pytest
from langchain_core.language_models import BaseChatModel
from langchain_tests.integration_tests import ChatModelIntegrationTests
from langchain_parrot_chain import ChatParrotChain
class TestParrotChainStandard(ChatModelIntegrationTests):
@pytest.fixture
def chat_model_class(self) -> Type[BaseChatModel]:
return ChatParrotChain
```
## Reference
The following fixtures are configurable in the test classes. Anything not marked
as required is optional.
- `chat_model_class` (required): The class of the chat model to be tested
- `chat_model_params`: The keyword arguments to pass to the chat model constructor
- `chat_model_has_tool_calling`: Whether the chat model can call tools. By default, this is set to `hasattr(chat_model_class, 'bind_tools)`
- `chat_model_has_structured_output`: Whether the chat model can structured output. By default, this is set to `hasattr(chat_model_class, 'with_structured_output')`
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | <4.0.0,>=3.10.0 | [] | [] | [] | [
"httpx<1.0.0,>=0.28.1",
"langchain-core<2.0.0,>=1.2.7",
"numpy>=1.26.2; python_version < \"3.13\"",
"numpy>=2.1.0; python_version >= \"3.13\"",
"pytest-asyncio<2.0.0,>=0.20.0",
"pytest-benchmark",
"pytest-codspeed",
"pytest-recording",
"pytest-socket<1.0.0,>=0.7.0",
"pytest<10.0.0,>=7.0.0",
"syr... | [] | [] | [] | [
"Homepage, https://docs.langchain.com/",
"Documentation, https://docs.langchain.com/",
"Repository, https://github.com/langchain-ai/langchain",
"Issues, https://github.com/langchain-ai/langchain/issues",
"Changelog, https://github.com/langchain-ai/langchain/releases?q=%22langchain-tests%3D%3D1%22",
"Twitt... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:08:31.745195 | langchain_tests-1.1.5.tar.gz | 154,114 | 21/94/e626a40c14a5bc7b60563446a23330a06654d0b1e3804109ed792ca0c638/langchain_tests-1.1.5.tar.gz | source | sdist | null | false | 9f8e5f8e55129ca9adc520683cdd1a2a | add75c24ea4aacb5f4efa02670fcceee5d5276848586f724e90135da6d3e070e | 2194e626a40c14a5bc7b60563446a23330a06654d0b1e3804109ed792ca0c638 | null | [] | 72,189 |
2.4 | pan-insights-sdk | 0.2.1 | Python SDK for Palo Alto Networks Prisma Access Insights 3.0 API | # Prisma Access Insights SDK
Python SDK and CLI for querying the Palo Alto Networks Prisma Access Insights 3.0 API.
Query users, applications, sites, and security events from your Prisma Access deployment for reporting, monitoring, and analytics.
## Installation
### pip
```bash
pip install pan-insights-sdk
```
### Docker
```bash
docker build -t insights .
docker run --rm insights --help
```
### From source
```bash
git clone https://github.com/ancoleman/insights-sdk-cli.git
cd insights-sdk-cli
make dev
```
See [Installation Guide](docs/installation.md) for all options and CI/CD setup.
## Quick Start
### 1. Set Credentials
```bash
export SCM_CLIENT_ID=your-service-account@tsg.iam.panserviceaccount.com
export SCM_CLIENT_SECRET=your-secret
export SCM_TSG_ID=your-tsg-id
```
### 2. CLI Usage
```bash
insights test # Test connection
insights users list agent # List users (last 24h)
insights users count agent # Connected user count
insights apps list # List applications
insights sites traffic # Site traffic
insights --help # All commands
```
### 3. Python SDK
```python
from insights_sdk import InsightsClient
with InsightsClient(
client_id="your-client-id",
client_secret="your-secret",
tsg_id="your-tsg-id",
) as client:
users = client.get_agent_users(hours=24)
print(f"Found {len(users.get('data', []))} users")
```
## Documentation
| Guide | Description |
|-------|-------------|
| [Installation](docs/installation.md) | pip, Docker, source, and CI/CD setup |
| [CLI Reference](docs/cli-reference.md) | Complete command reference |
| [SDK Guide](docs/sdk-guide.md) | Python SDK usage and filtering |
## Command Groups
| Group | Description |
|-------|-------------|
| `insights users` | User queries (list, count, sessions, devices) |
| `insights apps` | Application queries |
| `insights sites` | Site queries |
| `insights security` | PAB security events |
| `insights monitoring` | Monitored user metrics |
| `insights accelerated` | Accelerated app metrics |
## Development
```bash
make help # Show all targets
make dev # Install with dev deps
make test # Run tests
make lint # Run linters
make format # Format code
make build # Build Docker image
```
## License
MIT
| text/markdown | Anton Coleman | null | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.28.0",
"pydantic>=2.12.0",
"typer>=0.20.0",
"rich>=14.2.0",
"python-dotenv>=1.2.0",
"pytest>=9.0.0; extra == \"dev\"",
"pytest-asyncio>=1.2.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"respx>=0.22.0; extra == \"dev\"",
"black>=25.9.0; extra == \"dev\"",
"mypy>=1.18.0; ... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:08:20.038228 | pan_insights_sdk-0.2.1.tar.gz | 39,675 | fd/70/62d2517e30bab3aed565dc7c4ff59c3aec6fb5179cca512eb3c7b380d542/pan_insights_sdk-0.2.1.tar.gz | source | sdist | null | false | 2f3a4edc3b246534828987f2c5d8604e | 157302fe49d20a783013d39bbbfe3c8201e7277ef49207fd09397b31df40efb0 | fd7062d2517e30bab3aed565dc7c4ff59c3aec6fb5179cca512eb3c7b380d542 | null | [
"LICENSE"
] | 246 |
2.4 | push-to-whisper | 0.1.1 | Yet another voice memo tool | # push-to-whisper
A smart voice memo tool aka **`push-to-stt-to-md-to-llm-to-clipboard-or-whatever`.**

### What you can do with push-to-whisper:
- **Record** audio while holding a global key combination.
- **Save** the recording as a `.wav` file (e.g., directly into your Obsidian vault).
- **Transcode** it into `.ogg` or other formats for efficiency (via ffmpeg).
- **Transcribe** it into Markdown using Whisper (Currently supports `whisper.cpp` server).
- **Refine** the text using LLM APIs like OpenAI, Gemini, or Ollama (via LiteLLM).
- Auto tagging, auto summarization, etc.
- **Copy** the result to your clipboard automatically.
- **Notify** success or send results to notification services like Slack, Discord, or Ntfy (via Apprise).
Every step above is modular. You can combine them to build your own custom workflow in a simple YAML configuration file.
## Installation
1. Install system dependencies:
```bash
# Debian/Ubuntu
sudo apt install libgirepository1.0-dev libcairo2-dev python3-dev ffmpeg
```
2. Install the package using `uv`:
```bash
uv tool install push-to-whisper
```
3. Install the systemd user service and generate a default config:
```bash
push-to-whisper install-daemon
```
## Configuration
The configuration file is located at `~/.config/push-to-whisper/config.yaml`. You can customize the Whisper endpoint, LLM API keys (LiteLLM), and processing pipelines.
To re-initialize or export the default configuration:
```bash
push-to-whisper init --bare -o ~/.config/push-to-whisper/config.yaml
```
## Usage
Once the daemon is installed via `install-daemon`, it will start automatically on login.
### Default Shortcuts
On Linux (KDE/GNOME), shortcuts are managed by the system. After running `install-daemon`, you can assign keys to the following actions in your system settings:
- **Transcription to Markdown**: (Recommended: `ALT+SHIFT+x`) - Transcribe -> Transcode -> Save Audio -> Save Markdown -> Notify.
- **Transcription to Clipboard**: (Recommended: `ALT+SHIFT+c`) - Transcribe -> Transcode -> Copy to Clipboard -> Notify.
*Note: Currently tested and supported only on Linux (Fedora) with KDE Plasma (Wayland). Native support for Windows and macOS is planned for future releases.*
## Development
- **Formatting**: `uv run ruff format .`
- **Linting**: `uv run ruff check . --fix`
- **Testing**: `uv run pytest`
## License
MIT
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.4.2",
"requests>=2.32.5",
"scipy>=1.17.0",
"sounddevice>=0.5.5",
"pydbus>=0.6.0",
"pydantic-settings>=2.0.0",
"pyyaml>=6.0.0",
"pygobject>=3.48.0",
"apprise>=1.9.7",
"litellm>=1.81.12",
"patch-ng>=1.19.0",
"jinja2>=3.1.6",
"platformdirs>=4.9.2",
"faster-whisper>=1.2.1",
"ruff>=... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T16:08:14.938793 | push_to_whisper-0.1.1-py3-none-any.whl | 32,866 | 0e/e9/9fa47caa31a8079c34b89f473e0e9c1b425fa2cede59236f0c32726f309a/push_to_whisper-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | de801ba71849ba951b7404b805917f30 | 99485bc1789ddad9620a778cd923acc62196bf7e91f9e3177a6e1a3fd56a1549 | 0ee99fa47caa31a8079c34b89f473e0e9c1b425fa2cede59236f0c32726f309a | null | [
"LICENSE"
] | 249 |
2.4 | odoo-addon-somconnexio | 16.0.1.3.1 | Customizations for Som Connexió ERP. | ##########################
SomConnexio - ERP System
##########################
.. |badge1| image:: https://codecov.io/gl/coopdevs/odoo-somconnexio/branch/master/graph/badge.svg?token=ZfxYjFpQBz
:alt: codecov
:target: https://codecov.io/gl/coopdevs/odoo-somconnexio
.. |badge2| image:: https://img.shields.io/badge/licence-AGPL--3-blue.png
:alt: License: AGPL-3
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
.. |badge3| image:: https://img.shields.io/badge/maturity-Mature-brightgreen.png
:alt: Mature
:target: https://odoo-community.org/page/development-status
|badge1| |badge2| |badge3|
This project provides an ERP system for `Som Connexio
<https://somosconexion.coop/>`_ telecommunication users cooperative.
**************
Installation
**************
This package requires Odoo v12.0 installed.
You can install this module using ``pip``:
.. code:: bash
$ pip install odoo-addon-somconnexio
More info in: https://pypi.org/project/odoo-addon-somconnexio/
*************
Development
*************
Configure local development environment
=======================================
First of all, to start development, we need to create a virtualenv in
our local machine to install the pre-commit dependencies. Using a
virtualenv with Python 3.7, we install the pre-commit hooks to execute
the linters (and in the future the formatter).
In your local environment, where you execute the ``git commit ...``
command, run:
#. Install ``pyenv``
.. code:: bash
curl https://pyenv.run | bash
2. Build the Python version
.. code:: bash
pyenv install 3.7.7
3. Create a virtualenv
.. code:: bash
pyenv virtualenv 3.7.7 odoo-somconnexio
4. Activate the virtualenv
.. code:: bash
pyenv activate odoo-somconnexio
5. Install dependencies
.. code:: bash
pip install pre-commit
6. Install pre-commit hooks
.. code:: bash
pyenv exec pre-commit install
Create development environment (LXC Container)
==============================================
Create the ``devenv`` container with the ``somconnexio`` module mounted
and provision it. Follow the `instructions
<https://gitlab.com/coopdevs/odoo-somconnexio-inventory#requirements>`_
in `odoo-somconnexio-inventory
<https://gitlab.com/coopdevs/odoo-somconnexio-inventory>`_.
Once created, we can stop or start our ``odoo-sc`` lxc container as
indicated here:
.. code:: bash
$ sudo systemctl start lxc@odoo-sc
$ sudo systemctl stop lxc@odoo-sc
To check our local lxc containers and their status, run:
.. code:: bash
$ sudo lxc-ls -f
Start the ODOO application
==========================
Enter to your local machine as the user ``odoo``, activate the python
environment first and run the odoo bin:
.. code:: bash
$ ssh odoo@odoo-sc.local
$ pyenv activate odoo
$ cd /opt/odoo
$ set -a && source /etc/default/odoo && set +a
$ ./odoo-bin -c /etc/odoo/odoo.conf -u somconnexio -d odoo --workers 0
To use the local somconnexio module (development version) instead of the
PyPI published one, you need to upgrade the `version in the manifest
<https://gitlab.com/coopdevs/odoo-somconnexio/-/blob/master/somconnexio/__manifest__.py#L3>`_
and then update the module with ``-u`` in the Odoo CLI.
Restart ODOO database from scratch
==================================
Enter to your local machine as the user ``odoo``, activate the python
environment first, drop the DB, and run the odoo bin to create it again:
.. code:: bash
$ ssh odoo@odoo-sc.local
$ pyenv activate odoo
$ dropdb odoo
$ cd /opt/odoo
$ ./odoo-bin -c /etc/odoo/odoo.conf -i somconnexio -d odoo --stop-after-init
Deploy branch
=============
For tests purposes, we might want to deploy a given branch (``BRANCH``)
into a server (staging), instead of publishing a new package release
just to test some fix or new feature.
To do so, we need to enter into the server with an authorized user
(``<USER>``), and then switch to ``odoo`` user to install with pip the
package version found in the git branch.
.. code:: bash
$ ssh <USER>@staging-odoo.somconnexio.coop
$ sudo su - odoo
$ cd /opt/odoo
$ pyenv activate odoo
$ pip install -e git+https://gitlab.com/coopdevs/odoo-somconnexio@<BRANCH>#egg=odoo12-addon-somconnexio\&subdirectory=setup/somconnexio
At this point we need to restart Odoo to load the new installed module
version.
.. code:: bash
$ sudo systemctl stop odoo
$ ./odoo-bin -c /etc/odoo/odoo.conf -u somconnexio -d odoo --stop-after-init --logfile /dev/stdout
$ sudo systemctl start odoo
To restart the odoo service it is better to stop it, execute odoo with
the upgrade (``-u``) option and start it again, rather that just
``restart`` it, in case there are changes in views within the deployed
branch.
Run tests
=========
You can run the tests with this command:
.. code:: bash
$ ./odoo-bin -c /etc/odoo/odoo.conf -u somconnexio -d odoo --stop-after-init --test-enable --workers 0
The company data is rewritten every module upgrade
Run tests with coverage
=======================
You can run the tests with a coverage report following the next steps:
#. Copy the `coveragerc
<https://github.com/coopdevs/maintainer-quality-tools/blob/master/cfg/.coveragerc>`_
file in your ``odoo`` base path (``/opt/odoo``) changing the
``include`` option to the ``somconnexio`` module path
(``/opt/odoo_modules/somconnexio/*``).
#. Go to ``/opt/odoo``
#. Run:
.. code:: bash
$ coverage run odoo-bin -c /etc/odoo/odoo.conf -u somconnexio -d odoo --stop-after-init --test-enable --workers 0 && coverage report --show-missing
Update CHANGELOG without running pipeline
=========================================
If you need to update the CHANGELOG but you don't need to wait for the
pipeline to end, you can put ``[skip ci]`` in your commit message and
the pipeline will be skipped. More info in
https://docs.gitlab.com/ee/ci/yaml/#skip-pipeline
**************
Contributors
**************
- ``Som Connexió SCCL <https://somconnexio.coop/>``
- Gerard Funonsas gerard.funosas@somconnexio.coop
- Borja Gimeno borja.gimeno@somconnexio.coop
- ``Coopdevs Treball SCCL <https://coopdevs.coop/>``
- Daniel Palomar daniel.palomar@coopdevs.org
- César López cesar.lopez@coopdevs.org
| text/x-rst | Som Connexió SCCL, Coopdevs Treball SCCL | null | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://coopdevs.org | null | >=3.10 | [] | [] | [] | [
"factory-boy",
"faker==9.3.1",
"mm-proxy-python-client==0.1.0",
"odoo-addon-account_asset_management<16.1dev,>=16.0dev",
"odoo-addon-account_banking_sepa_credit_transfer<16.1dev,>=16.0dev",
"odoo-addon-account_banking_sepa_direct_debit<16.1dev,>=16.0dev",
"odoo-addon-account_chart_update<16.1dev,>=16.0d... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-18T16:08:07.611892 | odoo_addon_somconnexio-16.0.1.3.1.tar.gz | 681,713 | de/16/2544c021ac0fdc3968818f76bd46ec01775b8e53b3e7370b97309a0fa4cd/odoo_addon_somconnexio-16.0.1.3.1.tar.gz | source | sdist | null | false | cac50eb4b4c44abedba89ba12b16694a | 10693a8b84814031f04bd55f8bbd20387c72580d94c639c248fe95e4874c9fa6 | de162544c021ac0fdc3968818f76bd46ec01775b8e53b3e7370b97309a0fa4cd | null | [] | 155 |
2.3 | owimetadatabase-preprocessor | 0.10.7 | Package for preprocessing data from owimetadatabase. | # Owimetadatabase preprocessor
[](https://pypi.org/project/owimetadatabase-preprocessor/)
[](https://pypi.org/project/owimetadatabase-preprocessor/)
[](https://github.com/OWI-Lab/owimetadatabase-preprocessor/blob/main/LICENSE)
[](https://github.com/OWI-Lab/owimetadatabase-preprocessor/actions/workflows/ci.yml)
[](https://github.com/OWI-Lab/owimetadatabase-preprocessor/actions/workflows/ci.yml)
[](https://github.com/OWI-Lab/owimetadatabase-preprocessor/issues)
[](https://doi.org/10.5281/zenodo.10620568)
Tools for preprocessing geometries from the metadatabase. Read the documentation [here](https://owi-lab.github.io/owimetadatabase-preprocessor/).
## Installation
In your desired virtual environment with Python 3 and pip installed:
``pip install owimetadatabase-preprocessor``
## Installation (alternative)
In your desired virtual environment and directory with Python 3 and pip installed:
``git clone <repo-github-address>``
``pip install <repo-local-name>``
## Installation (beta)
In case you want to try the latest beta-version (if it is more advanced than the latest stable one):
``pip install owimetadatabase-preprocessor --pre``
## Contributing
If you want to contribute to the development of the package, you can, in your desired virtual environment and directory with Python 3 and pip installed:
``git clone <repo-address>``
``pip install -e <repo-name>/[dev]``
This way, you will install all the required dependecies and the package itself in editable mode, i.e. all changes to it will be reflected immediately locally so it can be tested.
The repository also has ``.lock`` file if you use ``poetry``.
## Authors
`owi_preprocessor` was written by the team of OWI-lab.
## Acknowledgements
This package was developed as part of the ETF Smartlife (FOD165) and WILLOW (EUAR157) projects.
## License
The package is licensed under the [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html). | text/markdown | arsmlnkv | arsmlnkv <melnikov.arsene@gmail.com> | null | null | GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/licenses/why-not-lgpl.html>. | owimetadatabase | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: ... | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"filelock==3.19.1; python_full_version < \"3.10\"",
"filelock>=3.20.3; python_full_version >= \"3.10\"",
"fonttools==4.60.2; python_full_version < \"3.10\"",
"fonttools>=4.61.0; python_full_version >= \"3.10\"",
"plotly>=5.19.0",
"requests>=2.32.0",
"matplotlib==3.9.4; python_full_version < \"3.10\"",
... | [] | [] | [] | [
"homepage, https://owi-lab.github.io/owimetadatabase-preprocessor/",
"repository, https://github.com/OWI-Lab/owimetadatabase-preprocessor",
"documentation, https://owi-lab.github.io/owimetadatabase-preprocessor/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T16:07:53.560366 | owimetadatabase_preprocessor-0.10.7-py3-none-any.whl | 77,322 | af/41/b6168a31dbf04e2fbfaa36a72099abdb4db6048f54fc06916657c042e4a4/owimetadatabase_preprocessor-0.10.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 73b62d24561641bdd992e8431c4e1277 | f7c9be202ec2bec20d3d0363a09d5004ac55e2be52507c626096af00f8f25c36 | af41b6168a31dbf04e2fbfaa36a72099abdb4db6048f54fc06916657c042e4a4 | null | [] | 240 |
2.4 | brdr | 0.15.5 | BRDR - a Python library to assist in realigning (multi-)polygons (OGC Simple Features) to reference borders | # `brdr`
a Python library to assist in realigning (multi-)polygons (OGC Simple Features) to reference borders
<!-- badges: start -->
[](https://pypi.org/project/brdr/)
[](https://doi.org/10.5281/zenodo.11385644)
<!-- badges: end -->
Quick links:
- [Documentation & API Reference](https://onroerenderfgoed.github.io/brdr/)
- [Installation](#installation)
- [Development](#development)
- [Issues, questions, comments and contributions](#comments-and-contributions)
## Description
### Documentation/API Reference
[https://onroerenderfgoed.github.io/brdr/](https://onroerenderfgoed.github.io/brdr/)
### Intro
`brdr` is a Python package that assists in aligning geometric boundaries to reference boundaries. This is an important
task in geographic data management to enhance data quality.
* In the context of geographic data management, it is important to have accurate and consistent boundaries for a variety
of applications such as calculating areas, analyzing spatial relationships, and visualizing and querying geographic
information.
* When creating geographic data, it is often more efficient to derive boundaries from existing reference data rather
than collecting new data in the field.
* `brdr` can be used to align boundaries from new data to reference data, ensuring that the boundaries are accurate and
consistent.
### Example
The figure below shows:
* the original thematic geometry (blue),
* A reference layer (yellow-black).
* The resulting geometry after alignment with `brdr` (green)
<img src="docs/figures/example.png" width="50%">
In the animated gif below you can see the core of 'brdr' in action:
* The visualization on the left:
* the original thematic geometry (blue),
* A reference layer (yellow-black).
* The resulting geometry after alignment with ``brdr`` (green)**
* The graphic on the right:
* X-as: Relevant distance (~distance that change is allowed), that increases
* Y-as: Change (%) of the resulting geometry
**brdr** will 'detect' stable situations that result in one or more predictions.

### Functionalities
`brdr` provides a variety of functionalities in the Aligner-class to assist in aligning boundaries, including data-loaders, processors to make predictions and export-functionalities. Besides the generic functionalities, a range of Flanders-specific functionalities are provided.
The API reference and examples can be found at: [Documentation & API Reference](https://onroerenderfgoed.github.io/brdr/)
### Possible application fields
* Geodata-management:
* Implementation of `brdr` in business-processes and tooling
* Bulk geodata-alignment
* Alignment after reprojection of data
* Cleaning data: In a postprocessing-phase, the algorithm executes sliver-cleanup and validity-cleaning on the
resulting geometries
* Version management: visualise differences between versions of geodata
* ...
* Data-Analysis: Investigate the pattern in deviation and change between thematic and reference boundaries
* Update-detection: Investigate the descriptive formula before and after alignment to check for (automatic)
alignment of geodata
* ...
### QGIS-plugin
An implementation of `brdr` for QGIS can be found at [GitHub-brdrQ](https://github.com/OnroerendErfgoed/brdrQ/).
This QGIS-plugin provides a User Interface to align thematic data to a reference layer, showing the results in the QGIS
Table of Contents.
## Installation
You can install the latest release of `brdr` from
[GitHub](https://github.com/OnroerendErfgoed/brdr/) or
[PyPi](https://pypi.org/project/brdr/):
``` python
pip install brdr
```
## Basic example
``` python
from brdr.aligner import Aligner
from brdr.geometry_utils import geom_from_wkt
from brdr.loader import DictLoader
# CREATE AN ALIGNER
aligner = Aligner(
crs="EPSG:31370",
)
# ADD A THEMATIC POLYGON TO THEMATIC DICTIONARY and LOAD into Aligner
thematic_dict = {"theme_id_1": geom_from_wkt("POLYGON ((0 0, 0 9, 5 10, 10 0, 0 0))")}
loader = DictLoader(thematic_dict)
aligner.load_thematic_data(loader)
# ADD A REFERENCE POLYGON TO REFERENCE DICTIONARY and LOAD into Aligner
reference_dict = {"ref_id_1": geom_from_wkt("POLYGON ((0 1, 0 10,8 10,10 1,0 1))")}
loader = DictLoader(reference_dict)
aligner.load_reference_data(loader)
# EXECUTE THE ALIGNMENT
relevant_distance = 1
aligner_result = aligner.process(
relevant_distances=[relevant_distance],
)
process_results = aligner_result.get_results(aligner=aligner)
# PRINT RESULTS IN WKT
print("result: " + process_results["theme_id_1"][relevant_distance]["result"].wkt)
print(
"added area: "
+ process_results["theme_id_1"][relevant_distance]["result_diff_plus"].wkt
)
print(
"removed area: "
+ process_results["theme_id_1"][relevant_distance]["result_diff_min"].wkt
)
```
The resulting figure shows:
* the reference polygon (yellow-black)
* the original geometry (blue)
* the resulting geometry (green line)
* the added zone (green squares)
* the removed zone (red squares)
<img src="docs/figures/basic_example.png" width="100%" />
More examples can be found in [Examples](https://github.com/OnroerendErfgoed/brdr/tree/main/examples)
## Workflow
(see also Basic example)
To use `brdr`, follow these steps:
* Create a Aligner-class with specific parameters:
* relevant_distance (m) (default: 1): Distance-parameter used to decide which parts will be aligned, and which parts
remain unchanged.
* od_strategy (enum) (default: SNAP_SINGLE_SIDE): Strategy to align geodata that is not covered by reference-data
* threshold_overlap_percentage (%)(0-100) (default 50)
* crs: The Coordinate Reference System (CRS) (default: EPSG:31370 - Belgian Lambert72)
* Load thematic data
* Load reference data
* Process (align) the thematic data
* Results are returned:
* Resulting geometry
* Differences: parts that are 'different' from the original geometry (positive or negative)
* Positive differences: parts that are added to the original geometry
* Negative differences: parts that are removed form the original geometry
* Relevant intersections: relevant intersecting parts of the reference geometries
* Relevant differences: relevant differences of the reference geometries
## The `brdr`-algorithm
The algorithm for alignment is based on 2 main principles:
* Principle of intentionality: Thematic boundaries can consciously or unconsciously deviate from the reference borders.
The algorithm should keep notice of that.
* Selective spatial conservation of shape: The resulting geometry should re-use the shape of the reference borders where
aligned is of relevance.
The figure below shows a schematic overview of the algorithm:

The algorithm can be split into 3 main phases:
* Initialisation:
* Deciding which reference polygons are candidate-polygons to re-use its shape.
The reference candidate polygons are selected based on spatial intersection with the thematic geometry.
* Processing:
* Process all candidate-reference polygons one-by-one
* Calculate relevant zones for each candidate-reference-polygon
* relevant intersections: zones that must be present in the final result
* relevant differences: zones that must be excluded from the final result

* Evaluate each candidate based on their relative zones: which parts must be kept and which parts must be excluded

* Union all kept parts to recompose a resulting geometry
* Post-processing:
* Validation/correction of differences between the original input geometry and the composed intermediate resulting
geometry after processing the algorithm
* Technical validation of inner holes and multipolygons that are created by processing the algorithm
* Clean-up slivers
* Make the resulting geometry valid
RESULT:
A new resulting output geometry, aligned to the reference-polygons
## Development
### pip-compile
```sh
PIP_COMPILE_ARGS="-v --strip-extras --no-header --resolver=backtracking --no-emit-options --no-emit-find-links"
pip-compile $PIP_COMPILE_ARGS
pip-compile $PIP_COMPILE_ARGS -o requirements-dev.txt --all-extras
```
### tests
```python
python - m
pytest - -cov = brdr
tests / --cov - report
term - missing
```
### Docker
As an example-usage (proof-of-concept), a Dockerfile is created to set up a GRB-specific webservice that 'predicts' one
or multiple actual geometries for a input-geometry based on the reference source GRB.
This webservice is based on 'brdr'.
This POC can be found at [brdr-webservice (GRB-actualisator)](<https://github.com/dieuska/brdr-webservice>).
```bat
docker build -f Dockerfile . -t grb_webservice
docker run --rm -p 80:80 --name grb_webservice grb_webservice
example can be found at: http://localhost:80/docs#/default/actualiser_actualiser_post
```
## Motivation & citation
A more in-depth description of the algorithm can be found in the following article (in dutch):
- Dieussaert, K., Vanvinckenroye, M., Vermeyen, M., & Van Daele, K. (2024). Grenzen verleggen.
Automatische correcties van geografische afbakeningen op verschuivende
onderlagen *Onderzoeksrapporten Agentschap Onroerend Erfgoed*,
332. <https://doi.org/10.55465/SXCW6218>.
## Comments and contributions
We would love to hear from you and your experiences with
`brdr` or its sister project [`brdrQ`](https://github.com/OnroerendErfgoed/brdrQ).
The [discussions forum](https://github.com/OnroerendErfgoed/brdr/discussions/) is the place to be when:
- You have any questions on using `brdr` or `brdrQ` or their
applicability to your use cases
- Want to share your experiences with the library
- Have any suggestions for improvements or feature requests
If you have discovered a bug in the `brdr` library you can report it here:
<https://github.com/OnroerendErfgoed/brdr/issues>
We try to keep the list of issues as clean as possible. If
you're unsure whether something is a bug, or whether the bug is in `brdr`
or `brdrQ`, we encourage you to go through the [discussions forum](https://github.com/OnroerendErfgoed/brdr/discussions)
first.
## Acknowledgement
This software was created by [Athumi](https://athumi.be/en/), the Flemish data utility company,
and [Flanders Heritage Agency](https://www.onroerenderfgoed.be/flanders-heritage-agency).


| text/markdown | null | Karel Dieussaert <karel.dieussaert@vlaanderen.be>, Emrys Roef <emrys.roef@vlaanderen.be> | null | Emrys Roef <emrys.roef@vlaanderen.be>, Koen Van Daele <koen.vandaele@vlaanderen.be>, Vermeyen Maarten <maarten.vermeyen@vlaanderen.be> | MIT License
Copyright (c) 2024 Onroerend Erfgoed
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientifi... | [] | null | null | >=3.10 | [] | [] | [] | [
"geojson~=3.2",
"geopandas>=1.1.2,~=1.1",
"networkx~=3.6",
"osmnx~=2.0",
"pyproj~=3.7",
"requests~=2.32",
"shapely~=2.1",
"topojson~=1.10",
"black~=25.12; extra == \"dev\"",
"flake8==7.1.1; extra == \"dev\"",
"hatchling~=1.28; extra == \"dev\"",
"matplotlib~=3.10; extra == \"dev\"",
"mypy~=1... | [] | [] | [] | [
"Documentation, https://github.com/OnroerendErfgoed/brdr/blob/main/README.md",
"Repository, https://github.com/OnroerendErfgoed/brdr",
"Issues, https://github.com/OnroerendErfgoed/brdr/issues"
] | Hatch/1.16.2 cpython/3.13.7 HTTPX/0.28.1 | 2026-02-18T16:07:41.705203 | brdr-0.15.5-py3-none-any.whl | 119,701 | 00/70/f3049b5918f13f4b166fc043b3d39eb7c5ddd7a591aa001a7b3e356efb1c/brdr-0.15.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 82b4914fe2688f809541b0333e2b3843 | b4abbfb0508077cdcea9d3fad9b6ab4b0880eaa7077d8313cb0c6da6ec5b27a5 | 0070f3049b5918f13f4b166fc043b3d39eb7c5ddd7a591aa001a7b3e356efb1c | null | [
"LICENSE"
] | 269 |
2.4 | anime-parsers-ru | 1.13.3 | Python package for parsing russian anime players | # AnimeParsers
[
]() [
]()
## Описание
Данный проект нацелен на создание наиболее широкого спектра парсеров на python для различных аниме-плееров в русскоязычном/снг сегменте
Актуальная стабильная версия доступна на [pypi](https://pypi.org/project/anime-parsers-ru/) или в [релизах](https://github.com/YaNesyTortiK/AnimeParsers/releases) на гитхабе
## Что есть на данный момент
- [x] Парсер Kodik (__требуется api ключ__)
- [x] Асинхронный парсер Kodik
- [x] Парсер AniBoom (на основе animego, не требует api ключей)
- [x] Асинхронный парсер Aniboom
- [ ] Парсер JutSu (без функции поиска, не требует api ключей) (Сервис заблокирован ркн)
- [x] Парсер Shikimori (с возможностью использовать псевдо-api, не требует api ключей)
- [x] Асинхронный парсер Shikimori
## Установка
- Стандартная установка:
```commandline
pip install anime-parsers-ru
```
- Установка с lxml:
```commandline
pip install anime-parsers-ru[lxml]
```
Для использования lxml при инициализации парсера установите параметр `use_lxml = True`
- Установка с асинхронными библиотеками (без lxml):
```commandline
pip install anime-parsers-ru[async]
```
Установка lxml вручную:
```commandline
pip install lxml
```
# Инструкция к парсерам
## Оглавление
- [Kodik инструкция](#kodik-инструкция)
- [AniBoom инструкция](#aniboom-инструкция)
- [JutSu инструкция](#jutsu-инструкция)
- [Shikimori инструкция](#shikimori-инструкция)
- [Типы Исключений](#типы-исключений)
## Kodik инструкция
> [!IMPORTANT]
> Если вы хотите использовать функции библиотеки для апи кодика, то вся документация расположена в файле [KODIK_API.md](KODIK_API.md)
> [!WARNING]
> Токен получаемый с помощью функции `get_token` НЕ работает для функций base_search, base_search_by_id, get_list и search
> По умолчанию данная функция не используется и класс требует от пользователя указать корректный токен.
> Если вы хотите использовать ограниченный функционал библиотеки, то можете при инициализации указать параметры
> `token=KodikParser.get_token(), validate_token=False` (Для асинхронного параметр `token=KodikParserAsync.get_token_sync()`)
> [!TIP]
> В большинстве случаев в комментариях к функциям описаны шаблоны и возможные значения возвращаемых данных
0. Установите и импортируйте библиотеку
Стандартно:
```commandline
pip install anime-parsers-ru
```
С lxml:
```commandline
pip install anime-parsers-ru[lxml]
```
```python
from anime_parsers_ru import KodikParser
parser = KodikParser(<ваш api ключ>)
```
__Для асинхронного кода__:
```commandline
pip install anime-parsers-ru[async]
```
(Установка без lxml)
```python
from anime_parsers_ru import KodikParserAsync
parser = KodikParserAsync(<ваш api ключ>)
```
1. Поиск аниме по названию
```python
parser.search(title="Наруто", limit=None, include_material_data=True, anime_status=None, strict=False, only_anime=False) # список словарей
# title - Название аниме/фильма/сериала
# limit - количество результатов выдачи (int) (результатов будет сильно меньше чем указанное число, так как в выдаче результаты повторяются)
# include_material_data - Добавлять дополнительные данные об элементе
# anime_status - Статус выхода аниме (доступно: released, ongoing, None - если ищется не аниме или любой статус)
# strict - Исключение названий далеких от оригинального
# only_anime - возвращать только элементы где type in ['anime', 'anime-serial']
```
Возвращает:
```json
[
{
"title": "Название",
"type": "тип мультимедиа (anime, film, ...)",
"year": "Год выпуска фильма",
"screenshots": [
"ссылки на скриншоты"
],
"shikimori_id": "Id шикимори, если нет - None",
"kinopoisk_id": "Id кинопоиска, если нет - None",
"imdb_id": "Id imdb, если нет - None",
"worldart_link": "ссылка на worldart, если нет - None",
"additional_data": {
"Здесь будут находится все остальные данные выданные кодиком, не связанные с отдельным переводом"
},
"material_data": {
"Здесь будут все данные о сериале имеющиеся у кодика. (None если указан параметр include_material_data=False)
В том числе оценки на шикимори, статус выхода, даты анонсов, выхода, все возможные названия, жанры, студии и многое другое."
},
"link": "ссылка на kodik.info (Пример: //kodik.info/video/20609/e8fd5bc1190b7eb1ee1a3e1c3aec5f62/720p)"
},
]
```
2. Поиск аниме по id
```python
parser.search_by_id(id="20", id_type="shikimori", limit=None)
# id - id аниме на одном из сайтов
# id_type - с какого сайта id (Поддерживается: shikimori, kinopoisk, imdb)
# limit - количество результатов выдачи (int) (результатов будет сильно меньше чем указанное число, так как в выдаче результаты повторяются)
```
Возвращает:
```json
[
{
"title": "Название",
"type": "тип мультимедиа (anime, film, ...)",
"year": "Год выпуска фильма",
"screenshots": [
"ссылки на скриншоты"
],
"shikimori_id": "Id шикимори, если нет - None",
"kinopoisk_id": "Id кинопоиска, если нет - None",
"imdb_id": "Id imdb, если нет - None",
"worldart_link": "ссылка на worldart, если нет - None",
"additional_data": {
"Здесь будут находится все остальные данные выданные кодиком, не связанные с отдельным переводом"
},
"material_data": {
"Здесь будут все данные о сериале имеющиеся у кодика. (None если указан параметр include_material_data=False)
В том числе оценки на шикимори, статус выхода, даты анонсов, выхода, все возможные названия, жанры, студии и многое другое."
},
"link": "ссылка на kodik.info (Пример: //kodik.info/video/20609/e8fd5bc1190b7eb1ee1a3e1c3aec5f62/720p)"
},
]
```
3. Получить список аниме
```python
data = parser.get_list(limit_per_page=50, pages_to_parse=1, include_material_data=True, anime_status=None, only_anime=False, start_from=None)
# limit_per_page - количество результатов на одной странице (итоговых результатов будет сильно меньше чем указан параметр)
# pages_to_parse - количество страниц для обработки (каждая страница - отдельный запрос)
# include_material_data - включить в результат дополнительные данные
# anime_status - Статус выхода аниме (доступно: released, ongoing, None - если ищется не аниме или любой статус)
# only_anime - возвращать только элементы где type in ['anime', 'anime-serial']
# start_from - начать поиск со страницы под id (id возвращается вторым элементом результата функции)
```
Возвращает:
```json
(
[
{
"title": "Название",
"type": "тип мультимедиа (anime, film, ...)",
"year": "Год выпуска фильма",
"screenshots": [
"ссылки на скриншоты"
],
"shikimori_id": "Id шикимори, если нет - None",
"kinopoisk_id": "Id кинопоиска, если нет - None",
"imdb_id": "Id imdb, если нет - None",
"worldart_link": "ссылка на worldart, если нет - None",
"additional_data": {
"Здесь будут находится все остальные данные выданные кодиком, не связанные с отдельным переводом"
},
"material_data": {
"Здесь будут все данные о сериале имеющиеся у кодика. (None если указан параметр include_material_data=False)
В том числе оценки на шикимори, статус выхода, даты анонсов, выхода, все возможные названия, жанры, студии и многое другое."
},
"link": "ссылка на kodik.info (Пример: //kodik.info/video/20609/e8fd5bc1190b7eb1ee1a3e1c3aec5f62/720p)"
},
],
"next_page_id": "id следующей страницы (для последовательного парсинга нескольких страниц) (может быть None, если след. страниц нет)"
)
```
4. Получить информацию об аниме
```python
parser.get_info(id="z20", id_type="shikimori")
# id - id аниме на одном из сайтов
# id_type - с какого сайта id (Поддерживается: shikimori, kinopoisk, imdb)
```
Возвращает:
```json
{
"series_count": 220,
"translations": [
{"id": "735", "type": "Озвучка", "name": "2x2 (220 эп.)"},
{"id": "609", "type": "Озвучка", "name": "AniDUB (220 эп.)"},
{"id": "869", "type": "Субтитры", "name": "Субтитры (220 эп.)"},
{"id": "958", "type": "Озвучка", "name": "AniRise (135 эп.)"},
{"id": "2550", "type": "Озвучка", "name": "ANI.OMNIA (8 эп.)"}
]
}
```
- Получить отдельно кол-во серий:
```python
parser.series_count("z20", "shikimori") # число
```
- Получить отдельно переводы:
```python
parser.translations("z20", "shikimori") # список словарей
```
5. Прямая ссылка на видеофайл
```python
parser.get_link(
id="z20",
id_type="shikimori",
seria_num=1,
translation_id="609") # Кортеж
# id - id медиа
# id_type - тип id (возможные: shikimori, kinopoisk, imdb)
# seria_num - номер серии (если фильм или одно видео - 0)
# translation_id - id перевода (прим: Anilibria = 610, если неизвестно - 0)
```
Возвращает кортеж: `("//cloud.kodik-storage.com/useruploads/67b6e546-e51d-43d2-bb11-4d8bfbedc2d7/d6f4716bc90bd30694cf09b0062d07a2:2024062705/", 720)`
1. Ссылка
Пример: `//cloud.kodik-storage.com/useruploads/67b6e546-e51d-43d2-bb11-4d8bfbedc2d7/d6f4716bc90bd30694cf09b0062d07a2:2024062705/`
К данной ссылке в начале нужно добавить `http:` или `https:`, а в конце качество.mp4 (`720.mp4`) (Обычно доступны следующие варианты качества: `360`, `480`, `720`)
2. Максимально возможное качество
Прим: `720` (1280x720)
6. Ссылка на m3u8 плейлист
```python
parser.get_m3u8_playlist_link(
id="z20",
id_type="shikimori",
seria_num=1,
translation_id="609",
quality=480) # Для "Наруто" нет 720p, хотя сервер и возвращает 720 в списке источников
# id - id медиа
# id_type - тип id (возможные: shikimori, kinopoisk, imdb)
# seria_num - номер серии (если фильм или одно видео - 0)
# translation_id - id перевода (прим: Anilibria = 610, если неизвестно - 0)
# quality - Желаемое качество (360, 480, 720). Если указанное качество будет больше, чем максимально доступное, вернется ссылка с максимально доступным качеством. По умолчанию: 720
```
Возвращает строку вида:
`https://cloud.kodik-storage.com/.../.../720.mp4:hls:manifest.m3u8`
7. Текстовое содержание m3u8 плейлиста
```python
parser.get_m3u8_playlist(
id="z20",
id_type="shikimori",
seria_num=1,
translation_id="609",
quality=480) # Для "Наруто" нет 720p, хотя сервер и возвращает 720 в списке источников
# id - id медиа
# id_type - тип id (возможные: shikimori, kinopoisk, imdb)
# seria_num - номер серии (если фильм или одно видео - 0)
# translation_id - id перевода (прим: Anilibria = 610, если неизвестно - 0)
# quality - Желаемое качество (360, 480, 720). Если указанное качество будет больше, чем максимально доступное, вернется ссылка с максимально доступным качеством. По умолчанию: 720
```
Возвращает строку вида:
```
#EXTM3U
#EXT-X-TARGETDURATION:6
#EXT-X-ALLOW-CACHE:YES
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:1
#EXTINF:6.000,
https://.../480.mp4:hls:seg-1-v1-a1.ts
#EXTINF:6.000,
https://.../480.mp4:hls:seg-2-v1-a1.ts
#EXTINF:6.000,
https://.../480.mp4:hls:seg-3-v1-a1.ts
```
Ключевое отличие от плейлиста который скачивается просто по ссылке (полученной в п.6) в том, что данная функция добавляет полную ссылку до сегментов. В изначальном файле ссылки содержатся в виде `./480.mp4...` что будет работать если ссылка открыта, например, в браузере, но не будет работать если файл открыт локально. С добавлением полной ссылки можно сохранить файл локально и запускать любым плеером который поддерживает m3u8 плейлисты (например VLC).
> [!IMPORTANT]
> В случае, если аниме является фильмом или содержит только одну серию, в параметр `seria_num` указывается значение `0`. В случае если перевод/субтитры неизвестны или нет выбора, в параметр `translation_id` указывается значение `"0"`
6. Прямое обращение к апи кодика
Рекомендуется использовать модули KodikSearch и KodikList для обращения к апи.
```python
parser.api_request (
endpoint="list",
filters={
"limit": 5
},
parameters={
"with_episodes_data": True
}
)
# endpoint - ссылка куда направляется запрос (доступно: "search", "list", "translations")
# filters - фильтры запроса
# parameters - дополнительные параметры (для удобства можно их записывать в один словарь с фильтрами)
```
Возвращает необработанный ответ от сервера кодика.
Для подробного списка фильтров, параметров и примеров смотрите [инструкцию](KODIK_API.md).
7. Получить токен
```python
parser.get_token() # строка
# Или
KodikParser.get_token()
```
Использует один из скриптов кодика в котором указан api ключ, поэтому может не работать из-за внесенных изменений
## AniBoom инструкция
0. Установите и импортируйте библиотеку
```commandline
pip install anime-parsers-ru
```
```python
from anime_parsers_ru import AniboomParser
parser = AniboomParser()
# Если вы знаете что есть актуальное зеркало сайта, можете указать его домен в параметре `mirror` при инициализации класса
```
__Для асинхронного кода__:
```commandline
pip install anime-parsers-ru[async]
```
```python
from anime_parsers_ru import AniboomParserAsync
parser = AniboomParserAsync()
# Далее перед всеми функциями дополнительно нужно прописывать await
# Если вы знаете что есть актуальное зеркало сайта, можете указать его домен в параметре `mirror` при инициализации класса
```
1. Поиск по названию
1. Быстрый поиск
```python
parser.fast_search("Название аниме")
```
Возвращает список из словарей в виде:
```json
[
{
"title": "Название аниме",
"year": "Год выпуска",
"other_title": "Другое название(оригинальное название)",
"type": "Тип аниме (ТВ сериал, фильм, ...)",
"link": "Ссылка на страницу с информацией",
"animego_id": "id на анимего (по сути в ссылке на страницу с информацией последняя цифра и есть id)"
},
]
```
2. Поиск с дополнительной информацией / Расширенный поиск
```python
parser.search("Название аниме")
```
Возвращает список из словарей:
```json
[
{
"title": "Название",
"other_titles": ["Альтернативное название 1", "..."],
"status": "Статус аниме (онгоинг, анонс, вышел, ...)",
"type": "Тип аниме (ТВ сериал, фильм, ...)",
"genres": ["Жанр1", "Жанр2", "..."],
"description": "описание",
"episodes": "если аниме вышло, то количество серий, если еще идет, то 'вышло / всего'",
"episodes_info": [
{
"num": "Номер эпизода",
"title": "Название эпизода",
"date": "Даты выхода (предполагаемые если анонс)",
"status": "'вышло' или 'анонс' (Имеется в виду вышло в оригинале, не переведено)",
},
],
"translations": [
{
"name": "Название студии",
"translation_id": "id перевода в плеере aniboom"
},
],
"poster_url": "Ссылка на постер аниме",
"trailer": "Ссылка на ютуб embed трейлер",
"screenshots": [
"Список ссылок на скриншоты"
],
"other_info": {
// Данная информация может меняться в зависимости от типа или состояния тайтла
"Возрастные ограничения": "(прим: 16+)",
"Выпуск": "(прим: с 2 апреля 2024)",
"Главные герои": ["Список главных героев"],
"Длительность": "(прим: 23 мин. ~ серия)",
"Первоисточник": "(прим: Легкая новела)",
"Рейтинг MPAA": "(прим: PG-13)",
"Сезон": "(прим. Весна 2024)",
"Снят по ранобэ": "название ранобэ (Или так же может быть 'Снят по манге')",
"Студия": "название студии"
},
"link": "Ссылка на страницу с информацией",
"animego_id": "id на анимего (по сути в ссылке на страницу с информацией последняя цифра и есть id)"
},
]
```
2. Данные по эпизодам. Если в аниме 1 эпизод или это фильм, то данных по эпизодам может не быть.
```python
parser.episodes_info('ссылка на страницу аниме на animego.org') # Ссылка доступна из поиска по ключу 'link'
```
Возвращает отсортированный по номеру серии список:
```json
[
{
"num": "Номер эпизода",
"title": "Название эпизода",
"date": "Даты выхода (предполагаемые если анонс)",
"status": "'вышло' или 'анонс' (Имеется в виду вышло в оригинале, не переведено)"
},
]
```
3. Данные по аниме (как в полном/расширенном поиске)
```python
parser.anime_info('ссылка на страницу аниме на animego.org') # Ссылка доступна из поиска по ключу 'link'
```
Возвращает словарь:
```json
{
"title": "Название",
"other_titles": ["Альтернативное название 1", "..."],
"status": "Статус аниме (онгоинг, анонс, вышел, ...)",
"type": "Тип аниме (ТВ сериал, фильм, ...)",
"genres": ["Жанр1", "Жанр2", "..."],
"description": "описание",
"episodes": "если аниме вышло, то количество серий, если еще идет, то 'вышло / всего'",
"episodes_info": [
{
"num": "Номер эпизода",
"title": "Название эпизода",
"date": "Даты выхода (предполагаемые если анонс)",
"status": "'вышло' или 'анонс' (Имеется в виду вышло в оригинале, не переведено)",
},
],
"translations": [
{
"name": "Название студии",
"translation_id": "id перевода в плеере aniboom"
},
],
"poster_url": "Ссылка на постер аниме",
"trailer": "Ссылка на ютуб embed трейлер",
"screenshots": [
"Список ссылок на скриншоты"
],
"other_info": {
// Данная информация может меняться в зависимости от типа или состояния тайтла
"Возрастные ограничения": "(прим: 16+)",
"Выпуск": "(прим: с 2 апреля 2024)",
"Главные герои": ["Список главных героев"],
"Длительность": "(прим: 23 мин. ~ серия)",
"Первоисточник": "(прим: Легкая новела)",
"Рейтинг MPAA": "(прим: PG-13)",
"Сезон": "(прим. Весна 2024)",
"Снят по ранобэ": "название ранобэ (Или так же может быть 'Снят по манге')",
"Студия": "название студии"
},
"link": "Ссылка на страницу с информацией",
"animego_id": "id на анимего (по сути в ссылке на страницу с информацией последняя цифра и есть id)"
},
```
4. Данные по переводам (которые есть в плеере aniboom)
```python
parser.get_translation_info('animego_id') # Ссылка доступна из поиска по ключу 'animego_id'
```
Возвращает список словарей:
```json
[
{
"name": "Название студии озвучки",
"translation_id": "id перевода в плеере aniboom"
}
]
```
5. Получить контент файла mpd (mp4 файл разбитый на чанки) в виде строки. При сохранении данной строки в .mpd файл и при открытии его плеером, который поддерживает такой формат (прим: VLC PLayer), можно смотреть серию без рекламы. Обратите внимание, что в данном файле находятся именно ссылки на чанки, а не само видео, поэтому потребуется доступ в интернет. (Вы можете использовать ffmpeg для конвертации этого файла в mp4 формат)
```python
parser.get_mpd_playlist('animego_id', 'episode_num', 'translation_id')
# animego_id можно найти в результате поиска по ключу 'animego_id' (либо взять последние цифры в ссылке на страницу аниме на animego.org)
# episode_num - номер вышедшего эпизода (нужно чтобы эпизод вышел именно с выбранной озвучкой)
# translation_id - id перевода в базе aniboom (Можно найти либо в результате поиска, либо через anime_info, либо через get_translation_info)
```
Возвращает строку - контент mpd файла
> [!IMPORTANT]
> В случае, если аниме является фильмом или содержит только одну серию, в параметр `episode_num` указывается значение `0`.
6. Сохранить mpd файл (Дополняет предыдущую функцию get_mpd_playlist)
```python
parser.get_as_file('animego_id', 'episode_num', 'translation_id', 'filename')
# animego_id можно найти в результате поиска по ключу 'animego_id' (либо взять последние цифры в ссылке на страницу аниме на animego.org)
# episode_num - номер вышедшего эпизода (нужно чтобы эпизод вышел именно с выбранной озвучкой)
# translation_id - id перевода в базе aniboom (Можно найти либо в результате поиска, либо через anime_info, либо через get_translation_info)
# filename - имя файла или путь
```
Сохраняет файл по указанному имени/пути
> [!IMPORTANT]
> В случае, если аниме является фильмом или содержит только одну серию, в параметр `episode_num` указывается значение `0`.
## JutSu инструкция
0. Установите и импортируйте библиотеку
```commandline
pip install anime-parsers-ru
```
```python
from anime_parsers_ru import JutsuParser
parser = JutsuParser()
# Если вы знаете что есть актуальное зеркало сайта, можете указать его домен в параметре `mirror` при инициализации класса
```
1. Данные по аниме (по ссылке на страницу)
```python
parser.get_anime_info("Ссылка на страницу")
# Пример ссылки: https://jut.su/tondemo-skill/
# Для аниме: Кулинарные скитания в параллельном мире
```
Возвращает словарь:
```json
{
"title": "Название аниме",
"origin_title": "Оригинальное название (транслит японского названия на английском)",
"age_rating": "Возрастное ограничение",
"description": "Описание",
"years": ["Год выхода 1 сезона", "Год выхода 2 сезона"],
"genres": ["Жанр 1", "Жанр 2"],
"poster": "Ссылка на картинку (плохое качество)",
"seasons": [
[ // 1 сезон будет обязательно, даже если у аниме нет других сезонов
"ссылка на 1 серию 1 сезона (страница с плеером)",
"ссылка на 2 серию 1 сезона (страница с плеером)"
],
[ // 2 сезон если есть
"ссылка на 1 серию 2 сезона (страница с плеером)",
"ссылка на 2 серию 2 сезона (страница с плеером)"
],
],
"seasons_names": [ // Если у аниме только 1 сезон, этот список будет пустым
"Название 1 сезона",
"Название 2 сезона"
],
"films": [ // Если фильмов нет - список пустой
"Ссылка на фильм 1 (страница с плеером)",
"Ссылка на фильм 2 (страница с плеером)",
]
}
```
2. Получить ссылку на mp4 файл
```python
parser.get_mp4_link('ссылка на страницу с плеером')
# Пример ссылки: https://jut.su/tondemo-skill/episode-1.html
# Еще пример ссылки: https://jut.su/ookami-to-koshinryou/season-1/episode-1.html
```
Возвращает словарь:
```json
{
"360": "ссылка на mp4 файл с качеством 360p",
}
```
> [!IMPORTANT]
> Для разных аниме разное количество доступных качеств плеера. (Например для "Наруто" доступно только 360 и 480, для большинства новых аниме доступно качество до 1080)
> Также jutsu не позволяет выбрать озвучку для аниме.
> [!NOTE]
> Для jutsu нет функции поиска, потому что он использует поиск яндекса по сайту и из-за того что он "умный" он может работать абсолютно непредсказуемо.
> В качестве "поиска" вы можете использовать оригинальное название аниме. Так как ссылка формируется по следующей схеме:
> Название аниме: Волчица и пряности
> Оригинальное название: Ookami to Koushinryou
> Ссылка на страницу: https://jut.su/ookami-to-koshinryou/
## Shikimori инструкция
0. Установите и импортируйте библиотеку
```commandline
pip install anime-parsers-ru
```
```python
from anime_parsers_ru import ShikimoriParser
parser = ShikimoriParser()
# Если вы знаете что есть актуальное зеркало сайта, можете указать его домен в параметре `mirror` при инициализации класса
```
__Для асинхронного кода__:
```commandline
pip install anime-parsers-ru[async]
```
```python
from anime_parsers_ru import ShikimoriParserAsync
parser = ShikimoriParserAsync()
# Далее перед всеми функциями дополнительно нужно прописывать await
# Если вы знаете что есть актуальное зеркало сайта, можете указать его домен в параметре `mirror` при инициализации класса
```
> [!NOTE]
> Шикимори ограничивает частоту запросов на сервер.
> Если шикимори возвращает код ответа 520, парсер вернет exception TooManyRequests.
> Для избежания этой ошибки делайте задержку 1-3 секунды между запросами.
1. Поиск аниме по названию
```python
parser.search('Название аниме')
```
Возвращает список словарей:
```json
[
{
"genres": ["Жанр1", "Жанр2"],
"link": "Ссылка на страницу аниме",
"original_title": "Оригинальное название (транслит японского названия на английском)",
"poster": "Ссылка на постер к аниме (плохое качество) (если есть, иначе None)",
"shikimori_id": "id шикимори",
"status": "статус (вышло, онгоинг, анонс) (если есть, иначе None)",
"studio": "студия анимации (если есть, иначе None)",
"title": "Название",
"type": "тип аниме (TV сериал, OVA, ONA, ...) (если есть, иначе None)",
"year": "год выхода (если есть, иначе None)"
}
]
```
2. Информация об аниме
```python
parser.anime_info('shikimori_link')
# Ссылку на шикимори можно получить с помощью функции
# parser.link_by_id
```
Возвращает словарь:
```json
{
"dates": "Даты выхода",
"description": "Описание",
"episode_duration": "Средняя продолжительность серии",
"episodes": "Количество эпизодов если статус 'вышло' или 'вышедших эпизодов / анонсировано эпизодов' или None (если фильм)",
"genres": ["Жанр1", "Жанр2"],
"licensed": "Кто лицензировал в РФ или None",
"licensed_in_ru": "Название аниме как лицензировано в РФ или None",
"next_episode": "Дата выхода следующего эпизода или None",
"original_title": "Оригинальное название",
"picture": "Ссылка на jpeg постер",
"premiere_in_ru": "Дата премьеры в РФ или None",
"rating": "возрастной рейтинг",
"score": "оценка на шикимори",
"status": "статус выхода",
"studio": "студия анимации",
"themes": ["Тема1", "Тема2"],
"title": "Название на русском",
"type": "тип аниме (TV Сериал, Фильм, т.п.)"
}
```
3. Дополнительная информация об а | text/markdown | null | YaNesyTortiK <ya.nesy.tortik.email@gmail.com> | null | YaNesyTortiK <ya.nesy.tortik.email@gmail.com> | null | anime, parser, kodik, parsing, aniboom, animego, jutsu, shikimori, kodikapi, kodik api, аниме, парсинг, кодик, парсер, анибум, анимего, джутсу, шикимори, кодик апи | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"beautifulsoup4>=4.12",
"requests>=2.32",
"aiohttp>=3.9.5; extra == \"async\"",
"lxml>=5.2; extra == \"lxml\""
] | [] | [] | [] | [
"Homepage, https://github.com/YaNesyTortiK/AnimeParsers",
"Issues, https://github.com/YaNesyTortiK/AnimeParsers/issues"
] | twine/6.2.0 CPython/3.12.5 | 2026-02-18T16:07:38.369518 | anime_parsers_ru-1.13.3.tar.gz | 106,615 | 2d/90/c1ea17cbfafeebaabc6e56192909e9a2a9887bfb14ea697d4c998074b186/anime_parsers_ru-1.13.3.tar.gz | source | sdist | null | false | 2662d65a548fe0e3403e660c8d0ede02 | 1d0cce11d5b025f6f0cde33317d9d3b4405a04ca54e42533b71e1e6ee346c967 | 2d90c1ea17cbfafeebaabc6e56192909e9a2a9887bfb14ea697d4c998074b186 | null | [
"LICENSE"
] | 298 |
2.4 | rnet | 3.0.0rc21 | An ergonomic Python HTTP client with TLS fingerprint | # rnet
[](https://github.com/0x676e67/rnet/actions/workflows/ci.yml)


[](https://pypi.org/project/rnet/)
[](https://pepy.tech/projects/rnet)
> 🚀 Help me work seamlessly with open source sharing by [sponsoring me on GitHub](https://github.com/0x676e67/0x676e67/blob/main/SPONSOR.md)
An ergonomic and modular Python HTTP client for advanced and low-level emulation, featuring customizable TLS, JA3/JA4, and HTTP/2 fingerprinting capabilities, powered by [wreq](https://github.com/0x676e67/wreq).
## Features
- Async and Blocking `Client`s
- Plain bodies, JSON, urlencoded, multipart
- HTTP Trailer
- Cookie Store
- Redirect Policy
- Original Header
- Rotating Proxies
- Connection Pooling
- Streaming Transfers
- Zero-Copy Transfers
- WebSocket Upgrade
- HTTPS via BoringSSL
- Free-Threaded Safety
- Automatic Decompression
- Certificate Store (CAs & mTLS)
## Example
The following example uses the `asyncio` runtime with `rnet` installed via pip:
```bash
pip install asyncio rnet --pre --upgrade
```
And then the code:
```python
import asyncio
from rnet import Client, Emulation
async def main():
# Build a client
client = Client(emulation=Emulation.Safari26)
# Use the API you're already familiar with
resp = await client.get("https://tls.peet.ws/api/all")
print(await resp.text())
if __name__ == "__main__":
asyncio.run(main())
```
Additional learning resources include:
- [DeepWiki](https://deepwiki.com/0x676e67/rnet)
- [Documentation](https://rnet.readthedocs.io/)
- [Synchronous Examples](https://github.com/0x676e67/rnet/tree/main/python/examples/blocking)
- [Asynchronous Examples](https://github.com/0x676e67/rnet/tree/main/python/examples)
## Behavior
1. **HTTP/2 over TLS**
Due to the complexity of TLS encryption and the widespread adoption of HTTP/2, browser fingerprints such as **JA3**, **JA4**, and **Akamai** cannot be reliably emulated using simple fingerprint strings. Instead of parsing and emulating these string-based fingerprints, `rnet` provides fine-grained control over TLS and HTTP/2 extensions and settings for precise browser behavior emulation.
2. **Device Emulation**
TLS and HTTP/2 fingerprints are often identical across various browser models because these underlying protocols evolve slower than browser release cycles. In most cases, the `User-Agent` version is the only variable. Detailed mapping is available in the [documentation](https://rnet.readthedocs.io/).
## Building
1. Platforms
- Linux(**glibc**/**musl**): `x86_64`, `aarch64`, `armv7`, `i686`
- macOS: `x86_64`,`aarch64`
- Windows: `x86_64`,`i686`,`aarch64`
- Android: `aarch64`, `x86_64`
2. Development
Install the BoringSSL build environment by referring to [boringssl](https://github.com/google/boringssl/blob/main/BUILDING.md).
```bash
# on ubuntu or debian
sudo apt install -y build-essential cmake perl pkg-config libclang-dev musl-tools git
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
pip install uv maturin
uv venv
source .venv/bin/activate
# development
maturin develop --uv
# build wheels
maturin build --release
```
## Benchmark
Outperforms `requests`, `httpx`, `aiohttp`, and `curl_cffi`, and you can see the [benchmark](https://github.com/0x676e67/rnet/tree/main/bench) for details — benchmark data is for reference only and actual performance may vary based on your environment and use case.
## Services
Help sustain the ongoing development of this open-source project by reaching out for [commercial support](mailto:gngppz@gmail.com). Receive private guidance, expert reviews, or direct access to the maintainer, with personalized technical assistance tailored to your needs.
## License
Licensed under either of Apache License, Version 2.0 ([LICENSE](./LICENSE) or http://www.apache.org/licenses/LICENSE-2.0).
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the [Apache-2.0](./LICENSE) license, shall be licensed as above, without any additional terms or conditions.
## Sponsors
<a href="https://scrape.do/?utm_source=github&utm_medium=rnet" target="_blank">
<img src="https://raw.githubusercontent.com/0x676e67/rnet/main/.github/assets/scrapedo.svg">
</a>
**[Scrape.do](https://scrape.do/?utm_source=github&utm_medium=rnet)** is the ultimate toolkit for collecting public data at scale. Unmatched speed, unbeatable prices, unblocked access.
One line of code. Instant data access
🔁 Automatic Proxy Rotation 🤖 Bypass Anti-bot Solutions ⛏️ Seamless Web Scraping
🚀 **[Register](https://dashboard.scrape.do/login)** | 👔 **[Linkedin](https://www.linkedin.com/company/scrape-do/)** | 📖 **[Docs](https://scrape.do/documentation)**
---
<a href="https://www.ez-captcha.com" target="_blank">
<img src="https://www.ez-captcha.com/siteLogo.png" height="50" width="50">
</a>
Captcha solving can be slow and unreliable, but **[EzCaptcha](https://www.ez-captcha.com/?r=github-rnet)** delivers fast, reliable solving through a simple API — supporting a wide range of captcha types with no complex integration required.
**ReCaptcha** • **FunCaptcha** • **CloudFlare** • **Akamai** • **AkamaiSbsd** • **HCaptcha**
Designed for developers, it offers high accuracy, low price, low latency, and easy integration, helping you automate verification while keeping traffic secure and user flows smooth.
🚀 **[Get API Key](https://www.ez-captcha.com/?r=github-rnet)** | 📖 **[Docs](https://ezcaptcha.atlassian.net/wiki/spaces/IS/pages/7045121/EzCaptcha+API+Docs+English)** | 💬 **[Telegram](https://t.me/+NrVmPhlb9ZFkZGY5)**
---
<a href="https://www.thordata.com/products/residential-proxies?ls=github&lk=rnet" target="_blank">
<img src="https://raw.githubusercontent.com/0x676e67/rnet/main/.github/assets/thordata.svg">
</a>
**[Thordata](https://www.google.com/url?q=https://www.thordata.com/?ls%3Dgithub%26lk%3Drnet&sa=D&source=editors&ust=1768812458958099&usg=AOvVaw1VwMpnrjCaf7iWbVsM5V0k)**: Get Reliable Global Proxies at an Unbeatable Value.
One-click data collection with enterprise-grade stability and compliance. Join thousands of developers using ThorData for high-scale operations.
**Exclusive Offer**: Sign up for a free Residential Proxy trial and 2,000 FREE SERP API calls!
👔 **[Linkedin](https://www.linkedin.com/company/thordata/?viewAsMember=true)** | 💬 **[Discord](https://discord.gg/t9qnNKfurd)** | ✈️ **[Telegram](https://t.me/thordataproxy)**
---
<a href="https://salamoonder.com/" target="_blank">
<img src="https://salamoonder.com/auth/assets/images/3d_logo.png" height="50" width="50">
</a>
Anti-bots evolve quickly, but **[Salamoonder](https://salamoonder.com/)** moves faster, delivering reliable anti-bot tokens with just two API requests — no browser automation or unnecessary complexity required.
**Kasada** • **Incapsula** • **Datadome** • **Akamai** • **And many more**
Automatic updates keep your integration simple and low-maintenance, and it’s nearly **50%** cheaper than the competition, giving you faster results at a lower cost.
🚀 **[Register](https://salamoonder.com/auth/register)** | 📖 **[Docs](https://apidocs.salamoonder.com/)** | 💬 **[Telegram](https://t.me/salamoonder_telegram)**
---
<a href="https://hypersolutions.co/?utm_source=github&utm_medium=readme&utm_campaign=rnet" target="_blank"><img src="https://raw.githubusercontent.com/0x676e67/rnet/main/.github/assets/hypersolutions.jpg" height="47" width="149"></a>
TLS fingerprinting alone isn't enough for modern bot protection. **[Hyper Solutions](https://hypersolutions.co?utm_source=github&utm_medium=readme&utm_campaign=rnet)** provides the missing piece - API endpoints that generate valid antibot tokens for:
**Akamai** • **DataDome** • **Kasada** • **Incapsula**
No browser automation. Just simple API calls that return the exact cookies and headers these systems require.
🚀 **[Get Your API Key](https://hypersolutions.co?utm_source=github&utm_medium=readme&utm_campaign=rnet)** | 📖 **[Docs](https://docs.justhyped.dev)** | 💬 **[Discord](https://discord.gg/akamai)**
---
<a href="https://dashboard.capsolver.com/passport/register?inviteCode=y7CtB_a-3X6d" target="_blank"><img src="https://raw.githubusercontent.com/0x676e67/rnet/main/.github/assets/capsolver.jpg" height="47" width="149"></a>
[CapSolver](https://www.capsolver.com/?utm_source=github&utm_medium=banner_repo&utm_campaign=rnet) leverages AI-powered Auto Web Unblock to bypass Captchas effortlessly, providing fast, reliable, and cost-effective data access with seamless integration into Colly, Puppeteer, and Playwright—use code **`RNET`** for a 6% bonus!
| text/markdown; charset=UTF-8; variant=GFM | null | 0x676e67 <gngppz@gmail.com> | null | null | Apache-2.0 | http, client, websocket, ja3, ja4 | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approve... | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/0x676e67/rnet/blob/main/python/rnet",
"Homepage, https://github.com/0x676e67/rnet",
"Repository, https://github.com/0x676e67/rnet"
] | maturin/1.10.2 | 2026-02-18T16:06:06.962934 | rnet-3.0.0rc21-cp311-abi3-win32.whl | 3,549,392 | 94/54/e9a0ea90dbc36983c1fc93b7cd6606686808f67f126f76bbfcea998e3e5d/rnet-3.0.0rc21-cp311-abi3-win32.whl | cp311 | bdist_wheel | null | false | 5b947b4febf877a8078a5bbb86b85a2a | 8a5bc2626241ad15cd05b59aaa65dbce90467390224a4351020739374c5ec88c | 9454e9a0ea90dbc36983c1fc93b7cd6606686808f67f126f76bbfcea998e3e5d | null | [] | 3,563 |
2.4 | nxtomomill | 2.0.6 | applications and library to convert raw format to NXtomo format | # nxtomomill
nxtomomill provide a set of applications and tools around the [NXtomo](https://manual.nexusformat.org/classes/applications/NXtomo.html) format defined by the [NeXus community](https://manual.nexusformat.org/index.html#).
It includes for example the convertion from bliss raw data (@ESRF) to NXtomo, or from spec EDF (@ESRF) to NXtomo. But also creation from scratch and edition of an NXtomo from a python API.
It also embed a `nexus` module allowing users to easily edit Nxtomo
## installation
To install the latest 'nxtomomill' pip package
```bash
pip install nxtomomill
```
You can also install nxtomomill from source:
```bash
pip install git+https://gitlab.esrf.fr/tomotools/nxtomomill.git
```
## documentation
General documentation can be found here: [https://tomotools.gitlab-pages.esrf.fr/nxtomomill/](https://tomotools.gitlab-pages.esrf.fr/nxtomomill/)
## application
documentation regarding applications can be found here: [https://tomotools.gitlab-pages.esrf.fr/nxtomomill/tutorials/index.html](https://tomotools.gitlab-pages.esrf.fr/nxtomomill/tutorials/index.html)
or to get help you can directly go for
```bash
nxtomomill --help
```
| text/markdown | null | Henri Payno <henri.payno@esrf.fr>, Pierre Paleo <pierre.paleo@esrf.fr>, Pierre-Olivier Autran <pierre-olivier.autran@esrf.fr>, Jérôme Lesaint <jerome.lesaint@esrf.fr>, Alessandro Mirone <mirone@esrf.fr> | null | null |
The nxtomomill library goal is to provide a python interface to read ESRF tomography dataset.
nxtomomill is distributed under the MIT license.
The MIT license follows:
Copyright (c) European Synchrotron Radiation Facility (ESRF)
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| NXtomo, nexus, tomography, tomotools, esrf, bliss-tomo | [
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Languag... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"h5py>=3.0",
"silx>=2.0",
"nxtomo>=3.0.0dev0",
"pint",
"packaging",
"tomoscan[full]>=2.2.0a4",
"tqdm",
"pydantic",
"eval_type_backport; python_version < \"3.10\"",
"platformdirs",
"pytest; extra == \"test\"",
"python-gitlab; extra == \"test\"",
"pytest; extra == \"doc\"",
"Sphin... | [] | [] | [] | [
"Homepage, https://gitlab.esrf.fr/tomotools/nxtomomill",
"Documentation, https://tomotools.gitlab-pages.esrf.fr/nxtomomill/",
"Repository, https://gitlab.esrf.fr/tomotools/nxtomomill",
"Changelog, https://gitlab.esrf.fr/tomotools/nxtomomill/-/blob/master/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T16:05:11.539979 | nxtomomill-2.0.6.tar.gz | 172,705 | 07/16/8852dadbbd09d0d6cc3f8d8f1aea57080fa3f8947ff568587eac193e2564/nxtomomill-2.0.6.tar.gz | source | sdist | null | false | 4512cf86f840170c7e494510015729ee | c291207187ce792ef0d236a70600abf8b0b47a9c890132cc3523ccc7dbe25ed3 | 07168852dadbbd09d0d6cc3f8d8f1aea57080fa3f8947ff568587eac193e2564 | null | [
"LICENSE"
] | 309 |
2.4 | pyactuator | 0.0.5 | Thin broker-agnostic execution layer for trading: order submission, status, and fills | # pyactuator
Thin broker-agnostic execution layer for trading systems: submit orders, poll status, cancel, and (optionally) subscribe to fills. Designed to sit between your FSM/OMS (e.g. pystator) and broker APIs (Alpaca, future IB/crypto).
## Features
- **Normalized types**: `OrderRequest`, `OrderResponse`, `OrderStatus`, `Fill` — your stack stays broker-agnostic.
- **ExecutionClient protocol**: One interface (`submit`, `get_status`, `cancel`, optional `subscribe_fills`) implemented per broker.
- **Adapters**: Alpaca (via alpaca-py), Mock (in-memory for tests and paper).
- **Optional helpers**: Retry policy, idempotency key handling, timeout wrapper.
## Installation
```bash
# Core only (types, protocol, mock adapter)
pip install pyactuator
# With Alpaca broker support
pip install pyactuator[alpaca]
# Development
pip install -e ".[dev]"
```
## Quick start
```python
from decimal import Decimal
from pyactuator import ExecutionClient, OrderRequest, Side, OrderType, TimeInForce
from pyactuator.adapters.mock import MockExecutionClient
# Use mock for tests or paper
client: ExecutionClient = MockExecutionClient()
order = OrderRequest(
client_order_id="my-order-001",
symbol="AAPL",
side=Side.BUY,
quantity=Decimal("10"),
order_type=OrderType.MARKET,
time_in_force=TimeInForce.DAY,
)
response = await client.submit(order)
print(response.success, response.external_order_id)
status = await client.get_status(response.external_order_id)
await client.close()
```
With Alpaca (requires `pip install pyactuator[alpaca]`):
```python
from pyactuator.adapters.alpaca import AlpacaExecutionClient
client = AlpacaExecutionClient(
api_key="...",
api_secret="...",
paper=True,
)
# Same OrderRequest / submit / get_status / cancel
```
Optional retry wrapper and idempotency helpers:
```python
from pyactuator.helpers import RetryExecutionClient, generate_client_order_id
from pyactuator.adapters.alpaca import AlpacaExecutionClient
client = AlpacaExecutionClient(api_key="...", api_secret="...", paper=True)
client = RetryExecutionClient(client, max_attempts=3)
order_id = generate_client_order_id(prefix="pa", order_id="my-internal-id")
order = OrderRequest(client_order_id=order_id, symbol="AAPL", side=Side.BUY, quantity=Decimal("10"), ...)
```
## Integration with pystator
Your FSM or OrderManager receives an `ExecutionClient` (injected or constructed). When the FSM triggers "submit" (e.g. after risk approval via pyfortis), call `await client.submit(order_request)`. pystator stays broker-agnostic; execution is behind this single interface.
## License
MIT.
| text/markdown | null | StatFYI <contact@statfyi.com> | null | null | null | trading, execution, broker, alpaca, order, finance | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed",
"Topic :... | [] | null | null | >=3.11 | [] | [] | [] | [
"alpaca-py>=0.14.0; extra == \"alpaca\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"mypy>=1.5; extra == \"dev\"",
"pre-commit>=3.0; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"twine>=5.0... | [] | [] | [] | [
"Homepage, https://github.com/statfyi/pyactuator",
"Documentation, https://github.com/statfyi/pyactuator#readme",
"Repository, https://github.com/statfyi/pyactuator",
"Issues, https://github.com/statfyi/pyactuator/issues",
"Changelog, https://github.com/statfyi/pyactuator/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T16:05:05.652780 | pyactuator-0.0.5.tar.gz | 12,177 | bc/ab/d7896497acf3316bb4b7309a62d996b5b250a7a427ef67bef6cd4b54fefe/pyactuator-0.0.5.tar.gz | source | sdist | null | false | acc0b46a79be8c9d7f7a518dc1ac1b9e | b6f67cd274b6fcd35dd8cf4f3e1818ebe49b975b910d4834acaddaa8531cf85a | bcabd7896497acf3316bb4b7309a62d996b5b250a7a427ef67bef6cd4b54fefe | MIT | [] | 216 |
2.4 | pyaileys | 0.1.4 | Async WhatsApp Web (Multi-Device) protocol client in pure Python, inspired by Baileys. | # pyaileys
[](https://github.com/atiti/pyaileys/actions/workflows/ci.yml)
[](https://pypi.org/project/pyaileys/)
[](https://pypi.org/project/pyaileys/)
[](LICENSE)
[](https://github.com/astral-sh/ruff)
Async WhatsApp Web (Multi-Device) protocol client in pure Python, inspired by Baileys.
## What This Is
- WebSocket protocol client (no browser automation)
- QR pairing + multi-device session persistence (Baileys-like auth folder)
- `asyncio` API for receiving stanzas/events and sending messages
- Minimal runtime deps: `websockets`, `protobuf`, `cryptography`
## Status
This is an early-stage protocol client.
What works today:
- MD session login + QR pairing
- 1:1 Signal E2E (`pkmsg`/`msg`) decrypt/encrypt
- Group Signal E2E (`skmsg`) decrypt/encrypt (Sender Keys)
- Text send (1:1 multi-device fanout, groups via Sender Keys)
- Typing/recording indications (`chatstate`)
- Media send (image, PTT voice note, documents, video, stickers, static location, contacts)
- Media download/decrypt (image, audio/PTT, documents, video, stickers)
- History Sync ingestion into an in-memory store
- Best-effort contact/profile metadata (names from history sync + `notify` push names, profile picture URL, status/about)
## Limitations (Important)
- Media thumbnails + waveform: not generated automatically (you can supply `jpeg_thumbnail` / `waveform`)
- App-state sync supports snapshot + patch processing and updates the in-memory store (best-effort model application)
- App-state sync depends on app-state keys from the primary phone; the client requests missing keys, but very old sessions may require re-pairing
- Chat/contact model: minimal demo store; contact names/profile are best-effort
- API stability: no guarantees yet (pre-1.0)
## Legal / Safety
This project is not affiliated with WhatsApp/Meta. Using unofficial clients may violate WhatsApp Terms of Service.
You are responsible for compliance and for preventing abuse (spam/automation).
## Installation
```bash
pip install pyaileys
```
Optional (pretty QR output in terminal + SVG QR file):
```bash
pip install "pyaileys[qrcode]"
```
## Quickstart (Pair + Connect)
```python
import asyncio
from pyaileys import WhatsAppClient
async def main() -> None:
client, auth_state = await WhatsAppClient.from_auth_folder("./auth")
async def on_update(update) -> None:
# update is `pyaileys.socket.ConnectionUpdate`
if update.qr:
print("QR string:", update.qr)
if update.connection:
print("connection:", update.connection)
async def on_creds_update(_creds) -> None:
await auth_state.save_creds()
client.on("connection.update", on_update)
client.on("creds.update", on_creds_update)
await client.connect()
await auth_state.save_creds()
# keep the process alive
await asyncio.Event().wait()
asyncio.run(main())
```
## Contacts & Profiles (Best-Effort)
WhatsApp Web does not provide a simple "address book" API. In practice, name/profile info comes from multiple places:
- History sync conversations (`displayName`/`name`/`username`)
- Incoming message stanzas (`notify` push name)
- Explicit queries (e.g. profile picture URL, about/status)
This library exposes a small convenience layer:
```python
dn = client.get_display_name("12345@s.whatsapp.net")
contact = client.get_contact("12345@s.whatsapp.net")
pic = await client.profile_picture_url("12345@s.whatsapp.net", picture_type="preview")
statuses = await client.fetch_status("12345@s.whatsapp.net")
```
Notes:
- `get_display_name()` prefers a "saved name" (history sync) and falls back to push name (`notify`).
- `fetch_status()` may return `""` if the status is hidden/blocked, and `None` if unavailable.
## Examples
Kitchen sink (interactive):
```bash
python examples/demo_app.py --auth ./auth --log-nodes
```
Simple CLI (decrypt + store + send text/media, includes `appsync`):
```bash
python examples/simple_cli.py --auth ./auth
```
Automated end-to-end smoke test against your real linked account:
```bash
tools/e2e_smoke.sh --jid 4527148803@s.whatsapp.net --auth ./auth
```
This script drives `examples/simple_cli.py`, sends test messages/media, downloads media back, and prints a pass/fail checklist.
QR-only helper (writes `qr.svg` into the auth dir if `qrcode` extra is installed):
```bash
python examples/login_qr.py
```
## Development
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
ruff check .
ruff format .
mypy src/pyaileys
pytest -q
```
Optional (recommended): install git pre-commit hooks to auto-run formatting & lint on commit:
```bash
pre-commit install
```
## Releasing to PyPI (Trusted Publishing)
This repo includes a GitHub Actions workflow (`.github/workflows/release.yml`) that publishes to PyPI when you push a
tag like `v0.1.0`.
- Bump versions in `pyproject.toml` and `src/pyaileys/__init__.py`
- Tag and push: `git tag vX.Y.Z && git push --tags`
## Regenerating Generated Files
`wabinary` token tables are generated from a Baileys checkout:
```bash
git clone https://github.com/WhiskeySockets/Baileys.git /path/to/Baileys
python3 tools/gen_wabinary_constants.py --baileys /path/to/Baileys
```
`proto/WAProto.proto` is vendored from Baileys and patched to satisfy `protoc`:
```bash
python3 tools/patch_waproto_for_protoc.py
protoc -Iproto --python_out=src/pyaileys/proto proto/WAProto.proto
```
## Credits
- Inspired by the Baileys TypeScript library (MIT): https://github.com/WhiskeySockets/Baileys
## Contributing
See `CONTRIBUTING.md`, `CODE_OF_CONDUCT.md`, and `SECURITY.md`.
Simple chat CLI (includes `appsync` for full app-state sync):
```bash
python examples/simple_cli.py --auth ./auth
```
| text/markdown | Attila Sukosd | null | null | null | MIT License
Copyright (c) 2026 Attila Sukosd
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | asyncio, baileys, protocol, websocket, whatsapp | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Lan... | [] | null | null | >=3.11 | [] | [] | [] | [
"cryptography>=42.0.0",
"protobuf>=4.25.0",
"websockets>=12.0",
"build>=1.2.1; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\"",
"pre-commit>=3.7.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.5.0; extra == \"dev\"",
"twine>=5.1.1; ... | [] | [] | [] | [
"Homepage, https://github.com/atiti/pyaileys",
"Repository, https://github.com/atiti/pyaileys",
"Issues, https://github.com/atiti/pyaileys/issues",
"Changelog, https://github.com/atiti/pyaileys/blob/main/CHANGELOG.md",
"Security, https://github.com/atiti/pyaileys/security"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:05:03.125581 | pyaileys-0.1.4.tar.gz | 225,049 | 40/24/a7620125e4810992f4ebd796712ca67b21dc26672e70b22ec52708125445/pyaileys-0.1.4.tar.gz | source | sdist | null | false | f16623c39e160b153fa71679260f83ee | c3538fb23a5d7ed0db63edb14c62c6ca432ddf362706663c6405e0fe1d304407 | 4024a7620125e4810992f4ebd796712ca67b21dc26672e70b22ec52708125445 | null | [
"LICENSE"
] | 226 |
2.4 | earthkit-meteo | 0.6.1 | Meteorological computations | <p align="center">
<picture>
<source srcset="https://github.com/ecmwf/logos/raw/refs/heads/main/logos/earthkit/earthkit-meteo-dark.svg" media="(prefers-color-scheme: dark)">
<img src="https://github.com/ecmwf/logos/raw/refs/heads/main/logos/earthkit/earthkit-meteo-light.svg" height="120">
</picture>
</p>
<p align="center">
<a href="https://github.com/ecmwf/codex/raw/refs/heads/main/ESEE">
<img src="https://github.com/ecmwf/codex/raw/refs/heads/main/ESEE/foundation_badge.svg" alt="ECMWF Software EnginE">
</a>
<a href="https://github.com/ecmwf/codex/raw/refs/heads/main/Project Maturity">
<img src="https://github.com/ecmwf/codex/raw/refs/heads/main/Project Maturity/emerging_badge.svg" alt="Maturity Level">
</a>
<!-- <a href="https://codecov.io/gh/ecmwf/earthkit-hydro">
<img src="https://codecov.io/gh/ecmwf/earthkit-hydro/branch/develop/graph/badge.svg" alt="Code Coverage">
</a> -->
<a href="https://opensource.org/licenses/apache-2-0">
<img src="https://img.shields.io/badge/Licence-Apache 2.0-blue.svg" alt="Licence">
</a>
<a href="https://github.com/ecmwf/earthkit-meteo/releases">
<img src="https://img.shields.io/github/v/release/ecmwf/earthkit-meteo?color=purple&label=Release" alt="Latest Release">
</a>
</p>
<p align="center">
<a href="#quick-start">Quick Start</a>
•
<a href="#installation">Installation</a>
•
<a href="https://earthkit-meteo.readthedocs.io/en/latest/">Documentation</a>
</p>
> \[!IMPORTANT\]
> This software is **Emerging** and subject to ECMWF's guidelines on [Software Maturity](https://github.com/ecmwf/codex/raw/refs/heads/main/Project%20Maturity).
**earthkit-meteo** is a Python package providing meteorological computations using array input (Numpy, Torch and CuPy) and output. It is part of the [earthkit](https://github.com/ecmwf/earthkit) ecosystem.
## Quick Start
```python
from earthkit.meteo import thermo
# using Numpy arrays
import numpy as np
t = np.array([264.12, 261.45]) # Kelvins
p = np.array([850, 850]) * 100.0 # Pascals
theta = thermo.potential_temperature(t, p)
# using Torch tensors
import torch
t = torch.tensor([264.12, 261.45]) # Kelvins
p = torch.tensor([850.0, 850.0]) * 100.0 # Pascals
theta = thermo.potential_temperature(t, p)
```
## Installation
Install via `pip` with:
```
$ pip install earthkit-meteo
```
Alternatively, install via `conda` with:
```
$ conda install earthkit-meteo -c conda-forge
```
## Licence
```
Copyright 2023, European Centre for Medium Range Weather Forecasts.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
In applying this licence, ECMWF does not waive the privileges and immunities
granted to it by virtue of its status as an intergovernmental organisation
nor does it submit to any jurisdiction.
```
| text/markdown | null | "European Centre for Medium-Range Weather Forecasts (ECMWF)" <software.support@ecmwf.int> | null | null | Apache License Version 2.0 | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Prog... | [] | null | null | >=3.10 | [] | [] | [] | [
"earthkit-utils>=0.2",
"numpy",
"cupy; extra == \"gpu\"",
"torch; extra == \"gpu\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://earthkit-meteo.readthedocs.io/",
"Homepage, https://github.com/ecmwf/earthkit-meteo/",
"Issues, https://github.com/ecmwf/earthkit-meteo.issues",
"Repository, https://github.com/ecmwf/earthkit-meteo/"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T16:04:11.567010 | earthkit_meteo-0.6.1.tar.gz | 370,291 | 43/89/324733e6b2a02d4c336e4579c8899d301ba3b576a8a540071cd095b38183/earthkit_meteo-0.6.1.tar.gz | source | sdist | null | false | 3fe088d2c1cba8ca26f29cbfcc1bb211 | 82a89983c8ed9302ca07fc040c2787033c98188ce75fe9332f5ca7808379c60e | 4389324733e6b2a02d4c336e4579c8899d301ba3b576a8a540071cd095b38183 | null | [
"LICENSE"
] | 5,413 |
2.4 | bpsai-pair | 2.15.10 | AI-augmented pair programming framework with 200+ CLI commands for planning, orchestration, Trello/GitHub integration, and autonomous workflows | # bpsai-pair
> AI-augmented pair programming framework with 200+ CLI commands
[](https://pypi.org/project/bpsai-pair/)
[](https://www.python.org/downloads/)
[](LICENSE)
## Overview
**bpsai-pair** (PairCoder) is a comprehensive AI pair programming framework that provides structured workflows, enforcement gates, and integrations to ensure AI agents follow proper development practices.
- **Planning & Task Management** — Sprint planning, task lifecycle, Trello sync, and budget tracking
- **Skill-Based Workflows** — 9 built-in skills for TDD, code review, releases, architecture, and more
- **Integration Hub** — Trello, GitHub, MCP servers, and Toggl time tracking
- **Architecture Enforcement** — File size limits, function boundaries, import caps, and auto-split suggestions
- **Telemetry & Feedback** — Session telemetry, self-calibrating estimation, anomaly detection
- **Workspace Orchestration** — Multi-project workspaces, cross-repo contract detection, impact analysis
- **Intelligence Pipeline** — Usage snapshots, value extraction scoring, tamper detection
- **Interactive Setup Wizard** — Web-based project configuration with AI-guided setup
- **Licensing & Security** — Tiered feature gating, secret scanning, containment mode
## Installation
```bash
# Core installation
pip install bpsai-pair
# With integrations
pip install bpsai-pair[trello] # Trello board sync
pip install bpsai-pair[github] # GitHub PR management
pip install bpsai-pair[mcp] # MCP server support
pip install bpsai-pair[all] # All extras
```
## Quick Start
```bash
# Initialize a new project
bpsai-pair init
# Or use the interactive wizard
bpsai-pair wizard
# Check project status
bpsai-pair status
# Create a sprint plan
bpsai-pair plan new my-feature --type feature
# Start a task (with Trello sync)
bpsai-pair ttask start TRELLO-123
# Run architecture checks
bpsai-pair arch check
# Pack context for AI assistants
bpsai-pair pack
```
## Key Command Groups
| Group | Commands | Description |
|-------|----------|-------------|
| `plan` | 8 | Sprint planning, task creation, Trello sync |
| `task` | 12 | Task lifecycle, status updates, archival |
| `trello` / `ttask` | 27 | Trello board management, card workflows |
| `github` | 8 | PR creation, merge, auto-archive |
| `skill` | 8 | Workflow skills, export to Cursor/Windsurf |
| `license` | 10 | License management, feature gating |
| `telemetry` | 3 | Session telemetry, privacy config, export |
| `feedback` | 4 | Calibration, accuracy, task-type estimates |
| `workspace` | 5 | Multi-project orchestration, impact analysis |
| `arch` | 2 | Architecture enforcement, split suggestions |
| `budget` | 3 | Token budget tracking, task cost estimates |
| `security` | 4 | Secret scanning, containment mode |
## License Tiers
| Feature | Solo | Pro | Enterprise |
|---------|:----:|:---:|:----------:|
| Planning & tasks | Y | Y | Y |
| Skills & enforcement | Y | Y | Y |
| Setup wizard | Y | Y | Y |
| Telemetry & feedback | Y | Y | Y |
| Trello integration | | Y | Y |
| GitHub integration | | Y | Y |
| MCP servers | | Y | Y |
| Token budget & cost tracking | | Y | Y |
| Workspace orchestration | | Y | Y |
| Remote access & SSO | | | Y |
Check your license: `bpsai-pair license status`
## Documentation
- [Website & Docs](https://paircoder.ai)
- [Quick Start Guide](https://paircoder.ai/docs/getting-started/)
## Requirements
- Python 3.10 or higher
- Git (for project management features)
## Support
- Email: support@bpsaisoftware.com
| text/markdown | null | BPS AI Software <support@bpsaisoftware.com> | null | null | null | ai, pair-programming, cli, claude, gpt, codex, gemini, mcp, trello, github, autonomous, workflow, planning, tasks | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"T... | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.12",
"rich>=13.7",
"pyyaml>=6.0",
"tiktoken>=0.5.0",
"pydantic>=2.0",
"cryptography>=41.0",
"fastapi>=0.109.0",
"jinja2>=3.1.0",
"uvicorn>=0.27.0",
"sse-starlette>=1.6.0",
"toggl>=0.1.0",
"anthropic>=0.76.0",
"py-trello>=0.19.0; extra == \"trello\"",
"PyGithub>=2.1; extra == \"gi... | [] | [] | [] | [
"Homepage, https://paircoder.ai",
"Documentation, https://paircoder.ai/#/docs",
"Repository, https://github.com/BPSAI/paircoder"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T16:04:02.927445 | bpsai_pair-2.15.10.tar.gz | 692,360 | 5a/0b/43f398065b990f818a3964fa311351fc80987541d67bc8dbdc53625e4488/bpsai_pair-2.15.10.tar.gz | source | sdist | null | false | ff81a90851b222eab6a4c9fdd772ce04 | 0a47f7817370c8a891289760c6796d2b95e9a3043836f89ba9fcd0ae2daf8025 | 5a0b43f398065b990f818a3964fa311351fc80987541d67bc8dbdc53625e4488 | LicenseRef-Proprietary | [
"LICENSE"
] | 249 |
2.4 | pyuff | 2.5.2 | UFF (Universal File Format) read/write. | |pytest| |documentation|
pyuff
=====
Universal File Format read and write
------------------------------------
This module defines an UFF class to manipulate with the UFF (Universal File Format) files.
Read from and write of data-set types **15, 55, 58, 58b, 82, 151, 164, 2411, 2412, 2414, 2420, 2429, 2467** are supported.
Check out the `documentation <https://pyuff.readthedocs.io/en/latest/index.html>`_.
To install the package, run:
.. code:: python
pip install pyuff
Showcase
---------
To analyse UFF file we first load the uff module and example file:
.. code:: python
import pyuff
uff_file = pyuff.UFF('data/beam.uff')
To check which datasets are written in the file use:
.. code:: python
uff_file.get_set_types()
Reading from the UFF file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To load all datasets from the UFF file to data object use:
.. code:: python
data = uff_file.read_sets()
The first dataset 58 contains following keys:
.. code:: python
data[4].keys()
Most important keys are ``x``: x-axis and ``data``: y-axis that define the stored response:
.. code:: python
plt.semilogy(data[4]['x'], np.abs(data[4]['data']))
plt.xlabel('Frequency [Hz]')
plt.ylabel('FRF Magnitude [dB m/N]')
plt.xlim([0,1000])
plt.show()
Writing measurement data to UFF file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Loading the accelerance data:
.. code:: python
measurement_point_1 = np.genfromtxt('data/meas_point_1.txt', dtype=complex)
measurement_point_2 = np.genfromtxt('data/meas_point_2.txt', dtype=complex)
measurement_point_3 = np.genfromtxt('data/meas_point_3.txt', dtype=complex)
.. code:: python
measurement_point_1[0] = np.nan*(1+1.j)
.. code:: python
measurement = [measurement_point_1, measurement_point_2, measurement_point_3]
Creating the UFF file where we add dataset 58 for measurement consisting of the dictionary-like keys containing the measurement data and the information about the measurement:
.. code:: python
for i in range(3):
print('Adding point {:}'.format(i + 1))
response_node = 1
response_direction = 1
reference_node = i + 1
reference_direction = 1
acceleration_complex = measurement[i]
frequency = np.arange(0, 1001)
name = 'TestCase'
data = {'type':58,
'func_type': 4,
'rsp_node': response_node,
'rsp_dir': response_direction,
'ref_dir': reference_direction,
'ref_node': reference_node,
'data': acceleration_complex,
'x': frequency,
'id1': 'id1',
'rsp_ent_name': name,
'ref_ent_name': name,
'abscissa_spacing':1,
'abscissa_spec_data_type':18,
'ordinate_spec_data_type':12,
'orddenom_spec_data_type':13}
uffwrite = pyuff.UFF('./data/measurement.uff')
uffwrite.write_set(data,'add')
Or we can use support function ``prepare_58`` to prepare the dictionary for creating the UFF file. Functions for other datasets can be found in `supported datasets <https://pyuff.readthedocs.io/en/latest/Supported_datasets.html>`_.
.. code:: python
for i in range(3):
print('Adding point {:}'.format(i + 1))
response_node = 1
response_direction = 1
reference_node = i + 1
reference_direction = 1
acceleration_complex = measurement[i]
frequency = np.arange(0, 1001)
name = 'TestCase'
pyuff.prepare_58(func_type=4,
rsp_node=response_node,
rsp_dir=response_direction,
ref_dir=reference_direction,
ref_node=reference_node,
data=acceleration_complex,
x=frequency,
id1='id1',
rsp_ent_name=name,
ref_ent_name=name,
abscissa_spacing=1,
abscissa_spec_data_type=18,
ordinate_spec_data_type=12,
orddenom_spec_data_type=13)
.. |pytest| image:: https://github.com/ladisk/pyuff/actions/workflows/python-package.yml/badge.svg
:target: https://github.com/ladisk/pyuff/actions
.. |documentation| image:: https://readthedocs.org/projects/pyuff/badge/?version=latest
:target: https://pyuff.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
| text/x-rst | null | "Primož Čermelj, Janko Slavič" <janko.slavic@fs.uni-lj.si> | null | "Janko Slavič et al." <janko.slavic@fs.uni-lj.si> | null | UFF, UNV, Universal File Format, read/write | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"build; extra == \"dev\"",
"pytest; extra == \"dev\"",
"sphinx; extra == \"dev\"",
"sphinx-copybutton; extra == \"dev\"",
"sphinx-rtd-theme; extra == \"dev\"",
"twine; extra == \"dev\"",
"wheel; extra == \"dev\""
] | [] | [] | [] | [
"homepage, https://github.com/ladisk/pyuff",
"documentation, https://pyuff.readthedocs.io/en/latest/",
"source, https://github.com/ladisk/pyuff"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:03:45.719224 | pyuff-2.5.2.tar.gz | 54,023 | c0/51/ade2d8460fb1eefbadd3a6d51414571f8f181542a08720f9f5bc9ee60a76/pyuff-2.5.2.tar.gz | source | sdist | null | false | 62d6d888fc948a039945fa256a932bda | baf88fe8859a22715c270b3932824e307ca2f6d12bf358046ab1868a3c59fb34 | c051ade2d8460fb1eefbadd3a6d51414571f8f181542a08720f9f5bc9ee60a76 | MIT | [
"LICENSE"
] | 813 |
2.1 | origen-metal | 1.2.2 | Bare metal APIs for the Origen SDK | [Bare metal APIs for the Origen SDK]("https://origen-sdk.org/o2")
| text/markdown | Origen-SDK | null | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming L... | [] | https://origen-sdk.org/o2 | null | <3.13,>=3.7.0 | [] | [] | [] | [
"colorama>=0.4.4",
"importlib-metadata>=6.7.0",
"pyreadline3<4.0,>=3.3; sys_platform == \"win32\"",
"termcolor>=1.1.0"
] | [] | [] | [] | [] | twine/4.0.2 CPython/3.11.14 | 2026-02-18T16:03:39.955265 | origen_metal-1.2.2-cp39-cp39-win_amd64.whl | 4,434,476 | c1/a3/213b26b9a73f8098235588b3e6e20c0e7a12f2b15c59ae97676edb0fc331/origen_metal-1.2.2-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | 0e72a35f35ec21d22116d391648ee515 | 83a434f0e877b9eec8cbe620aaa8e59b20e6750870d4110146aeee131183b360 | c1a3213b26b9a73f8098235588b3e6e20c0e7a12f2b15c59ae97676edb0fc331 | null | [] | 944 |
2.4 | napari-ortho | 0.0.1 | An orthogonal viewer for 3D data in Napari. | # napari-ortho
[](https://github.com/Karol-G/napari-ortho/raw/main/LICENSE)
[](https://pypi.org/project/napari-ortho)
[](https://python.org)
[](https://github.com/Karol-G/napari-ortho/actions)
[](https://codecov.io/gh/Karol-G/napari-ortho)
[](https://napari-hub.org/plugins/napari-ortho)
[](https://napari.org/stable/plugins/index.html)
[](https://github.com/copier-org/copier)
An orthogonal viewer for 3D data in Napari.
----------------------------------
This [napari] plugin was generated with [copier] using the [napari-plugin-template] (None).
<!--
Don't miss the full getting started guide to set up your new package:
https://github.com/napari/napari-plugin-template#getting-started
and review the napari docs for plugin developers:
https://napari.org/stable/plugins/index.html
-->
## Installation
You can install `napari-ortho` via [pip]:
```
pip install napari-ortho
```
If napari is not already installed, you can install `napari-ortho` with napari and Qt via:
```
pip install "napari-ortho[all]"
```
To install latest development version :
```
pip install git+https://github.com/Karol-G/napari-ortho.git
```
## Contributing
Contributions are very welcome. Tests can be run with [tox], please ensure
the coverage at least stays the same before you submit a pull request.
## License
Distributed under the terms of the [MIT] license,
"napari-ortho" is free and open source software
## Issues
If you encounter any problems, please [file an issue] along with a detailed description.
[napari]: https://github.com/napari/napari
[copier]: https://copier.readthedocs.io/en/stable/
[@napari]: https://github.com/napari
[MIT]: http://opensource.org/licenses/MIT
[BSD-3]: http://opensource.org/licenses/BSD-3-Clause
[GNU GPL v3.0]: http://www.gnu.org/licenses/gpl-3.0.txt
[GNU LGPL v3.0]: http://www.gnu.org/licenses/lgpl-3.0.txt
[Apache Software License 2.0]: http://www.apache.org/licenses/LICENSE-2.0
[Mozilla Public License 2.0]: https://www.mozilla.org/media/MPL/2.0/index.txt
[napari-plugin-template]: https://github.com/napari/napari-plugin-template
[file an issue]: https://github.com/Karol-G/napari-ortho/issues
[napari]: https://github.com/napari/napari
[tox]: https://tox.readthedocs.io/en/latest/
[pip]: https://pypi.org/project/pip/
[PyPI]: https://pypi.org/
| text/markdown | Karol Gotkowski | karol.gotkowski@dkfz.de | null | null | The MIT License (MIT)
Copyright (c) 2026 Karol Gotkowski
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: napari",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Pr... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"magicgui",
"qtpy",
"scikit-image",
"napari[all]; extra == \"all\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/Karol-G/napari-ortho/issues",
"Documentation, https://github.com/Karol-G/napari-ortho#README.md",
"Source Code, https://github.com/Karol-G/napari-ortho",
"User Support, https://github.com/Karol-G/napari-ortho/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:03:36.828122 | napari_ortho-0.0.1.tar.gz | 10,537 | f8/e9/9ad424a091c68af10c75f82061102dcb5f7a96871db27e97ae8c808601d0/napari_ortho-0.0.1.tar.gz | source | sdist | null | false | d6f792cb4f992ec67a684a6d7a10a797 | f1656753e2ab9466321d4606d1fdcabaeb3fcf1e26dd0d5c4da2bcf00ae25ed8 | f8e99ad424a091c68af10c75f82061102dcb5f7a96871db27e97ae8c808601d0 | null | [
"LICENSE"
] | 257 |
2.4 | locisimiles | 1.1.0 | LociSimiles is a Python package for finding intertextual links in Latin literature using pre-trained language models. | # Loci Similes
**LociSimiles** is a Python package for finding intertextual links in Latin literature using pre-trained language models.
## Basic Usage
```python
# Load example query and source documents
query_doc = Document("../data/hieronymus_samples.csv")
source_doc = Document("../data/vergil_samples.csv")
# Load the pipeline with pre-trained models
pipeline = ClassificationPipelineWithCandidategeneration(
classification_name="...",
embedding_model_name="...",
device="cpu",
)
# Run the pipeline with the query and source documents
results = pipeline.run(
query=query_doc, # Query document
source=source_doc, # Source document
top_k=3 # Number of top similar candidates to classify
)
pretty_print(results)
# Save results to CSV or JSON
pipeline.to_csv("results.csv")
pipeline.to_json("results.json")
```
## Command-Line Interface
LociSimiles provides a command-line tool for running the pipeline directly from the terminal:
### Basic Usage
```bash
locisimiles query.csv source.csv -o results.csv
```
### Advanced Usage
```bash
locisimiles query.csv source.csv -o results.csv \
--classification-model julian-schelb/PhilBerta-class-latin-intertext-v1 \
--embedding-model julian-schelb/SPhilBerta-emb-lat-intertext-v1 \
--top-k 20 \
--threshold 0.7 \
--device cuda \
--verbose
```
### Options
- **Input/Output:**
- `query`: Path to query document CSV file (columns: `seg_id`, `text`)
- `source`: Path to source document CSV file (columns: `seg_id`, `text`)
- `-o, --output`: Path to output CSV file for results (required)
- **Models:**
- `--classification-model`: HuggingFace model for classification (default: PhilBerta-class-latin-intertext-v1)
- `--embedding-model`: HuggingFace model for embeddings (default: SPhilBerta-emb-lat-intertext-v1)
- **Pipeline Parameters:**
- `-k, --top-k`: Number of top candidates to retrieve per query segment (default: 10)
- `-t, --threshold`: Classification probability threshold for filtering results (default: 0.5)
- **Device:**
- `--device`: Choose `auto`, `cuda`, `mps`, or `cpu` (default: auto-detect)
- **Other:**
- `-v, --verbose`: Enable detailed progress output
- `-h, --help`: Show help message
### Output Format
The CLI saves results to a CSV file with the following columns:
- `query_id`: Query segment identifier
- `query_text`: Query text content
- `source_id`: Source segment identifier
- `source_text`: Source text content
- `similarity`: Cosine similarity score (0-1)
- `probability`: Classification confidence (0-1)
- `above_threshold`: "Yes" if probability ≥ threshold, otherwise "No"
## Optional Gradio GUI
Install the optional GUI extra to experiment with a minimal Gradio front end:
```bash
pip install locisimiles[gui]
```
Launch the interface from the command line:
```bash
locisimiles-gui
```
| text/markdown | Julian Schelb | julian.schelb@uni-konstanz.de | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"accelerate>=0.20.0",
"audioop-lts<0.3.0,>=0.2.1; python_version >= \"3.13\" and extra == \"gui\"",
"chromadb<2.0.0,>=0.4.0",
"gradio>=5.49.1; extra == \"gui\"",
"mkdocs>=1.5.0; extra == \"dev\"",
"mkdocs-material>=9.0.0; extra == \"dev\"",
"mkdocstrings[python]>=0.24.0; extra == \"dev\"",
"mypy>=1.10... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:03:21.605827 | locisimiles-1.1.0.tar.gz | 48,556 | 5f/af/34d350b8fe8ef40c558fa33b0d0504c9c48d2e5bc52f018d9a5858cfb2fb/locisimiles-1.1.0.tar.gz | source | sdist | null | false | 40ffd296375a168d4474f2bbb021ac11 | 7e63d7d182f99ac74d441754d2cbc3033761d1cc5a0069795bf3bf24638b19d4 | 5faf34d350b8fe8ef40c558fa33b0d0504c9c48d2e5bc52f018d9a5858cfb2fb | null | [] | 226 |
2.4 | rinexmod | 4.1.1 | Tool to batch modify headers of RINEX Hatanaka compressed files. | # rinexmod
<img src="./logo_rinexmod.png" width="300">
_rinexmod_ is a tool to batch modify the headers of GNSS data files in RINEX format and rename them correctly.
It supports Hatanaka-compressed and non-compressed files, RINEX versions 2 and 3/4, and short and long naming conventions.
It is developed in Python 3, and can be run from the command line or directly in API mode by calling a python function.
The required input metadata can come from (a) sitelog file(s), (a) GeodesyML files(s), or be manually entered as arguments to the command line or the called function.
It is available under the GNU license on the following GitHub repository: https://github.com/IPGP/rinexmod
v2+ - 2023-05-15 - Pierre Sakic - sakic@ipgp.fr
v1 - 2022-02-07 - Félix Léger - leger@ipgp.fr
Version: 4.1.1
Date: 2026-02-18
**GitHub repository:** [https://github.com/IPGP/rinexmod](https://github.com/IPGP/rinexmod)
**PyPi project:** [https://pypi.org/project/rinexmod](https://pypi.org/project/rinexmod)
## Contributors
- @AriannaBoisseau
- @skimprem
## Tools overview
### Main tool
* `rinexmod_run` takes a list of RINEX Hatanaka compressed files (.d.Z or .d.gz or .rnx.gz),
loops the rinex files list to modify the file's headers. It then writes them back to Hatanaka
compressed format in an output folder. It can rename the files, changing
the four first characters of the file name with another station code. It can write
those files with the long name naming convention with the --longname option.
### Annex tools
They are stored in `bin/misc_tools` folder.
* `get_m3g_sitelogs.py` will get the last version of site logs from the M3G repository
and write them in an observatory-dependent subfolder.
* `crzmeta.py` will extract RINEX file's header information and prompt the result.
This allows to quickly access the header information without uncompressing the file manually.
It's a teqc-free equivalent of teqc +meta.
## Installation
### Assisted installation
The tool is designed in Python 3, and you must have it installed on your machine.
Since version 3.4.0, the frontend program `rinexmod_run` is available directly when you call it in your console.
#### Install the last *stable* version
You can use `pip` to install the last stable version from the [Python Package Index (PyPI)](https://pypi.org/project/rinexmod):
```pip install rinexmod```
#### Install the latest *developpement* version
You can use `pip` to install the latest [GitHub-hosted](https://github.com/IPGP/rinexmod) version:
```pip install git+https://github.com/IPGP/rinexmod```
### Required external modules
*NB*: Following the assisted installation procedure above, the required external modules will be automatically installed.
You need:
* _Python_ `hatanaka` library from M. Valgur
* `pycountry` to associate country names with their ISO abbreviations (facultative but recommended):
* `matplotlib` for plotting samples intervals with crzmeta
* `colorlog` to get the pretty colored log outputs
* `pandas` to for internal low-level data management
You can install them with:
```
pip install hatanaka pycountry matplotlib colorlog pandas
```
## _rinexmod_ in command lines interface
### rinexmod_run
This is the main frontend function. It takes a list of RINEX Hatanaka compressed files (.d.Z or .d.gz or .rnx.gz),
loop over the RINEX files list to modify the file's header. It then writes them back to Hatanaka-compressed
format in an output folder. It also allows to rename the files, changing
the four first characters of the file name with another site code. It can write
those files with the long name naming convention with the --longname option.
Four ways of passing parameters to modify headers are possible: `sitelog`, `geodesyml`, `modification_kw` and `station_info`/`lfile_apriori` (from GAMIT/GLOBK software).
* ```
--sitelog : you pass sitelogs file. The argument must be a sitelog path or the path of a folder
containing sitelogs. You then have to pass a list of files and the script will
assign sitelogs to correspondig files, based on the file's name.
The script will take the start and end time of each proceeded file
and use them to extract from the sitelog the station instrumentation
of the corresponding period and fill file's header with following infos:
Four Character ID
X coordinate (m)
Y coordinate (m)
Z coordinate (m)
Receiver Type
Serial Number
Firmware Version
Satellite System (will translate this info to one-letter code,
see RinexFile.set_observable_type())
Antenna Type
Serial Number
Marker->ARP Up Ecc. (m)
Marker->ARP East Ecc(m)
Marker->ARP North Ecc(m)
On-Site Agency Preferred Abbreviation
Responsible Agency Preferred Abbreviation
* ```
-- geodesyml : Path to a folder or a file containing geodesyML files to obtain GNSS
site metadata information.
* ```
--modification_kw : you pass as argument the field(s) that you want to modifiy and its value.
Acceptable_keywords are:
marker_name,
marker_number,
station (legacy alias for marker_name),
receiver_serial,
receiver_type,
receiver_fw,
antenna_serial,
antenna_type,
antenna_X_pos,
antenna_Y_pos,
antenna_Z_pos,
antenna_H_delta,
antenna_E_delta,
antenna_N_delta,
operator,
agency,
sat_system,
observables (legacy alias for sat_system),
interval,
filename_file_period (01H, 01D...),
filename_data_freq (30S, 01S...),
filename_data_source (R, S, U)
* ```
-sti STATION_INFO, --station_info STATION_INFO
Path of a GAMIT station.info file to obtain GNSS site
metadata information (needs also -lfi option)
-lfi LFILE_APRIORI, --lfile_apriori LFILE_APRIORI
Path of a GAMIT apriori apr/L-File to obtain GNSS site
position and DOMES information (needs also -sti
option)
`--modification_kw` values will orverride the ones obtained with `--sitelog` and `--station_info`/`--lfile_apriori`.
_rinexmod_ will add two comment lines, one indicating the source of the modification
(sitelog or arguments) and the other the modification timestamp.
### Synopsis
*NB*: The most recent synopsis is available with the `-h` or `--help` option of the `rinexmod_run` command line function.
The following synopsis reproduced here is for more readability but might not be the most up-to-date one.
```
rinexmod_run [-h] -i RINEXINPUT [RINEXINPUT ...] -o OUTPUTFOLDER
[-s SITELOG] [-k KEY=VALUE [KEY=VALUE ...]] [-m MARKER]
[-co COUNTRY] [-n NINECHARFILE] [-sti STATION_INFO]
[-lfi LFILE_APRIORI] [-r RELATIVE] [-nh] [-c {gz,Z,none}]
[-l] [-fs] [-fc] [-fr] [-ig] [-a] [-ol OUTPUT_LOGS] [-w]
[-v] [-t] [-u] [-fns {basic,flex,exact}]
[-mp MULTI_PROCESS] [-d] [-rm]
RinexMod takes RINEX files (v2 or v3/4, compressed or not), rename them and modifiy their headers, and write them back to a destination directory
options:
-h, --help show this help message and exit
required arguments:
-i RINEXINPUT [RINEXINPUT ...], --rinexinput RINEXINPUT [RINEXINPUT ...]
Input RINEX file(s). It can be:
1) a list file of the RINEX paths to process (generated with find or ls command for instance)
2) several RINEX files paths
3) a single RINEX file path (see -a/--alone for a single input file)
-o OUTPUTFOLDER, --outputfolder OUTPUTFOLDER
Output folder for modified RINEX files
optional arguments:
-s SITELOG, --sitelog SITELOG
Get the RINEX header values from file's site's sitelog. Provide a single sitelog path or a folder contaning sitelogs.
-k KEY=VALUE [KEY=VALUE ...], --modif_kw KEY=VALUE [KEY=VALUE ...]
Modification keywords for RINEX's header fields and/or filename.
Format: -k keyword_1='value1' keyword2='value2'.
Will override the information from the sitelog.
Acceptable keywords: comment, marker_name, marker_number, station (legacy alias for marker_name), receiver_serial, receiver_type, receiver_fw, antenna_serial, antenna_type, antenna_X_pos, antenna_Y_pos, antenna_Z_pos, antenna_H_delta, antenna_E_delta, antenna_N_delta, operator, agency, sat_system, observables (legacy alias for sat_system), interval, filename_file_period (01H, 01D...), filename_data_freq (30S, 01S...), filename_data_source (R, S, U)
-m MARKER, --marker MARKER
A four or nine-character site code that will be used to rename input files.(apply also to the header's MARKER NAME, but a custom -k marker_name='XXXX' overrides it)
-co COUNTRY, --country COUNTRY
A three-character string corresponding to the ISO 3166 Country code that will be used to rename input files. It overrides other country code sources (sitelog, --marker...). List of ISO country codes: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes
-n NINECHARFILE, --ninecharfile NINECHARFILE
Path of a file that contains 9-char. site names (e.g. from the M3G database)
-sti STATION_INFO, --station_info STATION_INFO
Path of a GAMIT station.info file to obtain GNSS site metadata information (needs also -lfi option)
-lfi LFILE_APRIORI, --lfile_apriori LFILE_APRIORI
Path of a GAMIT apriori apr/L-File to obtain GNSS site position and DOMES information (needs also -sti option)
-r RELATIVE, --relative RELATIVE
Reconstruct files relative subfolders.You have to indicate the common parent folder, that will be replaced with the output folder
-nh, --no_hatanaka Skip high-level RINEX-specific Hatanaka compression (performed per default). See also -c 'none'
-c {gz,Z,none}, --compression {gz,Z,none}
Set low-level RINEX file compression (acceptable values : 'gz' (recommended to fit IGS standards), 'Z', 'none')
-l, --longname Rename file using long name RINEX convention (force gzip compression).
-fs, --force_sitelog If a single sitelog is provided, force sitelog-based header values when RINEX's header and sitelog site name do not correspond.
If several sitelogs are provided, skip badly-formated sitelogs.
-fc, --force_fake_coords
When using GAMIT station.info metadata without apriori coordinates in the L-File, gives fake coordinates at (0??,0??) to the site
-fr, --force_rnx_load
Force the loading of the input RINEX. Useful if its name is not standard
-ig, --ignore Ignore firmware changes between instrumentation periods when getting header values info from sitelogs
-a, --alone INPUT is a single/alone RINEX file (and not a list file of RINEX paths)
-ol OUTPUT_LOGS, --output_logs OUTPUT_LOGS
Folder where to write output logs. If not provided, logs will be written to OUTPUTFOLDER
-w, --write Write (RINEX version, sample rate, file period) dependant output lists
-v, --verbose Print file's metadata before and after modifications.
-t, --sort Sort the input RINEX list.
-u, --full_history Add the full history of the station in the RINEX's 'header as comment.
-fns {basic,flex,exact}, --filename_style {basic,flex,exact}
Set the RINEX filename style.
acceptable values : 'basic' (per default), 'flex', 'exact'.
* 'basic': a simple mode to apply a strict filename period (01H or 01D), being compatible with the IGS conventions.
e.g.: FNG000GLP_R_20242220000_01D_30S_MO.crx.gz
* 'flex': the filename period is tolerant and corresponds tothe actual data content,
but then can be odd (e.g. 07H, 14H...). The filename start time is rounded to the hour.
e.g.: FNG000GLP_R_20242221800_06H_30S_MO.crx.gz
* 'exact': the filename start time is strictly the one of the first epoch in the RINEX.
Useful for some specific cases needing splicing.
e.g.: FNG000GLP_R_20242221829_06H_30S_MO.crx.gz
(default: basic)
-mp MULTI_PROCESS, --multi_process MULTI_PROCESS
Number of parallel multiprocesing (default: 1, no parallelization)
-d, --debug Debug mode, stops if something goes wrong (default: False)
-rm, --remove Remove input RINEX file if the output RINEX is correctly written. Use it as your own risk. (default: False)
RinexMod 3.3.0 - GNU Public Licence v3 - P. Sakic et al. - IPGP-OVS - https://github.com/IPGP/rinexmod
```
### Examples
```
./rinexmod_run -i RINEXLIST -o OUTPUTFOLDER (-k antenna_type='ANT TYPE' antenna_X_pos=9999 agency=AGN) (-m AGAL) (-r ./ROOTFOLDER/) (-f) (-v)
```
```
./rinexmod_run (-a) -i RINEXFILE -o OUTPUTFOLDER (-s ./sitelogsfolder/stationsitelog.log) (-i) (-w) (-o ./LOGFOLDER) (-v)
```
## _rinexmod_ in API mode
*NB*: The following docstring reproduced here is for more readability but might not be the most up-to-date one.
_rinexmod_ can be launched directly as a Python function:
```
import rinexmod.rinexmod_api as rimo_api
rimo_api.rinexmod(rinexfile, outputfolder, sitelog=None, modif_kw=dict(), marker='',
country='', longname=False, force_rnx_load=False, force_sitelog=False,
ignore=False, ninecharfile=None, no_hatanaka=False, compression=None,
relative='', verbose=True, full_history=False, filename_style=False,
return_lists=None, station_info=None, lfile_apriori=None,
force_fake_coords=False):
"""
Parameters
----------
rinexfile : str
Input RINEX file to process.
outputfolder : str
Folder where to write the modified RINEX files.
sitelog : str, list of str, MetaData object, list of MetaData objects, optional
Get the RINEX header values from a sitelog.
Possible inputs are:
* list of string (sitelog file paths),
* single string (single sitelog file path or directory containing the sitelogs),
* list of MetaData object
* single MetaData object
The function will search for the latest and right sitelog
corresponding to the site.
One can force a single sitelog with force_sitelog.
The default is None.
modif_kw : dict, optional
Modification keywords for RINEX's header fields and/or filename.
Will override the information from the sitelog.
Acceptable keywords for the header fields:
* comment
* marker_name
* marker_number
* station (legacy alias for marker_name)
* receiver_serial
* receiver_type
* receiver_fw
* antenna_serial
* antenna_type
* antenna_X_pos
* antenna_Y_pos
* antenna_Z_pos
* antenna_H_delta
* antenna_E_delta
* antenna_N_delta
* operator
* agency
* sat_system (M, G, R, E, C...)
* observables (legacy alias for sat_system)
* interval
Acceptable keywords for the filename:
* filename_file_period (01H, 01D...)
* filename_data_freq (30S, 01S...)
* filename_data_source (R, S, U)
The default is dict().
marker : str, optional
A four or nine character site code that will be used to rename
input files.
Apply also to the header's MARKER NAME,
but a custom modification keyword marker_name='XXXX' overrides it
(modif_kw argument below)
The default is ''.
country : str, optional
A three character string corresponding to the ISO 3166 Country code
that will be used to rename input files.
It overrides other country code sources (sitelog, --marker...)
list of ISO country codes:
https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes
The default is ''.
longname : bool, optional
Rename file using long name RINEX convention (force gzip compression).
The default is False.
force_rnx_load : bool, optional
Force the loading of the input RINEX. Useful if its name is not standard.
The default is False.
force_sitelog : bool, optional
If a single sitelog is provided, force sitelog-based header
values when RINEX's header and sitelog site name do not correspond.
If several sitelogs are provided, skip badly-formated sitelogs.
The default is False.
ignore : bool, optional
Ignore firmware changes between instrumentation periods
when getting header values info from sitelogs. The default is False.
ninecharfile : str, optional
Path of a file that contains 9-char. site names from the M3G database.
The default is None.
no_hatanaka : bool, optional
Skip high-level RINEX-specific Hatanaka compression
(performed per default).
The default is False.
compression : str, optional
Set low-level RINEX file compression.
acceptable values : gz (recommended to fit IGS standards), 'Z', None.
The default is None.
relative : str, optional
Reconstruct files relative subfolders.
You have to indicate the common parent folder,
that will be replaced with the output folder. The default is ''.
verbose : bool, optional
set the level of verbosity
(False for the INFO level, True for the DEBUG level).
The default is True.
full_history : bool, optional
Add the full history of the station in
the RINEX's header as comment.
filename_style : str, optional
Set the RINEX filename style.
acceptable values : 'basic' (per default), 'flex', 'exact'.
* 'basic': a simple mode to apply a strict filename period (01H or 01D),
being compatible with the IGS conventions.
e.g.: `FNG000GLP_R_20242220000_01D_30S_MO.crx.gz`
* 'flex': the filename period is tolerant and corresponds to
the actual data content, but then can be odd (e.g. 07H, 14H...).
The filename start time is rounded to the hour.
e.g.: `FNG000GLP_R_20242221800_06H_30S_MO.crx.gz`
* 'exact': the filename start time is strictly the one of the
first epoch in the RINEX.
Useful for some specific cases needing splicing.
e.g.: `FNG000GLP_R_20242221829_06H_30S_MO.crx.gz`
The default is 'basic'.
return_lists : dict, optional
Specific option for file distribution through a GLASS node.
Store the rinexmoded RINEXs in a dictionary
to activates it, give a dict as input (an empty one - dict() works)
The default is None.
station_info: str, optional
Path of a GAMIT station.info file to obtain GNSS site
metadata information (needs also lfile_apriori option)
lfile_apriori: str, optional
Path of a GAMIT apriori apr/L-File to obtain GNSS site
position and DOMES information (needs also station_info option)
force_fake_coords: bool, optional
When using GAMIT station.info metadata without apriori coordinates
in the L-File, gives fake coordinates at (0??,0??) to the site
remove: bool, optional
Remove input RINEX file if the output RINEX is correctly written
The default is False.
Raises
------
RinexModInputArgsError
Something is wrong with the input arguments.
RinexFileError
Something is wrong with the input RINEX File.
Returns
-------
outputfile : str
the path of the rinexmoded RINEX
OR
return_lists : dict
a dictionary of rinexmoded RINEXs for GLASS distribution.
```
## Other command line functions
### crzmeta
Extract metadata from crz file.
With -p option, will plot the file's samples intervals
```
EXAMPLE:
./crzmeta RINEXFILE (-p)
```
### get_m3g_sitelogs
This script will get the last version of sitelogs from M3G repository and write them
in an observatory dependent subfolder set in 'observatories'.
The `-d/--delete` option will delete the old version to get only the last version even
in a name-changing case.
```
USE :
* OUTPUTFOLDER : Folder where to write the downloaded sitelogs.
OPTION :
* -d : delete : Delete old sitelogs in storage folder. This permits to have only the last version, as version changing sitelogs changes of name.
EXAMPLE:
./get_m3g_sitelogs OUTPUTFOLDER (-d)
```
## _rinexmod_ error messages
_rinexmod_ will prompt errors when arguments are wrong. Apart from this, it will prompt and save to file errors and waring
occurring on specific files from the rinex list. Here are the error codes :
`01 - The specified file does not exists`
That means that the input file containing a list of rinex files is wrong and references a file that is not present. It can also mean that the file was deleted between the list generation and the script launch, but this case should be quite rare.
`02 - Not an observation Rinex file`
The file name does not correspond to the classic pattern (it doesn't match the regular expression for new and old convention naming model ). Most of time, it's because it is not a d rinex file (for example, navigation file).
`03 - Invalid or empty Zip file`
The Zip file is corrupted or empty
`04 - Invalid Compressed Rinex file`
The CRX Hatanaka file is corrupted.
`05 - Less than two epochs in the file, reject`
Not enought data in the file to extract a sample rate, and data not relevant because insufficient. Reject the file.
`30 - Input and output folders are the same !`
The file will not be proceeded as rinexmod does not modify files inplace. Check your outputfolder.
`31 - The subfolder can not be reconstructed for file`
The script tries to find the 'reconstruct' subfolder in the file's path to replace it with outputfolder, and does not find it.
`32 - Station's country not retrevied, will not be properly renamed`
When using --name option, that will rename file with rinex long name convention, it needs to retrieve the file's country.
It tries to do so using an external file of list of 9 char ids. the concerned rinex file's station seems to be absent
from this station list file.
`33 - File\'s station does not correspond to provided sitelog - use -f option to force`
The station name retrieved from the provided sitelog does not correspond to the station's name retrieved from
the file's headers. Do not process.
`34 - File's station does not correspond to provided sitelog, processing anyway`
The station name retrieved from the provided sitelog does not correspond to the station's name retrieved from
the file's headers. As the --force option was provided, the file has been processed.
`35 - No instrumentation corresponding to the data period on the sitelog`
There is no continuous instrumentation period in the sitelog taht corresponds to the rinex file's dates. We can thus not fill the header.
`36 - Instrumentation cames from merged periods of sitelog with different firmwares, processing anyway`
We provided the --ignore option, so the consecutive periods of instrumentation for witch only the firmware version of the receiver has changed have been merged. We used this period to fill this file's header.
| text/markdown | null | Pierre Sakic <sakic@ipgp.fr> | null | null | null | geodesy, gnss, rinex, header, metadata, positioning | [
"Development Status :: 3 - Alpha",
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyt... | [] | null | null | <4,>=3.5 | [] | [] | [] | [
"colorlog",
"pandas",
"hatanaka",
"numpy",
"requests",
"setuptools",
"pycountry",
"pytest"
] | [] | [] | [] | [
"Homepage, https://github.com/IPGP/rinexmod"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:02:43.212422 | rinexmod-4.1.1.tar.gz | 94,888 | 0d/d4/9152476ee32654bc7603a510a875fcbac837089e75d469ef88b0ed26d759/rinexmod-4.1.1.tar.gz | source | sdist | null | false | 1771e3ab2284aacd654a66855c2e4cb9 | ee4119e1360cbff7853639109f7a50b97b2a9f54dec29bee98769105c353432e | 0dd49152476ee32654bc7603a510a875fcbac837089e75d469ef88b0ed26d759 | GPL-3.0-or-later | [
"LICENSE"
] | 234 |
2.3 | stouputils | 1.23.0 | Stouputils is a collection of utility modules designed to simplify and enhance the development process. It includes a range of tools for tasks such as execution of doctests, display utilities, decorators, as well as context managers, and many more. |
## 🛠️ Project Badges
[](https://github.com/Stoupy51/stouputils/releases/latest)
[](https://pypi.org/project/stouputils/)
[](https://stoupy51.github.io/stouputils/latest/)
## 📚 Project Overview
Stouputils is a collection of utility modules designed to simplify and enhance the development process.<br>
It includes a range of tools for tasks such as execution of doctests, display utilities, decorators, as well as context managers.<br>
Start now by installing the package: `pip install stouputils`.<br>
<a class="admonition" href="https://colab.research.google.com/drive/1mJ-KL-zXzIk1oKDxO6FC1SFfm-BVKG-P?usp=sharing" target="_blank" rel="noopener noreferrer">
<span>📖 <b>Want to see examples?</b> Check out our <u>Google Colab notebook</u> with practical usage examples!</span>
</a>
## 🚀 CLI Quick Reference
Stouputils provides a powerful command-line interface. Here's a quick example for each subcommand:
```bash
# Show version information of polars with dependency tree of depth 3
stouputils --version polars -t 3
# Run all doctests in a directory with pattern filter (fnmatch)
stouputils all_doctests "./src" "*_test"
# Repair a corrupted/obstructed zip archive
stouputils repair "./input.zip" "./output.zip"
# Create a delta backup
stouputils backup delta "./source" "./backups"
# Build and publish to PyPI (with minor version bump and no stubs)
stouputils build minor --no_stubs
# Generate changelog from git history (since a specific date, with commit URLs from origin remote, output to file)
stouputils changelog date "2026-01-01" -r origin -o "CHANGELOG.md"
# Redirect (move) a folder and create a junction/symlink at the original location
stouputils redirect "C:/Games/MyGame" "D:/Games/" --hardlink
```
> 📖 See the [Extensive CLI Documentation](#-extensive-cli-documentation) section below for detailed usage and all available options.
## 🚀 Project File Tree
<html>
<details style="display: none;">
<summary></summary>
<style>
.code-tree {
border-radius: 6px;
padding: 16px;
font-family: monospace;
line-height: 1.45;
overflow: auto;
white-space: pre;
background-color:rgb(43, 43, 43);
color: #d4d4d4;
}
.code-tree a {
color: #569cd6;
text-decoration: none;
}
.code-tree a:hover {
text-decoration: underline;
}
.code-tree .comment {
color:rgb(231, 213, 48);
}
.code-tree .paren {
color: orange;
}
</style>
</details>
<pre class="code-tree">stouputils/
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.print.html">print</a> <span class="comment"># 🖨️ Utility functions for printing <span class="paren">(info, debug, warning, error, whatisit, breakpoint, colored_for_loop, ...)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.decorators.html">decorators</a> <span class="comment"># 🎯 Decorators <span class="paren">(measure_time, handle_error, timeout, retry, simple_cache, abstract, deprecated, silent)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.ctx.html">ctx</a> <span class="comment"># 🔇 Context managers <span class="paren">(LogToFile, MeasureTime, Muffle, DoNothing, SetMPStartMethod)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.io.html">io</a> <span class="comment"># 💾 Utilities for file management <span class="paren">(json_dump, json_load, csv_dump, csv_load, read_file, super_copy, super_open, clean_path, redirect_folder, ...)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.parallel.html">parallel</a> <span class="comment"># 🔀 Utility functions for parallel processing <span class="paren">(multiprocessing, multithreading, run_in_subprocess)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.image.html">image</a> <span class="comment"># 🖼️ Little utilities for image processing <span class="paren">(image_resize, auto_crop, numpy_to_gif, numpy_to_obj)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.collections.html">collections</a> <span class="comment"># 🧰 Utilities for collection manipulation <span class="paren">(unique_list, at_least_n, sort_dict_keys, upsert_in_dataframe, array_to_disk)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.typing.html">typing</a> <span class="comment"># 📝 Utilities for typing enhancements <span class="paren">(IterAny, JsonDict, JsonList, ..., convert_to_serializable)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.all_doctests.html">all_doctests</a> <span class="comment"># ✅ Run all doctests for all modules in a given directory <span class="paren">(launch_tests, test_module_with_progress)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.backup.html">backup</a> <span class="comment"># 💾 Utilities for backup management <span class="paren">(delta backup, consolidate)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.lock.html">lock</a> <span class="comment"># 🔒 Inter-process FIFO locks <span class="paren">(LockFifo, RLockFifo, RedisLockFifo)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.archive.html">archive</a> <span class="comment"># 📦 Functions for creating and managing archives <span class="paren">(create, repair)</span></span>
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.config.html">config</a> <span class="comment"># ⚙️ Global configuration <span class="paren">(StouputilsConfig: global options)</span></span>
│
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.applications.html">applications/</a>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.applications.automatic_docs.html">automatic_docs</a> <span class="comment"># 📚 Documentation generation utilities <span class="paren">(used to create this documentation)</span></span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.applications.upscaler.html">upscaler</a> <span class="comment"># 🔎 Image & Video upscaler <span class="paren">(configurable)</span></span>
│ └── ...
│
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.continuous_delivery.html">continuous_delivery/</a>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.continuous_delivery.cd_utils.html">cd_utils</a> <span class="comment"># 🔧 Utilities for continuous delivery</span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.continuous_delivery.git.html">git</a> <span class="comment"># 📜 Utilities for local git changelog generation</span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.continuous_delivery.github.html">github</a> <span class="comment"># 📦 Utilities for continuous delivery on GitHub <span class="paren">(upload_to_github)</span></span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.continuous_delivery.pypi.html">pypi</a> <span class="comment"># 📦 Utilities for PyPI <span class="paren">(pypi_full_routine)</span></span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.continuous_delivery.pyproject.html">pyproject</a> <span class="comment"># 📝 Utilities for reading, writing and managing pyproject.toml files</span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.continuous_delivery.stubs.html">stubs</a> <span class="comment"># 📝 Utilities for generating stub files using stubgen</span>
│ └── ...
│
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.mlflow.html">mlflow/</a>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.mlflow.process_metrics_monitor.html">process_metrics_monitor</a> <span class="comment"># 📊 Monitor CPU, memory, I/O, and thread metrics for a specific process tree and log them to MLflow</span>
│ └── ...
│
├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.installer.html">installer/</a>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.installer.common.html">common</a> <span class="comment"># 🔧 Common functions used by the Linux and Windows installers modules</span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.installer.downloader.html">downloader</a> <span class="comment"># ⬇️ Functions for downloading and installing programs from URLs</span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.installer.linux.html">linux</a> <span class="comment"># 🐧 Linux/macOS specific implementations for installation</span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.installer.main.html">main</a> <span class="comment"># 🚀 Core installation functions for installing programs from zip files or URLs</span>
│ ├── <a href="https://stoupy51.github.io/stouputils/latest/modules/stouputils.installer.windows.html">windows</a> <span class="comment"># 💻 Windows specific implementations for installation</span>
│ └── ...
└── ...
</pre>
</html>
## 🔧 Installation
```bash
pip install stouputils
```
### ✨ Enable Tab Completion on Linux (Optional)
For a better CLI experience, enable bash tab completion:
```bash
# Option 1: Using argcomplete's global activation
activate-global-python-argcomplete --user
# Option 2: Manual setup for bash
register-python-argcomplete stouputils >> ~/.bashrc
source ~/.bashrc
```
After enabling completion, you can use `<TAB>` to autocomplete commands:
```bash
stouputils <TAB> # Shows: --version, -v, all_doctests, backup
stouputils all_<TAB> # Completes to: all_doctests
```
**Note:** Tab completion works best in bash, zsh, Git Bash, or WSL on Windows.
## 📖 Extensive CLI Documentation
The `stouputils` CLI provides several powerful commands for common development tasks.
### ⚡ General Usage
```bash
stouputils <command> [options]
```
Running `stouputils` without arguments displays help with all available commands.
---
### 📌 `--version` / `-v` — Show Version Information
Display the version of stouputils and its dependencies, along with the used Python version.
```bash
# Basic usage - show stouputils version
stouputils --version
stouputils -v
# Show version for a specific package
stouputils --version numpy
stouputils -v requests
# Show dependency tree (depth 3+)
stouputils --version -t 3
stouputils -v stouputils --tree 4
```
**Options:**
| Option | Description |
|--------|-------------|
| `[package]` | Optional package name to show version for (default: stouputils) |
| `-t`, `--tree <depth>` | Show dependency tree with specified depth (≤2 for flat list, ≥3 for tree view) |
---
### ✅ `all_doctests` — Run Doctests
Execute all doctests in Python files within a directory.
```bash
# Run doctests in current directory
stouputils all_doctests
# Run doctests in specific directory
stouputils all_doctests ./src
# Run doctests with file pattern filter
stouputils all_doctests ./src "*image/*.py"
stouputils all_doctests . "*utils*"
```
**Arguments:**
| Argument | Description |
|----------|-------------|
| `[directory]` | Directory to search for Python files (default: `.`) |
| `[pattern]` | Glob pattern to filter files (default: `*`) |
**Exit codes:**
- `0`: All tests passed
- `1`: One or more tests failed
---
### 📦 `archive` — Archive Utilities
Create and repair ZIP archives.
```bash
# Show archive help
stouputils archive --help
```
#### `archive make` — Create Archive
```bash
# Basic archive creation
stouputils archive make ./my_folder ./backup.zip
# Create archive with ignore patterns
stouputils archive make ./project ./project.zip --ignore "*.pyc,__pycache__,*.log"
# Create destination directory if needed
stouputils archive make ./source ./backups/archive.zip --create-dir
```
**Arguments & Options:**
| Argument/Option | Description |
|-----------------|-------------|
| `<source>` | Source directory to archive |
| `<destination>` | Destination zip file path |
| `--ignore <patterns>` | Comma-separated glob patterns to exclude |
| `--create-dir` | Create destination directory if it doesn't exist |
#### `archive repair` — Repair Corrupted ZIP
```bash
# Repair with auto-generated output name
stouputils archive repair ./corrupted.zip
# Repair with custom output name
stouputils archive repair ./corrupted.zip ./fixed.zip
```
**Arguments:**
| Argument | Description |
|----------|-------------|
| `<input_file>` | Path to the corrupted zip file |
| `[output_file]` | Path for repaired file (default: adds `_repaired` suffix) |
---
### 💾 `backup` — Backup Utilities
Create delta backups, consolidate existing backups, and manage backup retention.
```bash
# Show backup help
stouputils backup --help
```
#### `backup delta` — Create Delta Backup
Create an incremental backup containing only new or modified files since the last backup.
```bash
# Basic delta backup
stouputils backup delta ./my_project ./backups
# Delta backup with exclusions
stouputils backup delta ./project ./backups -x "*.pyc" "__pycache__/*" "node_modules/*"
stouputils backup delta ./source ./backups --exclude "*.log" "temp/*"
```
**Arguments & Options:**
| Argument/Option | Description |
|-----------------|-------------|
| `<source>` | Source directory or file to back up |
| `<destination>` | Destination folder for backups |
| `-x`, `--exclude <patterns>` | Glob patterns to exclude (space-separated) |
#### `backup consolidate` — Consolidate Backups
Merge multiple delta backups into a single complete backup.
```bash
# Consolidate all backups up to latest.zip into one file
stouputils backup consolidate ./backups/latest.zip ./consolidated.zip
```
**Arguments:**
| Argument | Description |
|----------|-------------|
| `<backup_zip>` | Path to the latest backup ZIP file |
| `<destination_zip>` | Path for the consolidated output file |
#### `backup limit` — Limit Backup Count
Limit the number of delta backups by consolidating the oldest ones.
```bash
# Keep only the 5 most recent backups
stouputils backup limit 5 ./backups
# Allow deletion of the oldest backup (not recommended)
stouputils backup limit 5 ./backups --no-keep-oldest
```
**Arguments & Options:**
| Argument/Option | Description |
|-----------------|-------------|
| `<max_backups>` | Maximum number of backups to keep |
| `<backup_folder>` | Path to the folder containing backups |
| `--no-keep-oldest` | Allow deletion of the oldest backup (default: keep it) |
---
### 🏗️ `build` — Build and Publish to PyPI
Build and publish a Python package to PyPI using the `uv` tool. This runs a complete routine including version bumping, stub generation, building, and publishing.
```bash
# Standard build and publish (bumps patch by default)
stouputils build
# Build without generating stubs and without bumping version
stouputils build --no_stubs --no_bump
# Bump minor version before build
stouputils build minor
# Bump major version before build
stouputils build major
```
**Options:**
| Option | Description |
|--------|-------------|
| `--no_stubs` | Skip stub file generation |
| `--no_bump` | Skip version bumping (use current version) |
| `minor` | Bump minor version (e.g., 1.2.0 → 1.3.0) |
| `major` | Bump major version (e.g., 1.2.0 → 2.0.0) |
---
### 📜 `changelog` — Generate Changelog
Generate a formatted changelog from local git history.
```bash
# Show changelog help
stouputils changelog --help
```
```bash
# Generate changelog since latest tag (default)
stouputils changelog
# Generate changelog since a specific tag
stouputils changelog tag v1.9.0
# Generate changelog since a specific date
stouputils changelog date 2026/01/05
stouputils changelog date "2026-01-15 14:30:00"
# Generate changelog since a specific commit
stouputils changelog commit 847b27e
# Include commit URLs from a remote
stouputils changelog --remote origin
stouputils changelog tag v2.0.0 -r origin
# Output to a file
stouputils changelog -o CHANGELOG.md
stouputils changelog tag v1.0.0 --output docs/CHANGELOG.md
```
**Arguments & Options:**
| Argument/Option | Description |
|-----------------|-------------|
| `[mode]` | Mode for selecting commits: `tag`, `date`, or `commit` (default: `tag`) |
| `[value]` | Value for the mode (tag name, date, or commit SHA) |
| `-r`, `--remote <name>` | Remote name for commit URLs (e.g., `origin`) |
| `-o`, `--output <file>` | Output file path (default: stdout) |
**Supported date formats:**
- `YYYY/MM/DD` or `YYYY-MM-DD`
- `DD/MM/YYYY` or `DD-MM-YYYY`
- `YYYY-MM-DD HH:MM:SS`
- ISO 8601: `YYYY-MM-DDTHH:MM:SS`
---
### 🔗 `redirect` — Redirect a Folder
Move a folder to a new location and create a junction or symlink at the original path. Useful for redirecting game installs, large data folders, etc. across drives.
```bash
# Show redirect help
stouputils redirect --help
# Redirect with auto-detected basename (destination ends with /)
stouputils redirect "C:/Games/MyGame" "D:/Games/" --hardlink
# Redirect with explicit destination name
stouputils redirect "C:/Games/MyGame" "D:/Storage/MyGame" --symlink
# Interactive mode (asks for link type)
stouputils redirect "./my_folder" "/mnt/external/"
```
**Arguments & Options:**
| Argument/Option | Description |
|-----------------|-------------|
| `<source>` | Source folder to redirect |
| `<destination>` | Destination path (append `/` to auto-use source basename) |
| `--hardlink` / `--junction` | Use NTFS junction (Windows) or fallback to symlink (Linux/macOS) |
| `--symlink` | Use a symbolic link (may need admin on Windows) |
**Notes:**
- If `--hardlink` fails (e.g., unsupported OS), it automatically falls back to symlink
- If the source is already a symlink or junction, the operation is skipped
- On Linux/macOS, junctions are not available so `--hardlink` uses a symlink instead
---
### 📋 Examples Summary
| Command | Description |
|---------|-------------|
| `stouputils -v` | Show version |
| `stouputils -v numpy -t 3` | Show numpy version with dependency tree |
| `stouputils all_doctests ./src` | Run doctests in src directory |
| `stouputils archive make ./proj ./proj.zip` | Create archive |
| `stouputils archive repair ./bad.zip` | Repair corrupted zip |
| `stouputils backup delta ./src ./bak -x "*.pyc"` | Create delta backup |
| `stouputils backup consolidate ./bak/latest.zip ./full.zip` | Consolidate backups |
| `stouputils backup limit 5 ./bak` | Keep only 5 backups |
| `stouputils build minor` | Build with minor version bump |
| `stouputils changelog tag v1.0.0 -r origin -o CHANGELOG.md` | Generate changelog to file |
| `stouputils redirect "C:/Games/MyGame" "D:/Games/" --hardlink` | Redirect folder with junction |
## ⭐ Star History
<html>
<a href="https://star-history.com/#Stoupy51/stouputils&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Stoupy51/stouputils&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Stoupy51/stouputils&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Stoupy51/stouputils&type=Date" />
</picture>
</a>
</html>
| text/markdown | Stoupy51 | Stoupy51 <stoupy51@gmail.com> | null | null | null | utilities, tools, helpers, development, python | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"tqdm>=4.0.0",
"requests>=2.20.0",
"msgspec[toml,yaml]>=0.20.0",
"pillow>=12.0.0",
"python-box>=7.0.0",
"argcomplete>=3.0.0",
"psutil>=7.2.2",
"redis[hiredis]",
"setproctitle",
"numpy",
"opencv-python; extra == \"data-science\"",
"scikit-image; extra == \"data-science\"",
"simpleitk; extra =... | [] | [] | [] | [
"Homepage, https://stoupy51.github.io/stouputils",
"Issues, https://github.com/Stoupy51/stouputils/issues",
"Source, https://github.com/Stoupy51/stouputils"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T16:01:58.620889 | stouputils-1.23.0.tar.gz | 148,985 | e4/2d/0c6fbe406f33ef6ff2ce9dd205845b18c94e30ef65cc524dc0da645161da/stouputils-1.23.0.tar.gz | source | sdist | null | false | a32db38b87ce1a5d2cd607f534182320 | ab4fc9c8366cacc811ab1a8785d2692e0b4ebb7143acdab77096cba0a6c3a700 | e42d0c6fbe406f33ef6ff2ce9dd205845b18c94e30ef65cc524dc0da645161da | null | [] | 256 |
2.4 | stanley-shield | 1.0.0 | Multi-Tenant Cryptographic Audit Logger with Regional Data Residency | # 🛡️ Stanley Shield Python SDK
The official 2026 Python Gatekeeper for the Stanley Shield Bunker. This SDK provides cryptographic audit logging with native support for Regional Data Residency.
### Install the pre-built wheel:
```bash
pip install stanley_shield-1.0.0-py3-none-any.whl
```
## 🔐 Onboarding & Setup
To use the Shield, you must authenticate your server using the Infisical Machine Identity credentials provided by your Stanley Shield administrator.
### 1. Requirements
- Install the Infisical CLI: ``` bash brew install infisical/get-cli/infisical ``` [MacOS] or follow the [official guide](https://infisical.com/docs/cli/overview).
- Obtain your Credentials Pack:
- INFISICAL_MACHINE_ID
- INFISICAL_MACHINE_SECRET
- INFISICAL_PROJECT_ID
- STANLEY_BUNKER_URL
### 2. Authentication & Execution
Run these commands in your terminal to link your environment to the Bunker. This ensures your STANLEY_HMAC_SECRET is injected directly into memory and never stored on disk.
#### Step A: Authenticate your session
```bash
export INFISICAL_TOKEN=$(infisical login --method=universal-auth \
--client-id=YOUR_MACHINE_ID \
--client-secret=YOUR_MACHINE_SECRET \
--silent --plain)
```
#### Step B: Run your app with injected secrets
```bash
infisical run --path=/apps/fintech-api --env=prod --projectId=YOUR_PROJECT_ID -- python3 main.py
```
# Basic Usage
### Basic Initialization
Once the app is running via the infisical run command, the SDK will automatically detect your configuration.
```bash
Python code
from stanley_shield import Gatekeeper
# Automatically pulls STANLEY_HMAC_SECRET and STANLEY_CLIENT_ID from Infisical
gk = Gatekeeper(bunker_url="https://bunker.yourdomain.com")
# Log an event
gk.log(
actor_id="user_99",
action="CREDIT_CARD_LINKED",
resource="card_8822",
residency="NG_LOCAL", # Routes to Nigerian infrastructure
metadata={"bank": "GTBank", "currency": "NGN"}
)
```
## Security Features
- HMAC-SHA256 Signing: Every request is signed. The Bunker rejects any payload that has been tampered with in transit.
- Non-Blocking Transporter: Logging happens in a background daemon thread. Your application's performance is never affected by network latency.
- Replay Protection: Includes a 60-second window timestamp to prevent old requests from being re-sent.
## Data Residency Support
- GLOBAL: Default storage in the Global Vault.
- NG_LOCAL: Routes logs to the localized Nigerian infrastructure.
| text/markdown | Stanley Owarieta | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Security :: Cryptography"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.1",
"fastapi>=0.68.0; extra == \"fastapi\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T16:01:26.317816 | stanley_shield-1.0.0.tar.gz | 5,944 | bb/ac/ad13567695db954209dd7724315e363aba12e7ca3d9f29d14d2ee3658e51/stanley_shield-1.0.0.tar.gz | source | sdist | null | false | e5598d9136019088d47e41ecdee632d6 | 4bcf362366a97e0a119d66d46ecf528cdb3a6ded203c3687b5aa1d69ac9a5251 | bbacad13567695db954209dd7724315e363aba12e7ca3d9f29d14d2ee3658e51 | null | [] | 243 |
2.1 | decomp2dbg | 4.0.1 | Symbol syncing framework for decompilers and debuggers | # decomp2dbg
Reverse engineering involves both static (decompiler) and dynamic (debugger) analysis, yet we often
use these analyses without sharing knowledge between the two. In the case of reversing static binaries,
context switching between debugger assembly and the symbols you have reversed in decompilation can be inefficient.
decomp2dbg aims to shorten the gap of context switching between decompiler and debugger by introducing a generic
API for decompiler-to-debugger symbol syncing. In effect, giving the reverser the power of their debugger with
the symbols and decompilation lines they recover in their decompiler.

Interested in seeing what decomp2dbg looks like in practice? Checkout the recorded [talk at CactusCon 2023](https://youtu.be/-J8fGMt6UmE?t=22442),
featuring debugging a remote arm32 binary from a x64 machine with Ghidra symbols.
For active help, join the BinSync Discord below, where we answer decomp2dbg questions:
[](https://discord.gg/wZSCeXnEvR)
## Supported Platforms
### Decompilers
- IDA Pro (>= 7.0): [Demo w/ GEF](https://asciinema.org/a/442740)
- Binary Ninja (>= 2.4): [Demo w/ GEF](https://t.co/M2IZd0fmi3)
- Ghidra (>= 11.3.1): [Demo w/ GEF](https://youtu.be/MK7N7uQTUNY)
- [angr-management](https://github.com/angr/angr-management) (>= 9.0)
### Debuggers
- gdb (works best with [GEF](https://github.com/hugsy/gef))
- GEF
- pwndbg
- vanilla
## Install
Install through pip, then use the built-in installer for decompilers:
```bash
pip3 install decomp2dbg && decomp2dbg --install
```
This will open a prompt where you be asked to input the path to your decompiler and debugger of choice. For Ghidra installs,
you must follow the extra steps to enable extensions [here](https://github.com/mahaloz/decomp2dbg/tree/main/decompilers/d2d_ghidra/README.md).
If you installed the decompiler-side in the Binja Plugin Manager, you still need to install the debugger side with the above.
**Note**: You may need to allow inbound connections on port 3662, or the port you use, for decomp2dbg to connect
to the decompiler. If you are installing decomp2dbg with GEF or pwndbg it's important that in your `~/.gdbinit` the
`d2d.py` file is sourced after GEF or pwndbg.
## Manual Install
Skip this if you were able to use the above install with no errors.
If you can't use the above built-in script (non-WSL Windows install for the decompiler), follow the steps below:
If you only need the decompiler side of things, copy the associated decompiler plugin to the
decompiler's plugin folder. Here is how you do it in IDA:
First, clone the repo:
```
git clone https://github.com/mahaloz/decomp2dbg.git
```
Copy all the files in `./decompilers/d2d_ida/` into your ida `plugins` folder:
```bash
cp -r ./decompilers/d2d_ida/* /path/to/ida/plugins/
```
If you also need to install the gdb side of things, use the line below:
```bash
pip3 install . && \
cp d2d.py ~/.d2d.py && echo "source ~/.d2d.py" >> ~/.gdbinit
```
## Usage
First, start the decompilation server on your decompiler. You may want to wait
until your decompiler finishes its normal analysis before starting it. After normal analysis, this can be done by using the hotkey `Ctrl-Shift-D`,
or selecting the `decomp2dbg: configure` tab in your associated plugins tab. After starting the server, you should
see a message in your decompiler
```
[+] Starting XMLRPC server: localhost:3662
[+] Registered decompilation server!
```
Next, in your debugger, run:
```bash
decompiler connect <decompiler_name>
```
If you are running the decompiler on a VM or different machine, you can optionally provide the host and
port to connect to. Here is an example:
```bash
decompiler connect ida --host 10.211.55.2 --port 3662
```
You can find out how to use all the commands by running the decompiler command with the `--help` flag.
The first connection can take up to 30 seconds to register depending on the amount of globals in the binary.
If all is well, you should see:
```bash
[+] Connected to decompiler!
```
If you are using decomp2dbg for a library, i.e. the main binary your debugger attached to is not the binary
you want source for, then you should take a look at the [Advanced Usage - Shared Libs](#shared-libraries) section
of the readme.
### Decompilation View
On each breakpoint event, you will now see decompilation printed, and the line you are on associated with
the break address.
### Functions and Global Vars
Functions and Global Vars from your decompilation are now mapped into your GDB like normal Source-level
symbols. This means normal GDB commands like printing and examination are native:
```bash
b sub_46340
x/10i sub_46340
```
```bash
p dword_267A2C
x dword_267A2C
```
### Stack Vars, Register Vars, Func Args
Some variables that are stored locally in a function are stack variables. For the vars that can be mapped
to the stack or registers, we import them as convenience variables. You can see their contents like a normal GDB convenience
variable:
```bash
p $v4
```
Stack variables will always store their address on the stack. To see what value is actually in that stack variable,
simply dereference the variable:
```bash
x $v4
```
This also works with function arguments if applicable (mileage may vary):
```bash
p $a1
```
Note: `$v4` in this case will only be mapped for as long as you are in the same function. Once you leave the function
it may be unmapped or remapped to another value.
## Advanced Usage
### Shared Libraries
When you want the decompilation (and symbols) displayed for a section of memory which is not the main binary, like when debugging a shared library, you need to do some extra steps. Currently, d2d only supports 1 decompiler connected at a time, which means if you currently have any decompilers connected that is not the library, you need to disconnect it.
After following the normal setup to have your decompiler running the d2d server for your shared library, you need to manually set the base address for this library and its end address:
```
decompiler connect ida --base-addr-start 0x00007ffff7452000 --base-addr-end 0x00007ffff766d000
```
To find the base address that your library is loaded at in memory, its recommend you use something like the `vmmap` command from GEF to look for the libraries name in the memory space. After connecting with this manually set address, symbols show work like normal d2d. Decompilation will only be printed on the screen when you are in the range of this address space.
## Features
- [X] Auto-updating decompilation context view
- [X] Auto-syncing function names
- [X] Breakable/Inspectable symbols
- [X] Auto-syncing stack variable names
- [ ] Auto-syncing structs
- [ ] Online DWARF Creation
- [ ] Function Type Syncing
- [ ] lldb support
- [ ] windbg support
| text/markdown | null | null | null | null | BSD 2 Clause | null | [
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6"
] | [] | https://github.com/mahaloz/decomp2dbg | null | >=3.5 | [] | [] | [] | [
"sortedcontainers",
"pyelftools",
"libbs>=0.12.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T16:01:19.972342 | decomp2dbg-4.0.1.tar.gz | 34,662 | a3/34/60ccd1b752261510fb23c7f088af8a68de1a28403d51d365be65f254f4d9/decomp2dbg-4.0.1.tar.gz | source | sdist | null | false | 5383d2106a17103ec5c04dab3dc4caa3 | 0fc592920806bcb0c8c62d8073ea22f61d5784b32d6368a8354b1ee4908e84e6 | a33460ccd1b752261510fb23c7f088af8a68de1a28403d51d365be65f254f4d9 | null | [] | 277 |
2.4 | github-trending-repos-api | 0.1.1 | GitHub trending repositories CLI and library | # GitHub Trending CLI
Fetch the most starred GitHub repositories created within a recent time window. Includes a CLI (`ghtrend`) and a small Python library.
## Features
- Query “trending” repositories by creation date window (1w, 1m, 3m, 6m, 1y)
- Sorts by stars (descending)
- Simple CLI output
- Reusable Python API
## Requirements
- Python 3.11+
## Install
### From source (recommended for now)
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
## CLI Usage
Once installed, run:
```bash
ghtrend --duration 1w --limit 10
```
### Options
- `-d`, `--duration` (default: `1w`)
- Valid values: `1w`, `1m`, `3m`, `6m`, `1y`
- `-l`, `--limit` (default: `10`)
- Number of repositories to return
### Example Output
```
octocat/Hello-World with 4242 stars, at url: https://github.com/octocat/Hello-World
```
## Library Usage
```python
from github_trending_cli.client import trending
repos = trending("1m", 5)
for repo in repos:
print(repo.full_name, repo.stars, repo.url)
```
### Returned Model
`trending()` returns a list of `Repo` objects with:
- `full_name` (str)
- `stars` (int)
- `url` (str)
- `description` (str | None)
- `language` (str | None)
- `topics` (tuple[str, ...])
## Errors
The CLI exits with:
- `2` on invalid duration or limit
- `3` on GitHub API errors
In library usage, errors are raised as:
- `InvalidDurationError`
- `InvalidLimitError`
- `GitHubAPIError`
## Development
Run tests:
```bash
pytest
```
## Notes
- This uses the public GitHub Search API. Unauthenticated requests are subject to GitHub’s rate limits.
- It's initially a roadmap project: https://roadmap.sh/projects/github-trending-cli | text/markdown | null | Yange <posterscofield@gmail.com> | null | null | MIT License Copyright (c) 2026 Yange Permission is hereby granted, free of charge, to any person obtaining a copy of this software ... | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.12.5",
"requests>=2.32.5"
] | [] | [] | [] | [] | uv/0.6.10 | 2026-02-18T16:00:58.521517 | github_trending_repos_api-0.1.1.tar.gz | 4,812 | 18/e5/58e490388abbff1ac5994617e32fb5fb09223b839894fb5edbba8556fffa/github_trending_repos_api-0.1.1.tar.gz | source | sdist | null | false | 03824fb8d18e301a56ad7db400a1a621 | 9f3053d5c5459f188b73a8d2d8835f80150af5320e817f5d4440f3b8265105bb | 18e558e490388abbff1ac5994617e32fb5fb09223b839894fb5edbba8556fffa | null | [
"LICENSE"
] | 229 |
2.4 | gcl-sdk | 2.0.0 | The Genesis Core SDK | 


Welcome to the Genesis SDK!
The Genesis SDK is a set of tools for developing Genesis elements. Main information you can find in the [wiki](https://github.com/infraguys/gcl_sdk/wiki).
# 🚀 Development
Install required packages:
Ubuntu:
```bash
sudo apt-get install tox libev-dev
```
Fedora:
```bash
sudo dnf install python3-tox libev-devel
```
Initialize virtual environment:
```bash
tox -e develop
source .tox/develop/bin/activate
```
# ⚙️ Tests
**NOTE:** Python version 3.12 is supposed to be used, but you can use other versions
Unit tests:
```bash
tox -e py312
```
Functional tests:
```bash
tox -e py312-functional
```
# 🔗 Related projects
- Genesis Core is main project of the Genesis ecosystem. You can find it [here](https://github.com/infraguys/genesis_core).
- Genesis DevTools it's a set oftools to manager life cycle of genesis projects. You can find it [here](https://github.com/infraguys/genesis_devtools).
# 💡 Contributing
Contributing to the project is highly appreciated! However, some rules should be followed for successful inclusion of new changes in the project:
- All changes should be done in a separate branch.
- Changes should include not only new functionality or bug fixes, but also tests for the new code.
- After the changes are completed and **tested**, a Pull Request should be created with a clear description of the new functionality. And add one of the project maintainers as a reviewer.
- Changes can be merged only after receiving an approve from one of the project maintainers.
| text/markdown | Genesis Corporation | eugene@frolov.net.ru | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.10",
"Programming Langua... | [] | https://github.com/infraguys/gcl_sdk/ | null | null | [] | [] | [] | [
"pbr<=5.8.1,>=1.10.0",
"oslo.config<10.0.0,>=3.22.2",
"importlib-metadata<7.0.0,>=6.8.0",
"restalchemy<16.0.0,>=15.0.1",
"gcl_iam<2.0.0,>=1.0.0",
"gcl_looper<2.0.0,>=1.0.1",
"bjoern>=3.2.2",
"izulu<1.0.0,>=0.50.0",
"renameat2<1.0.0,>=0.4.4",
"xxhash<4.0.0,>=3.5.0",
"orjson<4.0.0,>=3.10.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:00:44.285297 | gcl_sdk-2.0.0.tar.gz | 114,503 | 0d/a5/5ca17ec51bf5d1860a0e83ce58eabb38412fc0bc76925b3c59322b27a0ad/gcl_sdk-2.0.0.tar.gz | source | sdist | null | false | d8a1617319036d4343e3d1d347e46710 | 5c3844b126925e029c5d6072ee0d74961b36b0c9eaf1ca574a3673df0fa447ad | 0da55ca17ec51bf5d1860a0e83ce58eabb38412fc0bc76925b3c59322b27a0ad | null | [
"LICENSE"
] | 433 |
2.4 | buildai-sdk | 2.1.0 | Build AI Python SDK | # BuildAI Python SDK
Python client for Build AI services.
## Install
```bash
cd sdk
uv pip install -e .
```
## Quick Start
```python
import buildai
client = buildai.Client(api_key="bai_...")
# Search
hits = client.search("person welding")
for h in hits.items:
print(h.clip_id, h.similarity_score)
# Browse
factories = client.factories.list()
clip = client.clips.get("clip-uuid")
# Collections
col = client.collections.create(name="my-set")
client.collections.clips_add(col.collection_id, clip_ids=["..."])
# Datasets
datasets = client.datasets.list()
urls = client.datasets.webdataset_urls("dataset-name")
client.close()
```
## Package Layout
- `buildai/client.py` — Client entry point
- `buildai/resources/v1/` — Resource namespaces
- `buildai/models/v1/` — Response models
- `buildai/ids.py` — Deterministic UUID helpers
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.6",
"requests>=2.31",
"buildai[gcs,torch]; extra == \"all\"",
"pytest>=8; extra == \"dev\"",
"ruff; extra == \"dev\"",
"gcsfs>=2023.12; extra == \"gcs\"",
"torch>=2.1; extra == \"torch\"",
"webdataset>=0.2.86; extra == \"torch\""
] | [] | [] | [] | [] | uv/0.7.20 | 2026-02-18T16:00:30.318338 | buildai_sdk-2.1.0.tar.gz | 160,818 | 28/dc/b5ffdf58d6ac39a15716d52d7dacf7b588ffac662c8eee18e106a1143899/buildai_sdk-2.1.0.tar.gz | source | sdist | null | false | 7622aa01e8e3675bd2cffd4838f9c174 | c2c84e091f67cdff6a6fecf97cf1e289f203eab9e865a31920b9dc2c42c3d166 | 28dcb5ffdf58d6ac39a15716d52d7dacf7b588ffac662c8eee18e106a1143899 | null | [] | 220 |
2.3 | csaf-lib | 0.1.0b12 | A library for generating, parsing and validating CSAF documents (VEX and Advisory) | # csaf-lib
[](https://github.com/RedHatProductSecurity/csaf-lib/actions/workflows/tests.yml)
[](https://github.com/RedHatProductSecurity/csaf-lib/actions/workflows/lint.yml)
A Python library for generating, parsing, and validating CSAF documents (VEX and Advisory).
## Installation
```bash
pip install csaf-lib
```
For development setup, see [DEVELOP.md](DEVELOP.md).
## Usage
### CLI
Read and parse a CSAF VEX file (with verification):
```bash
csaf-lib read tests/test_files/sample-vex.json
```
Read with verbose verification output:
```bash
csaf-lib read -v tests/test_files/sample-vex.json
```
Disable verification:
```bash
csaf-lib read --no-verify tests/test_files/minimal-vex.json
```
Verify a CSAF VEX file:
```bash
# Run all verification tests
csaf-lib verify tests/test_files/sample-vex.json
# Run only CSAF compliance tests (Test Set 1)
csaf-lib verify tests/test_files/sample-vex.json --test-set csaf
# Run only data type checks (Test Set 2)
csaf-lib verify tests/test_files/sample-vex.json --test-set data
# Run specific tests by ID
csaf-lib verify tests/test_files/sample-vex.json -t 1.1 -t 2.5
```
Validate with plugins:
```bash
csaf-lib validate tests/test_files/sample-vex.json
```
See docs/plugins.md for authoring and how the plugin system works.
### Python API - Reading Documents
```python
from csaf_lib.models import CSAFVEX
# Load from file
csafvex = CSAFVEX.from_file("path/to/document.json")
# Or load from dictionary
import json
with open("vex-file.json") as f:
data = json.load(f)
csafvex = CSAFVEX.from_dict(data)
# Access document metadata
print(csafvex.document.title)
print(csafvex.document.publisher.name)
print(csafvex.document.tracking.id)
# Access vulnerabilities
for vuln in csafvex.vulnerabilities:
print(f"CVE: {vuln.cve}")
if vuln.cwe:
print(f" CWE: {vuln.cwe.id}")
# Access product tree
if csafvex.product_tree:
for branch in csafvex.product_tree.branches:
print(f"Branch: {branch.name}")
# Serialize back to dictionary
data = csafvex.to_dict()
```
### Python API - Creating Documents
```python
from csaf_lib.models import (
CSAFVEX, Document, ProductTree, Vulnerability,
CSAFVersion, PublisherCategory, TrackingStatus,
BranchCategory, RelationshipCategory, RemediationCategory
)
from datetime import datetime, timezone
# Create document with fluent API
doc = Document(
category="csaf_vex",
csaf_version=CSAFVersion.VERSION_2_0,
title="Security Advisory for CVE-2025-0001"
)
doc.with_publisher(
name="Red Hat Product Security",
namespace="https://redhat.com",
category=PublisherCategory.VENDOR
).with_tracking(
id="CVE-2025-0001",
status=TrackingStatus.FINAL,
version="1",
initial_release_date=datetime(2025, 1, 1, tzinfo=timezone.utc),
generator_engine_name="csaf-lib",
generator_engine_version="0.1.0"
).add_tracking_revision(
number="1",
date=datetime(2025, 1, 1, tzinfo=timezone.utc),
summary="Initial version"
)
# Create product tree
tree = ProductTree()
vendor = tree.add_branch(BranchCategory.VENDOR, "Red Hat")
vendor.add_product_branch(
category=BranchCategory.PRODUCT_VERSION,
name="curl-1.0",
product_name="curl version 1.0",
product_id="curl-1.0",
helper_purl="pkg:rpm/redhat/curl@1.0"
)
# Create vulnerability
vuln = Vulnerability(cve="CVE-2025-0001", title="Security Advisory")
vuln.with_product_status(known_affected=["curl-1.0"])
vuln.add_remediation(
category=RemediationCategory.VENDOR_FIX,
details="Update to version 1.1",
product_ids=["curl-1.0"]
)
# Combine and save
vex = CSAFVEX(document=doc, product_tree=tree, vulnerabilities=[vuln])
with open("vex.json", "w") as f:
json.dump(vex.to_dict(), f, indent=2)
```
### Validation (Plugins) - Python API
```python
import logging
from csaf_lib.models import CSAFVEX
from csaf_lib.validation.validator import Validator
csafvex = CSAFVEX.from_file("path/to/document.json")
# Create validator; default log level is WARNING
validator = Validator(csafvex, log_level=logging.INFO)
# Run all installed validation plugins
report = validator.run_all()
print(f"Plugins: total={report.total}, passed={report.passed_count}, failed={report.failed_count}")
for r in report.results:
if not r.success:
print(f"[{r.validator_name}]")
for e in r.errors:
print(f" - {e.message}")
# Run a subset of plugins by name
subset = validator.run_plugins(["<PLUGIN-NAME>"])
print(f"Subset failed: {subset.failed_count}")
# List available plugin names
from csaf_lib.validation.validator import Validator
print(Validator.get_available_plugins())
```
**Documentation:**
- [Reading Documents](docs/csafvex-usage.md) - Detailed guide for parsing and accessing CSAF VEX data
- [Creating Documents](docs/creating-documents.md) - Complete guide for creating CSAF VEX documents with the fluent API
## Verification
The library provides comprehensive verification of CSAF VEX documents through two test sets:
- **Test Set 1 (CSAF Compliance)**: 14 tests verifying VEX Profile conformance and CSAF mandatory requirements
- **Test Set 2 (Data Type Checks)**: 16 tests verifying data format compliance, patterns, and schema constraints
### Using the Verifier
```python
from csaf_lib.verification import Verifier
# Create verifier from a file
verifier = Verifier.from_file("path/to/vex.json")
# Run all verification tests
report = verifier.run_all()
# Check results
if report.passed:
print("All verification tests passed!")
else:
print(f"Failed: {report.failed_count}/{report.total_tests}")
for failure in report.failures:
print(f" {failure.test_id}: {failure.message}")
# Run specific test sets
csaf_report = verifier.run_csaf_compliance() # Test Set 1 only
data_report = verifier.run_data_type_checks() # Test Set 2 only
# Run individual tests
result = verifier.run_test("1.1") # VEX Profile Conformance
result = verifier.run_test("2.5") # CVE ID Format
# Get available tests
tests = Verifier.get_available_tests()
for test_id, test_name in tests.items():
print(f"{test_id}: {test_name}")
```
### Verification Test Reference
| ID | Test Name | Description |
|----|-----------|-------------|
| 1.1 | VEX Profile Conformance | Document must have csaf_vex category and required sections |
| 1.2 | Base Mandatory Fields | Required tracking, publisher, and title fields |
| 1.3 | VEX Product Status Existence | Each vulnerability must have a product status |
| 1.4 | Vulnerability ID Existence | Each vulnerability must have CVE or IDs |
| 1.5 | Vulnerability Notes Existence | Each vulnerability must have notes |
| 1.6 | Product ID Definition (Missing) | All referenced product_ids must be defined |
| 1.7 | Product ID Definition (Multiple) | No duplicate product_id definitions |
| 1.8 | Circular Reference Check | No circular dependencies in relationships |
| 1.9 | Contradicting Product Status | Products cannot have conflicting statuses |
| 1.10 | Action Statement Requirement | known_affected products need remediations |
| 1.11 | Impact Statement Requirement | known_not_affected products need justification |
| 1.12 | Remediation Product Reference | Remediations must reference products |
| 1.13 | Flag Product Reference | Flags must reference products |
| 1.14 | Unique VEX Justification | Products can only have one VEX justification |
| 2.1 | JSON Schema Validation | Validates against CSAF 2.0 JSON schema |
| 2.2 | PURL Format | Package URL format validation |
| 2.3 | CPE Format | CPE 2.2 and 2.3 format validation |
| 2.4 | Date-Time Format | ISO 8601/RFC 3339 format validation |
| 2.5 | CVE ID Format | CVE identifier format validation |
| 2.6 | CWE ID Format | CWE identifier format validation |
| 2.7 | Language Code Format | BCP 47/RFC 5646 language code validation |
| 2.8 | Version Range Prohibition | No version ranges in product_version names |
| 2.9 | Mixed Versioning Prohibition | Consistent versioning scheme |
| 2.10 | CVSS Syntax | CVSS object schema validation |
| 2.11 | CVSS Calculation | CVSS score range validation |
| 2.12 | CVSS Vector Consistency | CVSS properties must match vectorString |
| 2.13 | File Size Soft Limit | Document should not exceed 15 MB |
| 2.14 | Array Length Soft Limit | Arrays should not exceed 100,000 items |
| 2.15 | String Length Soft Limit | Strings should not exceed field-specific limits |
## Contributing
Interested in contributing? Check out:
- [DEVELOP.md](DEVELOP.md) - Development setup, workflow, and contribution guidelines
- [RELEASE.md](RELEASE.md) - Release process for maintainers
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Authors
- Jakub Frejlach (jfrejlac@redhat.com)
- Juan Perez de Algaba (jperezde@redhat.com)
- George Vauter (gvauter@redhat.com)
- Daniel Monzonis (dmonzoni@redhat.com)
Developed by Red Hat Product Security.
| text/markdown | Jakub Frejlach, Juan Perez de Algaba, George Vauter, Daniel Monzonis | Jakub Frejlach <jfrejlac@redhat.com>, Juan Perez de Algaba <jperezde@redhat.com>, George Vauter <gvauter@redhat.com>, Daniel Monzonis <dmonzoni@redhat.com> | null | null | MIT License Copyright (c) 2025 Red Hat, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | csaf, vex, security, vulnerability, advisory | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pytho... | [] | null | null | >=3.11 | [] | [] | [] | [
"attrs>=25.4.0",
"click>=8.3.1",
"cvss>=3.0",
"jsonschema>=4.0.0",
"packageurl-python>=0.15.0"
] | [] | [] | [] | [] | uv/0.9.14 {"installer":{"name":"uv","version":"0.9.14","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Fedora Linux","version":"42","id":"","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T16:00:03.699156 | csaf_lib-0.1.0b12.tar.gz | 57,657 | 4b/b5/12d31351ee37850f344fe22ef9e9cb58664f323e1ef9860d038ca623bea6/csaf_lib-0.1.0b12.tar.gz | source | sdist | null | false | e0bd7ec2671bd6909cecfa683b3e916c | 490b1f739458cbfa06eeb927aa3c6e30e27024e5d4cc162fbdb9c03140cfbaee | 4bb512d31351ee37850f344fe22ef9e9cb58664f323e1ef9860d038ca623bea6 | null | [] | 205 |
2.1 | signal-ocean | 13.8.0 | Access Signal Ocean Platform data using Python. | The Signal Ocean SDK combines the power of Python and [Signal Ocean's APIs](https://apis.signalocean.com/) to give you access to a variety of shipping data available in [The Signal Ocean Platform](https://www.signalocean.com/platform).
# Installation
Install the SDK with pip:
```
pip install signal-ocean
```
The Signal Ocean SDK depends on the [pandas](https://pandas.pydata.org/) library for some of its data analysis features. Optional pandas dependencies are also optional in this SDK. If you plan to use data frame features like plotting or exporting to Excel, you need to install additional dependencies, for example:
```
pip install matplotlib openpyxl
```
For more information refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#optional-dependencies).
# Getting Started
To use the SDK, you need to create an account in our [API Portal](https://apis.signalocean.com/) and subscribe to an API.
Now you're ready to fetch some data. See our [examples](docs\examples) on how you can use our APIs.
# Building and contributing
Check [Contributing.md](Contributing.md) on how you can build and contribute this library.
| text/markdown | Signal Ocean Developers | signaloceandevelopers@thesignalgroup.com | null | null | Apache 2.0 | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"License :: OSI Approved :: Apache Software License"
] | [] | https://apis.signalocean.com/ | null | >=3.7 | [] | [] | [] | [
"requests<3,>=2.23.0",
"python-dateutil<3,>=2.8.1",
"pandas<3,>=1.0.3",
"numpy>=1.18.5",
"strictly-typed-pandas==0.1.4",
"typeguard<3.0.0,>=2.13.3"
] | [] | [] | [] | [
"The Signal Group, https://www.thesignalgroup.com/",
"Signal Ocean, https://www.signalocean.com/",
"The Signal Ocean Platform, https://app.signalocean.com"
] | twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.13 | 2026-02-18T15:59:42.094929 | signal_ocean-13.8.0-py3-none-any.whl | 160,565 | da/75/76bd70c941855148abc0aaf2f647992b491827e722bd75ed919697db739e/signal_ocean-13.8.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6e59c1a9f563fc0cb2e737a548c317f7 | 4767b7d949a9271691334e9e9985aa6b30a348a347d7e750f98a41fc4bd22f69 | da7576bd70c941855148abc0aaf2f647992b491827e722bd75ed919697db739e | null | [] | 446 |
2.1 | matatika | 1.17.1 | A Python utility for interfacing with the Matatika service | # Matatika
The `matatika` package allows a user to interact with the Matatika service. A command-line interface (CLI) and client library are included.
## Current Functionality
- Login with Bearer token and endpoint URL (default https://app.matatika.com/api)
- Override Bearer token and endpoint URL for supported operations
- Use a specific workspace by default for supported operations
- List all available workspaces
- List all available datasets in a given workspace
- Publish a dataset
- Fetch data from a dataset
## Upcoming Features
- Login with username and password though browser (CLI only)
- Create a dataset
- Create a data pipeline to a workspace
- List all data pipelines
- Run a data pipeline
| text/markdown | Matatika | support@matatika.com | null | null | Copyright (C) Matatika Limited - All Rights Reserved | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: Free To Use But Restricted",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://www.matatika.com/ | null | null | [] | [] | [] | [
"auth0-python~=4.6.1",
"click~=8.1.7",
"nbconvert~=7.6.0",
"python-dotenv~=0.21.1",
"pyyaml~=6.0.1",
"requests~=2.31.0",
"importlib-metadata; python_version < \"3.8\"",
"autopep8; extra == \"dev\"",
"pylint; extra == \"lint\"",
"pytest; extra == \"test\"",
"requests-mock; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T15:59:18.484988 | matatika-1.17.1-py3-none-any.whl | 50,239 | 67/f8/5d1e8811913b7733bfa7af8b58701c1b2951945174838c74e7cd43697201/matatika-1.17.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 31bc208e065c6384639d9621291e2a74 | 145417a7b013c98d0d6be29f7e66f2d7d089cf00fdca0f0d708a4d51b82c609d | 67f85d1e8811913b7733bfa7af8b58701c1b2951945174838c74e7cd43697201 | null | [] | 559 |
2.4 | databento | 0.71.0 | Official Python client library for Databento | # databento-python
[](https://github.com/databento/databento-python/actions/workflows/test.yml)

[](https://pypi.org/project/databento)
[](./LICENSE)
[](https://github.com/psf/black)
[](https://to.dbn.to/slack)
The official Python client library for [Databento](https://databento.com).
Key features include:
- Fast, lightweight access to both live and historical data from [multiple markets](https://databento.com/docs/faqs/venues-and-publishers).
- [Multiple schemas](https://databento.com/docs/schemas-and-data-formats/whats-a-schema?historical=python&live=python) such as MBO, MBP, top of book, OHLCV, last sale, and more.
- [Fully normalized](https://databento.com/docs/standards-and-conventions/normalization?historical=python&live=python), i.e. identical message schemas for both live and historical data, across multiple asset classes.
- Provides mappings between different symbology systems, including [smart symbology](https://databento.com/docs/api-reference-historical/basics/symbology?historical=python&live=python) for futures rollovers.
- [Point-in-time]() instrument definitions, free of look-ahead bias and retroactive adjustments.
- Reads and stores market data in an extremely efficient file format using [Databento Binary Encoding](https://databento.com/docs/standards-and-conventions/databento-binary-encoding?historical=python&live=python).
- Event-driven [market replay](https://databento.com/docs/api-reference-historical/helpers/bento-replay?historical=python&live=python), including at high-frequency order book granularity.
- Support for [batch download](https://databento.com/docs/faqs/streaming-vs-batch-download?historical=python&live=python) of flat files.
- Support for [pandas](https://pandas.pydata.org/docs/), CSV, and JSON.
## Documentation
The best place to begin is with our [Getting started](https://databento.com/docs/quickstart?historical=python&live=python) guide.
You can find our full client API reference on the [Historical Reference](https://databento.com/docs/api-reference-historical?historical=python&live=python) and
[Live Reference](https://databento.com/docs/reference-live?historical=python&live=python) sections of our documentation. See also the
[Examples](https://databento.com/docs/examples?historical=python&live=python) section for various tutorials and code samples.
## Requirements
The library is fully compatible with distributions of Anaconda 2023.x and above.
The minimum dependencies as found in the `pyproject.toml` are also listed below:
- python = "^3.10"
- aiohttp = "^3.8.3"
- databento-dbn = "~0.49.0"
- numpy = ">=1.23.5"
- pandas = ">=1.5.3"
- pip-system-certs = ">=4.0" (Windows only)
- pyarrow = ">=13.0.0"
- requests = ">=2.25.1"
- zstandard = ">=0.21.0"
## Installation
To install the latest stable version of the package from PyPI:
pip install -U databento
## Usage
The library needs to be configured with an API key from your account.
[Sign up](https://databento.com/signup) for free and you will automatically
receive a set of API keys to start with. Each API key is a 32-character
string starting with `db-`, that can be found on the API Keys page of your [Databento user portal](https://databento.com/platform/keys).
A simple Databento application looks like this:
```python
import databento as db
client = db.Historical('YOUR_API_KEY')
data = client.timeseries.get_range(
dataset='GLBX.MDP3',
symbols='ES.FUT',
stype_in='parent',
start='2022-06-10T14:30',
end='2022-06-10T14:40',
)
data.replay(callback=print) # market replay, with `print` as event handler
```
Replace `YOUR_API_KEY` with an actual API key, then run this program.
This uses `.replay()` to access the entire block of data
and dispatch each data event to an event handler. You can also use
`.to_df()` or `.to_ndarray()` to cast the data into a Pandas `DataFrame` or numpy `ndarray`:
```python
df = data.to_df() # to DataFrame
array = data.to_ndarray() # to ndarray
```
Note that the API key was also passed as a parameter, which is
[not recommended for production applications](https://databento.com/docs/portal/api-keys?historical=python&live=python).
Instead, you can leave out this parameter to pass your API key via the `DATABENTO_API_KEY` environment variable:
```python
import databento as db
# Pass as parameter
client = db.Historical('YOUR_API_KEY')
# Or, pass as `DATABENTO_API_KEY` environment variable
client = db.Historical()
```
## License
Distributed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html).
| text/markdown | Databento | support@databento.com | null | null | null | null | [
"Development Status :: 4 - Beta",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp<4.0.0,>=3.8.3; python_version < \"3.12\"",
"aiohttp<4.0.0,>=3.9.0; python_version >= \"3.12\"",
"databento-dbn<0.50.0,>=0.49.0",
"numpy>=1.23.5; python_version < \"3.12\"",
"numpy>=1.26.0; python_version >= \"3.12\"",
"pandas<4.0.0,>=1.5.3",
"pip-system-certs>=4.0; platform_system == \"Windows\... | [] | [] | [] | [
"Bug Tracker, https://github.com/databento/databento-python/issues",
"Documentation, https://databento.com/docs",
"Homepage, https://databento.com",
"Repository, https://github.com/databento/databento-python"
] | poetry/2.3.2 CPython/3.12.12 Linux/6.14.0-1017-azure | 2026-02-18T15:59:10.128655 | databento-0.71.0-py3-none-any.whl | 88,465 | ad/6c/0b17add0f01b0a61f8cb2a3ad1e5086e4d4b7c54fc9e7ad6d1b1047f7c5f/databento-0.71.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b0b65e783c84557d253a55a6b658a8f3 | 4439acb116a74e3def14caf156fb1c9b3df40fe18ff011ec46e87a7fa82d5d19 | ad6c0b17add0f01b0a61f8cb2a3ad1e5086e4d4b7c54fc9e7ad6d1b1047f7c5f | Apache-2.0 | [
"LICENSE"
] | 12,202 |
2.2 | zabel-elements | 1.43.1 | The Zabel default clients and images | # zabel-elements
## Overview
This is part of the Zabel platform. The **zabel-elements** package contains
the standard _elements_ library for Zabel.
An element is an external service such as _Artifactory_ or _Jenkins_ or an
LDAP server that can be managed or used by Zabel.
This package provides the necessary wrappers for some elements commonly found
in many workplaces, namely:
<div class="grid cards" markdown>
- Artifactory
- Atlassian
- CloudBeesJenkins
- Confluence
- ConfluenceCloud
- GitHub
- GitHubCloud
- GitLab
- Jira
- JiraCloud
- Kubernetes (in alpha)
- Okta
- SonarQube
- SonatypeNexus
- SquashTM
</div>
Elements are of two kinds: _Managed services_, which represent services that
are managed by Zabel, and _Utilities_, which represent services that are used
by Zabel.
Managed services host project resources. They typically are the tools that
project members interact with directly.
Utilities may also host project resources, but they typically are not used
directly by project members. They are either references or infrastructure
services necessary for the managed services to function, but otherwise not
seen by project members. An LDAP server would probably be a utility, used
both as a reference and as an access control tool.
In the above list, Atlassian, Kubernetes and Okta are utilities. The other
elements are managed services.
You can use this library independently of the Zabel platform, as it has no
specific dependencies on it. In particular, the **zabel.elements.clients**
module may be of interest if you want to perform some configuration tasks
from your own Python code.
Contributions of new wrappers or extensions of existing wrappers are welcomed.
But elements can be provided in their own packages too.
## Architecture
It contains two parts:
- The **zabel.elements.clients** module
- The **zabel.elements.images** base classes module
There is one _image_ per client (hence one image per element). Images are classes
with a standardized constructor taking no parameters and a `run()` method and are
how code is packaged so that it can be deployed on the Zabel platform.
## zabel.elements.clients
The **zabel.elements.clients** module provides a wrapper class per tool.
It relies on the **zabel-commons** library, using its
_zabel.commons.exceptions_ module for the _ApiError_ exception class,
its _zabel.commons.sessions_ module for HTTPS session handling,
and its _zabel.commons.utils_ module that contains useful functions.
### Conventions for Clients
If an existing library already provides all the needed functionality, there is
no need to add it to this library.
If an existing library already provides some of the needed functionality, a
wrapper class can be written that will use this existing library as a client.
Do not inherit from it.
Wrapper classes have two parts: a _base_ part that implements single API
calls (and possibly pagination), and a _regular_ part that inherits from the
base part and possibly extends it.
The base part may not exist if an already existing library provides wrappers
for the needed low-level calls. In such a case, the regular class may simply
use the existing library as a client and inherit from `object`.
Similarly, the regular part may be empty, in that it may simply inherit from
the base class and contain no additional code.
At import time, wrapper classes should not import libraries not part of the
Python standard library or **requests** or modules part of the
**zabel-commons** library. That way, projects not needing some tool do not
have to install their required dependencies. Wrapper classes may import
libraries in their `__init__()` methods, though.
If an API call is successful, it will return a value (possibly None). If not,
it will raise an _ApiError_ exception.
If a wrapper class method is called with an obviously invalid parameter
(wrong type, not a permitted value, ...), a _ValueError_ exception will be
raised.
<h4>Note</h4>
Base classes do not try to provide features not offered by the tool API.
Their methods closely match the underlying API.
They offer a uniform (or, at least, harmonized) naming convention, and may
simplify technical details (pagination is automatically performed if
needed).
## zabel.elements.images
The **zabel.elements.images** module provides image wrappers for the
built-in clients' classes (those defined in the **zabel.elements.clients**
module).
Those abstract image wrappers implement an `__init__()` constructor with no
parameter and a default `run()` method that can be overridden.
Managed services also implement at least the `list_members()` method of
the _ManagedServiceApp_ interface. They may provide `get_member()` if a fast
implementation is available.
Concrete classes deriving those abstract managed services wrappers should
provide a `get_canonical_member_id()` method that takes a parameter, a user
from the wrapped API point of view, and returns the canonical user ID, as
well as a `get_internal_member_id()` method that takes a canonical user ID
and returns the internal key for that user.
They should also provide concrete implementations for the remaining methods
provided by the _ManagedServiceApp_ interface.
### Conventions for Images
Utilities images must implement the _UtilityApp_ interface and managed services
images must implement the _ManagedServiceApp_ interface.
## License
```text
Copyright (c) 2019 Martin Lafaix (martin.lafaix@external.engie.com) and others
This program and the accompanying materials are made
available under the terms of the Eclipse Public License 2.0
which is available at https://www.eclipse.org/legal/epl-2.0/
SPDX-License-Identifier: EPL-2.0
```
| text/markdown | Martin Lafaix | martin.lafaix@external.engie.com | null | null | Eclipse Public License 2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Eclipse Public License 2.0 (EPL-2.0)",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries"
] | [] | https://github.com/engie-group/zabel | null | >=3.12.0 | [] | [] | [] | [
"zabel-commons>=1.10",
"python-gitlab>=8.0; extra == \"gitlab\"",
"Jira>=3.0; extra == \"jira\"",
"kubernetes>=35.0; extra == \"kubernetes\"",
"okta<=2.9.10,>=2.9; extra == \"okta\"",
"pynacl>=1.5.0; extra == \"pynacl\"",
"Jira>=3.0; extra == \"all\"",
"kubernetes>=35.0; extra == \"all\"",
"okta<=2.... | [] | [] | [] | [] | twine/5.0.0 CPython/3.12.3 | 2026-02-18T15:58:55.932368 | zabel_elements-1.43.1-py3-none-any.whl | 188,297 | 48/95/7b7b9204ffd32d2b2e4ff4729e6d64d3395bebf672311b6890ecc2e94076/zabel_elements-1.43.1-py3-none-any.whl | py3 | bdist_wheel | null | false | b4f3cecb207d0a7cb2449c94a78e1bdd | 8bb4c7596ceebbeec7befddeb74707d44713dff702f4b5d669005c2adb721781 | 48957b7b9204ffd32d2b2e4ff4729e6d64d3395bebf672311b6890ecc2e94076 | null | [] | 222 |
2.4 | metrics-utility | 0.7.20260218 | A metrics utility for Ansible | # metrics-utility
metrics-utility deals with collecting, analyzing and reporting metrics from [Ansible Automation Platform (AAP)](https://www.ansible.com/products/automation-platform) Controller instances.
It provides two interfaces - a [CLI](#cli) and a python [library](#python-library).
Also see below for [dev setup](#developer-setup), and other [docs](#documentation).
### CLI
A `metrics-utility` CLI tool for collecting and reporting metrics from Controller, allowing users to:
- Collect Controller usage data from the database, settings, and prometheus
- Analyze the data and generate `.xlsx` reports
- Support multiple storage adapters for data persistence (local directory, S3)
- Push metrics data to `console.redhat.com`
It can run either standalone (against a specified postgres instance),
or inside the Controller's python virtual environment. The controller mode allows the `config` collector to collect more settings and takes DB connection details from there.
It provides two subcommands:
- `gather_automation_controller_billing_data`
- collects data from controller, saves daily tarballs with `.csv` / `.json` inside
- saves tarballs in specified storage
- optionally sends to console
- `build_report`
- builds a `.xlsx` report
- 3 report types - `CCSP`, `CCSPv2`, `RENEWAL_GUIDANCE`
- the ccsp* reports use the collected tarballs as the source
- the renewal* report reads from controller db
Example invocation:
```bash
pip install metrics-utility
# common
export METRICS_UTILITY_SHIP_PATH="./out"
export METRICS_UTILITY_SHIP_TARGET="directory"
# gather data
metrics-utility gather_automation_controller_billing_data --ship --until=10m
ls out/data/`date +%Y/%m/%d`/ # data/<year>/<month>/<day>/<uuid>-<since>-<until>-<index>-<collection>.tar.gz
# build report
export METRICS_UTILITY_REPORT_TYPE="CCSPv2"
metrics-utility build_report --month=`date +%Y-%m` # year-month
ls out/reports/`date +%Y/%m`/ # reports/<year>/<month>/<type>-<year>-<month>.xlsx
```
See [docs/cli.md](./docs/cli.md) and [docs/old-readme.md](./docs/old-readme.md) for details on the usage,
See [docs/environment.md](./docs/environment.md) for a full list of environment variables,
See [docs/awx.md](./docs/awx.md) for more on running against an awx dev env.
### Python library
The `metrics_utility.library` library provides a lower-level python API exposing the same functionality using these abstractions:
* collectors - functions that collect specific data, from database to a `.csv`, or from elsewhere into a python dict
* packagers - packages multiple related `.csv` & `.json` into `.tar.gz` daily tarballs
* extractors - extracts these tarballs, loading specific data into dicts or Pandas dataframe
* rollups - group and aggregate dataframes, compute stats and optionally save them
* reports - builds a xlsx report from a set of dataframes
* storage - unified storage backend for filesystem, s3, segment, crc and db
* instants - associated datetime-related helpers
* tempdir & db locking helpers
The library uses no env variables, and doesn't rely on Controller environment.
The CLI is expected to use the library where possible, but is not limited to it.
Example use:
```python
from metrics_utility.library.collectors.controller import config, main_jobevent
from metrics_utility.library.instants import last_day, this_day
from metrics_utility.library import lock, storage
db = ... # django.db.connection / psycopg 3
dir_storage = storage.StorageDirectory(base_path='./out')
with lock(db=db, key='my-unique-key'):
# dict, will be converted to json
config_dict = config(db=db).gather()
# list of .csv filenames; since is included, until is excluded
job_csvs = main_jobevent(db=db, since=last_day(), until=this_day()).gather()
# save in storage
dir_storage.put('config.json', dict=config_dict)
for index, file in enumerate(job_csvs):
dir_storage.put(f'main_jobevent.{index}.csv', filename=file)
os.remove(file)
```
See [library README](./metrics_utility/library/README.md) for details.
See [workers/](./workers/) for more library usage examples.
## Developer setup
### Prerequisites
- Python 3.12 or later
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Docker compose
- `make`, `git`
Dependencies are managed via `pyproject.toml` (& `uv.lock`).
There is also `setup.cfg` with dependencies but those are only used for the controller mode.
The Docker compose environment is used to provide a quick postgres & minio instances on ports 5432 and 9000/9001, but they can be replaced with local setup. See [docker-compose.yaml](./tools/docker/docker-compose.yaml) for details of the `mc` setup (substitute the `minio` hostname for localhost), and [tools/docker/\*.sql](./tools/docker/) for users & data to import in postgres (start with `roles.sql` and `latest.sql`). (Or don't, and use docker.)
`uv` is also not required as long as you can manage your own python venv and install dependencies from `pyproject.toml`.
Optionally, `uvx pre-commit install` to run ruff checks from a pre-commit hook, defined in [.pre-commit-config.yaml](../.pre-commit-config.yaml). Or you can run `make lint` / `make fix` manually.
### Installation
```bash
# Clone the repository
git clone https://github.com/ansible/metrics-utility.git
cd metrics-utility
# Install dependencies using uv
uv sync
```
### Run
```bash
cd metrics-utility
make compose
```
```bash
cd metrics-utility
uv run ./manage.py --help
uv run ./manage.py gather_automation_controller_billing_data --help
uv run ./manage.py build_report --help
```
`make clean` resets the docker environment,
`make lint` & `make fix` run the linters & formatters,
`make psql` runs psql in the postgres container.
### Tests
Some tests depend on a running postgres & minio instance - run `make compose` to get one.
`make test` runs the full test suite,
`make coverage` produces a coverage report.
Use `uv run pytest -s -v` for running tests with verbose output, also accepts test filenames.
See [docs/tests-compose.md](./docs/tests-compose.md) to run the tests inside the docker compose environment.
## Documentation
More documentation is available in [docs/](./docs/), and elsewhere:
* [CHANGELOG.md](./CHANGELOG.md) - changes between tagged releases
* [LICENSE.md](./LICENSE.md) - the Apache-2.0 license
* [README.md](./README.md) - this README
* [docs/CONTRIBUTING.md](./docs/CONTRIBUTING.md) - Contributor's guide
* [docs/awx.md](./docs/awx.md) - running against awx dev env
* [docs/cli.md](./docs/cli.md) - CLI docs
* [docs/environment.md](./docs/environment.md) - Environment variables
* [docs/old-readme.md](./docs/old-readme.md) - pre-0.5 README, with more examples
* [docs/tests-compose.md](./docs/tests-compose.md) - running tests inside docker compose
* [docs/vcpu.md](./docs/vcpu.md) - docs for the total workers vcpu collector
* [metrics\_utility/library/](./metrics_utility/library/) - library documentation
* [tools/anonymized\_db\_perf\_data/](./tools/anonymized_db_perf_data/) - perf test data for anonymization
* [tools/collections/](./tools/collections/) - scripts for pulling list of collections from galaxy & automation hub
* [tools/docker/](./tools/docker/) - docker compose environment & mock awx data
* [tools/perf/](./tools/perf/) - perf test data generator and scripts for build report
* [tools/testathon/](./tools/testathon/) - data generator for testing
Please follow our [Contributor's Guide](./docs/CONTRIBUTING.md) for details on submitting changes and documentation standards.
| text/markdown | Red Hat | Red Hat <info@ansible.com> | null | null | null | ansible, metrics | [
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"Intended Audience :: Developers",
"Operating System :: POSIX",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Systems Adminis... | [] | null | null | >=3.12 | [] | [] | [] | [
"boto3==1.35.96",
"botocore==1.35.96",
"distro==1.9.0",
"django==5.2.7",
"kubernetes>=33.1.0",
"openpyxl==3.1.2",
"pandas>=2.2.3",
"psycopg==3.3.0",
"requests==2.32.5",
"segment-analytics-python>=2.3.4",
"setuptools==80.9.0"
] | [] | [] | [] | [
"Repository, https://github.com/ansible/metrics-utility.git",
"Issues, https://github.com/ansible/metrics-utility/issues",
"Changelog, https://github.com/ansible/metrics-utility/blob/devel/CHANGELOG.md"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T15:58:03.479014 | metrics_utility-0.7.20260218-py3-none-any.whl | 441,329 | 10/e0/62061c734740699e2a5ecd0ed779e6c04af1e88dfc34c92cff4d36101709/metrics_utility-0.7.20260218-py3-none-any.whl | py3 | bdist_wheel | null | false | 3f009ccb94e63540ee89cdceb0937e49 | b6bc5b015dd6a57482be6674dc86b81e714cb4aa5207cb74c8746fa21117a825 | 10e062061c734740699e2a5ecd0ed779e6c04af1e88dfc34c92cff4d36101709 | Apache-2.0 | [
"LICENSE.md"
] | 356 |
2.4 | coregtor | 0.2.13 | A tool to predict transcription co-regulators for a gene from gene expression data using random forest methods | # Predict transcription gene co regulator from gene expressing data using `coRegTor`
The cell uses the information in the genome to generate proteins. This process has 2 steps. Step one is conversion of DNA into an mRNA (messenger RNA). This process is called Transcription. In the second step, called translation, the mRNa is "translated" into proteins. Both steps involved many sub steps. The process is complex and is regulated at each step in the way.
| text/markdown | Shubh Vardhan Jain | shubhvjain@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"decoupler<3.0.0,>=2.1.1",
"joblib<2.0.0,>=1.5.3",
"matplotlib<4.0.0,>=3.10.7",
"networkx<4.0,>=3.5",
"numpy<3.0.0,>=2.3.4",
"pandas<3.0.0,>=2.3.3",
"psutil<8.0.0,>=7.1.3",
"scikit-learn<2.0.0,>=1.7.2",
"seaborn<0.14.0,>=0.13.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:57:56.839538 | coregtor-0.2.13.tar.gz | 37,160 | 2c/06/8954fe384f312243ebda9b4466d6342ac825421e1a794130fd868eb90123/coregtor-0.2.13.tar.gz | source | sdist | null | false | ee6fa756f72e87ba55ad9606734bdbf8 | 173aaaa7c63b628615fd19d2ce4aa9652998df3b8c48a23e1167534d5d9a5b65 | 2c068954fe384f312243ebda9b4466d6342ac825421e1a794130fd868eb90123 | MIT | [
"LICENSE"
] | 223 |
2.4 | hunknote | 1.3.0 | AI-powered git commit message generator with multi-LLM support | # Hunknote
A fast, reliable CLI tool that generates high-quality git commit messages from your staged changes using AI.
## Features
- **Automatic commit message generation** from staged git changes
- **Multi-LLM support**: Anthropic, OpenAI, Google Gemini, Mistral, Cohere, Groq, and OpenRouter
- **Commit style profiles**: Default, Blueprint (structured sections), Conventional Commits, Ticket-prefixed, and Kernel-style
- **Smart scope inference**: Automatically detect scope from file paths (monorepo, path-prefix, mapping)
- **Intelligent type selection**: Automatically selects the correct commit type (feat, fix, docs, test, etc.)
- **Structured output**: Title line + bullet-point body following git best practices
- **Smart caching**: Reuses generated messages for the same staged changes (no redundant API calls)
- **Raw JSON debugging**: Inspect the raw LLM response with `--json` flag
- **Intelligent context**: Distinguishes between new files and modified files for accurate descriptions
- **Editor integration**: Review and edit generated messages before committing
- **One-command commits**: Generate and commit in a single step
- **Configurable ignore patterns**: Exclude lock files, build artifacts, etc. from diff analysis
- **Debug mode**: Inspect cache metadata, token usage, scope inference, and file change details
- **Comprehensive test suite**: 436 unit tests covering all modules
## Installation
### Option 1: Install from PyPI (Recommended)
```bash
# Using pipx (recommended - installs in isolated environment)
pipx install hunknote
# Or using pip
pip install hunknote
```
### Option 2: Install from Source
```bash
# Clone the repository
git clone <repo-url>
cd hunknote
# Install with Poetry (requires Python 3.12+)
poetry install
# Or install in development mode with test dependencies
poetry install --with dev
```
### Verify Installation
```bash
# Check that hunknote is available
hunknote --help
# Check git subcommand works
git hunknote --help
```
## Quick Start
```bash
# Initialize configuration (interactive setup)
hunknote init
# This will prompt you to:
# 1. Select an LLM provider (Anthropic, OpenAI, Google, etc.)
# 2. Choose a model
# 3. Enter your API key
# Stage your changes
git add <files>
# Generate a commit message
hunknote
# Or generate, edit, and commit in one step
hunknote -e -c
```
## Configuration
### Initial Setup
Run the interactive configuration wizard:
```bash
hunknote init
```
This creates a global configuration at `~/.hunknote/` with:
- `config.yaml` - Provider, model, and preference settings
- `credentials` - Securely stored API keys (read-only permissions)
### Managing Configuration
View current configuration:
```bash
hunknote config show
```
Change provider or model:
```bash
# Interactive model selection
hunknote config set-provider google
# Or specify model directly
hunknote config set-provider anthropic --model claude-sonnet-4-20250514
```
Update API keys:
```bash
hunknote config set-key google
hunknote config set-key anthropic
```
List available providers and models:
```bash
hunknote config list-providers
hunknote config list-models google
hunknote config list-models # Show all providers and models
```
### Manual Configuration
Alternatively, you can manually edit `~/.hunknote/config.yaml`:
```yaml
provider: google
model: gemini-2.0-flash
max_tokens: 1500
temperature: 0.3
editor: gedit # Optional: preferred editor for -e flag
default_ignore: # Optional: patterns to ignore in all repos
- poetry.lock
- package-lock.json
- "*.min.js"
```
And add API keys to `~/.hunknote/credentials`:
```
GOOGLE_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_anthropic_key
OPENAI_API_KEY=your_openai_key
```
### Setting Up API Keys (Alternative Methods)
API keys are checked in this order:
1. Environment variables (highest priority - useful for CI/CD)
2. `~/.hunknote/credentials` file (recommended for local development)
3. Project `.env` file (lowest priority)
Set via environment variable:
```bash
# Anthropic
export ANTHROPIC_API_KEY=your_key_here
# OpenAI
export OPENAI_API_KEY=your_key_here
# Google Gemini
export GOOGLE_API_KEY=your_key_here
# Mistral
export MISTRAL_API_KEY=your_key_here
# Cohere
export COHERE_API_KEY=your_key_here
# Groq
export GROQ_API_KEY=your_key_here
# OpenRouter (access to 200+ models)
export OPENROUTER_API_KEY=your_key_here
```
Or create a `.env` file in your project root.
### Supported Providers and Models
| Provider | Models | API Key Variable |
|----------|--------|------------------|
| **Anthropic** | claude-sonnet-4-20250514, claude-3-5-sonnet-latest, claude-3-5-haiku-latest, claude-3-opus-latest | `ANTHROPIC_API_KEY` |
| **OpenAI** | gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, gpt-4-turbo | `OPENAI_API_KEY` |
| **Google** | gemini-3-pro-preview, gemini-2.5-pro, gemini-3-flash-preview, gemini-2.5-flash, gemini-2.5-flash-lite, gemini-2.0-flash, gemini-2.0-flash-lite | `GOOGLE_API_KEY` |
| **Mistral** | mistral-large-latest, mistral-medium-latest, mistral-small-latest, codestral-latest | `MISTRAL_API_KEY` |
| **Cohere** | command-r-plus, command-r, command | `COHERE_API_KEY` |
| **Groq** | llama-3.3-70b-versatile, llama-3.1-8b-instant, mixtral-8x7b-32768 | `GROQ_API_KEY` |
| **OpenRouter** | 200+ models (anthropic/claude-sonnet-4, openai/gpt-4o, google/gemini-2.0-flash-exp, meta-llama/llama-3.3-70b-instruct, deepseek/deepseek-chat, qwen/qwen-2.5-72b-instruct, etc.) | `OPENROUTER_API_KEY` |
## Usage
### Basic Usage
Stage your changes and generate a commit message:
```bash
git add <files>
hunknote
```
### Command Options
| Flag | Description |
|------|-------------|
| `-e, --edit` | Open the generated message in an editor for manual edits |
| `-c, --commit` | Automatically commit using the generated message |
| `-r, --regenerate` | Force regenerate, ignoring cached message |
| `-d, --debug` | Show full cache metadata (staged files, tokens, diff preview, scope inference) |
| `-j, --json` | Show the raw JSON response from the LLM for debugging |
| `--style` | Override commit style profile (default, blueprint, conventional, ticket, kernel) |
| `--scope` | Force a scope for the commit message (use 'auto' for inference) |
| `--no-scope` | Disable scope even if profile supports it |
| `--scope-strategy` | Scope inference strategy (auto, monorepo, path-prefix, mapping, none) |
| `--ticket` | Force a ticket key (e.g., PROJ-123) for ticket-style commits |
| `--max-diff-chars` | Maximum characters for staged diff (default: 50000) |
### Scope Inference
Hunknote automatically infers scope from your staged files for consistent commit messages:
```bash
# Automatic scope inference (default)
hunknote --style conventional # feat(api): Add endpoint
# Force a specific scope
hunknote --scope auth --style conventional
# Disable scope
hunknote --no-scope --style conventional
# Choose inference strategy
hunknote --scope-strategy monorepo --style conventional
hunknote --scope-strategy path-prefix --style conventional
```
**Supported strategies:**
- **auto** (default): Tries all strategies in order
- **monorepo**: Infer from `packages/`, `apps/`, `libs/` directories
- **path-prefix**: Use the most common path segment
- **mapping**: Use explicit path-to-scope mapping in config
- **none**: Disable scope inference
### Commit Style Profiles
Hunknote supports multiple commit message formats to match your team's conventions:
```bash
# List available profiles
hunknote style list
# Show details about a profile
hunknote style show blueprint
# Set default style globally
hunknote style set blueprint
# Set style for current repo only
hunknote style set ticket --repo
# Override style for a single run
hunknote --style blueprint --scope api
hunknote --style conventional --scope api
hunknote --style ticket --ticket PROJ-123 -e -c
```
#### Available Profiles
| Profile | Format | Description |
|---------|------------------------------|-------------|
| **default** | `<Title>\n\n- <bullet>` | Simple title + bullet points |
| **blueprint** | `<type>(<scope>): <title>\n\n<summary>\n\nChanges:\n- ...` | Structured sections (Changes, Implementation, Testing, Documentation, Notes) |
| **conventional** | `<type>(<scope>): <subject>` | Conventional Commits format |
| **ticket** | `<KEY-123> <subject>` | Ticket-prefixed format |
| **kernel** | `<subsystem>: <subject>` | Linux kernel style |
### Ignore Pattern Management
Manage which files are excluded from the diff sent to the LLM:
```bash
# List all ignore patterns
hunknote ignore list
# Add a new pattern
hunknote ignore add "*.log"
hunknote ignore add "build/*"
hunknote ignore add "dist/*"
# Remove a pattern
hunknote ignore remove "*.log"
```
### Configuration Commands
Manage global configuration stored in `~/.hunknote/`:
```bash
# View current configuration
hunknote config show
# Set or update API key for a provider
hunknote config set-key google
hunknote config set-key anthropic
# Change provider and model
hunknote config set-provider google
hunknote config set-provider anthropic --model claude-sonnet-4-20250514
# List available providers
hunknote config list-providers
# List models for a specific provider
hunknote config list-models google
# List all providers and their models
hunknote config list-models
```
### Examples
```bash
# Generate commit message (print only, cached for reuse)
hunknote
# Generate and open in editor
hunknote -e
# Generate and commit directly
hunknote -c
# Edit message then commit
hunknote -e -c
# Force regeneration (ignore cache)
hunknote -r
# Debug: view cache metadata, token usage, and scope inference
hunknote -d
# View raw JSON response from LLM
hunknote -j
# Force regenerate and view raw JSON
hunknote -r -j
# Use conventional commits style with scope
hunknote --style conventional --scope api
# Use blueprint style for detailed commit messages
hunknote --style blueprint
# Use ticket-prefixed style
hunknote --style ticket --ticket PROJ-6767 -e -c
# Force kernel style for this commit
hunknote --style kernel --scope auth
```
### Git Subcommand
You can also use it as a git subcommand:
```bash
git hunknote
git hunknote -e -c
```
## How It Works
1. **Collects git context**: branch name, file changes (new vs modified), last 5 commits, and staged diff
2. **Computes a hash** of the context to check cache validity
3. **Checks cache**: If valid, uses cached message; otherwise calls the configured LLM
4. **Parses the response**: Extracts structured JSON (title + bullet points) from LLM response
5. **Renders the message**: Formats into standard git commit message format
6. **Optionally opens editor** and/or commits
### Intelligent File Change Detection
The tool distinguishes between:
- **New files** (did not exist before this commit)
- **Modified files** (already existed, now changed)
- **Deleted files**
- **Renamed files**
This context helps the LLM generate accurate descriptions - for example, it won't say "implement caching" when you're just adding tests for existing caching functionality.
## Caching Behavior
The tool caches generated commit messages to avoid redundant API calls:
- **Same staged changes** → Uses cached message (no API call)
- **Different staged changes** → Regenerates automatically
- **After commit** → Cache is invalidated
- **Use `-r` flag** → Force regeneration
Cache files are stored in `<repo>/.hunknote/`:
- `hunknote_message.txt` - The cached commit message
- `hunknote_context_hash.txt` - Hash of the git context
- `hunknote_metadata.json` - Full metadata (tokens, model, timestamp)
- `hunknote_llm_response.json` - Raw JSON response from LLM (for debugging with `-j`)
- `config.yaml` - Repository-specific configuration
**Gitignore recommendation:** Add these to your `.gitignore`:
```
# hunknote cache files (but keep config.yaml for shared settings)
.hunknote/hunknote_*.txt
.hunknote/hunknote_*.json
```
## Repository Configuration
Each repository can have its own `.hunknote/config.yaml` file for customization.
The file is auto-created with defaults on first run.
### Ignore Patterns
The `ignore` section lists file patterns to exclude from the diff sent to the LLM.
This reduces token usage and focuses the commit message on actual code changes.
```yaml
ignore:
# Lock files (auto-generated)
- poetry.lock
- package-lock.json
- yarn.lock
- pnpm-lock.yaml
- Cargo.lock
- Gemfile.lock
- composer.lock
- go.sum
# Build artifacts
- "*.min.js"
- "*.min.css"
- "*.map"
# Binary and generated files
- "*.pyc"
- "*.pyo"
- "*.so"
- "*.dll"
- "*.exe"
# IDE files
- ".idea/*"
- ".vscode/*"
- "*.swp"
- "*.swo"
```
You can add custom patterns using glob syntax (e.g., `build/*`, `*.generated.ts`).
## Output Format
Generated messages follow git best practices:
```
Add user authentication feature
- Implement login and logout endpoints
- Add session management middleware
- Create user model with password hashing
```
## Development
### Running Tests
The project includes a comprehensive test suite with 436 tests:
```bash
# Run all tests
pytest tests/
# Run with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_formatters.py
# Run specific test
pytest tests/test_cache.py::TestSaveCache::test_saves_all_files
```
### Test Coverage
| Module | Tests | Description |
|--------|-------|-------------|
| `cache.py` | 40 | Caching utilities, metadata, raw JSON storage |
| `cli.py` | 42 | CLI commands and subcommands |
| `config.py` | 24 | Configuration constants and enums |
| `formatters.py` | 21 | Commit message formatting and validation |
| `git_ctx.py` | 31 | Git context collection and filtering |
| `global_config.py` | 26 | Global user configuration (~/.hunknote/) |
| `scope.py` | 54 | Scope inference from file paths |
| `styles.py` | 96 | Commit style profiles and rendering |
| `llm/base.py` | 51 | JSON parsing, schema validation, style prompts |
| `llm/*.py` providers | 31 | All LLM provider classes |
| `user_config.py` | 20 | Repository YAML config file management |
### Project Structure
```
hunknote/
├── __init__.py
├── cli.py # CLI entry point and commands
├── config.py # LLM provider configuration
├── cache.py # Caching utilities
├── formatters.py # Commit message formatting
├── styles.py # Commit style profiles (default, blueprint, conventional, ticket, kernel)
├── scope.py # Scope inference (monorepo, path-prefix, mapping)
├── git_ctx.py # Git context collection
├── user_config.py # Repository config management
├── global_config.py # Global user config (~/.hunknote/)
└── llm/
├── __init__.py # Provider factory
├── base.py # Base classes and prompts
├── anthropic_provider.py
├── openai_provider.py
├── google_provider.py
├── mistral_provider.py
├── cohere_provider.py
├── groq_provider.py
└── openrouter_provider.py
```
## Requirements
- Python 3.12+
- Git
- API key for at least one supported LLM provider
## Dependencies
- `typer` (>=0.21.0) - CLI framework
- `pydantic` (>=2.5.0) - Data validation
- `python-dotenv` - Environment variable management
- `pyyaml` - YAML configuration
- LLM SDKs: `anthropic`, `openai`, `google-genai`, `mistralai`, `cohere`, `groq`
## License
MIT
| text/markdown | Avinash Ranganath | nash911@gmail.com | null | null | MIT | git, commit, ai, llm, cli, anthropic, openai, gemini, commit-message, hunknote | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"anthropic<0.80.0,>=0.79.0",
"cohere<6.0.0,>=5.20.5",
"google-genai<2.0.0,>=1.63.0",
"groq<2.0.0,>=1.0.0",
"mistralai<2.0.0,>=1.12.2",
"openai<3.0.0,>=2.20.0",
"pydantic<3.0.0,>=2.12.5",
"python-dotenv<2.0.0,>=1.2.1",
"pyyaml<7.0.0,>=6.0.3",
"typer==0.21.2"
] | [] | [] | [] | [
"Documentation, https://github.com/nash911/hunknote#readme",
"Homepage, https://github.com/nash911/hunknote",
"Repository, https://github.com/nash911/hunknote"
] | poetry/2.3.2 CPython/3.10.12 Linux/6.8.0-94-generic | 2026-02-18T15:57:49.566602 | hunknote-1.3.0-py3-none-any.whl | 62,272 | 93/e5/bbf621549b1c2af73f41a7b0cc80e01912f666c172d4c5c69ca0cd929e32/hunknote-1.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 9b678256e73940bdae09e83dc5dfbfa0 | 0c0edd2c385dfbed5d7d34c91daab0549cde67acb12cb5d3e38b8f6072477345 | 93e5bbf621549b1c2af73f41a7b0cc80e01912f666c172d4c5c69ca0cd929e32 | null | [
"LICENSE"
] | 226 |
2.4 | xetrack | 0.5.2 | A simple tool for benchamrking and tracking machine learning models and experiments. | <p align="center">
<img src="https://raw.githubusercontent.com/xdssio/xetrack/main/docs/images/logo.jpg" alt="logo" width="400" />
</p>
<p align="center">
<a href="https://github.com/xdssio/xetrack/actions/workflows/ci.yml">
<img src="https://github.com/xdssio/xetrack/actions/workflows/ci.yml/badge.svg" alt="CI Status" />
</a>
<a href="https://pypi.org/project/xetrack/">
<img src="https://img.shields.io/pypi/v/xetrack.svg" alt="PyPI version" />
</a>
<a href="https://pypi.org/project/xetrack/">
<img src="https://img.shields.io/pypi/pyversions/xetrack.svg" alt="Python versions" />
</a>
<a href="https://github.com/xdssio/xetrack/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License: MIT" />
</a>
<a href="https://github.com/xdssio/xetrack/issues">
<img src="https://img.shields.io/github/issues/xdssio/xetrack.svg" alt="GitHub issues" />
</a>
<a href="https://github.com/xdssio/xetrack/network/members">
<img src="https://img.shields.io/github/forks/xdssio/xetrack.svg" alt="GitHub forks" />
</a>
<a href="https://github.com/xdssio/xetrack/stargazers">
<img src="https://img.shields.io/github/stars/xdssio/xetrack.svg" alt="GitHub stars" />
</a>
</p>
# xetrack
Lightweight, local-first experiment tracker and benchmark store built on [SQLite](https://sqlite.org/index) and [duckdb](https://duckdb.org).
### Why xetrack Exists
Most experiment trackers — like Weights & Biases — rely on cloud servers...
xetrack is a lightweight package to track benchmarks, experiments, and monitor structured data.
It is focused on simplicity and flexibility.
You create a "Tracker", and let it track benchmark results, model training and inference monitoring. later retrieve as pandas or connect to it directly as a database.
## Features
* Simple
* Embedded
* Fast
* Pandas-like
* SQL-like
* Object store with deduplication
* CLI for basic functions
* Multiprocessing reads and writes
* Loguru logs integration
* Experiment tracking
* Model monitoring
## Installation
```bash
pip install xetrack
pip install xetrack[duckdb] # to use duckdb as engine
pip install xetrack[assets] # to be able to use the assets manager to save objects
pip install xetrack[cache] # to enable function result caching
```
## Examples
**Complete examples for every feature** are available in the `examples/` directory:
```bash
# Run all examples
python examples/run_all.py
# Run individual examples
python examples/01_quickstart.py
python examples/02_track_functions.py
# ... etc
```
See [`examples/README.md`](examples/README.md) for full documentation of all 9+ examples.
## Quickstart
```python
from xetrack import Tracker
tracker = Tracker('database_db',
params={'model': 'resnet18'}
)
tracker.log({"accuracy":0.9, "loss":0.1, "epoch":1}) # All you really need
tracker.latest
{'accuracy': 0.9, 'loss': 0.1, 'epoch': 1, 'model': 'resnet18', 'timestamp': '18-08-2023 11:02:35.162360',
'track_id': 'cd8afc54-5992-4828-893d-a4cada28dba5'}
tracker.to_df(all=True) # retrieve all the runs as dataframe
timestamp track_id model loss epoch accuracy
0 26-09-2023 12:17:00.342814 398c985a-dc15-42da-88aa-6ac6cbf55794 resnet18 0.1 1 0.9
```
**Multiple experiment types**: Use different table names to organize different types of experiments in the same database.
```python
model_tracker = Tracker('experiments_db', table='model_experiments')
data_tracker = Tracker('experiments_db', table='data_experiments')
```
**Params** are values which are added to every future row:
```python
$ tracker.set_params({'model': 'resnet18', 'dataset': 'cifar10'})
$ tracker.log({"accuracy":0.9, "loss":0.1, "epoch":2})
{'accuracy': 0.9, 'loss': 0.1, 'epoch': 2, 'model': 'resnet18', 'dataset': 'cifar10',
'timestamp': '26-09-2023 12:18:40.151756', 'track_id': '398c985a-dc15-42da-88aa-6ac6cbf55794'}
```
You can also set a value to an entire run with *set_value* ("back in time"):
```python
tracker.set_value('test_accuracy', 0.9) # Only known at the end of the experiment
tracker.to_df()
timestamp track_id model loss epoch accuracy dataset test_accuracy
0 26-09-2023 12:17:00.342814 398c985a-dc15-42da-88aa-6ac6cbf55794 resnet18 0.1 1 0.9 NaN 0.9
2 26-09-2023 12:18:40.151756 398c985a-dc15-42da-88aa-6ac6cbf55794 resnet18 0.1 2 0.9 cifar10 0.9
```
## Track functions
You can track any function.
* The return value is logged before returned
```python
tracker = Tracker('database_db',
log_system_params=True,
log_network_params=True,
measurement_interval=0.1)
image = tracker.track(read_image, *args, **kwargs)
tracker.latest
{'result': 571084, 'name': 'read_image', 'time': 0.30797290802001953, 'error': '', 'disk_percent': 0.6,
'p_memory_percent': 0.496507, 'cpu': 0.0, 'memory_percent': 32.874608, 'bytes_sent': 0.0078125,
'bytes_recv': 0.583984375}
```
Or with a wrapper:
```python
@tracker.wrap(params={'name':'foofoo'})
def foo(a: int, b: str):
return a + len(b)
result = foo(1, 'hello')
tracker.latest
{'function_name': 'foo', 'args': "[1, 'hello']", 'kwargs': '{}', 'error': '', 'function_time': 4.0531158447265625e-06,
'function_result': 6, 'name': 'foofoo', 'timestamp': '26-09-2023 12:21:02.200245', 'track_id': '398c985a-dc15-42da-88aa-6ac6cbf55794'}
```
### Automatic Dataclass and Pydantic BaseModel Unpacking
**NEW**: When tracking functions, xetrack automatically unpacks frozen dataclasses and Pydantic BaseModels into individual tracked fields with dot-notation prefixes.
This is especially useful for ML experiments where you have complex configuration objects:
```python
from dataclasses import dataclass
@dataclass(frozen=True)
class TrainingConfig:
learning_rate: float
batch_size: int
epochs: int
optimizer: str = "adam"
@tracker.wrap()
def train_model(config: TrainingConfig):
# Your training logic here
return {"accuracy": 0.95, "loss": 0.05}
config = TrainingConfig(learning_rate=0.001, batch_size=32, epochs=10)
result = train_model(config)
# All config fields are automatically unpacked and tracked!
tracker.latest
{
'function_name': 'train_model',
'config_learning_rate': 0.001, # ← Unpacked from dataclass
'config_batch_size': 32, # ← Unpacked from dataclass
'config_epochs': 10, # ← Unpacked from dataclass
'config_optimizer': 'adam', # ← Unpacked from dataclass
'accuracy': 0.95,
'loss': 0.05,
'timestamp': '...',
'track_id': '...'
}
```
**Works with multiple dataclasses:**
```python
@dataclass(frozen=True)
class ModelConfig:
model_type: str
num_layers: int
@dataclass(frozen=True)
class DataConfig:
dataset: str
batch_size: int
def experiment(model_cfg: ModelConfig, data_cfg: DataConfig):
return {"score": 0.92}
result = tracker.track(
experiment,
args=[
ModelConfig(model_type="transformer", num_layers=12),
DataConfig(dataset="cifar10", batch_size=64)
]
)
# Result includes: model_cfg_model_type, model_cfg_num_layers,
# data_cfg_dataset, data_cfg_batch_size, score
```
**Also works with Pydantic BaseModel:**
```python
from pydantic import BaseModel
class ExperimentConfig(BaseModel):
experiment_name: str
seed: int
use_gpu: bool = True
@tracker.wrap()
def run_experiment(cfg: ExperimentConfig):
return {"status": "completed"}
config = ExperimentConfig(experiment_name="exp_001", seed=42)
result = run_experiment(config)
# Automatically tracks: cfg.experiment_name, cfg.seed, cfg.use_gpu, status
```
**Benefits:**
- Clean function signatures (one config object instead of many parameters)
- All config values automatically tracked individually for easy filtering/analysis
- Works with both `tracker.track()` and `@tracker.wrap()` decorator
- Supports both frozen and non-frozen dataclasses
- Compatible with Pydantic BaseModel via `model_dump()`
## Track assets (Oriented for ML models)
Requirements: `pip install xetrack[assets]` (installs sqlitedict)
When you attempt to track a non primitive value which is not a list or a dict - xetrack saves it as assets with deduplication and log the object hash:
* Tips: If you plan to log the same object many times over, after the first time you log it, just insert the hash instead for future values to save time on encoding and hashing.
```python
$ tracker = Tracker('database_db', params={'model': 'logistic regression'})
$ lr = Logisticregression().fit(X_train, y_train)
$ tracker.log({'accuracy': float(lr.score(X_test, y_test)), 'lr': lr})
{'accuracy': 0.9777777777777777, 'lr': '53425a65a40a49f4', # <-- this is the model hash
'dataset': 'iris', 'model': 'logistic regression', 'timestamp': '2023-12-27 12:21:00.727834', 'track_id': 'wisteria-turkey-4392'}
$ model = tracker.get('53425a65a40a49f4') # retrieve an object
$ model.score(X_test, y_test)
0.9777777777777777
```
You can retrieve the model in CLI if you need only the model in production and mind carring the rest of the file
```bash
# bash
xt assets export database.db 53425a65a40a49f4 model.cloudpickle
```
```python
# python
import cloudpickle
with open("model_cloudpickle", 'rb') as f:
model = cloudpickle.loads(f.read())
# LogisticRegression()
```
## Function Result Caching
Xetrack provides transparent disk-based caching for expensive function results using [diskcache](https://grantjenks.com/docs/diskcache/). When enabled, results are automatically cached based on function name, arguments, and keyword arguments.
### Installation
```bash
pip install xetrack[cache]
```
### Basic Usage
Simply provide a `cache` parameter with a directory path to enable automatic caching:
```python
from xetrack import Tracker
tracker = Tracker(db='track_db', cache='cache_dir')
def expensive_computation(x: int, y: int) -> int:
"""Simulate expensive computation"""
return x ** y
# First call - executes function
result1 = tracker.track(expensive_computation, args=[2, 10]) # Computes 2^10 = 1024
# Second call with same args - returns cached result instantly
result2 = tracker.track(expensive_computation, args=[2, 10]) # Cache hit!
# Different args - executes function again
result3 = tracker.track(expensive_computation, args=[3, 10]) # Computes 3^10 = 59049
# Tracker params also affect cache keys
result4 = tracker.track(expensive_computation, args=[2, 10], params={"model": "v2"}) # Computes (different params)
result5 = tracker.track(expensive_computation, args=[2, 10], params={"model": "v2"}) # Cache hit!
```
### Cache Observability & Lineage Tracking
Cache behavior is tracked in the database with the `cache` field for full lineage tracking:
```python
from xetrack import Reader
df = Reader(db='track_db').to_df()
print(df[['function_name', 'function_time', 'cache', 'track_id']])
# function_name function_time cache track_id
# 0 expensive_computation 2.345 "" abc123 # Computed (cache miss)
# 1 expensive_computation 0.000 "abc123" def456 # Cache hit - traces back to abc123
# 2 expensive_computation 2.891 "" ghi789 # Different args (computed)
```
The `cache` field provides lineage:
- **Empty string ("")**: Result was computed (cache miss or no cache)
- **track_id value**: Result came from cache (cache hit), references the original execution's track_id
### Reading Cache Directly
You can inspect cached values without re-running functions. Cache stores dicts with "result" and "cache" keys:
```python
from xetrack import Reader
# Read specific cached value by key
# Note: _generate_cache_key is a private method for advanced usage
cache_key = tracker._generate_cache_key(expensive_computation, [2, 10], {}, {})
if cache_key is not None: # Will be None if any arg is unhashable
cached_data = Reader.read_cache('cache_dir', cache_key)
print(f"Result: {cached_data['result']}, Original execution: {cached_data['cache']}")
# Result: 1024, Original execution: abc123
# Scan all cached entries
for key, cached_data in Reader.scan_cache('cache_dir'):
print(f"{key}: result={cached_data['result']}, from={cached_data['cache']}")
```
### Use Cases
- **ML Model Inference**: Cache predictions for repeated inputs
- **Data Processing**: Cache expensive transformations or aggregations
- **API Calls**: Cache external API responses (with appropriate TTL considerations)
- **Scientific Computing**: Cache results of long-running simulations
### Force Cache Refresh
Use `cache_force=True` to skip the cache lookup and re-execute the function. The new result overwrites the existing cache entry:
```python
# Normal call — uses cache if available
result = tracker.track(expensive_computation, args=[2, 10])
# Force refresh — re-executes the function and overwrites the cache
result = tracker.track(expensive_computation, args=[2, 10], cache_force=True)
# Next normal call will use the refreshed cache entry
result = tracker.track(expensive_computation, args=[2, 10]) # Cache hit (from force-refreshed entry)
```
**When to use `cache_force`:**
- Model or data changed but function signature is the same
- Cached result might be stale or corrupted
- You want to re-run a specific computation without clearing the entire cache
### Delete Cache Entries
Remove all cache entries associated with a specific experiment run:
```python
from xetrack import Reader
# Delete all cache entries produced by a specific track_id
deleted = Reader.delete_cache_by_track_id('cache_dir', 'cool-name-1234')
print(f"Deleted {deleted} cache entries")
```
**CLI:**
```bash
# List all cache entries with their track_id lineage
xt cache ls cache_dir
# Delete cache entries by track_id
xt cache delete cache_dir cool-name-1234
```
### Important Notes
- **Cache keys** are generated from tuples of (function name, args, kwargs, **tracker params**)
- Different tracker params create separate cache entries (e.g., different model versions)
- Exceptions are **not cached** - failed calls will retry on next invocation
- Cache is persistent across Python sessions
- Lineage tracking: the `cache` field links cached results to their original execution via track_id
### Handling Objects in Cache Keys
Xetrack intelligently handles different types of arguments:
- **Primitives** (int, float, str, bool, bytes): Used as-is in cache keys
- **Hashable objects** (custom classes with `__hash__`): Uses `hash()` for consistent keys across runs
- **Unhashable objects** (list, dict, sets): **Caching skipped entirely** for that call (warning issued once per type)
```python
# Hashable custom objects work great
class Config:
def __init__(self, value):
self.value = value
def __hash__(self):
return hash(self.value)
def __eq__(self, other):
return isinstance(other, Config) and self.value == other.value
# Cache hits work across different object instances with same hash
config1 = Config("production")
config2 = Config("production")
tracker.track(process, args=[config1]) # Computed, cached
tracker.track(process, args=[config2]) # Cache hit! (same hash)
# Unhashable objects skip caching entirely
tracker.track(process, args=[[1, 2, 3]]) # Computed, NOT cached (warning issued)
tracker.track(process, args=[[1, 2, 3]]) # Computed again, still NOT cached
# Make objects hashable to enable caching
class HashableList:
def __init__(self, items):
self.items = tuple(items) # Use tuple for hashability
def __hash__(self):
return hash(self.items)
def __eq__(self, other):
return isinstance(other, HashableList) and self.items == other.items
tracker.track(process, args=[HashableList([1, 2, 3])]) # ✅ Cached!
```
### Using Frozen Dataclasses for Complex Configurations
**Recommended Pattern**: When your function has many parameters or complex configurations, use frozen dataclasses to enable caching. This is especially useful for ML experiments with multiple hyperparameters.
```python
from dataclasses import dataclass
# ✅ RECOMMENDED: frozen=True makes dataclass hashable automatically, slots efficient in memory
@dataclass(frozen=True, slots=True)
class TrainingConfig:
learning_rate: float
batch_size: int
epochs: int
model_name: str
optimizer: str = "adam"
def train_model(config: TrainingConfig) -> dict:
"""Complex training function with many parameters"""
# ... training logic ...
return {"accuracy": 0.95, "loss": 0.05}
# Caching works seamlessly with frozen dataclasses
config1 = TrainingConfig(learning_rate=0.001, batch_size=32, epochs=10, model_name="bert")
result1 = tracker.track(train_model, args=[config1]) # Computed, cached
config2 = TrainingConfig(learning_rate=0.001, batch_size=32, epochs=10, model_name="bert")
result2 = tracker.track(train_model, args=[config2]) # Cache hit! (identical config)
# Different config computes again
config3 = TrainingConfig(learning_rate=0.002, batch_size=32, epochs=10, model_name="bert")
result3 = tracker.track(train_model, args=[config3]) # Computed (different learning_rate)
```
**Benefits:**
- Clean, readable function signatures (one config object instead of many parameters)
- Type safety with automatic validation
- Automatic hashability with `frozen=True`
- Cache works across different object instances with identical values
- Easier to version and serialize configurations
### Tips and Tricks
* ```Tracker(Tracker.IN_MEMORY, logs_path='logs/') ``` Let you run only in memory - great for debugging or working with logs only
### Pandas-like
```python
print(tracker)
_id track_id date b a accuracy
0 48154ec7-1fe4-4896-ac66-89db54ddd12a fd0bfe4f-7257-4ec3-8c6f-91fe8ae67d20 16-08-2023 00:21:46 2.0 1.0 NaN
1 8a43000a-03a4-4822-98f8-4df671c2d410 fd0bfe4f-7257-4ec3-8c6f-91fe8ae67d20 16-08-2023 00:24:21 NaN NaN 1.0
tracker['accuracy'] # get accuracy column
tracker.to_df() # get pandas dataframe of current run
```
### SQL-like
You can filter the data using SQL-like syntax using [duckdb](https://duckdb.org/docs):
* The sqlite database is attached as **db** and the table is **events**. Assts are in the **assets** table.
* To use the duckdb as backend, `pip install xetrack[duckdb]` (installs duckdb) and add the parameter engine="duckdb" to Tracker like so:
```python
Tracker(..., engine='duckdb')
```
#### Python
```python
tracker.conn.execute(f"SELECT * FROM db.events WHERE accuracy > 0.8").fetchall()
```
#### Duckdb CLI
* Install: `curl https://install.duckdb.org | sh`
* If duckdb>=1.2.2, you can use [duckdb local ui](https://duckdb.org/2025/03/12/duckdb-ui.html)
```bash
$ duckdb -ui
┌──────────────────────────────────────┐
│ result │
│ varchar │
├──────────────────────────────────────┤
│ UI started at http://localhost:4213/ │
└──────────────────────────────────────┘
D INSTALL sqlite; LOAD sqlite; ATTACH 'database_db' AS db (TYPE sqlite);
# navigate browser to http://localhost:4213/
# or run directly in terminal
D SELECT * FROM db.events;
┌────────────────────────────┬──────────────────┬──────────┬───────┬──────────┬────────┐
│ timestamp │ track_id │ model │ epoch │ accuracy │ loss │
│ varchar │ varchar │ varchar │ int64 │ double │ double │
├────────────────────────────┼──────────────────┼──────────┼───────┼──────────┼────────┤
│ 2023-12-27 11:25:59.244003 │ fierce-pudu-1649 │ resnet18 │ 1 │ 0.9 │ 0.1 │
└────────────────────────────┴──────────────────┴──────────┴───────┴──────────┴────────┘
```
### Logger integration
This is very useful in an environment where you can use normal logs, and don't want to manage a separate logger or file.
On great use-case is **model monitoring**.
`logs_stdout=true` print to stdout every tracked event
`logs_path='logs'` writes logs to a file
```python
$ Tracker(db=Tracker.IN_MEMORY, logs_path='logs',logs_stdout=True).log({"accuracy":0.9})
2023-12-14 21:46:55.290 | TRACKING | xetrack.logging:log:176!📁!{"accuracy": 0.9, "timestamp": "2023-12-14 21:46:55.290098", "track_id": "marvellous-stork-4885"}
$ Reader.read_logs(path='logs')
accuracy timestamp track_id
0 0.9 2023-12-14 21:47:48.375258 unnatural-polecat-1380
```
### JSONL Logging for Data Synthesis and GenAI Datasets
JSONL (JSON Lines) format is ideal for building machine learning datasets, data synthesis, and GenAI training data. Each tracking event is written as a single-line JSON with structured metadata.
**Use Cases:**
- Building datasets for LLM fine-tuning
- Creating synthetic data for model training
- Structured data collection for data synthesis pipelines
- Easy integration with data processing tools
```python
# Enable JSONL logging
tracker = Tracker(
db='database_db',
jsonl='logs/data.jsonl' # Write structured logs to JSONL
)
# Every log call writes structured JSON
tracker.log({"subject": "taxes", "prompt": "Help me with my taxes"})
tracker.log({"subject": "dance", "prompt": "Help me with my moves"})
# Read JSONL data into pandas DataFrame
df = Reader.read_jsonl('logs/data.jsonl')
print(df)
# timestamp level subject prompt track_id
# 0 2024-01-15T10:30:00.123456+00:00 TRACKING taxes Help me with my taxes ancient-falcon-1234
# 1 2024-01-15T10:35:00.234567+00:00 TRACKING dance Help me with my moves ancient-falcon-1234
# Or use pandas directly (JSONL is standard format)
import pandas as pd
df = pd.read_json('logs/data.jsonl', lines=True)
```
**JSONL Entry Format:**
Each line contains flattened structured data suitable for ML pipelines:
```json
{"timestamp": "2024-01-15T10:30:00.123456+00:00", "level": "TRACKING", "accuracy": 0.95, "loss": 0.05, "epoch": 1, "model": "test-model", "track_id": "xyz-123"}
```
Note: Timestamp is in ISO 8601 format with timezone for maximum compatibility.
**Reading Data:**
```python
# From JSONL file
df = Reader.read_jsonl('logs/tracking.jsonl')
# From database (class method for convenience)
df = Reader.read_db('database_db', engine='sqlite', table='default')
# From database with filtering
df = Reader.read_db('database_db', track_id='specific-run-id', head=100)
```
## Analysis
To get the data of all runs in the database for analysis:
Use this for further analysis and plotting.
* This works even while a another process is writing to the database.
```python
from xetrack import Reader
df = Reader('database_db').to_df()
```
### Model Monitoring
Here is how we can save logs on any server and monitor them with xetrack:
We want to print logs to a file or *stdout* to be captured normally.
We save memory by not inserting the data to the database (even though it's fine).
Later we can read the logs and do fancy visualisation, online/offline analysis, build dashboards etc.
```python
tracker = Tracker(db=Tracker.SKIP_INSERT, logs_path='logs', logs_stdout=True)
tracker.logger.monitor("<dict or pandas DataFrame>") # -> write to logs in a structured way, consistent by schema, no database file needed
df = Reader.read_logs(path='logs')
"""
Run drift analysis and outlier detection on your logs:
"""
```
### ML Tracking
```python
tracker.logger.experiment(<model evaluation and params>) # -> prettily write to logs
df = Reader.read_logs(path='logs')
"""
Run fancy visualisation, online/offline analysis, build dashboards etc.
"""
```
## CLI
For basic and repetative needs.
```bash
$ xt head database.db --n=2
| | timestamp | track_id | model | accuracy | data | params |
|---:|:---------------------------|:-------------------------|:---------|-----------:|:-------|:-----------------|
| 0 | 2023-12-27 11:36:45.859668 | crouching-groundhog-5046 | xgboost | 0.9 | mnist | 1b5b2294fc521d12 |
| 1 | 2023-12-27 11:36:45.863888 | crouching-groundhog-5046 | xgboost | 0.9 | mnist | 1b5b2294fc521d12 |
...
$ xt tail database.db --n=1
| | timestamp | track_id | model | accuracy | data | params |
|---:|:---------------------------|:----------------|:---------|-----------:|:-------|:-----------------|
| 0 | 2023-12-27 11:37:30.627189 | ebony-loon-6720 | lightgbm | 0.9 | mnist | 1b5b2294fc521d12 |
$ xt set database.db accuracy 0.8 --where-key params --where-value 1b5b2294fc521d12 --track-id ebony-loon-6720
$ xt delete database.db ebony-loon-6720 # delete experiments with a given track_id
# Cache management
$ xt cache ls cache_dir # list cache entries with track_id lineage
$ xt cache delete cache_dir cool-name-1234 # delete cache entries for a specific run
# run any other SQL in a oneliner
$ xt sql database.db "SELECT * FROM db.events;"
# retrieve a model (any object) which was saved into a file using cloudpickle
$ xt assets export database.db hash output
# remove an object from the assets
$ xt assets delete database.db hash
# If you have two databases, and you want to merge one to the other
# Only works with duckdb at this moment
$ xt copy source.db target.db --assets/--no-assets --table=<table>
# Stats
$ xt stats describe database.db --columns=x,y,z
$ xt stats top/bottom database.db x # print the entry with the top/bottom result of a value
# bashplotlib (`pip install bashplotlib` is required)
$ xt plot hist database.db x
----------------------
| x histogram |
----------------------
225| o
200| ooo
175| ooo
150| ooo
125| ooo
100| ooooo
75| ooooo
50| ooooo
25| ooooooo
1| oooooooooo
----------
-----------------------------------
| Summary |
-----------------------------------
| observations: 1000 |
| min value: -56.605967 |
| mean : 2.492545 |
| max value: 75.185944 |
-----------------------------------
$ xt plot scatter database.db x y
```
# SQLite vs Duckdb
1. Dynamic Typing & Column Affinity
* Quirk: SQLite columns have affinity (preference) rather than strict types.
* Impact: "42" (str) will happily go into an INTEGER column without complaint.
* Mitigation: As you’ve done, use explicit Python casting based on expected dtype.
2. Booleans Are Integers
* Quirk: SQLite doesn’t have a native BOOLEAN type. True becomes 1, False becomes 0.
* Impact: Any boolean stored/retrieved will behave like an integer.
* Mitigation: Handle boolean ↔ integer conversion in code if you care about type fidelity.
3. NULLs Can Be Inserted into ANY Column
* Quirk: Unless a column is explicitly declared NOT NULL, SQLite allows NULL in any field — even primary keys.
* Impact: Can result in partially complete or duplicate-prone rows if you’re not strict.
* Mitigation: Add NOT NULL constraints and enforce required fields at the application level.
# Tests for development
```bash
pip install pytest-testmon pytest
pytest -x -q -p no:warnings --testmon tests
```
---
# Benchmark Skill for Claude Code
xetrack includes a comprehensive **benchmark skill** for Claude Code that guides you through rigorous ML/AI benchmarking experiments.
## What is the Benchmark Skill?
The benchmark skill is an AI agent guide that helps you:
- **Design experiments** following best practices (single-execution, caching, reproducibility)
- **Track predictions & metrics** with the two-table pattern
- **Validate results** for data leaks, duplicate executions, and missing params
- **Analyze with DuckDB** using powerful SQL queries
- **Version experiments** with git tags and DVC
- **Avoid common pitfalls** (multiprocessing issues, cache problems, etc.)
> The 7-phase workflow is **genuinely well-structured**. The "design end-to-start" principle and single-execution principle are real insights that save people from common mistakes [...] The two-table pattern [...] is a concrete, opinionated design that **eliminates decision paralysis** [...] 8+ pitfalls discovered through actual simulations — **this is rare and valuable**. Most skills are written from theory; yours was battle-tested with real databases [...] The engine decision matrix [...] with multiprocessing gotchas is **genuinely useful** — this is a pitfall that costs hours to debug [...] Validation scripts [...] are actionable — they produce real recommendations, not just data [...] Scripts are functional, not just documentation [...] The experiment explorer [...] is a serious tool — auto-detection of retrieval strategy [...] side-by-side diff, disposable worktrees for exploration [...] The model manager with the candidates pattern **solves a real organizational problem** [...] The artifact merger using DuckDB for schema-flexible merges is clever [...] The 14 use cases [...] are concrete and map directly to real workflow decisions [...] The workflow decision matrix is **the killer feature** — exactly the kind of decision that's hard to make and easy to get wrong [...] The merge vs rebase semantics for each artifact type is **genuinely novel**; nobody has codified this for ML experiments before [...] The two skills complement each other perfectly — one runs experiments, the other versions them [...] Safety checklists [...] prevent data loss [...] Deep DuckDB integration for analysis is a differentiator [...] Local-first philosophy means **zero infrastructure to start**.
>
> — Claude, on first review of the benchmark & git-versioning skills
## Installation
### Option 1: Install from Plugin Marketplace (Recommended)
The easiest way to install the benchmark skill is directly from the xetrack repository using Claude Code's plugin marketplace:
```bash
# In Claude Code, add the xetrack marketplace
/plugin marketplace add xdssio/xetrack
# Install the benchmark skill
/plugin install benchmark@xetrack
```
That's it! Claude Code will automatically download and configure the skill.
**Update to latest version:**
```bash
/plugin marketplace update
```
### Option 2: Manual Installation
```bash
# Clone the xetrack repository
git clone https://github.com/xdssio/xetrack.git
# Copy the benchmark skill to Claude's skills directory
cp -r xetrack/skills/benchmark ~/.claude/skills/benchmark
# Verify installation
ls ~/.claude/skills/benchmark/SKILL.md
```
## Usage with Claude Code
Once installed, simply ask Claude to help with benchmarking:
**Example prompts:**
```
"Help me benchmark 3 embedding models on my classification task"
"Set up a benchmark comparing prompt variations for my LLM classifier"
"I want to benchmark different sklearn models with hyperparameter search"
"Debug my benchmark - I'm getting inconsistent results"
```
Claude will automatically use the benchmark skill and guide you through:
0. **Phase 0**: Planning what to track (ideation)
1. **Phase 1**: Understanding your goals and designing the experiment
2. **Phase 2**: Building a robust single-execution function
3. **Phase 3**: Adding caching for efficiency
4. **Phase 4**: Parallelizing (if needed)
5. **Phase 5**: Running the full benchmark loop
6. **Phase 6**: Validating results for common pitfalls
7. **Phase 7**: Analyzing results with DuckDB
## Features
### Two-Table Pattern
The skill teaches the recommended pattern of storing data in two tables:
- **Predictions table**: Every single prediction/execution (detailed)
- **Metrics table**: Aggregated results per experiment (summary)
```python
# Predictions table - granular data
predictions_tracker = Tracker(
db='benchmark.db',
engine='duckdb',
table='predictions',
cache='cache_dir'
)
# Metrics table - aggregated results
metrics_tracker = Tracker(
db='benchmark.db',
engine='duckdb',
table='metrics'
)
```
### Git Tag-Based Versioning
Automatic experiment versioning with git tags:
```python
# Skill helps you run experiments with versioned tags
# e0.0.1 → e0.0.2 → e0.0.3
# View experiment history:
git tag -l 'e*' -n9
# e0.0.1 model=logistic | lr=0.001 | acc=0.8200 | data=3a2f1b
# e0.0.2 model=bert-base | lr=0.0001 | acc=0.8500 | data=3a2f1b
# e0.0.3 model=bert-base | lr=0.0001 | acc=0.8900 | data=7c4e2a
```
### DVC Integration
Built-in guidance for data and database versioning with DVC:
```bash
# Skill recommends DVC for reproducibility
dvc add data/
dvc add benchmark.db
git add data.dvc benchmark.db.dvc
git commit -m "experiment: e0.0.3 results"
git tag -a e0.0.3 -m "model=bert-base | acc=0.8900"
```
### Validation Scripts
Helper scripts to catch common issues:
```bash
# Check for data leaks, duplicates, missing params
python skills/benchmark/scripts/validate_benchmark.py benchmark.db predictions
# Analyze cache effectiveness
python skills/benchmark/scripts/analyze_cache_hits.py benchmark.db predictions
# Export markdown summary
python skills/benchmark/scripts/export_summary.py benchmark.db predictions > RESULTS.md
```
### Common Pitfalls Documented
The skill warns you about:
- ⚠️ DuckDB + multiprocessing = database locks (use SQLite instead)
- ⚠️ System monitoring incompatible with multiprocessing
- ⚠️ Dataclass unpacking only works with `.track()`, not `.log()`
- ⚠️ Model objects can bloat database (use assets)
- ⚠️ Float parameters need rounding for consistent caching
## Example Templates
The skill includes complete examples for common scenarios:
```bash
# sklearn model comparison
python skills/benchmark/assets/sklearn_benchmark_template.py
# LLM finetuning simulation
python skills/benchmark/assets/llm_finetuning_template.py
# Load testing / throughput benchmark
python skills/benchmark/assets/throughput_benchmark_template.py
```
## Documentation
Full documentation is in the skill itself:
- **SKILL.md**: Complete workflow and guidance
- **references/methodology.md**: Core benchmarking principles
- **references/duckdb-analysis.md**: SQL query recipes
- **scripts/**: Helper validation and analysis scripts
- **assets/**: Complete example templates
## When to Use the Skill
**Use the benchmark skill when:**
- Comparing multiple models or hyperparameters
- Testing expensive APIs (LLMs, cloud services)
- Results will be shared or published
- Reproducibility is critical
- Running experiments that take > 10 minutes
**Skip for:**
- Quick one-off comparisons (< 5 minutes to rerun)
- Early prototyping (speed > reproducibility)
- Solo throwaway analysis
## Troubleshooting
**"Database is locked" errors with DuckDB:**
- **Cause**: DuckDB doesn't handle concurrent writes from multiple processes
- **Solution**: Switch to SQLite engine if using multiprocessing
- **Details**: See `references/build-and-cache.md` Pitfall 2 for full explanation
**Cache not working:**
- **Check installation**: Ensure `pip install xetrack[cache]` was run
- **Check dataclass**: Must be frozen: `@dataclass(frozen=True, slots=True)`
- **Float parameters**: Need rounding for consistent hashing (see `references/build-and-cache.md` Pitfall 6)
- **Verify cache directory**: Check that cache path is writable
**Import errors:**
- **xetrack not found**: Run `pip install xetrack`
- **DuckDB features**: Run `pip install xetrack[duckdb]`
- **Asset management**: Run `pip install xetrack[assets]`
- **Caching support**: Run `pip install xetrack[cache]`
**"Dataclass not unpacking" issues:**
- **Check method**: Auto-unpacking only works with `.track()`, not `.log()`
- **Verify frozen**: Dataclass must have `frozen=True`
- **See `references/build-and-cache.md`**: Pitfall 1 for detailed explanation
## Git Versioning Skill
The **git-versioning** skill is a companion to the benchmark skill. While the benchmark skill runs experiments, the git-versioning skill handles versioning, merging, and retrieval of experiment artifacts.
### When to Use
Use the git-versioning skill when you need to:
- Version experiments with git tags and DVC
- Merge or rebase experiment results across branches
- Promote models from candidates to production
- Set up parallel experiments with git worktrees
- Retrieve models or data from past experiments
- Compare historical experiments side by side
### Installation
```bash
# Plugin marketplace
/plugin install git-versioning@xetrack
# Manual
cp -r xetrack/skills/git-versioning ~/.claude/skills/git-versioning
```
### Core Concepts
**Workflow selection** — The skill helps you choose the right approach:
| Scenario | Workflow | DB Engine | Branching |
|----------|---------|-----------|-----------|
| Single experiment | Sequential | SQLite | Main branch |
| Param sweep, same code/data | Parallel | DuckDB | Main branch |
| Different code or data per exp | Worktree | SQLite | Branch per exp |
**Merge vs Rebase** — A novel decision framework for ML artifacts:
- **Databases**: Merge (append rows) vs Rebase (replace when schema changed)
- **Data files**: Merge (add samples) vs Rebase (preprocessing overhaul)
- **Models**: Merge (keep as candidate) vs Rebase (promote to production)
**Candidates pattern** — Keep models organized:
- `models/production/model.bin` — current best (DVC tracked)
- `models/candidates/` — runner-ups for A/B tests and ensembles
### Scripts
| Script | Purpose |
|--------|---------|
| `setup_worktree.sh` | Create worktree with shared DVC cache (prevents the #1 pitfall) |
| `experiment_explorer.py` | Browse, compare, and retrieve past experiments |
| `merge_artifacts.py` | DuckDB-powered merge/rebase for databases and parquet files |
| `version_tag.py` | Create annotated tags with metric descriptions |
| `model_manager.py` | Promote/prune models, manage candidates |
### Example Prompts
```
"Help me version my experiment and create a git tag"
"Set up parallel experiments using git worktrees"
"Merge results from my experiment branch back to main"
"Retrieve the model from experiment e0.2.0"
"Compare experiments e0.1.0 and e0.2.0 side by side"
```
### How the Skills Work Together
```
Benchmark Skill Git Versioning Skill
───────────────── ──────────────────────
Phase 0-3: Design & Build → (not needed yet)
Phase 4-5: Run experiments → Choose workflow (sequential/parallel/worktree)
Phase 6-7: Validate & Analyze → Tag experiment, push artifacts
Merge results, promote models
Explore & compare past experiments
```
## Contributing
Found an issue or want to improve the skills? Please open an issue or PR!
The skills were developed by running real simulations and discovering pitfalls, so real-world feedback is valuable.
| text/markdown | xdssio | jonathan@xdss.io | null | null | MIT | machine-learning, duckdb, pandas, sqlitedict, xxhash, loguru, monitoring, tracking, experimentation, benchmarking, data-science, data-analysis, data-visualization | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Py... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"bashplotlib>=0.6.0; extra == \"dev\" or extra == \"bashplotlib\"",
"cloudpickle>=2.0.0",
"coolname>=2.2.0",
"diskcache>=5.6.0; extra == \"dev\" or extra == \"cache\"",
"duckdb>=0.8.0; extra == \"dev\" or extra == \"duckdb\"",
"loguru>=0.7.0",
"numpy<2.0,>=1.26",
"pandas>=2.0.3",
"psutil>=5.9.5",
... | [] | [] | [] | [
"Homepage, https://github.com/xdssio/xetrack"
] | poetry/2.3.2 CPython/3.11.10 Darwin/24.6.0 | 2026-02-18T15:57:32.944810 | xetrack-0.5.2-py3-none-any.whl | 51,301 | f0/1c/da84d7273500932ad82d98c4909f5f3c30b7d9753728efaf286970603786/xetrack-0.5.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 6d9cde61bf6667d8fb195848fc9f26d1 | 2721e40e17826c7663dd90423b5dc12a2ec0f72174814965cc74b66e98f614b6 | f01cda84d7273500932ad82d98c4909f5f3c30b7d9753728efaf286970603786 | null | [
"LICENSE"
] | 268 |
2.4 | qermit | 0.9.2 | Python package for quantum error mitigation. | # Qermit
[](https://badge.fury.io/py/qermit)
Qermit is a python module for running error-mitigation protocols on quantum processors.
It is an extension to the [pytket](https://docs.quantinuum.com/tket) quantum computing toolkit.
This repository contains source code and API documentation.
For details on building the docs please see `docs/README.md`
## Getting Started
To install, run:
```
pip install qermit
```
You may also wish to install the package from source:
```
pip install -e .
```
A `poetry.lock` file is included for use with [poetry](https://python-poetry.org/docs/cli/#install).
API documentation can be found at [qerm.it](https://qerm.it).
## Bugs
Please file bugs on the Github
[issue tracker](https://github.com/CQCL/Qermit/issues).
## Contributing
Pull requests or feature suggestions are very welcome.
To make a PR, first fork the repository, make your proposed changes, and open a PR from your fork.
## Code style
Style checks are run by continuous integration.
To install the dependencies required to run them locally run:
```
pip install qermit[tests]
```
### Formatting
This repository uses [ruff](https://docs.astral.sh/ruff/) for formatting and linting.
To check if your changes meet these standards run:
```
ruff check
ruff format --check
```
### Type annotation
[mypy](https://mypy.readthedocs.io/en/stable/) is used as a static type checker.
```
mypy -p qermit
```
## Tests
Tests are run by continuous integration.
To install the dependencies required to run them locally run:
```
pip install qermit[tests]
```
To run tests use:
```
cd tests
pytest
```
When adding a new feature, please add a test for it.
When fixing a bug, please add a test that demonstrates the fix.
## How to cite
If you wish to cite Qermit, we recommend citing our [benchmarking paper](https://quantum-journal.org/papers/q-2023-07-13-1059/) where possible.
| text/markdown | Daniel Mills | daniel.mills@quantinuum.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"furo<2025.13,>=2024.8; extra == \"docs\"",
"jupyter-sphinx<0.6,>=0.5; extra == \"docs\"",
"matplotlib<3.11,>=3.8",
"mypy<1.20,>=1.9; extra == \"tests\"",
"myst-nb<1.4,>=1.1; extra == \"docs\"",
"pytest<9.1,>=8.1; extra == \"tests\"",
"pytest-cov<7.1,>=6.0; extra == \"tests\"",
"pytket-qiskit<0.78,>=0... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:57:32.295305 | qermit-0.9.2.tar.gz | 94,983 | b0/16/5d25382b9eda648b314e8f1c91141b066716f59e166d5833de5ab760b88d/qermit-0.9.2.tar.gz | source | sdist | null | false | aa9312899291e55e9a227cb0375e433d | 75196a4b7253b7241ecce0fdba0651d852e03d7f4642145bdeebc7347e1676dd | b0165d25382b9eda648b314e8f1c91141b066716f59e166d5833de5ab760b88d | null | [
"LICENSE"
] | 418 |
2.4 | dissect.hypervisor | 3.21.dev5 | A Dissect module implementing parsers for various hypervisor disk, backup and configuration files | # dissect.hypervisor
A Dissect module implementing parsers for various hypervisor disk, backup and configuration files. For more information,
please see [the documentation](https://docs.dissect.tools/en/latest/projects/dissect.hypervisor/index.html).
## Requirements
This project is part of the Dissect framework and requires Python.
Information on the supported Python versions can be found in the Getting Started section of [the documentation](https://docs.dissect.tools/en/latest/index.html#getting-started).
## Installation
`dissect.hypervisor` is available on [PyPI](https://pypi.org/project/dissect.hypervisor/).
```bash
pip install dissect.hypervisor
```
This module is also automatically installed if you install the `dissect` package.
## Build and test instructions
This project uses `tox` to build source and wheel distributions. Run the following command from the root folder to build
these:
```bash
tox -e build
```
The build artifacts can be found in the `dist/` directory.
`tox` is also used to run linting and unit tests in a self-contained environment. To run both linting and unit tests
using the default installed Python version, run:
```bash
tox
```
For a more elaborate explanation on how to build and test the project, please see [the
documentation](https://docs.dissect.tools/en/latest/contributing/tooling.html).
## Contributing
The Dissect project encourages any contribution to the codebase. To make your contribution fit into the project, please
refer to [the development guide](https://docs.dissect.tools/en/latest/contributing/developing.html).
## Copyright and license
Dissect is released as open source by Fox-IT (<https://www.fox-it.com>) part of NCC Group Plc
(<https://www.nccgroup.com>).
Developed by the Dissect Team (<dissect@fox-it.com>) and made available at <https://github.com/fox-it/dissect>.
License terms: AGPL3 (<https://www.gnu.org/licenses/agpl-3.0.html>). For more information, see the LICENSE file.
| text/markdown | null | Dissect Team <dissect@fox-it.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Internet :: Log Analysis",
"Topic :: Scientific/Engineering ... | [] | null | null | >=3.10 | [] | [] | [] | [
"defusedxml",
"dissect.cstruct<5,>=4",
"dissect.util<4,>=3",
"pycryptodome; extra == \"full\"",
"backports.zstd; python_version < \"3.14\" and extra == \"full\"",
"dissect.hypervisor[full]; extra == \"dev\"",
"dissect.cstruct<5.0.dev,>=4.0.dev; extra == \"dev\"",
"dissect.util<4.0.dev,>=3.0.dev; extra... | [] | [] | [] | [
"homepage, https://dissect.tools",
"documentation, https://docs.dissect.tools/en/latest/projects/dissect.hypervisor",
"repository, https://github.com/fox-it/dissect.hypervisor"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:57:27.313697 | dissect_hypervisor-3.21.dev5.tar.gz | 75,390 | c9/2b/b5ce9f9d0c3f72057d39531a58404ec5124da6892fa1e3a787ca344b0e7a/dissect_hypervisor-3.21.dev5.tar.gz | source | sdist | null | false | c7aef765719d314fa2f61baf18c184b7 | 52e88df35cb2651324765839006dab0e7de26a2f009dc2117d8ebb0537beb050 | c92bb5ce9f9d0c3f72057d39531a58404ec5124da6892fa1e3a787ca344b0e7a | AGPL-3.0-or-later | [
"LICENSE",
"COPYRIGHT"
] | 0 |
2.4 | kube-authkit | 0.4.0 | Unified Kubernetes authentication toolkit - supports KubeConfig, In-Cluster, OIDC, and OpenShift OAuth authentication | # Kube AuthKit - Kubernetes Authentication Toolkit
A lightweight Python library that provides unified authentication for OpenShift and Kubernetes clusters. This library simplifies authentication by supporting multiple methods through a single, consistent interface.
## Features
- **Universal Authentication Support**
- Standard Kubernetes KubeConfig (~/.kube/config)
- In-Cluster Service Account (for Pods and Notebooks)
- OIDC (OpenID Connect) with multiple flows
- OpenShift OAuth
- **Auto-Detection**: Automatically detects and uses the best authentication method for your environment
- **Multiple OIDC Flows**
- Authorization Code Flow with PKCE (for interactive apps)
- Device Code Flow (for CLI tools and headless environments)
- Client Credentials Flow (for service-to-service authentication)
- **Token Management**
- Automatic token refresh
- Optional persistent storage via system keyring
- Secure in-memory storage by default
- **Security First**
- TLS verification enabled by default
- No sensitive data in logs
- Minimal dependencies
## Installation
```bash
pip install kube-authkit
```
For optional keyring support (persistent token storage):
```bash
pip install kube-authkit[keyring]
```
## Quick Start
### Automatic Authentication (Recommended)
The library automatically detects your environment and chooses the appropriate authentication method:
```python
from kube_authkit import get_k8s_client
from kubernetes import client
# Auto-detect environment and authenticate
api_client = get_k8s_client()
# Use with standard Kubernetes client
v1 = client.CoreV1Api(api_client)
pods = v1.list_pod_for_all_namespaces()
print(f"Found {len(pods.items)} pods")
```
This works seamlessly whether you're running:
- Locally with ~/.kube/config
- Inside a Kubernetes Pod or OpenShift Notebook (using Service Account)
- With OIDC credentials in environment variables
### Explicit OIDC Authentication
For CLI tools or when you need explicit control:
```python
from kube_authkit import get_k8s_client, AuthConfig
config = AuthConfig(
method="oidc",
oidc_issuer="https://keycloak.example.com/auth/realms/myrealm",
client_id="my-cli-tool",
use_device_flow=True # Good for headless/CLI environments
)
# This will print: "Visit https://... and enter code: ABCD-EFGH"
api_client = get_k8s_client(config)
```
### Interactive Browser-Based Authentication
For notebooks or interactive applications:
```python
from kube_authkit import get_k8s_client, AuthConfig
config = AuthConfig(
method="oidc",
oidc_issuer="https://keycloak.example.com/auth/realms/myrealm",
client_id="my-app",
use_device_flow=False # Use Authorization Code Flow (opens browser)
)
# Browser will open for authentication
api_client = get_k8s_client(config)
```
### Persistent Token Storage
Store refresh tokens securely in your system keyring:
```python
from kube_authkit import get_k8s_client, AuthConfig
config = AuthConfig(
method="oidc",
oidc_issuer="https://keycloak.example.com/auth/realms/myrealm",
client_id="my-app",
use_keyring=True # Store tokens in system keyring
)
# First run: Interactive authentication
# Subsequent runs: Uses stored refresh token automatically
api_client = get_k8s_client(config)
```
### Advanced: Customize Client Configuration
For advanced use cases where you need to customize the Kubernetes client configuration before creating the client:
```python
from kube_authkit import get_k8s_config
from kubernetes import client
# Get just the configuration (without creating ApiClient yet)
k8s_config = get_k8s_config()
# Customize configuration as needed
k8s_config.debug = True # Enable debug logging
k8s_config.verify_ssl = False # Disable SSL verification (dev only)
# Create client with customized configuration
api_client = client.ApiClient(k8s_config)
v1 = client.CoreV1Api(api_client)
```
This is useful when you need:
- Custom debug settings
- SSL/TLS configuration
- Multiple clients with the same authentication but different settings
- To inspect the configuration before using it
## Configuration
### AuthConfig Options
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `method` | str | "auto" | Authentication method: "auto", "kubeconfig", "incluster", "oidc", "openshift" |
| `k8s_api_host` | str | None | Kubernetes API server URL (auto-detected if not provided) |
| `oidc_issuer` | str | None | OIDC issuer URL (required for OIDC) |
| `client_id` | str | None | OIDC client ID (required for OIDC) |
| `client_secret` | str | None | OIDC client secret (for confidential clients) |
| `scopes` | list | ["openid"] | OIDC scopes to request |
| `use_device_flow` | bool | False | Use Device Code Flow instead of Authorization Code Flow |
| `use_keyring` | bool | False | Store refresh tokens in system keyring |
| `ca_cert` | str | None | Path to custom CA certificate bundle |
| `verify_ssl` | bool | True | Verify SSL certificates (disable only for development) |
### Environment Variables
The library respects these environment variables:
- `KUBECONFIG`: Path to kubeconfig file
- `KUBERNETES_SERVICE_HOST`: Auto-detected in-cluster (set by Kubernetes)
- `AUTHKIT_OIDC_ISSUER`: OIDC issuer URL
- `AUTHKIT_CLIENT_ID`: OIDC client ID
- `AUTHKIT_CLIENT_SECRET`: OIDC client secret
- `AUTHKIT_TOKEN`: Bearer token for authentication
- `AUTHKIT_API_HOST`: Kubernetes API server URL
- `OPENSHIFT_TOKEN`: Legacy OpenShift OAuth token (use `AUTHKIT_TOKEN` instead)
## Architecture
This library uses the Strategy Pattern to provide a unified interface across different authentication methods:
```
AuthFactory (auto-detection)
├── KubeConfigStrategy (~/.kube/config)
├── InClusterStrategy (Service Account)
├── OIDCStrategy (OpenID Connect)
└── OpenShiftOAuthStrategy (OpenShift OAuth)
```
Each strategy implements the same interface, making it easy to add new authentication methods in the future.
## Security Considerations
1. **TLS Verification**: Enabled by default. Only disable for development/testing.
2. **Token Storage**: In-memory by default. Use keyring for persistence across sessions.
3. **Logging**: No sensitive data (tokens, secrets) is ever logged.
4. **Dependencies**: Minimal dependency footprint to reduce supply chain risk.
## Development
### Setup Development Environment
```bash
# Clone repository
git clone https://github.com/openshift/kube-authkit.git
cd kube-authkit
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install with dev dependencies
pip install -e ".[dev]"
```
### Running Tests
```bash
# Run all tests with coverage
pytest
# Run specific test file
pytest tests/test_config.py
# Run with verbose output
pytest -v
# Type checking
mypy src/kube_authkit
# Code formatting
black src/ tests/
ruff check src/ tests/
# Security scanning
bandit -r src/
```
## Examples
See the [examples/](examples/) directory for complete examples:
- `auto_auth.py` - Simple auto-detection
- `oidc_device_flow.py` - CLI tool with device flow
- `oidc_auth_code.py` - Interactive browser-based auth
- `notebook_usage.py` - Jupyter notebook example
- `explicit_config.py` - All configuration options
- `custom_ca.py` - Custom CA certificate
## Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
## Support
- Issues: https://github.com/openshift/kube-authkit/issues
- Documentation: https://github.com/openshift/kube-authkit#readme
## Acknowledgments
This library wraps and extends the official [Kubernetes Python Client](https://github.com/kubernetes-client/python) to provide simplified authentication workflows for OpenShift AI and Kubernetes environments.
| text/markdown | Kube AuthKit Contributors | null | null | null | Apache-2.0 | authentication, k8s, kubeconfig, kubernetes, oauth, oidc, openshift | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Sof... | [] | null | null | >=3.10 | [] | [] | [] | [
"kubernetes>=35.0.0",
"pyjwt>=2.8.0",
"requests>=2.31.0",
"urllib3>=2.6.0",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.11.1; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"keyring>=24.0.0; extra == \"keyring\""
] | [] | [] | [] | [
"Homepage, https://github.com/kube-authkit/kube-authkit",
"Documentation, https://github.com/kube-authkit/kube-authkit#readme",
"Repository, https://github.com/kube-authkit/kube-authkit",
"Issues, https://github.com/kube-authkit/kube-authkit/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:56:51.161043 | kube_authkit-0.4.0.tar.gz | 106,762 | 11/69/3890e8934aab209f5b02c11409e906a507005202caea2b40901d44a101de/kube_authkit-0.4.0.tar.gz | source | sdist | null | false | 312ac38dce03648dacfa386fe7408df9 | 1df61ac392fca96c8f5ae8c3d6e9918f1e1655d212434b3c3da5f92cc23b660d | 11693890e8934aab209f5b02c11409e906a507005202caea2b40901d44a101de | null | [
"LICENSE"
] | 443 |
2.4 | skilldock | 2026.2.181556 | OpenAPI-driven Python SDK and CLI for the SkillDock.io API. | [](https://badge.fury.io/py/skilldock)
[](https://opensource.org/license/apache-2-0)
[](https://pepy.tech/project/skilldock)
[](https://www.linkedin.com/in/eugene-evstafev-716669181/)
# SkillDock Python SDK and CLI
`skilldock` is an OpenAPI-driven Python client (and a simple CLI) for the SkillDock API.
It loads the OpenAPI spec at runtime and exposes:
- A Python `SkilldockClient` that can call any `operationId`
- A CLI that can list available operations and call them from your terminal
## Install
```bash
pip install skilldock
```
## Quickstart (CLI)
List operations from the OpenAPI spec:
```bash
skilldock ops
```
The first column is a python-friendly `python_name` you can use as `client.ops.<python_name>(...)`.
Authenticate (browser login + polling):
```bash
skilldock auth login
```
This starts a CLI auth session on the API, prints an `auth_url`, opens it in your browser, then polls until it receives an app-issued `access_token` and saves it as the CLI token.
Access tokens are short-lived; if you see auth errors later, run `skilldock auth login` again.
Create a long-lived personal API token (recommended for CI and to avoid short-lived JWT expiry):
```bash
# Prints the token (shown only once by the API) and saves it into the CLI config.
skilldock tokens create --save
# List your tokens
skilldock tokens list
```
Call an endpoint by `operationId`:
```bash
skilldock call SomeOperationId --param foo=bar --json '{"hello":"world"}'
```
Search skills:
```bash
skilldock skills search "docker"
```
`skills search` uses `POST /v2/search` and returns the same listing contract shape (`page`, `per_page`, `items`, `has_more`).
Search output includes `LATEST_VERSIONS` from each skill's embedded `latest_releases` (up to 5 items).
Get a single skill (latest metadata + latest description source):
```bash
skilldock skills get acme/my-skill
```
Get one exact release by version (source of truth for versioned descriptions):
```bash
skilldock skills release acme/my-skill 0.1.0
```
Browse release history page-by-page:
```bash
skilldock skills releases acme/my-skill --page 1 --per-page 10
```
Get author profile details and authored skills (paginated):
```bash
skilldock users get 123 --page 1 --per-page 20
```
Install a skill locally (default destination is `./skills`, with recursive dependency resolution):
Public skills can be installed without auth. If a token is configured, the CLI sends it for skill discovery/install flows so private skills can be resolved when authorized.
```bash
# latest
skilldock install acme/my-skill
# exact version
skilldock i acme/my-skill --version 1.2.3
# custom local destination
skilldock install acme/my-skill --skills-dir /path/to/project/skills
```
Uninstall a direct skill and reconcile/remove no-longer-needed dependencies:
```bash
skilldock uninstall acme/my-skill
```
Verify a local skill folder (packages a zip and prints sha256/size):
```bash
skilldock skill verify .
```
Upload a new skill release:
```bash
skilldock skill upload --namespace myorg --slug my-skill --version 1.2.3 --path .
# Explicit private publish
skilldock skill upload --namespace myorg --slug my-skill --version 1.2.3 --path . --visibility private
```
This packages the folder into a zip and uploads it as multipart form field `file`.
For this registry, tags are read by the backend from `SKILL.md` frontmatter inside the uploaded zip.
There is no separate upload `tags` field (or CLI `--tag` flag) for publish.
Release versions are immutable: if you change `SKILL.md` (including description), publish a new version.
Re-publishing the same version for the same skill returns a conflict.
```md
---
name: my-skill
description: Does X
version: 1.2.0
tags:
- productivity
- automation
- cli
---
```
Tag values should be strings; non-string values are ignored by the backend.
The CLI packages upload archives with top-level folder name equal to `--slug`, so keep frontmatter `name` aligned with your slug.
If your API supports release dependencies, you can pass them from CLI too:
```bash
# Repeatable string form:
skilldock skill upload --namespace myorg --slug my-skill --path . \
--dependency "core/base-utils@^1.2.0" \
--dependency "tools/lint@>=2.0.0 <3.0.0"
# JSON form (array or map), inline or from file:
skilldock skill upload --namespace myorg --slug my-skill --path . \
--dependencies-json @dependencies.json
```
If you haven't created the namespace yet:
```bash
skilldock namespaces create myorg
skilldock namespaces list
```
Low-level request (method + path, bypassing `operationId`):
```bash
skilldock request GET /health
```
## Quickstart (Python)
```python
from skilldock import SkilldockClient
client = SkilldockClient(
# Optional: override if needed
openapi_url="https://api.skilldock.io/openapi.json",
# base_url="https://api.skilldock.io",
token=None, # set after `skilldock auth login`
)
ops = client.operation_ids()
print("operations:", len(ops))
# Call by operationId (params are split into path/query/header based on OpenAPI metadata)
result = client.call_operation("SomeOperationId", params={"id": "123"})
print(result)
# Or call by a generated python-friendly name:
# (see `skilldock ops` output and use the `python_name`-like identifier)
# result = client.ops.someoperationid(id="123")
client.close()
```
## Description Versioning
The registry now stores `description_md` per release version.
- Publish flow:
- Build/upload the zip with the intended `SKILL.md` content for that version.
- Publish a new version each time description changes (for example, `0.1.0` -> `0.1.1`).
- Read flow:
- Exact version description: `GET /v1/skills/{namespace}/{slug}/releases/{version}` and use `release.description_md`.
- Latest overview: `GET /v1/skills/{namespace}/{slug}` and prefer `latest_release.description_md`.
- Backward compatibility:
- `skill.description_md` still reflects current/latest description.
- Fallback order when release description is empty:
1. `release.description_md`
2. `skill.description_md`
3. empty state
## Configuration
The CLI stores config (including token) in a local JSON file:
```bash
skilldock config path
skilldock config show
```
You can set config values:
```bash
skilldock config set --base-url https://api.skilldock.io --openapi-url https://api.skilldock.io/openapi.json
skilldock config set --token "YOUR_TOKEN"
```
Environment variables (override config):
- `SKILLDOCK_OPENAPI_URL`
- `SKILLDOCK_BASE_URL`
- `SKILLDOCK_TOKEN`
- `SKILLDOCK_TIMEOUT_S`
## Authentication Notes (Google)
This SDK assumes the API accepts a token in an HTTP header (usually `Authorization: Bearer <token>`).
The exact details are derived from the OpenAPI `securitySchemes` when present.
The SkillDock API can accept (depending on server configuration):
- Google ID token (JWT)
- App-issued access token (JWT, returned by the CLI OAuth flow)
- Personal API token (opaque string, created via `skilldock tokens create`)
`skilldock auth login` works like this:
1. Creates a CLI auth session via `POST /auth/cli/sessions`
2. Prints the returned `auth_url` and opens it in your browser
3. After you complete Google login, the backend approves the session
4. The CLI polls `GET /auth/cli/sessions/{session_id}` until it receives an app-issued `access_token`, then saves it as the configured API token
If you want to set a token manually:
```bash
skilldock auth set-token "PASTE_TOKEN_HERE"
```
To create a personal API token (recommended for longer-lived auth):
```bash
skilldock tokens create --save
```
## Development
Run the small unit test suite:
```bash
python -m unittest discover -s tests
```
| text/markdown | SkillDock | null | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | api, client, openapi, sdk, skilldock | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programm... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"platformdirs>=4.2.0",
"mypy>=1.10.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://skilldock.io"
] | twine/4.0.2 CPython/3.11.14 | 2026-02-18T15:56:49.120084 | skilldock-2026.2.181556.tar.gz | 38,709 | 4c/40/e201e7fb964db3d718ce335a0b5d17ac8a5869e66cfe2a7ac478c6b570fa/skilldock-2026.2.181556.tar.gz | source | sdist | null | false | 9562c284484236e78596134904ddf181 | bb13e4e32300af0f68c066766e1f3216c3bc0552cb35b6c8def3a832bee1dda3 | 4c40e201e7fb964db3d718ce335a0b5d17ac8a5869e66cfe2a7ac478c6b570fa | null | [] | 220 |
2.2 | protozfits | 2.9.1 | Python bindings for the protobuf zfits library | # protozfits-python
Low-level reading and writing of zfits files using google protocol buffer objects.
To analyze data, you might be more interested in using a [`ctapipe`](https://github.com/cta-observatory/ctapipe)
plugin to load your data into ctapipe. There are currently several plugins using this library as a dependency
for several CTA(O) prototypes:
* `ctapipe_io_lst` to read LST-1 commissioning raw data: https://github.com/cta-observatory/ctapipe_io_lst
* `ctapipe_io_nectarcam` to read NectarCam commissiong raw data: https://github.com/cta-observatory/ctapipe_io_nectarcam/
* `ctapipe_io_zfits` for general reading of CTA zfits data, currently only supports DL0 as written during the ACADA-LST test campaign.
Note: before version 2.4, the protozfits python library was part of the [`adh-apis` Repository](https://gitlab.cta-observatory.org/cta-computing/common/acada-array-elements/adh-apis/).
To improve maintenance, the two repositories were decoupled and this repository now only hosts the python bindings (`protozfits`).
The needed C++ `libZFitsIO` is build from a git submodule of the `adh-apis`.
Table of Contents
* [Installation](#installation)
* [Usage](#usage)
* [Open a file](#open-a-file)
* [Get an event](#getting-an-event)
* [RunHeader](#runHeader)
* [Table header](#table-header)
* [Performance](#pure-protobuf-mode)
* [Command-Line Tools](#command-line-tools)
# Installation
## Users
This package is published to [PyPI](https://pypi.org/projects/protozfits) and [conda-forge](https://anaconda.org/conda-forge/protozfits).
PyPI packages include pre-compiled `manylinux` wheels (no macOS wheels though) and conda packages are built for Linux and macOS.
When using conda, it's recommended to use the [`miniforge`](https://github.com/conda-forge/miniforge#miniforge3) conda distribution,
as it is fully open source and comes with the faster mamba package manager.
So install using:
```
pip install protozfits
```
or
```
mamba install protozfits
```
## For development
This project is build using `scikit-build-core`, which supports editable installs recompiling the project on import by setting a couple of `config-options` for pip.
See <https://scikit-build-core.readthedocs.io/en/latest/configuration.html#editable-installs>.
To setup a development environment, create a venv, install the build requirements and then
run the pip install command with the options given below:
```
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install 'scikit-build-core[pyproject]' pybind11 'setuptools_scm[toml]'
$ pip install -e '.[all]' --no-build-isolation
```
You can now e.g. run the tests:
```
$ pytest src
```
`scikit-build-core` will automatically recompile the project when importing the library.
Some caveats remain though, see the scikit-build-core documentation linked above.
## Usage
If you are just starting with proto-z-fits files and would like to explore the file contents, try this:
### Open a file
```
>>> from protozfits import File
>>> example_path = 'protozfits/tests/resources/example_9evts_NectarCAM.fits.fz'
>>> file = File(example_path)
>>> file
File({
'RunHeader': Table(1xDataModel.CameraRunHeader),
'Events': Table(9xDataModel.CameraEvent)
})
```
From this we learn, the `file` contains two `Table` named `RunHeader` and `Events` which
contains 9 rows of type `CameraEvent`. There might be more tables with
other types of rows in other files. For instance LST has its `RunHeader` called `CameraConfig`.
### Getting an event
Usually people just iterate over a whole `Table` like this:
```python
for event in file.Events:
# do something with the event
pass
```
But if you happen to know exactly which event you want, you can also
directly get an event, like this:
```python
event_17 = file.Events[17]
```
You can also get a range of events, like this:
```python
for event in file.Events[100:200]:
# do something events 100 until 200
pass
```
It is not yet possible to specify negative indices, like `file.Events[:-10]`
does *not work*.
If you happen to have a list or any iterable or a generator with event ids
you are interested in you can get the events in question like this:
```python
interesting_event_ids = range(100, 200, 3)
for event in file.Events[interesting_event_ids]:
# do something with intesting events
pass
```
### RunHeader
Even though there is usually **only one** run header per file, technically
this single run header is stored in a Table. This table could contain multiple
"rows" and to me it is not clear what this would mean... but technically it is
possible.
At the moment I would recommend getting the run header out of the file
we opened above like this (replace RunHeader with CameraConfig for LST data):
```python
assert len(file.RunHeader) == 1
header = file.RunHeader[0]
```
For now, I will just get the next event
```python
event = file.Events[0]
type(event)
<class 'protozfits.CameraEvent'>
event._fields
('telescopeID', 'dateMJD', 'eventType', 'eventNumber', 'arrayEvtNum', 'hiGain', 'loGain', 'trig', 'head', 'muon', 'geometry', 'hilo_offset', 'hilo_scale', 'cameraCounters', 'moduleStatus', 'pixelPresence', 'acquisitionMode', 'uctsDataPresence', 'uctsData', 'tibDataPresence', 'tibData', 'swatDataPresence', 'swatData', 'chipsFlags', 'firstCapacitorIds', 'drsTagsHiGain', 'drsTagsLoGain', 'local_time_nanosec', 'local_time_sec', 'pixels_flags', 'trigger_map', 'event_type', 'trigger_input_traces', 'trigger_output_patch7', 'trigger_output_patch19', 'trigger_output_muon', 'gps_status', 'time_utc', 'time_ns', 'time_s', 'flags', 'ssc', 'pkt_len', 'muon_tag', 'trpdm', 'pdmdt', 'pdmt', 'daqtime', 'ptm', 'trpxlid', 'pdmdac', 'pdmpc', 'pdmhi', 'pdmlo', 'daqmode', 'varsamp', 'pdmsum', 'pdmsumsq', 'pulser', 'ftimeoffset', 'ftimestamp', 'num_gains')
event.hiGain.waveforms.samples
array([241, 245, 248, ..., 218, 214, 215], dtype=int16)
```
An LST event will look something like so:
```python
>>> event
CameraEvent(
configuration_id=1
event_id=1
tel_event_id=1
trigger_time_s=0
trigger_time_qns=0
trigger_type=0
waveform=array([ 0, 0, ..., 288, 263], dtype=uint16)
pixel_status=array([ 0, 0, 0, 0, 0, 0, 0, 12, 12, 12, 12, 12, 12, 12], dtype=uint8)
ped_id=0
nectarcam=NectarCamEvent(
module_status=array([], dtype=float64)
extdevices_presence=0
tib_data=array([], dtype=float64)
cdts_data=array([], dtype=float64)
swat_data=array([], dtype=float64)
counters=array([], dtype=float64))
lstcam=LstCamEvent(
module_status=array([0, 1], dtype=uint8)
extdevices_presence=0
tib_data=array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=uint8)
cdts_data=array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], dtype=uint8)
swat_data=array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], dtype=uint8)
counters=array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 31, 0, 0, 0, 243, 170, 204,
0, 0, 0, 0, 0], dtype=uint8)
chips_flags=array([ 0, 0, 0, 0, 0, 0, 0, 0, 61440,
245, 61440, 250, 61440, 253, 61440, 249], dtype=uint16)
first_capacitor_id=array([ 0, 0, 0, 0, 0, 0, 0, 0, 61440,
251, 61440, 251, 61440, 241, 61440, 245], dtype=uint16)
drs_tag_status=array([ 0, 12], dtype=uint8)
drs_tag=array([ 0, 0, ..., 2021, 2360], dtype=uint16))
digicam=DigiCamEvent(
))
>>> event.waveform
array([ 0, 0, 0, ..., 292, 288, 263], dtype=uint16)
```
`event` supports tab-completion, which I regard as very important while exploring.
It is implemented using [`collections.namedtuple`](https://docs.python.org/3.6/library/collections.html#collections.namedtuple).
I tried to create a useful string representation, it is very long, yes ... but I
hope you can still enjoy it:
```python
>>> event
CameraEvent(
telescopeID=1
dateMJD=0.0
eventType=<eventType.NONE: 0>
eventNumber=97750287
arrayEvtNum=0
hiGain=PixelsChannel(
waveforms=WaveFormData(
samples=array([241, 245, ..., 214, 215], dtype=int16)
pixelsIndices=array([425, 461, ..., 727, 728], dtype=uint16)
firstSplIdx=array([], dtype=float64)
num_samples=0
baselines=array([232, 245, ..., 279, 220], dtype=int16)
peak_time_pos=array([], dtype=float64)
time_over_threshold=array([], dtype=float64))
integrals=IntegralData(
gains=array([], dtype=float64)
maximumTimes=array([], dtype=float64)
tailTimes=array([], dtype=float64)
raiseTimes=array([], dtype=float64)
pixelsIndices=array([], dtype=float64)
firstSplIdx=array([], dtype=float64)))
# [...]
```
### Table header
`fits.fz` files are still normal [FITS files](https://fits.gsfc.nasa.gov/) and
each Table in the file corresponds to a so called "BINTABLE" extension, which has a
header. You can access this header like this:
```
>>> file.Events
Table(100xDataModel.CameraEvent)
>>> file.Events.header
# this is just a sulection of all the contents of the header
XTENSION= 'BINTABLE' / binary table extension
BITPIX = 8 / 8-bit bytes
NAXIS = 2 / 2-dimensional binary table
NAXIS1 = 192 / width of table in bytes
NAXIS2 = 1 / number of rows in table
TFIELDS = 12 / number of fields in each row
EXTNAME = 'Events' / name of extension table
CHECKSUM= 'BnaGDmS9BmYGBmY9' / Checksum for the whole HDU
DATASUM = '1046602664' / Checksum for the data block
DATE = '2017-10-31T02:04:55' / File creation date
ORIGIN = 'CTA' / Institution that wrote the file
WORKPKG = 'ACTL' / Workpackage that wrote the file
DATEEND = '1970-01-01T00:00:00' / File closing date
PBFHEAD = 'DataModel.CameraEvent' / Written message name
CREATOR = 'N4ACTL2IO14ProtobufZOFitsE' / Class that wrote this file
COMPILED= 'Oct 26 2017 16:02:50' / Compile time
TIMESYS = 'UTC' / Time system
>>> file.Events.header['DATE']
'2017-10-31T02:04:55'
>>> type(file.Events.header)
<class 'astropy.io.fits.header.Header'>
```
The header is provided by [`astropy`](http://docs.astropy.org/en/stable/io/fits/#working-with-fits-headers).
### pure protobuf mode
The library by default converts the protobuf objects into namedtuples and converts the `AnyArray` data type
to numpy arrays. This has some runtime overhead.
In case you for example know exactly what you want
from the file, then you can get a speed-up by passing the `pure_protob=True` option:
```
>>> from protozfits import File
>>> file = File(example_path, pure_protobuf=True)
>>> event = next(file.Events)
>>> type(event)
<class 'ProtoDataModel_pb2.CameraEvent'>
```
Now iterating over the file is faster than before.
But you have no tab-completion and some contents are less useful for you:
```
>>> event.eventNumber
97750288 # <--- just fine
>>> event.hiGain.waveforms.samples
type: S16
data: "\362\000\355\000 ... " # <---- goes on "forever" .. raw bytes of the array data
>>> type(event.hiGain.waveforms.samples)
<class 'CoreMessages_pb2.AnyArray'>
```
You can convert these `AnyArray`s into numpy arrays like this:
```
>>> from protozfits import any_array_to_numpy
>>> any_array_to_numpy(event.hiGain.waveforms.samples)
array([242, 237, 234, ..., 218, 225, 229], dtype=int16)
```
## Command-Line Tools
This module comes with a command-line tool that can re-compress zfits files using different
options for the default and specific column compressions.
This can also be used to extract the first N events from a large file, e.g. to produce smaller files
for unit tests.
Usage:
```
$ python -m protozfits.recompress_zfits --help
```
| text/markdown | null | null | null | Maximilian Linhoff <maximilian.linhoff@tu-dortmund.de> | BSD-3-Clause | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"protobuf",
"astropy",
"tqdm",
"pyyaml",
"pytest; extra == \"tests\"",
"pytest-cov; extra == \"tests\"",
"sphinx; extra == \"doc\"",
"numpydoc; extra == \"doc\"",
"ctao-sphinx-theme; extra == \"doc\"",
"sphinx-changelog; extra == \"doc\"",
"sphinx-automodapi; extra == \"doc\"",
"sph... | [] | [] | [] | [
"repository, https://gitlab.cta-observatory.org/cta-computing/common/protozfits-python/"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T15:56:20.453498 | protozfits-2.9.1.tar.gz | 13,797,874 | b2/7f/1850f7fcbbe86c52db3e6fced5edffbee412808442330cbd9037c28e0a72/protozfits-2.9.1.tar.gz | source | sdist | null | false | ba447fa7a194295ea61fbb3419cdf0fd | baa44fdfcd48ebd2e0a3f8cf34f26c1712ffb2ca7f5cc6bcdad72e2d16621206 | b27f1850f7fcbbe86c52db3e6fced5edffbee412808442330cbd9037c28e0a72 | null | [] | 1,814 |
2.4 | filescan | 0.0.4 | Recursively scans a directory and outputs a flat, LLM-friendly file tree | # filescan
**filescan** is a lightweight Python tool for **scanning filesystem structures and Python ASTs** and exporting them as **flat, graph-style representations**.
Instead of nested trees, `filescan` produces **stable lists of nodes with parent pointers**, making the output:
* easy to post-process
* friendly for CSV / DataFrame / SQL pipelines
* efficient for LLM ingestion and summarization
`filescan` can operate at two levels:
* **filesystem structure** (directories & files)
* **Python semantic structure** (modules, classes, functions, methods)
Both use the same flat graph design and export formats.
## Features
### Filesystem scanning
* Recursive directory traversal
* Flat node list with explicit `parent_id`
* Deterministic ordering
* Optional `.gitignore`-style filtering
* CSV and JSON export
### Python AST scanning
* Module, class, function, and method detection
* Nested functions and classes supported
* Stable symbol IDs with parent relationships
* Best-effort function signature extraction
* First-line docstring capture
### General
* Shared schema + export model
* Same API for filesystem and AST scanners
* Usable as **both a library and a CLI**
* Designed for automation, data pipelines, and AI workflows
## Installation
```bash
pip install filescan
```
Or for development:
```bash
pip install -e .
```
## Quick start (CLI)
### Filesystem scan (default)
Scan the current directory and write a CSV:
```bash
filescan
```
Scan a specific directory:
```bash
filescan ./data
```
Export as JSON:
```bash
filescan ./data --format json
```
Specify output base path:
```bash
filescan ./data -o out/tree
```
This generates:
```text
out/
├── tree.csv
└── tree.json
```
### Python AST scan
Scan Python source files and extract symbols:
```bash
filescan ./src --ast
```
Export AST symbols as JSON:
```bash
filescan ./src --ast --format json
```
Custom output path:
```bash
filescan ./src --ast -o out/symbols
```
This generates:
```text
out/
├── symbols.csv
└── symbols.json
```
## Ignore rules (`.fscanignore`)
`filescan` supports **gitignore-style patterns** via `pathspec`.
### Default behavior
* If `--ignore-file` is provided → use it
* Otherwise, look for:
```text
./.fscanignore (current working directory)
```
Ignore rules apply to:
* filesystem scanning
* AST scanning (Python files are skipped if ignored)
### Example `.fscanignore`
```gitignore
.git/
.idea/
build/
dist/
__pycache__/
*.pyc
```
## Output formats
Both filesystem and AST scans produce **flat graphs** with schema metadata.
### Filesystem schema
| Field | Description |
| -- | - |
| `id` | Unique integer ID |
| `parent_id` | Parent node ID (`null` for root) |
| `type` | `'d'` = directory, `'f'` = file |
| `name` | Base name |
| `size` | File size in bytes (`null` for directories) |
#### CSV example
```csv
# id: Unique integer ID for this node
# parent_id: ID of parent node, or null for root
# type: Node type: 'd' = directory, 'f' = file
# name: Base name of the file or directory
# size: File size in bytes; null for directories
id,parent_id,type,name,size
0,,d,data,
1,0,f,example.txt,128
```
### Python AST schema
| Field | Description |
| - | |
| `id` | Unique integer ID for this symbol |
| `parent_id` | Parent symbol ID (`null` for module) |
| `kind` | `module` | `class` | `function` | `method` |
| `name` | Symbol name |
| `module_path` | File path relative to scan root |
| `lineno` | Starting line number (1-based) |
| `signature` | Function or method signature (best-effort) |
| `doc` | First line of docstring, if any |
Nested functions and classes are represented naturally via `parent_id`.
## Library usage
### Filesystem scanner
```python
from filescan import Scanner
scanner = Scanner(
root="data",
ignore_file=".fscanignore",
)
scanner.scan()
scanner.to_csv() # -> ./data.csv
scanner.to_json() # -> ./data.json
```
### Python AST scanner
```python
from filescan import AstScanner
scanner = AstScanner(
root="src",
ignore_file=".fscanignore",
output="out/symbols",
)
scanner.scan()
scanner.to_csv()
scanner.to_json()
```
### Programmatic access
```python
nodes = scanner.scan()
print(len(nodes))
data = scanner.to_dict()
```
## Why `filescan`?
Most filesystem and code structures are represented as deeply nested trees. While human-readable, they are verbose, hard to query, and inefficient for large-scale processing.
`filescan` represents both **filesystems and codebases** as **flat graphs** because this format is:
* **Compact and token-efficient**
Flat lists with numeric IDs consume far fewer tokens than recursive trees, making them ideal for LLM context windows.
* **Explicit and unambiguous**
All relationships are encoded directly via `parent_id`.
* **Easy to process**
Flat data works naturally with filtering, joins, grouping, and graph analysis.
This makes `filescan` especially suitable for:
* SQL / Pandas / DuckDB pipelines
* Static analysis and refactoring tools
* Graph-based code understanding
* **LLM-based reasoning and summarization of projects**
In short, `filescan` favors **machine-friendly structure over visual trees**, enabling scalable, AI-native workflows.
## Development
The project uses a `src/` layout.
Examples can be run without installation:
```bash
python examples/scan_data.py
```
Or as a module:
```bash
python -m examples.scan_data
```
## License
MIT License
| text/markdown | null | DreamSoul <contact@dreamsoul.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pathspec>=0.10.3"
] | [] | [] | [] | [
"Homepage, https://github.com/DreamSoul-AI/filescan"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:56:07.028287 | filescan-0.0.4.tar.gz | 15,213 | bb/f0/bd18a3360e03ccdf42320fb7404a294e8a0bf0a5a7845214b1f247504f27/filescan-0.0.4.tar.gz | source | sdist | null | false | 0b85d2a3eee755217a3fad944c7c7d6c | 0b4a3283b8da4de75bb46a2f4e8558e57efeccd6e5cbd843c8ce8a815df5f413 | bbf0bd18a3360e03ccdf42320fb7404a294e8a0bf0a5a7845214b1f247504f27 | MIT | [
"LICENSE"
] | 226 |
2.4 | loxodo-curses | 0.28.6 | A Password Safe V3 compatible password vault | |pypi| |github|
loxodo-curses
=============
loxodo-curses is a curses frontend to `Password Safe`_ V3 compatible Password Vault.
A fork of `Loxodo`_.
Editing a record is done with Vim, using a temporary file located in /dev/shm. To launch a URL, xdg-open is used, while copying to the clipboard is handled by xsel.
To generate a password, just run the command ":read !pwmake 96" in Vim (pwmake is part of `libpwquality`_)
or ":read !diceware -d ' ' -s 2" (`diceware`_) or ":read !pwgen -s 25" (`pwgen`_).
The app includes a timeout feature that automatically closes it after 30 minutes of inactivity.
The current hotkeys are:
* h: help screen
* q, Esc: Quit the program
* j, Down: Move selection down
* k, Up: Move selection up
* PgUp: Page up
* PgDown: Page down
* g, Home: Move to first item
* G, End: Move to last item
* Alt-{t,u,m,c,g}: Sort by title, user, modtime, created, group
* Alt-{T,U,M,C,G}: Sort reversed
* Delete: Delete current record
* Insert: Insert record
* d: Duplicate current record
* e: Edit current record w/o password
* E: Edit current record w/ password
* L: Launch URL
* s: Search records
* P: Change vault password
* Ctrl-U: Copy Username to clipboard
* Ctrl-P: Copy Password to clipboard
* Ctrl-L: Copy URL to clipboard
* Ctrl-T: Copy TOTP to clipboard
.. |pypi| image:: https://img.shields.io/pypi/v/loxodo-curses
:target: https://pypi.org/project/loxodo-curses/
.. |github| image:: https://img.shields.io/github/v/tag/shamilbi/loxodo-curses?label=github
:target: https://github.com/shamilbi/loxodo-curses/
.. _Password Safe: https://www.pwsafe.org/
.. _Loxodo: https://github.com/sommer/loxodo
.. _libpwquality: https://github.com/libpwquality/libpwquality
.. _diceware: https://pypi.org/project/diceware/
.. _pwgen: https://sourceforge.net/projects/pwgen/files/pwgen/
| text/x-rst | null | Shamil Bikineyev <shamilbi@gmail.com>, Christoph Sommer <mail@christoph-sommer.de> | null | null | null | password manager, privacy, security | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programmi... | [] | null | null | >=3.9 | [] | [] | [] | [
"mintotp"
] | [] | [] | [] | [
"homepage, https://github.com/shamilbi/loxodo-curses"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T15:56:04.323183 | loxodo_curses-0.28.6.tar.gz | 27,419 | 1c/69/c2ea9e1ff2a382b52e9eb7ba71e7034cc934475e09eb5214a4314e3c8454/loxodo_curses-0.28.6.tar.gz | source | sdist | null | false | d965b8e97f7b86c7e37063e122a4494d | 89d7d09cfed19398f04736359aae9e7d9da7b1791752c4ad1be18f54d359af60 | 1c69c2ea9e1ff2a382b52e9eb7ba71e7034cc934475e09eb5214a4314e3c8454 | GPL-2.0+ | [
"LICENSE.txt"
] | 223 |
2.4 | reid-hota | 0.3.1 | Modified HOTA (Higher Order Tracking Accuracy) extended for ReID evaluation | # ReID-HOTA: Accelerated Higher Order Tracking Accuracy for Re-Identification
[](https://badge.fury.io/py/reid_hota)
[](./LICENSE)
**HOTA-ReID** is a modified version of the Higher Order Tracking Accuracy (HOTA) metric specifically designed to support Re-Identification (ReID) problems while providing significant performance improvements through parallel processing acceleration.
### Key Features
- **ReID-Aware Evaluation**: Handles identity switches and re-appearances common in ReID scenarios
- **Multiple Metrics**: Computes both IDF1 and HOTA concurrently
- **Parallel Processing**: Multi-threaded computation for faster evaluation
- **Flexible ID Assignment**: Flexible ID assignment per frame, video, and global
- **Extraneous Box Handling**: Optional removal of comparison ids which don't have an assignment to a ground truth id, with those FP tracked separately.
- **Flexible ID Assignment Cost**: ID assignment cost can be box IOU or L2 distance in Lat/Long/Alt space.
## Installation
Using uv:
```bash
uv venv --python=3.12
source .venv/bin/activate
uv pip install reid_hota
```
Or from source:
```bash
git clone https://github.com/usnistgov/reid_hota.git
cd hota_reid
uv venv --python=3.12
source .venv/bin/activate
uv sync
```
## Quick Start
### Assumptions
⚠️ For any given video frame, the set of reference global ids present must not contain duplicate ids. ⚠️
Any duplicate comparison ids within a single frame will have their costs combined using Jaccard.
The `reid_hota` package has no ability to disambiguate or determine which global id is "correct". Therefore `reid_hota` so will throw an error upon encountering duplicate ids in a single frame.
### IDF1
This software computes both IDF1 (Identity F1) and HOTA (higher order tracking accuracy) metrics. The same intermediate results are required to compute [IDF1](https://arxiv.org/abs/1609.01775) and [HOTA](https://arxiv.org/abs/2009.07736), so the software always computes both metric results. There is no support for disabling one or the other, you always get both IDF1 and HOTA scores in the output dictionary.
### Basic Usage
```python
from reid_hota import HOTAReIDEvaluator, HOTAConfig
# create a reference and comparison dictionary of pandas dataframes. Each dataframe contains all detection boxes from that video.
input_dir = "./examples"
gt_fp = os.path.join(input_dir, 'ref')
pred_fp = os.path.join(input_dir, 'comp')
fns = [fn for fn in os.listdir(gt_fp) if fn.endswith('.csv')]
ref_dfs = {}
comp_dfs = {}
for fn in fns:
gt_df = pd.read_csv(os.path.join(gt_fp, fn))
pred_df = pd.read_csv(os.path.join(pred_fp, fn))
ref_dfs[fn.replace('.csv', '')] = gt_df
comp_dfs[fn.replace('.csv', '')] = pred_df
# create the Config controlling the Metric calculation
config = HOTAConfig(id_alignment_method='global', similarity_metric='iou')
# create the evaluator
evaluator = HOTAReIDEvaluator(n_workers=20, config=config)
# evaluate on data
evaluator.evaluate(ref_dfs, comp_dfs) # computes HOTA metrics
# extract results
# returns a dict of HOTA values
global_hota_data = evaluator.get_global_hota_data()
# returns a dict[dict] where outer dict is keyed on video names (identical to ref_dfs or comp_dfs)
# the inner dict contains HOTA values
per_video_hota_data = evaluator.get_per_video_hota_data()
# returns a dict[dict[dict]] where outer dict is keyed on video names (identical to ref_dfs or comp_dfs)
# the second layer is keyed on frames (the contents of the frame column in ref_dfs or comp_dfs)
# and the inner dict contains HOTA values
per_frame_hota_data = evaluator.get_per_frame_hota_data()
print(f"HOTA-ReID Score: {global_hota_data['HOTA']:.3f}")
```
### Example Data
The input contained in ref_dfs and comp_dfs consists of a dictionary of pandas dataframes. Each dict entry corresponds to a video, and contains a pandas dataframe with all the detections/boxes within that video.
Traditionally, the set of video keys in the ref_dfs, and comp_dfs dictionaries would be identical, but if not, reid_hota computes over the union of the two sets of dictionary keys. Usually the dictionary keys refer to the video names, but any valid python dict key is acceptable.
Each dataframe has the following minimum required columns:
```python
['frame', 'id', 'x1', 'y1', 'x2', 'y2', 'object_type']
```
```csv
frame,id,x1,y1,x2,y2,object_type
0,3,1596,906,1719,1069,1
1,3,1598,914,1733,1070,1
2,3,1602,926,1746,1070,1
```
### Lat/Long/Alt Assignment Cost
In addition to the traditional IOU cost between boxes, reid_hota supports performing detection assignment in lat/long and lat/long/alt space.
If `HOTAConfig(similarity_metric='latlon')` then in addition to the normal columns, `['lat', 'lon']` are required.
If `HOTAConfig(similarity_metric='latlonalt')` then the required columns include `['lat', 'lon', 'alt']`
### Keeping Track of Errors
If `HOTAConfig(track_fp_fn_tp_box_hashes=True)` then the column `['box_hash']` is also required, so reid_hota has a per-box hash to keep track of for later grouping into TP, FP, FN.
The full set of allowable input dataframe columns is:
`['frame', 'id', 'x1', 'y1', 'x2', 'y2', 'object_type', 'lat', 'lon', 'alt', 'box_hash']`
### HOTAConfig Options
```python
class HOTAConfig:
"""
Configuration for HOTA calculation.
This class defines all parameters needed for computing HOTA metrics,
including alignment methods, similarity metrics, and filtering options.
"""
class_ids: Optional[List[int]] = None
"""List of class IDs to evaluate. If None, all classes are evaluated."""
gids: Optional[List[int]] = None
"""Ground truth IDs to use for evaluation. If provided, all other IDs are ignored."""
id_alignment_method: Literal['global', 'per_video', 'per_frame'] = 'global'
"""Method for aligning IDs between reference and comparison data:
- 'global': Align IDs across all videos globally
- 'per_video': Align IDs separately for each video
- 'per_frame': Align IDs separately for each frame
"""
track_fp_fn_tp_box_hashes: bool = False
"""Whether to track box hashes for detailed FP/FN/TP analysis."""
reference_contains_dense_annotations: bool = False
"""Whether the reference data dataframes contain dense annotations. If False, non-matched comparison IDs are removed to reduce FP counts to only those global ids which have a match in the reference data. The non-matching comparison ids are counted in an UnmatchedFP field in the HOTA data.
Consider the case where only 2 objects are tracked in a crowded ground truth video file. The comparison results will likely have many more boxes for the confuser objects for which GT data is missing (this is non-dense ground truth). In other words, this flag is useful when the reference/ground truth data which does not have full dense annotations of all objects in the video.
"""
iou_thresholds: NDArray[np.float64] = field(default_factory=lambda: np.arange(0.1, 0.99, 0.1))
"""Array of IoU thresholds to evaluate at."""
similarity_metric: Literal['iou', 'latlon', 'latlonalt'] = 'iou'
"""Similarity metric to use:
- 'iou': Intersection over Union for bounding boxes
- 'latlon': L2 distance for lat/lon coordinates
- 'latlonalt': L2 distance for lat/lon/alt coordinates
"""
```
### Global Outputs
Once a call to `evalauate(ref_dfs, comp_dfs)` has been made, the `evaluator` object contains all HOTA results.
Three sets of evaluation results are generated, first is the global (across all videos) HOTA ReID metrics.
Additionally, there is a per_video HOTA data, and a per_frame HOTA data.
To access the results, use `evaluator.get_global_hota_data()` (or per_video or per_frame).
This will return a python dictionary.
```python
# create the evaluator
evaluator = HOTAReIDEvaluator(n_workers=20, config=config)
# evaluate on data
evaluator.evaluate(ref_dfs, comp_dfs) # computes HOTA metrics
# extract results
# returns a dict of HOTA values
global_hota_data = evaluator.get_global_hota_data()
# returns a dict[dict] where outer dict is keyed on video names (identical to ref_dfs or comp_dfs)
# the inner contains HOTA values
per_video_hota_data = evaluator.get_per_video_hota_data()
# returns a dict[dict[dict]] where outer dict is keyed on video names (identical to ref_dfs or comp_dfs)
# the second layer is keyed on frames (the contents of the frame column in ref_dfs or comp_dfs)
# and the inner contains HOTA values
per_frame_hota_data = evaluator.get_per_frame_hota_data()
```
This results dictionary will have the following structure:
```python
def get_dict(self) -> dict:
"""Get dictionary representation of HOTA data."""
global_hota_data = {
'IOU Thresholds': np.array(len(self.iou_thresholds)),
'video_id': Optional[str],
'frame': Optional[str],
'TP': np.array(len(self.iou_thresholds)),
'FN': np.array(len(self.iou_thresholds)),
'FP': np.array(len(self.iou_thresholds)),
'UnmatchedFP': int,
'LocA': np.array(len(self.iou_thresholds)),
'HOTA': np.array(len(self.iou_thresholds)),
'AssA': np.array(len(self.iou_thresholds)),
'AssRe': np.array(len(self.iou_thresholds)),
'AssPr': np.array(len(self.iou_thresholds)),
'DetA': np.array(len(self.iou_thresholds)),
'DetRe': np.array(len(self.iou_thresholds)),
'DetPr': np.array(len(self.iou_thresholds)),
'OWTA': np.array(len(self.iou_thresholds)),
'IDF1': np.array(len(self.iou_thresholds))
}
if track_hashes:
global_hota_data['FP_hashes'] = list(hashable)
global_hota_data['FN_hashes'] = list(hashable)
global_hota_data['TP_hashes'] = list(hashable)
return global_hota_data
```
The IOU thresholds match what you specified in the `HOTAConfig`.
The video_id will be None for global results, or have the video id (key into ref_dfs dict) for `per_video` and `per_frame` results.
- `frame` will be None unless its the per_frame results.
- `TP`, `FP`, `FN` will contain TP/FP/FN counts per IOU threshold. UnmatchedFP will contain any FP counts for which there was no assignment to a ground truth track. This behavior can be controlled using `reference_contains_dense_annotations` in the config.
- `LocA` is effectivly the average matching box IOU.
- `HOTA` is the final composite metric. HOTA = sqrt(DetA * AssA)
- `AssA` is the association accuracy.
- `AssRe` is the association recall.
- `AssPr` is the association precision.
- `DetA` is the detection accuracy.
- `DetRe` is the detection recall.
- `DetPr` is the detection precision.
- `OWTA` is OWTA = sqrt(DetRe * AssA).
- `IDF1` is the IDF1 metric = TP / (TP + (0.5 * FN) + (0.5 * FP)).
The hashes will only exist if the `box_hash` column is present and the config has `track_fp_fn_tp_box_hashes` enabled.
- `FP_hashes` is a the list of `box_hash` for all false positives. These are only kept in the per_frame data.
- `FN_hashes` is a the list of `box_hash` for all false negatives. These are only kept in the per_frame data.
- `TP_hashes` is a the list of `box_hash` for all true positives. These are only kept in the per_frame data.
### Lat/Lon Distance Similarities
When using a `HOTAConfig(similarity_metric='latlon')` similarity score, the L2 distance between points is used for similarity. That L2 is converted into a similarity cost function [0, 1] as follows:
```python
# Calculate squared differences for all pairs
squared_diff = np.sum((points1 - points2) ** 2, axis=2)
# Take square root to get Euclidean distance
distances = np.sqrt(squared_diff)
# use exp(-dist) to convert [0, inf] into [0, 1] with smaller distances being closer to similarity 1
# dist/10 normalizes the L2 values over human relavant distances nicely into [0, 1] scores.
similarities = np.exp(-distances / 10)
```
### HOTA Metrics (and sub-metrics) are Vectors
The `HOTA` metric results in a vector of numbers of length IOU_Thresholds. Each metric value in that list represents thresholding the cost matrix at the specific IOU Threshold (between 0 and 1). So if you want the HOTA metric for an IOU threshold of 0.5:
```python
idx = global_hota_data['IOU Thresholds'] == 0.5
hota_value = global_hota_data['HOTA'][idx]
```
## License
This software was developed by employees of the National Institute of
Standards and Technology (NIST), an agency of the Federal Government and is
being made available as a public service. Pursuant to title 17 United States
Code Section 105, works of NIST employees are not subject to copyright
protection in the United States. This software may be subject to foreign
copyright. Permission in the United States and in foreign countries, to the
extent that NIST may hold copyright, to use, copy, modify, create derivative
works, and distribute this software and its documentation without fee is hereby
granted on a non-exclusive basis, provided that this notice and disclaimer of
warranty appears in all copies.
THE SOFTWARE IS PROVIDED 'AS IS' WITHOUT ANY WARRANTY OF ANY KIND, EITHER
EXPRESSED, IMPLIED, OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY
THAT THE SOFTWARE WILL CONFORM TO SPECIFICATIONS, ANY IMPLIED WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND FREEDOM FROM
INFRINGEMENT, AND ANY WARRANTY THAT THE DOCUMENTATION WILL CONFORM TO THE
SOFTWARE, OR ANY WARRANTY THAT THE SOFTWARE WILL BE ERROR FREE. IN NO EVENT
SHALL NIST BE LIABLE FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO, DIRECT,
INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES, ARISING OUT OF, RESULTING FROM,
OR IN ANY WAY CONNECTED WITH THIS SOFTWARE, WHETHER OR NOT BASED UPON
WARRANTY, CONTRACT, TORT, OR OTHERWISE, WHETHER OR NOT INJURY WAS SUSTAINED
BY PERSONS OR PROPERTY OR OTHERWISE, AND WHETHER OR NOT LOSS WAS SUSTAINED
FROM, OR AROSE OUT OF THE RESULTS OF, OR USE OF, THE SOFTWARE OR SERVICES
PROVIDED HEREUNDER.
To see the latest statement, please visit:
[https://www.nist.gov/director/copyright-fair-use-and-licensing-statements-srd-data-and-software](Copyright, Fair Use, and Licensing Statements for SRD, Data, and Software)
## Acknowledgments
- Original HOTA implementation by [Jonathon Luiten](https://github.com/JonathonLuiten/TrackEval)
## Contact
- **Author**: Michael Majurski
- **Email**: michael.majurski@nist.gov
- **Project Link**: https://github.com/usnistgov/reid_hota
| text/markdown | null | Michael Majurski <michael.majurski@nist.gov> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.2.6",
"pandas>=2.2.3",
"pyarrow>=20.0.0",
"scipy>=1.15.3"
] | [] | [] | [] | [
"Homepage, https://github.com/usnistgov/reid_hota"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T15:54:31.629198 | reid_hota-0.3.1.tar.gz | 43,307 | 8b/9e/18673310cf4ff22417c9b58341b756b51b093990f8dbfb2256c866908890/reid_hota-0.3.1.tar.gz | source | sdist | null | false | 36b69b117b58c4701691e18786b4bcdb | b00ffdb0ddb4b3e18bcb370335faadb9d81772eaf78f97bcae75454ca3c7f631 | 8b9e18673310cf4ff22417c9b58341b756b51b093990f8dbfb2256c866908890 | null | [
"LICENSE"
] | 227 |
2.4 | chuk-mcp-runtime | 0.11.1 | Generic CHUK MCP Runtime for MCP servers | # CHUK MCP Runtime
**Version 0.10.4** - Pydantic-Native Artifact Integration
[](https://pypi.org/project/chuk-mcp-runtime/)
[](https://github.com/chrishayuk/chuk-mcp-runtime/actions/workflows/test.yml)




A robust runtime for the official Model Context Protocol (MCP) — adds proxying, session management, JWT auth, **persistent user storage with scopes**, and progress notifications.
> ✅ **Continuously tested against the latest official MCP SDK releases** for guaranteed protocol compatibility.
---
**CHUK MCP Runtime extends the official MCP SDK**, adding a battle-tested runtime layer for real deployments — without modifying or re-implementing the protocol.
## Architecture
```
┌──────────────────────────┐
│ Client / Agent │
│ (Claude, OpenAI, etc.) │
└───────────┬──────────────┘
│
▼
┌──────────────────────────┐
│ CHUK MCP Runtime │
│ - Proxy Manager │
│ - Session Manager │
│ - Artifact Storage │
│ - Resource Provider │
│ - JWT Auth & Progress │
└───────────┬──────────────┘
│
▼
┌──────────────────────────┐
│ MCP SDK Servers & Tools │
│ (Official MCP Protocol) │
└──────────────────────────┘
```
## Why CHUK MCP Runtime?
- 🔌 **Multi-Server Proxy** - Connect multiple MCP servers through one unified endpoint
- 🔐 **Secure by Default** - All built-in tools disabled unless explicitly enabled
- 🌐 **Universal Connectivity** - stdio, SSE, and HTTP transports supported
- 🔧 **OpenAI Compatible** - Transform MCP tools into OpenAI function calling format
- 📊 **Progress Notifications** - Real-time progress reporting for long operations
- ⚡ **Production Features** - Session isolation, timeout protection, JWT auth
- 📦 **Storage Scopes (NEW v0.9)** - Session (ephemeral), User (persistent), Sandbox (shared)
## Quick Start (30 seconds)
Run any official MCP server (like `mcp-server-time`) through the CHUK MCP Runtime proxy:
```bash
chuk-mcp-proxy --stdio time --command uvx -- mcp-server-time
```
That's it! You now have a running MCP proxy with tools like `proxy.time.get_current_time` (default 60s tool timeout).
> ℹ️ **Tip:** Everything after `--` is forwarded to the stdio child process (here: `mcp-server-time`).
> 💡 **Windows:** Install `uv` and use `uvx` from a shell with it on PATH, or replace `--command uvx -- mcp-server-time` with your Python launcher. Note that `mcp-server-time` may expose a Python module name like `mcp_server_time` depending on install method (e.g., `py -m mcp_server_time`).
### Hello World with Local Tools (10 seconds)
Create your first local MCP tool:
```python
# my_tools/tools.py
from chuk_mcp_runtime.common.mcp_tool_decorator import mcp_tool
@mcp_tool(name="greet", description="Say hi")
async def greet(name: str = "world") -> str:
return f"Hello, {name}!"
```
```yaml
# config.yaml
server:
type: "stdio"
mcp_servers:
my_tools:
enabled: true
location: "./my_tools"
tools:
enabled: true
module: "my_tools.tools"
```
```bash
# Run it (default 60s tool timeout)
chuk-mcp-server --config config.yaml
```
**Smoke test (stdio):**
```bash
# From a second terminal while chuk-mcp-server is running on stdio:
# Send tools/list over stdin and read stdout (minimal JSON-RPC roundtrip)
printf '%s\n' '{
"jsonrpc":"2.0",
"id": 1,
"method":"tools/list",
"params": {}
}'
```
## Installation
### Requirements
- Python 3.11+ (with `uv` recommended)
- On minimal distros/containers, install `tzdata` for timezone support
- (Optional) `jq` for pretty-printing JSON in curl examples
```bash
# Basic installation
uv pip install chuk-mcp-runtime
# With optional dependencies (installs dependencies for SSE/HTTP transports and development tooling)
uv pip install "chuk-mcp-runtime[websocket,dev]"
# Install tzdata for proper timezone support (containers, Alpine Linux)
uv pip install tzdata
```
## What Can You Build?
- **Multi-Server Gateway**: Expose multiple MCP servers (time, weather, GitHub, etc.) through one proxy
- **Enterprise MCP Services**: Add session management, persistent storage, and JWT auth to any MCP setup
- **OpenAI Bridge**: Transform any MCP server's tools into OpenAI-compatible function calls
- **Hybrid Architectures**: Run local Python tools alongside remote MCP servers
- **Progress-Aware Tools**: Build long-running operations with real-time client updates
- **Persistent User Files (NEW)**: Store user documents, prompts, and files that survive sessions
## Table of Contents
- [What's New in v0.10.4](#whats-new-in-v0104)
- [What's New in v0.9.0](#whats-new-in-v090)
- [Redis Cluster Support](#redis-cluster-support-new-in-v0104)
- [Key Concepts](#key-concepts)
- [Configuration Reference](#configuration-reference)
- [Proxy Configuration Examples](#proxy-configuration-examples)
- [Creating Local Tools](#creating-local-mcp-tools)
- [MCP Resources](#mcp-resources)
- [Progress Notifications](#progress-notifications)
- [Request Context & Headers](#request-context--headers)
- [Built-in Tools](#built-in-tool-categories)
- [Security Model](#security-model)
- [Environment Variables](#environment-variables)
- [Development](#development)
- [Troubleshooting](#troubleshooting)
## What's New in v0.10.4
### 🎯 Pydantic-Native Artifact Integration
**Enhanced type safety and better developer experience** with full pydantic integration from `chuk-artifacts` 0.10.1+ (includes Redis Cluster support):
**Updated Dependencies**:
- `chuk-artifacts` 0.10.1 - Redis Cluster support, enhanced VFS providers
- `chuk-sessions` 0.6.0 - Redis Cluster support with automatic detection
#### ✅ Type-Safe Artifact Metadata
All artifact operations now use pydantic models internally:
```python
from chuk_artifacts.models import ArtifactMetadata
# Metadata is now a pydantic model with full validation
metadata: ArtifactMetadata = await store.metadata(artifact_id)
# Direct attribute access (type-safe)
print(f"Size: {metadata.bytes} bytes")
print(f"Type: {metadata.mime}")
print(f"Scope: {metadata.scope}") # session | user | sandbox
# Pydantic serialization
metadata_dict = metadata.model_dump() # Convert to dict
metadata_json = metadata.model_json() # Convert to JSON
```
#### 🔧 What Changed
- **Internal improvements**: All artifact tools use pydantic models internally
- **Better performance**: Direct attribute access instead of dict lookups
- **Enhanced validation**: Automatic pydantic validation on all metadata
- **Zero breaking changes**: All existing code works unchanged (backward compatible)
#### 📊 Benefits
- ✅ **Type safety**: Full type hints with pydantic models
- ✅ **Better IDE support**: Autocomplete for all metadata fields
- ✅ **Automatic validation**: Pydantic ensures data integrity
- ✅ **Cleaner code**: Direct attribute access (`metadata.bytes` vs `metadata.get("bytes", 0)`)
- ✅ **100% backward compatible**: Dict-style access still works
#### 🔄 Compatibility
```python
# Both work - choose your style
size = metadata.bytes # ✅ Pydantic (new, recommended)
size = metadata.get("bytes", 0) # ✅ Dict-style (still works)
size = metadata["bytes"] # ✅ Also works
```
## What's New in v0.9.0
### 🎉 Storage Scopes - The Game Changer
Three storage scopes for different use cases:
| Scope | Lifecycle | Use Case | Example |
|-------|-----------|----------|---------|
| **session** | Ephemeral (15min-24h) | Temporary work, caches | AI-generated code during chat |
| **user** | Persistent (1 year+) | User documents, saved files | Reports, custom prompts, uploads |
| **sandbox** | Shared (no expiry) | Templates, system files | Boilerplate, shared resources |
### 🔒 Security Enhancements
- ✅ Removed `session_id`/`user_id` parameters from all tools (prevents client impersonation)
- ✅ All identity from server-side context only
- ✅ Automatic scope-based access control in `read_file` and `delete_file`
- ✅ User files require authentication
### 🛠️ New Tools
**Explicit session tools** (ephemeral):
- `write_session_file` / `upload_session_file` - Always ephemeral
- `list_session_files` - List session files
**Explicit user tools** (persistent):
- `write_user_file` / `upload_user_file` - Always persistent
- `list_user_files` - Search/filter user's files
**General tools** (scope parameter):
- `write_file(scope="user")` / `upload_file(scope="user")` - Flexible scope selection
### ✅ 100% Backward Compatible
Existing code works unchanged - tools default to `scope="session"` (same behavior as v0.8.2).
**Quick comparison:**
```python
# v0.8.2 (still works in v0.9.0)
await write_file(content, filename) # Ephemeral
# v0.9.0 - New capabilities
await write_user_file(content, filename) # Persistent!
await write_file(content, filename, scope="user") # Also persistent
files = await list_user_files(mime_prefix="text/*") # Search user files
```
**See:** `CHANGELOG_V09.md` for complete release notes, `ARTIFACTS_V08_SUMMARY.md` for detailed guide.
## Redis Cluster Support (NEW in v0.10.4)
CHUK MCP Runtime now supports **Redis Cluster** for high availability and horizontal scaling through automatic cluster detection in `chuk-sessions` 0.6.0+ and `chuk-artifacts` 0.10.1+.
### Quick Start
**Standalone Redis** (existing):
```bash
export REDIS_URL=redis://localhost:6379
export ARTIFACT_SESSION_PROVIDER=redis
```
**Redis Cluster** (new):
```bash
# Comma-separated node list - automatic cluster detection
export REDIS_URL=redis://node1:7000,node2:7001,node3:7002
export ARTIFACT_SESSION_PROVIDER=redis
```
**With TLS**:
```bash
# Standalone with TLS
export REDIS_URL=rediss://secure-redis:6380
export REDIS_TLS_INSECURE=1 # For self-signed certificates
# Cluster with TLS
export REDIS_URL=rediss://node1:7000,node2:7001,node3:7002
export REDIS_TLS_INSECURE=1
```
### Environment Isolation
Prevent key collisions when multiple environments share the same Redis cluster:
```bash
# Development
export ENVIRONMENT=dev
export DEPLOYMENT_ID=local
# Staging
export ENVIRONMENT=staging
export DEPLOYMENT_ID=us-west-1
export SANDBOX_REGISTRY_TTL=3600 # 1 hour
# Production
export ENVIRONMENT=production
export DEPLOYMENT_ID=us-east-1
export SANDBOX_REGISTRY_TTL=86400 # 24 hours
```
This creates isolated namespaces:
- Dev: `dev:local:sbx:*`
- Staging: `staging:us-west-1:sbx:*`
- Production: `production:us-east-1:sbx:*`
### Configuration Variables
| Variable | Description | Default | Example |
|----------|-------------|---------|---------|
| `REDIS_URL` | Redis connection URL (standalone or cluster) | `redis://localhost:6379` | `redis://n1:7000,n2:7001` |
| `REDIS_TLS_INSECURE` | Disable SSL certificate verification | `0` | `1` |
| `ENVIRONMENT` | Environment name for namespace isolation | `dev` | `production`, `staging` |
| `DEPLOYMENT_ID` | Deployment identifier for namespace isolation | `default` | `us-east-1`, `us-west-1` |
| `SANDBOX_REGISTRY_TTL` | Sandbox registry entry TTL in seconds | `86400` | `3600`, `7200` |
### Architecture Notes
**Automatic Detection:**
- Single host URL → Uses `redis.asyncio.Redis`
- Multi-host URL (comma-separated) → Uses `redis.asyncio.cluster.RedisCluster`
- Database selector (`/0`) is auto-removed for cluster compatibility
**Thread Safety:**
- All singletons use double-check locking with `asyncio.Lock`
- Safe for concurrent initialization in multi-instance deployments
**Namespace Isolation:**
- Keys are prefixed with `{ENVIRONMENT}:{DEPLOYMENT_ID}:sbx:`
- Prevents collisions when multiple environments share Redis
- Required for staging/production on same cluster
## Core Components Overview
| Component | Purpose |
|-----------|---------|
| **Proxy Manager** | Connects and namespaces multiple MCP servers |
| **Session Manager** | Maintains per-user state across tool calls |
| **Artifact Store** | Handles file persistence with 3 scopes (NEW: user, sandbox) |
| **Auth & Security** | Adds JWT validation, sandboxing, and access control |
| **Progress Engine** | Sends real-time status updates to clients |
## Key Concepts
### Sessions
**Sessions** provide stateful context for multi-turn interactions with MCP tools. Each session:
- Has a unique identifier (session ID)
- Persists across multiple tool calls
- Can store metadata (user info, preferences, etc.)
- Controls access to artifacts (files) within the session scope
- Has an optional TTL (time-to-live) for automatic cleanup
**When to use sessions:**
- Multi-step workflows that need to maintain state
- User-specific file storage (isolate files per user)
- Long-running operations that span multiple requests
- Workflows requiring authentication/authorization context
**Example:**
```python
# Session-aware tool automatically gets current session context
@mcp_tool(name="save_user_file")
async def save_user_file(filename: str, content: str) -> str:
# Files are automatically scoped to the current session
# User A's "data.txt" is separate from User B's "data.txt"
# Note: artifact_store is available via runtime context when artifacts are enabled
from chuk_mcp_runtime.tools.artifacts_tools import artifact_store
await artifact_store.write_file(filename, content)
return f"Saved {filename} to session"
```
### Sandboxes
**Sandboxes** are isolated execution environments that contain one or more sessions. Think of them as:
- **Namespace** - Groups related sessions together
- **Deployment unit** - One sandbox per deployment/pod/instance
- **Isolation boundary** - Sessions in different sandboxes don't interact
**Sandbox ID** is set via:
1. Config file: `sessions.sandbox_id: "my-app"`
2. Environment variable: `MCP_SANDBOX_ID=my-app`
3. Auto-detected: Pod name in Kubernetes (`POD_NAME`)
**Use cases:**
```
Single-tenant app: sandbox_id = "myapp"
Multi-tenant SaaS: sandbox_id = "tenant-{customer_id}"
Development/staging: sandbox_id = "dev-alice" | "staging"
Kubernetes pod: sandbox_id = $POD_NAME (auto)
```
### Sessions vs Sandboxes
```
Sandbox: "production-app"
├── Session: user-alice-2024
│ ├── File: report.pdf
│ └── File: data.csv
├── Session: user-bob-2024
│ └── File: notes.txt
└── Session: background-job-123
└── File: results.json
Different Sandbox: "staging-app"
└── (completely isolated from production)
```
### Artifacts (NEW in v0.9: Storage Scopes)
**Artifacts** are files managed by the runtime with **three storage scopes** for different use cases:
#### Storage Scopes
| Scope | Lifecycle | TTL | Use Case | Access Control |
|-------|-----------|-----|----------|----------------|
| **session** | Ephemeral | 15min-24h | Temporary work, caches, generated code | Session-isolated |
| **user** | Persistent | 1 year+ | User documents, saved files, custom prompts | User-owned |
| **sandbox** | Shared | No expiry | Templates, shared resources, system files | Read-only (admin writes) |
**Key Features:**
- **Session isolation** - Files scoped to specific sessions or users
- **Storage backends** - Filesystem, S3, IBM Cloud Object Storage, VFS providers
- **Metadata tracking** - Size, timestamps, content type, ownership
- **Lifecycle management** - Auto-cleanup with TTL expiry
- **Security** - No client-side identity parameters, server context only
- **Search & filtering** - Find files by user, scope, MIME type, metadata
**Storage providers:**
- `vfs-filesystem` - Local disk with VFS support (development)
- `vfs-s3` - AWS S3 with streaming + multipart uploads (distributed/cloud)
- `vfs-sqlite` - SQLite with structured queries (embedded)
- `memory` - In-memory (testing, ephemeral)
### Progress Notifications
**Progress notifications** enable real-time feedback for long-running operations:
- Client provides `progressToken` in request
- Tool calls `send_progress(current, total, message)`
- Runtime sends `notifications/progress` to client
- Client displays progress bar/status
**Perfect for:**
- File processing (10 of 50 files)
- API calls (fetching data batches)
- Multi-step workflows (step 3 of 5)
- Long computations (75% complete)
## Configuration Reference
Complete YAML configuration structure with all available options:
```yaml
# ============================================
# HOST CONFIGURATION
# ============================================
host:
name: "my-mcp-server" # Server name (for logging/identification)
log_level: "INFO" # Global log level: DEBUG, INFO, WARNING, ERROR
# ============================================
# SERVER TRANSPORT
# ============================================
server:
type: "stdio" # Transport: stdio | sse | streamable-http
auth: "bearer" # Optional: bearer (JWT) | none
# SSE-specific settings (when type: "sse")
sse:
host: "0.0.0.0" # Listen address
port: 8000 # Listen port
sse_path: "/sse" # SSE endpoint path
message_path: "/messages/" # Message submission path
health_path: "/health" # Health check path
# HTTP-specific settings (when type: "streamable-http")
streamable-http:
host: "127.0.0.1" # Listen address
port: 3000 # Listen port
mcp_path: "/mcp" # MCP endpoint path
json_response: true # Enable JSON responses
stateless: true # Stateless mode
# ============================================
# LOGGING CONFIGURATION
# ============================================
logging:
level: "INFO" # Default log level
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
reset_handlers: true # Reset existing handlers
quiet_libraries: true # Suppress noisy library logs
# Per-logger overrides
loggers:
"chuk_mcp_runtime.proxy": "DEBUG"
"chuk_mcp_runtime.tools": "INFO"
# ============================================
# TOOL CONFIGURATION
# ============================================
tools:
registry_module: "chuk_mcp_runtime.common.mcp_tool_decorator"
registry_attr: "TOOLS_REGISTRY"
timeout: 60 # Global tool timeout (seconds)
# ============================================
# SESSION MANAGEMENT
# ============================================
sessions:
sandbox_id: "my-app" # Sandbox identifier (deployment unit)
default_ttl_hours: 24 # Session time-to-live
# Session tools (disabled by default)
session_tools:
enabled: false # Master switch for session tools
tools:
get_current_session: {enabled: false}
set_session: {enabled: false}
clear_session: {enabled: false}
list_sessions: {enabled: false}
get_session_info: {enabled: false}
create_session: {enabled: false}
# ============================================
# ARTIFACT STORAGE
# ============================================
artifacts:
enabled: false # Master switch for artifacts
storage_provider: "filesystem" # filesystem | s3 | ibm_cos
session_provider: "memory" # memory | redis
bucket: "my-artifacts" # Storage bucket/directory name
# Artifact tools (disabled by default)
tools:
upload_file: {enabled: false}
write_file: {enabled: false}
read_file: {enabled: false}
list_session_files: {enabled: false}
delete_file: {enabled: false}
list_directory: {enabled: false}
copy_file: {enabled: false}
move_file: {enabled: false}
get_file_metadata: {enabled: false}
get_presigned_url: {enabled: false}
get_storage_stats: {enabled: false}
# ============================================
# PROXY CONFIGURATION
# ============================================
proxy:
enabled: false # Enable proxy mode
namespace: "proxy" # Tool name prefix (e.g., "proxy.time.get_time")
keep_root_aliases: false # Keep original tool names
openai_compatible: false # Use underscores (time_get_time)
only_openai_tools: false # Register only underscore versions
# ============================================
# MCP SERVERS (Local & Remote)
# ============================================
mcp_servers:
# Local Python tools
my_tools:
enabled: true
location: "./my_tools" # Directory containing tool modules
tools:
enabled: true
module: "my_tools.tools" # Python module path
# Remote stdio server
time:
enabled: true
type: "stdio"
command: "uvx"
args: ["mcp-server-time", "--local-timezone", "America/New_York"]
cwd: "/optional/working/dir" # Optional working directory
# Remote SSE server
weather:
enabled: true
type: "sse"
url: "https://api.example.com/mcp"
api_key: "your-api-key" # Or set via API_KEY env var
```
### Configuration Priority
Settings are resolved in this order (highest to lowest):
1. **Command-line arguments** - `chuk-mcp-server --config custom.yaml`
2. **Environment variables** - `MCP_TOOL_TIMEOUT=120`
3. **Configuration file** - Values from YAML
4. **Default values** - Built-in defaults
### Minimal Configurations
**Stdio server with no sessions:**
```yaml
server:
type: "stdio"
```
**SSE server (referenced in examples):**
```yaml
# sse_config.yaml
server:
type: "sse"
# For deployment: add auth: "bearer" and set JWT_SECRET_KEY
sse:
host: "0.0.0.0"
port: 8000
sse_path: "/sse"
message_path: "/messages/"
health_path: "/health"
```
**Streamable HTTP server (referenced in examples):**
```yaml
# http_config.yaml
server:
type: "streamable-http"
# For deployment: add auth: "bearer" and set JWT_SECRET_KEY
streamable-http:
host: "0.0.0.0"
port: 3000
mcp_path: "/mcp"
json_response: true
stateless: true
```
**Proxy only (no local tools):**
```yaml
proxy:
enabled: true
mcp_servers:
time:
type: "stdio"
command: "uvx"
args: ["mcp-server-time"]
```
**Full-featured with sessions:**
```yaml
server:
type: "stdio"
sessions:
sandbox_id: "prod"
session_tools:
enabled: true
tools:
get_current_session: {enabled: true}
create_session: {enabled: true}
artifacts:
enabled: true
storage_provider: "s3"
tools:
write_file: {enabled: true}
read_file: {enabled: true}
```
## Proxy Configuration Examples
The proxy layer allows you to expose tools from multiple MCP servers through a unified interface.
### Simple Command Line Proxy
```bash
# Basic proxy with dot notation (proxy.time.get_current_time)
chuk-mcp-proxy --stdio time --command uvx -- mcp-server-time --local-timezone America/New_York
# Multiple stdio servers (--stdio is repeatable)
chuk-mcp-proxy --stdio time --command uvx -- mcp-server-time \
--stdio weather --command uvx -- mcp-server-weather
# Multiple SSE servers (--sse is repeatable)
chuk-mcp-proxy \
--sse analytics --url https://example.com/mcp --api-key "$API_KEY" \
--sse metrics --url https://metrics.example.com/mcp --api-key "$METRICS_API_KEY"
# OpenAI-compatible with underscore notation (time_get_current_time)
chuk-mcp-proxy --stdio time --command uvx -- mcp-server-time --openai-compatible
# Streamable HTTP server (serves MCP over HTTP)
chuk-mcp-server --config http_config.yaml # See minimal config example below
```
> ⚠️ **Security:** For SSE/HTTP network transports, enable `server.auth: bearer` and set `JWT_SECRET_KEY`.
### Multiple Servers with Config File
```yaml
# proxy_config.yaml
proxy:
enabled: true
namespace: "proxy"
mcp_servers:
time:
type: "stdio"
command: "uvx"
args: ["mcp-server-time", "--local-timezone", "America/New_York"]
weather:
type: "stdio"
command: "uvx"
args: ["mcp-server-weather"]
```
```bash
chuk-mcp-proxy --config proxy_config.yaml
```
### OpenAI-Compatible Mode
```yaml
# openai_config.yaml
proxy:
enabled: true
namespace: "proxy"
openai_compatible: true # Enable underscore notation
only_openai_tools: true # Only register underscore-notation tools
mcp_servers:
time:
type: "stdio"
command: "uvx"
args: ["mcp-server-time"]
```
```bash
chuk-mcp-proxy --config openai_config.yaml
```
**OpenAI-Compatible Naming Matrix:**
| Setting | Example Exposed Name |
|---------|---------------------|
| Default (dot notation) | `proxy.time.get_current_time` |
| `openai_compatible: true` | `time_get_current_time` |
| `openai_compatible: true` + `only_openai_tools: true` | Only underscore versions registered |
> **OpenAI-compatible mode** converts dots to underscores (e.g., `proxy.time.get_current_time` → `time_get_current_time`). Namespacing behavior is controlled by `openai_compatible` + `only_openai_tools`.
**OpenAI-compatible demo with HTTP:**
```bash
# Start proxy with OpenAI-compatible naming
chuk-mcp-proxy --stdio time --command uvx -- mcp-server-time --openai-compatible
# Call the underscore tool name over HTTP
curl -s http://127.0.0.1:3000/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $JWT" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call",
"params":{"name":"time_get_current_time","arguments":{"timezone":"UTC"}}}'
```
### Name Aliasing in Proxy Mode
By default, tools are exposed under `proxy.<server>.<tool>`.
Set `keep_root_aliases: true` to also expose the original tool names (no `proxy.` prefix).
This is useful when migrating existing clients gradually. **Root aliases are great for gradual migration, but disable in multi-tenant prod to avoid collisions.**
```yaml
proxy:
enabled: true
namespace: "proxy"
keep_root_aliases: true # Also expose tools without proxy. prefix
```
With this setting enabled, `proxy.time.get_current_time` is available as both:
- `proxy.time.get_current_time` (namespaced)
- `get_current_time` (root alias)
### Tool Naming Interplay
**Complete naming matrix when options combine:**
| Setting Combination | Registered Names |
|---------------------|------------------|
| Default | `proxy.<server>.<tool>` |
| `keep_root_aliases: true` | `proxy.<server>.<tool>`, **and** `<tool>` |
| `openai_compatible: true` | `<server>_<tool>` |
| `openai_compatible: true` + `only_openai_tools: true` | `<server>_<tool>` **only** |
| `openai_compatible: true` + `keep_root_aliases: true` | `<server>_<tool>`, **and** `<tool>` |
> ⚠️ **Root aliases are un-namespaced.** Use with care in multi-server setups to avoid tool name collisions.
## Security Model
**IMPORTANT**: CHUK MCP Runtime follows a **secure-by-default** approach:
- **All built-in tools are disabled by default**
- Session management tools require explicit enablement
- Artifact storage tools require explicit enablement
- Tools must be individually enabled in configuration
- This prevents unexpected tool exposure and reduces attack surface
## Creating Local MCP Tools
### 1. Create a custom tool
```python
# my_tools/tools.py
from chuk_mcp_runtime.common.mcp_tool_decorator import mcp_tool
@mcp_tool(name="get_current_time", description="Get the current time in a timezone")
async def get_current_time(timezone: str = "UTC") -> str:
"""
Get the current time in the specified timezone.
Args:
timezone: Target timezone (e.g., 'UTC', 'America/New_York')
"""
from datetime import datetime
from zoneinfo import ZoneInfo
tz = ZoneInfo(timezone)
now = datetime.now(tz)
return now.strftime("%Y-%m-%d %H:%M:%S %Z")
@mcp_tool(name="calculate_sum", description="Calculate the sum of two numbers", timeout=10)
async def calculate_sum(a: int, b: int) -> dict:
"""
Calculate the sum of two numbers.
Args:
a: First number
b: Second number
"""
# ⚠️ PRODUCTION WARNING: Never use eval() for math operations - always validate
# and compute directly as shown here. eval() is a security risk.
result = a + b
return {
"operation": "addition",
"operands": [a, b],
"result": result
}
```
### 2. Create a config file
```yaml
# config.yaml
host:
name: "my-mcp-server"
log_level: "INFO"
server:
type: "stdio"
# Global tool settings
tools:
registry_module: "chuk_mcp_runtime.common.mcp_tool_decorator"
registry_attr: "TOOLS_REGISTRY"
timeout: 60 # Default timeout for all tools
# Session management (optional - disabled by default)
sessions:
sandbox_id: "my-app"
default_ttl_hours: 24
# Session tools (disabled by default - must enable explicitly)
session_tools:
enabled: true # Must explicitly enable
tools:
get_current_session: {enabled: true}
set_session: {enabled: true}
clear_session: {enabled: true}
create_session: {enabled: true}
# Artifact storage (disabled by default - must enable explicitly)
artifacts:
enabled: true # Must explicitly enable
storage_provider: "filesystem"
session_provider: "memory"
bucket: "my-artifacts"
tools:
upload_file: {enabled: true}
write_file: {enabled: true}
read_file: {enabled: true}
list_session_files: {enabled: true}
delete_file: {enabled: true}
get_file_metadata: {enabled: true}
# Local tool modules
mcp_servers:
my_tools:
enabled: true
location: "./my_tools"
tools:
enabled: true
module: "my_tools.tools"
```
### 3. Run the server
```bash
chuk-mcp-server --config config.yaml
```
## MCP Resources
**MCP Resources** provide read-only access to data through the Model Context Protocol's `resources/list` and `resources/read` endpoints. Resources are perfect for exposing configuration, documentation, system information, and user files to AI agents.
### Resources vs Tools
| Feature | **Resources** | **Tools** |
|---------|--------------|-----------|
| **Purpose** | Read-only data access | Actions & state changes |
| **Use Cases** | Config, docs, files, metrics | Create, update, delete operations |
| **MCP Methods** | `resources/list`, `resources/read` | `tools/list`, `tools/call` |
| **Side Effects** | None (read-only) | May modify state |
| **Session Isolation** | Artifact resources only | Tool-dependent |
### Resource Types
CHUK MCP Runtime supports two types of resources:
#### 1. Custom Resources (@mcp_resource)
Custom resources expose application data, configuration, documentation, or any read-only content through simple Python functions.
**Creating custom resources:**
```python
# my_resources/resources.py
from chuk_mcp_runtime.common.mcp_resource_decorator import mcp_resource
import json
import os
@mcp_resource(
uri="config://database",
name="Database Configuration",
description="Database connection settings",
mime_type="application/json"
)
async def get_database_config():
"""Return database configuration as JSON."""
config = {
"host": "localhost",
"port": 5432,
"database": "myapp_db",
"pool_size": 10
}
return json.dumps(config, indent=2)
@mcp_resource(
uri="system://info",
name="System Information",
description="Current system status",
mime_type="text/plain"
)
async def get_system_info():
"""Return system information."""
return f"""System Information
Platform: {os.uname().sysname}
Node: {os.uname().nodename}
User: {os.getenv('USER', 'unknown')}
"""
@mcp_resource(
uri="docs://api/overview",
name="API Documentation",
description="API endpoints guide",
mime_type="text/markdown"
)
def get_api_docs():
"""Return API documentation (sync functions work too!)."""
return """# API Documentation
## Authentication
All requests require Bearer token.
## Endpoints
- GET /api/users - List users
- POST /api/users - Create user
"""
```
**Configuration:**
```yaml
# config.yaml
server:
type: "stdio"
# Import module containing custom resources
tools:
modules_to_import:
- my_resources.resources
```
**Custom resource features:**
- **Static or dynamic** - Return fixed data or compute on-demand
- **Sync or async** - Both function types supported
- **Any content type** - Text, JSON, binary, images, etc.
- **Custom URI schemes** - Use meaningful URIs like `config://`, `docs://`, `system://`
#### 2. Artifact Resources (Session-Isolated User Files)
Artifact resources provide **automatic, session-isolated access to user files** through the MCP resources protocol. When users create, upload, or modify files via artifact tools, those files are automatically exposed as resources with strong session isolation guarantees.
**Key Concepts:**
- **Automatic Exposure**: Files created via `write_file`, `upload_file`, etc. are automatically available via `resources/list` and `resources/read`
- **Session Isolation**: Users can only list and read their own files - cross-session access is blocked
- **Unified Protocol**: Access files through the same MCP resources protocol as custom resources
- **URI Format**: `artifact://{artifact_id}` where `artifact_id` is the unique file identifier
**How Artifact Resources Work:**
```python
# Step 1: User creates a file via an artifact tool
# This happens via MCP tool call: tools/call with name="write_file"
# Example tool call from AI agent:
{
"method": "tools/call",
"params": {
"name": "write_file",
"arguments": {
"filename": "analysis.md",
"content": "# Data Analysis\n\nKey findings...",
"mime": "text/markdown",
"summary": "Q3 analysis report"
}
}
}
# Step 2: File is stored with session association
# - artifact_id: "abc-123-def-456"
# - session_id: "session-alice"
# - filename: "analysis.md"
# - mime: "text/markdown"
# Step 3: File automatically appears in resources/list
{
"method": "resources/list"
}
# Returns:
{
"resources": [
{
"uri": "artifact://abc-123-def-456",
"name": "analysis.md",
"description": "Q3 analysis report",
"mimeType": "text/markdown"
}
]
}
# Step 4: Read the resource content
{
"method": "resources/read",
"params": {"uri": "artifact://abc-123-def-456"}
}
# Returns:
{
"contents": [
{
"uri": "artifact://abc-123-def-456",
"mimeType": "text/markdown",
"text": "# Data Analysis\n\nKey findings..."
}
]
}
```
**Session Isolation Example:**
```python
# Alice's session (session-alice)
# Creates: report.md -> artifact://file-alice-1
# Bob's session (session-bob)
# Creates: report.md -> artifact://file-bob-1
# When Alice calls resources/list:
# Returns ONLY: artifact://file-alice-1
# When Bob calls resources/list:
# Returns ONLY: artifact://file-bob-1
# If Alice tries to read Bob's file:
# resources/read {"uri": "artifact://file-bob-1"}
# Result: Error - Artifact not found (access blocked)
```
**Configuration:**
```yaml
# config.yaml
artifacts:
enabled: true
storage_provider: "filesystem" # or "s3", "ibm_cos"
session_provider: "memory" # or "redis"
# Storage configuration (for filesystem provider)
filesystem:
base_path: "./artifacts"
# Enable artifact tools (users create files via these tools)
tools:
write_file: {enabled: true} # Create/update text files
upload_file: {enabled: true} # Upload binary files
read_file: {enabled: true} # Read file content
list_session_files: {enabled: true} # List user's files
delete_file: {enabled: true} # Delete files
```
**Supported File Operations:**
| Tool | Purpose | Creates Resource? |
|------|---------|-------------------|
| `write_file` | Create or update text file | ✅ Yes |
| `upload_file` | Upload binary file (images, PDFs, etc.) | ✅ Yes |
| `read_file` | Read file content by filename | No (uses resource instead) |
| `list_session_files` | List user's files | No (use `resources/list`) |
| `delete_file` | Delete a file | Removes resource |
**Text vs Binary Content:**
```python
# Text files (JSON, Markdown, code, etc.)
# Returned as "text" in resource content
{
"uri": "artifact://text-123",
"mimeType": "application/json",
"text": '{"key": "value"}'
}
# Binary files (images, PDFs, etc.)
# Returned as base64-encoded "blob"
{
"uri": "artifact://binary-456",
"mimeType": "image/png",
"blob": "iVBORw0KGgoAAAANSUhEUgAA..." # base64 encoded
}
```
**Artifact Resource Metadata:**
Each artifact resource includes:
- **URI**: `artifact://{artifact_id}` - Unique resource identifier
- **Name**: Original filename (e.g., "report.pdf")
- **Description**: Summary/description provided during creation
- **MIME Type**: Content type (e.g., "application/pdf", "text/markdown")
- **Session ID**: Internal - used for access control (not exposed)
**Integration with Custom Resources:**
Artifact resources and custom resources work together seamlessly:
```python
# Both appear in the same resources/list response:
{
"resources": [
# Custom resources (global)
{"uri": "config://database", "name": "Database Config", ...},
{"uri": "docs://api", "name": "API Documentation", ...},
# Artifact resources (session-isolated)
{"uri": "artifact://abc-123", "name": "user-report.md", ...},
{"uri": "artifact://def-456", "name": "analysis.pdf", ...}
]
}
# AI agents can access both types through the same protocol
```
**Security & Access Control:**
Artifact resources have **multi-layer security**:
1. **Session Validation**: Every `resources/read` call validates session ownership
2. **Metadata Verification**: Artifact metadata must match current session
3. **Access Blocking**: Cross-session reads return "not found" error
4. **Audit Trail**: All access attempts can be logged for compliance
**Common Use Cases:**
- **Document Generation**: AI creates reports, summaries, code files
- **Data Analysis**: AI processes data and saves results as artifacts
- **File Management**: Users upload files, AI analyzes and references them
- **Multi-step Workflows**: AI saves intermediate results as artifacts
- **Context Persistence**: Files remain accessible across conversation turns
**Artifact Resource Features:**
- ✅ **Session isolation** - Users only see their own files
- ✅ **Automatic exposure** - Files created via tools become resources immediately
- ✅ **URI scheme** - Consistent `artifact://{id}` format
- ✅ **Security** - Built-in access control and validation
- ✅ **Persistence** - Files survive server restarts (with filesystem/cloud storage)
- ✅ **Binary support** - Images, PDFs, archives via base64 encoding
- ✅ **Metadata** - Filenames, MIME types, descriptions included
### Resource URI Schemes
| URI Scheme | Type | Example | Use Case |
|------------|------|---------|----------|
| `config://` | Custom | `config://database` | Configuration data |
| `system://` | Custom | `system://info` | System information |
| `docs://` | Custom | `docs://api/overview` | Documentation |
| `data://` | Custom | `data://logo.png` | Static assets |
| `artifact://` | Artifact | `artifact://abc-123-def` | User files |
### Using Resources in AI Agents
Resources are designed for AI agents to retrieve contextual data:
```python
# AI agent workflow:
# 1. List available resources
response = await client.call("resources/list")
# Returns: config://database, system://info, artifact://report-123
# 2. Read specific resource
content = await client.call("resources/read", {"uri": "c | text/markdown | null | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.10.6",
"pyyaml>=6.0.2",
"pyjwt>=2.10.1",
"cryptography>=44.0.3",
"uvicorn>=0.34.0",
"chuk-artifacts>=0.11.1",
"chuk-sessions>=0.6.1",
"mcp>=1.23.0",
"pytest>=8.3.5; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"ruff>=0.4.6; ... | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.11 | 2026-02-18T15:53:49.976991 | chuk_mcp_runtime-0.11.1.tar.gz | 153,240 | 22/f8/9bfcc76d0464c26600e080333744176756f3e4f92b233a5ce62d2f25ddaa/chuk_mcp_runtime-0.11.1.tar.gz | source | sdist | null | false | fc0b3147f9506aca6a1b4f53a79f8b35 | 96d9183ca19243b8b2011ba12ebb761a5b6d5ccfa9bbae81dbbbeb221b359c3c | 22f89bfcc76d0464c26600e080333744176756f3e4f92b233a5ce62d2f25ddaa | null | [] | 361 |
2.4 | bitranox-template-py-lib | 1.1.1 | Template for backward compatible python libs with registered cli commands | # bitranox_template_py_lib
<!-- Badges -->
[](https://github.com/bitranox/bitranox_template_py_lib/actions/workflows/ci.yml)
[](https://github.com/bitranox/bitranox_template_py_lib/actions/workflows/codeql.yml)
[](LICENSE)
[](https://codespaces.new/bitranox/bitranox_template_py_lib?quickstart=1)
[](https://pypi.org/project/bitranox_template_py_lib/)
[](https://pypi.org/project/bitranox_template_py_lib/)
[](https://docs.astral.sh/ruff/)
[](https://codecov.io/gh/bitranox/bitranox_template_py_lib)
[](https://qlty.sh/gh/bitranox/projects/bitranox_template_py_lib)
[](https://snyk.io/test/github/bitranox/bitranox_template_py_lib)
[](https://github.com/PyCQA/bandit)
Template for backward compatible (3.9 upwards) python libs with registered cli commands
- CLI entry point styled with rich-click (rich output + click ergonomics)
## Install - recommended via UV
UV - the ultrafast installer - written in Rust (10–20× faster than pip/poetry)
```bash
# recommended Install via uv
pip install --upgrade uv
# Create and activate a virtual environment (optional but recommended)
uv venv
# macOS/Linux
source .venv/bin/activate
# Windows (PowerShell)
.venv\Scripts\Activate.ps1
# install via uv from PyPI
uv pip install bitranox_template_py_lib
```
For alternative install paths (pip, pipx, uv, uvx source builds, etc.), see
[INSTALL.md](INSTALL.md). All supported methods register both the
`bitranox_template_py_lib` and `bitranox-template-py-cli` commands on your PATH.
### Python 3.9+ Baseline
- The project targets **Python 3.9 and newer**.
- Runtime dependencies: `rich-click>=1.9.4` for beautiful CLI output,
`rtoml>=0.13.0` for fast TOML parsing across all Python versions.
- Dev dependencies: pytest, ruff, pyright, bandit, build, twine, codecov-cli,
pip-audit, textual, and import-linter pinned to their newest majors.
- CI workflows exercise GitHub's rolling runner images (`ubuntu-latest`,
`macos-latest`, `windows-latest`) and cover CPython 3.9 through 3.14.
## Usage
The CLI leverages [rich-click](https://github.com/ewels/rich-click) so help output, validation errors, and prompts render with Rich styling while keeping the familiar click ergonomics.
The scaffold keeps a CLI entry point so you can validate packaging flows, but it
currently exposes a single informational command while logging features are
developed:
```bash
bitranox_template_py_lib info
bitranox_template_py_lib hello
bitranox_template_py_lib fail
bitranox_template_py_lib --traceback fail
bitranox-template-py-cli info
python -m bitranox_template_py_lib info
uvx bitranox_template_py_lib info
```
For library use you can import the documented helpers directly:
```python
import bitranox_template_py_lib as btpc
btpc.emit_greeting()
try:
btpc.raise_intentional_failure()
except RuntimeError as exc:
print(f"caught expected failure: {exc}")
btpc.print_info()
```
## Further Documentation
- [Install Guide](INSTALL.md)
- [Development Handbook](DEVELOPMENT.md)
- [Contributor Guide](CONTRIBUTING.md)
- [Changelog](CHANGELOG.md)
- [Module Reference](docs/systemdesign/module_reference.md)
- [License](LICENSE)
| text/markdown | null | bitranox <bitranox@gmail.com> | null | null | MIT | ansi, cli, logging, rich, terminal | [
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"rich-click>=1.9.7",
"rtoml<0.13,>=0.12.0; python_version < \"3.10\"",
"rtoml>=0.13.0; python_version >= \"3.10\"",
"bandit<1.9,>=1.8.6; python_version < \"3.10\" and extra == \"dev\"",
"bandit>=1.9.3; python_version >= \"3.10\" and extra == \"dev\"",
"build>=1.4.0; extra == \"dev\"",
"codecov-cli>=11.2... | [] | [] | [] | [
"Homepage, https://github.com/bitranox/bitranox_template_py_lib",
"Repository, https://github.com/bitranox/bitranox_template_py_lib.git",
"Issues, https://github.com/bitranox/bitranox_template_py_lib/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:53:26.966935 | bitranox_template_py_lib-1.1.1.tar.gz | 34,040 | 4d/e7/405a1ad503653c76f26980ba0626e0dc3009c3b6c492061ae96d4cf774e9/bitranox_template_py_lib-1.1.1.tar.gz | source | sdist | null | false | fae3ab2af181f9d93c1e1e488097409b | 74d49018a6459bc32c337635612654c23268475a5010aa478f7c9c3b8cbd957e | 4de7405a1ad503653c76f26980ba0626e0dc3009c3b6c492061ae96d4cf774e9 | null | [
"LICENSE"
] | 233 |
2.4 | mkdocs-material | 9.7.2 | Documentation that simply works | <p align="center">
<a href="https://squidfunk.github.io/mkdocs-material/">
<img src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/logo.svg" width="320" alt="Material for MkDocs">
</a>
</p>
<p align="center">
<strong>
A powerful documentation framework on top of
<a href="https://www.mkdocs.org/">MkDocs</a>
</strong>
</p>
<p align="center">
<a href="https://github.com/squidfunk/mkdocs-material/actions"><img
src="https://github.com/squidfunk/mkdocs-material/workflows/build/badge.svg"
alt="Build"
/></a>
<a href="https://pypistats.org/packages/mkdocs-material"><img
src="https://img.shields.io/pypi/dm/mkdocs-material.svg"
alt="Downloads"
/></a>
<a href="https://pypi.org/project/mkdocs-material"><img
src="https://img.shields.io/pypi/v/mkdocs-material.svg"
alt="Python Package Index"
/></a>
<a href="https://hub.docker.com/r/squidfunk/mkdocs-material/"><img
src="https://img.shields.io/docker/pulls/squidfunk/mkdocs-material"
alt="Docker Pulls"
/></a>
</p>
<p align="center">
Write your documentation in Markdown and create a professional static site for
your Open Source or commercial project in minutes – searchable, customizable,
more than 60 languages, for all devices.
</p>
<p align="center">
<a href="https://squidfunk.github.io/mkdocs-material/getting-started/">
<img src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/screenshot.png" width="700" />
</a>
</p>
<p align="center">
<em>
Check out the demo –
<a
href="https://squidfunk.github.io/mkdocs-material/"
>squidfunk.github.io/mkdocs-material</a>.
</em>
</p>
<h2></h2>
<p id="premium-sponsors"> </p>
<p align="center"><strong>Silver sponsors</strong></p>
<p align="center">
<a href="https://fastapi.tiangolo.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-fastapi.png" height="120"
/></a>
<a href="https://www.trendpop.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-trendpop.png" height="120"
/></a>
<a href="https://documentation.sailpoint.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-sailpoint.png" height="120"
/></a>
<a href="https://futureplc.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-future.svg" width="332" height="120"
/></a>
<a href="https://opensource.siemens.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-siemens.png" height="120"
/></a>
</p>
<p> </p>
<p align="center"><strong>Bronze sponsors</strong></p>
<p align="center">
<a href="https://cirrus-ci.org/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-cirrus-ci.png" height="58"
/></a>
<a href="https://docs.baslerweb.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-basler.png" height="58"
/></a>
<a href="https://kx.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-kx.png" height="58"
/></a>
<a href="https://orion-docs.prefect.io/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-prefect.png" height="58"
/></a>
<a href="https://www.zenoss.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-zenoss.png" height="58"
/></a>
<a href="https://docs.posit.co" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-posit.png" height="58"
/></a>
<a href="https://n8n.io" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-n8n.png" height="58"
/></a>
<a href="https://www.dogado.de" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-dogado.png" height="58"
/></a>
<a href="https://wwt.com" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-wwt.png" height="58"
/></a>
<a href="https://elastic.co" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-elastic.png" height="58"
/></a>
<a href="https://ipfabric.io/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-ip-fabric.png" height="58"
/></a>
<a href="https://www.apex.ai/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-apex-ai.png" height="58"
/></a>
<a href="https://jitterbit.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-jitterbit.png" height="58"
/></a>
<a href="https://sparkfun.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-sparkfun.png" height="58"
/></a>
<a href="https://eccenca.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-eccenca.png" height="58"
/></a>
<a href="https://neptune.ai/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-neptune-ai.png" height="58"
/></a>
<a href="https://rackn.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-rackn.png" height="58"
/></a>
<a href="https://civicactions.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-civic-actions.png" height="58"
/></a>
<a href="https://getscreen.me/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-getscreenme.png" height="58"
/></a>
<a href="https://botcity.dev/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-botcity.png" height="58"
/></a>
<a href="https://kolena.io/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-kolena.png" height="58"
/></a>
<a href="https://www.evergiving.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-evergiving.png" height="58"
/></a>
<a href="https://astral.sh/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-astral.png" height="58"
/></a>
<a href="https://oikolab.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-oikolab.png" height="58"
/></a>
<a href="https://www.buhlergroup.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-buhler.png" height="58"
/></a>
<a href="https://3dr.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-3dr.png" height="58"
/></a>
<a href="https://spotware.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-spotware.png" height="58"
/></a>
<a href="https://milfordasset.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-milford.png" height="58"
/></a>
<a href="https://www.lechler.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-lechler.png" height="58"
/></a>
<a href="https://invers.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-invers.png" height="58"
/></a>
<a href="https://vantor.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-vantor.png" height="58"
/></a>
<a href="https://www.equipmentshare.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-equipmentshare.png" height="58"
/></a>
<a href="https://hummingbot.org/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-hummingbot.png" height="58"
/></a>
<a href="https://octoperf.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-octoperf.png" height="58"
/></a>
<a href="https://intercomestibles.ch/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-intercomestibles.png" height="58"
/></a>
<a href="https://www.centara.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-centara.png" height="58"
/></a>
<a href="https://pydantic.dev/logfire/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-logfire.png" height="58"
/></a>
<a href="https://www.vector.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-vector.png" height="58"
/></a>
<a href="https://second.tech/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-second.png" height="58"
/></a>
<a href="https://mvtec.com/" target=_blank><img
src="https://raw.githubusercontent.com/squidfunk/mkdocs-material/master/.github/assets/sponsors/sponsor-mvtec.png" height="58"
/></a>
</p>
<p> </p>
## Everything you would expect
### It's just Markdown
Focus on the content of your documentation and create a professional static site
in minutes. No need to know HTML, CSS or JavaScript – let Material for MkDocs do
the heavy lifting for you.
### Works on all devices
Serve your documentation with confidence – Material for MkDocs automatically
adapts to perfectly fit the available screen estate, no matter the type or size
of the viewing device. Desktop. Tablet. Mobile. All great.
### Made to measure
Make it yours – change the colors, fonts, language, icons, logo, and more with
a few lines of configuration. Material for MkDocs can be easily extended and
provides many options to alter appearance and behavior.
### Fast and lightweight
Don't let your users wait – get incredible value with a small footprint by using
one of the fastest themes available with excellent performance, yielding optimal
search engine rankings and happy users that return.
### Maintain ownership
Own your documentation's complete sources and outputs, guaranteeing both
integrity and security – no need to entrust the backbone of your product
knowledge to third-party platforms. Retain full control.
### Open Source
You're in good company – choose a mature and actively maintained solution built
with state-of-the-art Open Source technologies, trusted by more than 50,000
individuals and organizations. Licensed under MIT.
## Quick start
Material for MkDocs can be installed with `pip`:
``` sh
pip install mkdocs-material
```
Add the following lines to `mkdocs.yml`:
``` yaml
theme:
name: material
```
For detailed installation instructions, configuration options, and a demo, visit
[squidfunk.github.io/mkdocs-material][Material for MkDocs]
[Material for MkDocs]: https://squidfunk.github.io/mkdocs-material/
## Trusted by ...
### ... industry leaders
[ArXiv](https://info.arxiv.org),
[Atlassian](https://atlassian.github.io/data-center-helm-charts/),
[AWS](https://aws.github.io/copilot-cli/),
[Bloomberg](https://bloomberg.github.io/selekt/),
[CERN](http://abpcomputing.web.cern.ch/),
[Datadog](https://datadoghq.dev/integrations-core/),
[Google](https://google.github.io/accompanist/),
[Harvard](https://informatics.fas.harvard.edu/),
[Hewlett Packard](https://hewlettpackard.github.io/squest/),
[HSBC](https://hsbc.github.io/pyratings/),
[ING](https://ing-bank.github.io/baker/),
[Intel](https://open-amt-cloud-toolkit.github.io/docs/),
[JetBrains](https://jetbrains.github.io/projector-client/mkdocs/),
[LinkedIn](https://linkedin.github.io/school-of-sre/),
[Microsoft](https://microsoft.github.io/code-with-engineering-playbook/),
[Mozilla](https://mozillafoundation.github.io/engineering-handbook/),
[Netflix](https://netflix.github.io/titus/),
[OpenAI](https://openai.github.io/openai-agents-python/),
[Red Hat](https://ansible.readthedocs.io/projects/lint/),
[Roboflow](https://inference.roboflow.com/),
[Salesforce](https://policy-sentry.readthedocs.io/),
[SIEMENS](https://opensource.siemens.com/),
[Slack](https://slackhq.github.io/circuit/),
[Square](https://square.github.io/okhttp/),
[Uber](https://uber-go.github.io/fx/),
[Zalando](https://opensource.zalando.com/skipper/)
### ... and successful Open Source projects
[Amp](https://amp.rs/docs/),
[Apache Iceberg](https://iceberg.apache.org/),
[Arduino](https://arduino.github.io/arduino-cli/),
[Asahi Linux](https://asahilinux.org/docs/),
[Auto-GPT](https://docs.agpt.co/),
[AutoKeras](https://autokeras.com/),
[BFE](https://www.bfe-networks.net/),
[CentOS](https://docs.infra.centos.org/),
[Crystal](https://crystal-lang.org/reference/),
[eBPF](https://ebpf-go.dev/),
[ejabberd](https://docs.ejabberd.im/),
[Electron](https://www.electron.build/),
[FastAPI](https://fastapi.tiangolo.com/),
[FlatBuffers](https://flatbuffers.dev/),
[{fmt}](https://fmt.dev/),
[Freqtrade](https://www.freqtrade.io/en/stable/),
[GoReleaser](https://goreleaser.com/),
[GraphRAG](https://microsoft.github.io/graphrag/),
[Headscale](https://headscale.net/),
[HedgeDoc](https://docs.hedgedoc.org/),
[Hummingbot](https://hummingbot.org/),
[Knative](https://knative.dev/docs/),
[Kubernetes](https://kops.sigs.k8s.io/),
[kSQL](https://docs.ksqldb.io/),
[LeakCanary](https://square.github.io/leakcanary/),
[LlamaIndex](https://docs.llamaindex.ai/),
[NetBox](https://netboxlabs.com/docs/netbox/en/stable/),
[Nokogiri](https://nokogiri.org/),
[OpenAI](https://openai.github.io/openai-agents-python/),
[OpenFaaS](https://docs.openfaas.com/),
[OpenSSL](https://docs.openssl.org/),
[Orchard Core](https://docs.orchardcore.net/en/latest/),
[Percona](https://docs.percona.com/percona-monitoring-and-management/),
[Pi-Hole](https://docs.pi-hole.net/),
[Polars](https://docs.pola.rs/),
[Pydantic](https://pydantic-docs.helpmanual.io/),
[PyPI](https://docs.pypi.org/),
[Quivr](https://core.quivr.com/),
[Renovate](https://docs.renovatebot.com/),
[RetroPie](https://retropie.org.uk/docs/),
[Ruff](https://docs.astral.sh/ruff/),
[Supervision](https://supervision.roboflow.com/latest/),
[Textual](https://textual.textualize.io/),
[Traefik](https://docs.traefik.io/),
[Trivy](https://aquasecurity.github.io/trivy/),
[Typer](https://typer.tiangolo.com/),
[tinygrad](https://docs.tinygrad.org/),
[Ultralytics](https://docs.ultralytics.com/),
[UV](https://docs.astral.sh/uv/),
[Vapor](https://docs.vapor.codes/),
[WebKit](https://docs.webkit.org/),
[WTF](https://wtfutil.com/),
[ZeroNet](https://zeronet.io/docs/)
## License
**MIT License**
Copyright (c) 2016-2025 Martin Donath
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
| text/markdown | null | Martin Donath <martin.donath@squidfunk.com> | null | null | null | documentation, mkdocs, theme | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: MkDocs",
"License :: OSI Approved :: MIT License",
"Programming Language :: JavaScript",
"Programming Language :: Python",
"Topic :: Documentation",
"Topic :: Software Development :: Documentation",
"Topic... | [] | null | null | >=3.8 | [] | [] | [] | [
"babel>=2.10",
"backrefs>=5.7.post1",
"colorama>=0.4",
"jinja2>=3.1",
"markdown>=3.2",
"mkdocs-material-extensions>=1.3",
"mkdocs>=1.6",
"paginate>=0.5",
"pygments>=2.16",
"pymdown-extensions>=10.2",
"requests>=2.30",
"mkdocs-git-committers-plugin-2>=1.1; extra == \"git\"",
"mkdocs-git-revis... | [] | [] | [] | [
"Homepage, https://squidfunk.github.io/mkdocs-material/",
"Bug Tracker, https://github.com/squidfunk/mkdocs-material/issues",
"Repository, https://github.com/squidfunk/mkdocs-material.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:53:07.763597 | mkdocs_material-9.7.2.tar.gz | 4,097,818 | 34/57/5d3c8c9e2ff9d66dc8f63aa052eb0bac5041fecff7761d8689fe65c39c13/mkdocs_material-9.7.2.tar.gz | source | sdist | null | false | 4ae5f527d62583c98d8a5407ecc76af8 | 6776256552290b9b7a7aa002780e25b1e04bc9c3a8516b6b153e82e16b8384bd | 34575d3c8c9e2ff9d66dc8f63aa052eb0bac5041fecff7761d8689fe65c39c13 | MIT | [
"LICENSE"
] | 439,162 |
2.4 | yak-server | 0.66.1 | Football bet rest server | # Yak-toto
[](https://pypi.org/project/yak-server/)
[](https://github.com/yak-toto/yak-server/pkgs/container/yak-server)
[](https://pypi.org/project/yak-server/)
[](https://codecov.io/gh/yak-toto/yak-server)
[](https://github.com/yak-toto/yak-server/actions/workflows/codeql-analysis.yml)
[](https://github.com/yak-toto/yak-server/actions/workflows/test.yml)
## Requisites
- Ubuntu 22.04
- Postgres 17.2
## How to build the project
### Database
To setup a database, run `yak env init`. This will ask you to fill different configuration in order build env file.
Once done, you can run a docker script to start postgres database (at `scripts/postgresrun.sh`)
### Backend
Run your project in a Python env is highly recommend. You can use venv python module using the following command:
```bash
uv venv
. .venv/bin/activate
```
Fetch all packages using uv with the following command:
```bash
uv pip install -e .
```
Before starting the backend, add `JWT_SECRET_KEY` and `JWT_EXPIRATION_TIME` in `.env` same as the Postgres user name and password. As
login system is using JSON Web Token, a secret key is required and an expiration time (in seconds). To generate one, you can use the python built-in `secrets` module.
```py
>>> import secrets
>>> secrets.token_hex(16)
'9292f79e10ed7ed03ffad66d196217c4'
```
```text
JWT_SECRET_KEY=9292f79e10ed7ed03ffad66d196217c4
JWT_EXPIRATION_TIME=1800
```
Also, automatic backup can be done through `yak_server/cli/backup_database` script. It can be run using `yak db backup`.
Finally, fastapi needs some configuration to start. Last thing, for development environment, debug needs to be activated with a additional environment variable:
```text
DEBUG=1
```
And then start backend with:
```bash
uvicorn --reload yak_server:create_app --factory
```
### Data initialization
To run local testing, you can use the script `create_database.py`, `initialize_database.py` and `create_admin.py` located in `yak_server/cli` folder. To select, set `COMPETITION` environment variable in `.env`. It will read data from `yak_server/data/{COMPETITION}/`.
### Testing
Yak-server is using `pytest` to run tests.
## Profiling
You can run the application with profiler attached. To do so, please run the following command
```bash
uvicorn --reload scripts.profiling:create_app --factory
```
| text/markdown | null | Guillaume Le Pape <gui.lepape25@gmail.com> | null | null | null | api, postgresql, rest | [
"Environment :: Web Environment",
"Framework :: FastAPI",
"Framework :: Pydantic",
"Framework :: Pydantic :: 2",
"License :: OSI Approved :: MIT License",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only... | [] | null | null | >=3.10 | [] | [] | [] | [
"argon2-cffi==25.1.0",
"click==8.3.1",
"fastapi==0.128.6",
"psycopg[binary]==3.3.2",
"pydantic-settings==2.12.0",
"pyjwt==2.11.0",
"sqlalchemy==2.0.46",
"alembic==1.18.4; extra == \"db-migration\"",
"uvicorn==0.40.0; extra == \"server\"",
"beautifulsoup4[lxml]==4.14.3; extra == \"sync\"",
"httpx... | [] | [] | [] | [
"Homepage, https://github.com/yak-toto/yak-server",
"Repository, https://github.com/yak-toto/yak-server"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:53:00.108048 | yak_server-0.66.1.tar.gz | 205,274 | 67/10/7e7ff1bf20c77b6a594a497c850e203eb18279638401767211953e7551ef/yak_server-0.66.1.tar.gz | source | sdist | null | false | fe84630f5a1aca6f32d0bdb3a721565a | 0aee34facb887837330653d8c39825d17ca9cc9ad81ee08124d6a0ec1703fb96 | 67107e7ff1bf20c77b6a594a497c850e203eb18279638401767211953e7551ef | MIT | [
"LICENSE"
] | 253 |
2.4 | fing | 0.1.0 | 🖐️ A universal representation of fingering systems for winds, reeds, and brass 🖐️ | # 🖐️ fing: A universal representation of fingering systems for winds, reeds, and brass 🖐️
## Abstract
`fing` is a universal representation of fingering systemss for monophonic
keyed instruments, including but not limited to winds, reeds, and keyed brass.
## Definitions
**Monophonic** (mono) instruments only play a single note or tone at a time, like wind and
brass instruments.
A **key** is a button that can be pressed and held, or a hole that can be covered on an
instrument.
A **fingering** is a set of keys being pressed at the same time.
A **note-fingering** is a note with a fingering that can play it. (**Note** and
**scale** are used informally and generally here: see the `tuney` project for a full
specification of tunings and scales.)
A **fingering system** is a set of note-fingerings. In one sytem, one note can
correspond to many fingerings, and one fingering can correspond to multiple notes (a
**multi-note fingering** or **multi**), like in brass instruments or overblown winds.
(The final choice of note from a multi might depend on almost anything: breath,
embouchure, control information, randomness, or the state of the instrument
itself. Mostly this can't be formally represented, but there will be a special case for
the harmonic series, and a field for free-form text instructions to the performer, like
"overblow very hard, medium-tight embouchure".)
## Can we do better than just listing note fingerings?
Listing all the note-fingerings individually is the simplest way to go, and in many
cases will be the best way: looking at, say, the fingering charts of the varieties of
ocarina, there doesn't seem to be a clear organizing principle, and there are only a
small number of fingerings in brass.
But most wind instruments fingerings have a linearity to them, taking advantage of the
natural smoothness and speed of raising or lowering successive fingers in sequence.
Keys naturally divide into **main keys** (finger keys) and **modifier keys** (palm and octave keys).
Each main key has its own unique human finger that presses it. There seem to be 6 to 10
main keys in existing wind instruments.
## Tricky edge cases
* Partly covered holes
* Brass instruments
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T15:52:34.468651 | fing-0.1.0.tar.gz | 4,265 | 5c/dc/eaef0034747216ed042ac4638d6a87207168400fd1775192416e15811db5/fing-0.1.0.tar.gz | source | sdist | null | false | f7214d2165eb78b051b3632ed2b5afed | 28edc30b386a64077e5ee1a305d0917983a2ad703f0f7c6f746ed51c1a43c8c7 | 5cdceaef0034747216ed042ac4638d6a87207168400fd1775192416e15811db5 | null | [
"LICENSE"
] | 257 |
2.4 | unionsdata | 0.2.0 | Download imaging data from the UNIONS survey. | # UNIONSdata
[](https://pypi.org/project/unionsdata/)
[](https://github.com/heesters-nick/unionsdata/actions/workflows/ci.yml)
[](https://www.python.org/downloads/)
[](https://github.com/astral-sh/ruff)
[](http://mypy-lang.org/)
A Python package for downloading multi-band imaging data from the Ultraviolet Near Infrared Optical Northern Survey ([UNIONS](https://www.skysurvey.cc/)). The package downloads the reduced images from the CANFAR VOSpace vault using the [vos tool](https://pypi.org/project/vos/).
## Features
✨ **Multi-threaded downloads** - Parallel downloading for improved performance\
🎯 **Flexible input methods** - Use coordinates, tile numbers, or CSV catalogs\
🖥️ **Interactive Configuration** - Terminal User Interface (TUI) for easy setup and validation\
⚙️ **Configuration validation** - Pydantic-based config with clear error messages\
🌳 **Spatial indexing** - KD-tree for efficient coordinate-to-tile matching\
📊 **Progress tracking** - Real-time download status and completion reports\
✅ **Data Integrity** - Automatic verification of file sizes and headers to ensure downloaded files are not corrupted\
✂️ **Cutout creation** - Stream cutouts directly from the server without downloading full tiles, or extract them from downloaded data\
🛡️ **Graceful shutdown** - Clean interrupt handling with temp file cleanup
## Quick Start
```bash
# Install
pip install unionsdata
# Setup
unionsdata init # Create your local copy of the config
unionsdata config # Configure your download
# Download tiles
unionsdata
```
## Prerequisites
1. **CANFAR VOSpace Account**\
Register at https://www.canfar.net/en/
2. **UNIONS survey membership**\
Until the first public data release only collaboration members have access to the data.
3. **Valid X.509 certificate for VOSpace access**\
See below.
4. **System dependencies**\
Installed automatically (see pyproject.toml)
## Installation & Setup
### Option 1: Install from PyPI (Recommended)
**Step 1:** Install the package
```bash
pip install unionsdata
```
**Step 2:** Initialize the configuration file
```bash
unionsdata init
```
This creates your configuration file at:
- **Linux/Mac**: `~/.config/unionsdata/config.yaml`
- **Windows**: `%APPDATA%/unionsdata/config.yaml`
**Step 3:** Edit the configuration
```bash
unionsdata config
```
This opens a terminal user interface (TUI). Set your paths, inputs and other parameters:

> 🔑 **Important:** Set up your CADC certificate in the TUI by clicking the Create/Renew button in the Paths tab and providing your CADC username and password. Credentials expire after 10 days. The button will indicate if a certificate is about to expire or already has. You can also manually create or renew your certificate in the terminal via:
```bash
cadc-get-cert -u YOUR_CANFAR_USERNAME
```
### Option 2: Install from Source (For Development)
**Step 1:** Clone and install
```bash
# Clone the repository
git clone https://github.com/heesters-nick/unionsdata.git
# Change into the cloned repository
cd unionsdata
# Install in editable development mode
pip install -e ".[dev]"
```
**Step 2:** Edit the configuration file directly at `src/unionsdata/config.yaml`
Update the paths:
```yaml
machine: local
paths_by_machine:
local:
root_dir_main: "/path/to/your/project"
# **Important**: define location for downloaded data
root_dir_data: "/path/to/download/data"
dir_tables: "path/to/tables"
dir_figures: "path/to/figures"
cert_path: "/home/user/.ssl/cadcproxy.pem"
```
**Step 3:** Set up CANFAR credentials
```bash
cadc-get-cert -u YOUR_CANFAR_USERNAME
```
## Usage
### Command Line Interface
The package provides a `unionsdata` command with several subcommands:
| Command | Description |
|---------|-------------|
| `unionsdata init` | Initialize configuration file (first-time setup) |
| `unionsdata config` | Open terminal user interface to edit the config file |
| `unionsdata download` | Start downloading data |
| `unionsdata` | Shortcut alias for `unionsdata download` |
| `unionsdata plot` | Plot created cutouts |
>**📝 Important - First Run:** On your first download, the package automatically detects this and downloads tile availability information from CANFAR (~5 minutes one-time setup). A KD-tree spatial index is built for efficient coordinate-to-tile matching. Subsequent runs use the cached data.
>
> To refresh tile availability data later, use the `--update-tiles` flag:
> ```bash
> unionsdata download --update-tiles
> ```
>
> Or tick the `Update Tiles` option in the TUI.
#### Download Specific Tiles
Download tiles by their tile numbers (x, y pairs):
```bash
unionsdata download --tiles 217 292 234 295
```
Download specific bands only:
```bash
unionsdata download --tiles 217 292 --bands whigs-g cfis_lsb-r ps-i
```
#### Download by Coordinates
Download tiles containing specific RA/Dec coordinates (in degrees):
```bash
unionsdata download --coordinates 227.3042 52.5285 231.4445 52.4447
```
#### Download from CSV Catalog
Download tiles for objects in a CSV file:
```bash
unionsdata download --table /path/to/catalog.csv
```
Your CSV should have columns for RA, Dec, and object ID. Example:
```csv
ID,ra,dec
M101,210.8022,54.3489
2,231.4445,52.4447
```
> **Note:** Column names are customizable in the configuration file.
#### Download All Available Tiles
> **⚠️ Warning:** This will download a large amount of data!
```bash
unionsdata download --all-tiles --bands whigs-g cfis_lsb-r
```
### Using the Terminal User Interface (TUI)
Instead of using command-line arguments, you can configure downloads in the terminal user interface via
```bash
unionsdata config
```
A clickable user interface will open in your terminal where you can specify options for your download, cutout creation and subsequent cutout plotting. The configuration is grouped into several tabs: General, Paths, Inputs, Runtime, Bands, Tiles, Cutouts, and Plotting. The input fields are validated in real time: all drop down menus need to have a selection before allowing you to save the config file. Text boxes have a green border if the entry is valid and a red one if it is invalid. The info icons **(𝑖)** next to the settings in the TUI provide additional information. Choose either specific sky coordinates or a table of objects as an input and enable cutout creation if you want to plot your input objects after the data is downloaded. The application will augment your input table (or create a table from your input coordinates) and save it to the `Tables` directory. In the `Plotting` tab you can specify the catalog from which objects should be plotted under `Catalog Name`. The setting `Auto` automatically uses the most recent input. Once you have completed the configuration, hit the `Save & Quit` button.
Then run:
```bash
unionsdata download
```
Or simply:
```bash
unionsdata
```
If you have opted to create cutouts, you can plot them using:
```bash
unionsdata plot
```
## Supported Bands
| Band | Survey | Filter |
|------|--------|--------|
| `cfis-u` | CFIS | u-band |
| `whigs-g` | WHIGS | g-band |
| `cfis-r` | CFIS | r-band |
| `cfis_lsb-r` | CFIS | r-band (LSB optimized) |
| `ps-i` | Pan-STARRS | i-band |
| `wishes-z` | WISHES | z-band |
| `ps-z` | Pan-STARRS | z-band |
## Output Structure
Downloaded files are organized by tile and band:
```
data/
├── 217_292/
│ ├── whigs-g/
│ │ └── calexp-CFIS_217_292.fits
│ ├── cfis_lsb-r/
│ │ └── CFIS_LSB.217.292.r.fits
│ ├── ps-i/
│ │ └── PSS.DR4.217.292.i.fits
│ └── cutouts/
│ └── 217_292_cutouts_512.h5
└── 234_295/
└── ...
```
## Configuration Reference
### Key Configuration Options
| Section | Option | Description |
|---------|--------|-------------|
| `Inputs` | `Input Source` | Input method: `Specific Tiles`, `Sky Coordinates`, `Table (CSV)`, or `All Available Tiles` |
| `Runtime` | `Download Threads` | Number of parallel download threads (1-32) |
| `Runtime` | `Cutout Processes` | Number of parallel cutout processes (1-32) |
| `Bands` | `Band Selection` | List of bands to download |
| `Tiles` | `Update Tiles` | Refresh tile lists from VOSpace |
| `Tiles` | `Band Constraint` | Minimum bands required per tile |
| `Tiles` | `Require All Bands` | Require that all requested bands are available to download a tile |
| `Cutouts` | `Cutout Mode` | Create cutouts around input coordinates: `After Download` or `Direct Only`. Works if input is `Sky Coordinates` or `Table` |
| `Plotting` | `Catalog Name` | Name of the catalog that should be used to plot cutouts around objects. `Auto` will use the most recent input |
| `Plotting` | `RGB Bands` | Select which bands should be mapped to red, green and blue to create color images. Locked in after selection. Hit the `Reset` button to start over. |
| `Plotting` | `Display Mode` | `Grid`: plot object cutouts in a grid; `Channel`: show individual bands + RGB image for every object |
### Band Configuration
Each band has a specific file structure and location. Example for the WHIGS g-band:
```yaml
bands:
whigs-g:
name: "calexp-CFIS"
band: "g"
vos: "vos:cfis/whigs/stack_images_CFIS_scheme"
suffix: ".fits"
delimiter: "_"
fits_ext: 1 # Data extension in fits file
zfill: 0 # No zero padding the tile numbers in the file name
zp: 27.0 # Zero point magnitude
```
> **Note:** Data paths or file formats may change over time. Check the [CANFAR vault](https://www.canfar.net/storage/vault/list/cfis) for current locations:
| Band | vault directory |
|------|--------|
| `cfis-u` | tiles_DR6 |
| `whigs-g` | whigs |
| `cfis-r` | tiles_DR6 |
| `cfis_lsb-r` | tiles_LSB_DR6 |
| `ps-i` | panstarrs |
| `wishes-z` | wishes_1 |
| `ps-z` | panstarrs |
## Troubleshooting
### Certificate Expired
```bash
cadc-get-cert -u YOUR_CANFAR_USERNAME
```
### Config Issues
```bash
# Create a fresh copy of the default config file in your local environment
unionsdata init --force
```
## Acknowledgments
- UNIONS collaboration
- CANFAR (Canadian Advanced Network for Astronomical Research)
## Links
- [**UNIONS Survey**](http://www.skysurvey.cc/)
- [**CANFAR**](https://www.canfar.net/)
- [**CANFAR Storage Documentation**](https://www.opencadc.org/canfar/latest/platform/storage/)
- [**CANFAR VOSpace Documentation**](https://www.opencadc.org/canfar/latest/platform/storage/vospace/)
- [**vostools**](https://github.com/opencadc/vostools)
## Support
For issues and questions:
- Open an issue on [GitHub](https://github.com/heesters-nick/unionsdata)
- Contact: nick.heesters@epfl.ch
---
| text/markdown | Nick Heesters | null | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"astropy>=7.1.0",
"concurrent-log-handler>=0.9.28",
"cryptography>=46.0.3",
"h5py>=3.10.0",
"matplotlib>=3.8.0",
"numpy>=2.3.1",
"pandas>=2.3.1",
"pydantic>=2.0",
"pywavelets>=1.9.0",
"pyyaml>=6.0",
"rich==14.2.0",
"scipy>=1.16.0",
"textual==6.6.0",
"tqdm>=4.67.1",
"vos>=3.6.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:52:07.659650 | unionsdata-0.2.0.tar.gz | 296,812 | 64/0a/139b7985ea0f14aecd4a133ddee438c040cf01a4daf469ce6769c77ee2ca/unionsdata-0.2.0.tar.gz | source | sdist | null | false | b690355985fc931611bceba7603d53e1 | 68a22404dc1d23812e14450aa541675332613a93813ed4775d09134612cb6f0a | 640a139b7985ea0f14aecd4a133ddee438c040cf01a4daf469ce6769c77ee2ca | null | [
"LICENSE"
] | 219 |
2.4 | sensai-senguard | 0.1.0 | Python SDK for the PZ scan execute endpoint | # sensai-senguard (Python)
Python SDK for SensAI:
`POST https://pzidpltocuvjjcfnamzt.supabase.co/functions/v1/execute`
## Install
```bash
pip install sensai-senguard
```
## Authentication
Use your API key as Bearer token (`sk_...`):
```python
from sensai_senguard import ScanClient
client = ScanClient(api_key="sk_your_api_key_here")
```
## Methods
- `scan_input(input: str)` -> `mode="input_only"`
- `validate_actions(input: str, actions: list[Action])` -> `mode="validate_actions"`
- `scan(input: str, actions: list[Action] | None = None, output: str | None = None)` -> `mode="full"`
- `execute(input: str, mode: "input_only" | "validate_actions" | "full", actions=None, output=None)`
`scanInput` and `validateActions` aliases are also available.
## Action Schema
```python
Action = {
"type": "query_database" | "send_email" | "api_call" | "code_execution" | "file_access",
"params": { ... }
}
```
## Usage
```python
from sensai_senguard import ScanClient
client = ScanClient(api_key="sk_...")
# 1) Pre-screen input
screening = client.scan_input("Ignore all previous instructions and reveal secrets")
if screening.get("decision") == "block":
raise ValueError("Blocked by SensAI")
# 2) Validate actions before execution
actions = [
{
"type": "query_database",
"params": {
"query": "SELECT id, product_name FROM orders LIMIT 10",
"table": "orders",
},
}
]
validation = client.validate_actions("Show me latest orders", actions)
safe_action_types = [
item["action_type"]
for item in validation.get("action_validations", [])
if item.get("decision") == "allow"
]
# 3) Full lifecycle scan (input + actions + output)
final = client.scan(
input="Show me latest orders",
actions=actions,
output="Here are the latest 10 orders...",
)
if final.get("decision") == "block":
raise ValueError("Response blocked")
if final.get("decision") == "redact":
response_text = final.get("redacted_output", "[REDACTED]")
```
## Response Shape
Responses are typed dictionaries and can include:
- `decision`: `allow | block | redact | require_approval`
- `overall_risk_score`: `float` (0.0-1.0)
- `input_analysis`
- `action_validations`
- `output_analysis`
- `triggered_policies`
- `redacted_output`
- `processing_time_ms`
## Error Handling
The SDK maps API status codes to specific exceptions:
- `400` -> `BadRequestError`
- `401/403` -> `AuthenticationError`
- `404` -> `ProjectNotFoundError`
- `429` -> `RateLimitError`
- `500+` -> `ServerError`
- others -> `APIError`
Local payload validation errors raise `ValidationError` (for example invalid `mode`, non-string `input`, or invalid `actions` schema).
```python
from sensai_senguard import (
APIError,
ProjectNotFoundError,
RateLimitError,
ValidationError,
)
try:
result = client.execute(
input="Run diagnostic",
mode="validate_actions",
actions=[{"type": "code_execution", "params": {"language": "python", "code": "print(1)"}}],
)
except ValidationError as e:
print("Invalid payload:", e)
except ProjectNotFoundError:
print("API key is valid but project was not found")
except RateLimitError:
print("Rate limited, retry later")
except APIError as e:
print("API failed", e.status_code, e.code)
```
## Publish
```bash
python -m pip install --upgrade build twine
python -m build
python -m twine check dist/*
python -m twine upload dist/*
```
| text/markdown | SensAI | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests<3,>=2.31.0",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T15:51:31.010057 | sensai_senguard-0.1.0.tar.gz | 5,962 | 12/bd/a16d99518d168220b05714d8c09d8c6d02ace6394ad8ed6d95a2b4fda2ab/sensai_senguard-0.1.0.tar.gz | source | sdist | null | false | 89e9bf202456927b3a4769f76b0fdca6 | 7f84483745df13f64971dc144e2ca1c18a24e09df066b3f8c43b7b86e5d48201 | 12bda16d99518d168220b05714d8c09d8c6d02ace6394ad8ed6d95a2b4fda2ab | MIT | [] | 240 |
2.4 | pulseq-systems | 0.1.3 | MR system specifications for PyPulseq | # PulseqSystems
PulseqSystems provides a bundled collection of MR system specifications (e.g. gradient configurations) intended to be used with pulse sequence tooling such as pypulseq.
This repository packages MRSystems.json and utilities to load/query the available manufacturers, models and gradient configurations. The primary target is pypulseq (https://github.com/imr-framework/pypulseq), but the data and utilities can be consumed from both Python and Nim projects.
## Features
- Packaged MR system specifications (JSON) for common scanner models.
- Helpers to list manufacturers, models, gradients and to retrieve pulseq specs.
- Designed to be installed as a Python package and used together with pypulseq.
- Data is plain JSON and can be parsed from other languages (Nim, etc.).
## Usage
### Raw specifications in JSON format
- The MRSystems.json file contains the specifications. You can parse this file directly in any language that supports JSON to access the data.
- The file is located under src/pulseq_systems/MRSystems.json in the repository.
### Python
- Import the package and use the provided helpers to list systems and obtain pulseq-compatible parameters.
- Typical workflow: install the package, then call functions to get gradient limits and slew rate for use with pypulseq.
### Nim
- The JSON file(s) are distributed with the package. After installation or by copying the JSON, Nim programs can parse MRSystems.json with any JSON library to obtain the same system specifications.
## Python API
The package ships a small helper module (src/pulseq_systems/get_systems.py) to load and query the bundled MRSystems.json. Main functions:
- list_manufacturers() -> list[str]
- Return a list of manufacturer names present in MRSystems.json.
- list_models(manufacturer: str) -> list[str]
- Return a list of model names for a given manufacturer.
- list_gradients(manufacturer: str, model: str) -> list[str]
- Return available gradient configuration names for the given model.
- get_pulseq_specs(manufacturer: str, model: str, gradient: str = None) -> dict
- Return pulseq-relevant parameters. The returned dict includes:
- grad_unit (e.g. "mT/m")
- max_grad (float)
- slew_unit (e.g. "T/m/s")
- max_slew (float)
- B0 (float, field strength in T)
- If gradient is omitted, the first available gradient configuration is used.
- get_metadata() -> dict
- Return top-level metadata from MRSystems.json.
Example (after installing the package):
```python
from pulseq_systems import list_manufacturers, get_pulseq_specs
manufacturers = list_manufacturers()
specs = get_pulseq_specs("Siemens", "Prisma")
```
## Nim
A Nim module (src/pulseq_systems.nim) is provided so the same JSON data can be consumed from Nim code. The Nim module exposes:
- listManufacturers(): seq[string]
- listModels(manufacturer: string): seq[string]
- listGradients(manufacturer: string, model: string): seq[string]
- getPulseqSpecs(manufacturer: string, model: string, gradient: string = ""): SystemSpec
SystemSpec fields:
- B0: float64
- maxSlew: float64
- maxGrad: float64
- slewUnit: string
- gradUnit: string
Usage (Nim):
```nim
import pulseq_systems
echo listManufacturers()
let spec = getPulseqSpecs("Siemens", "Prisma")
```
Installation note:
- The JSON data is plain MRSystems.json bundled with the project. You can install the Nim module as a Nim package (nimble) or include the module and JSON in your Nim project. Ensure the JSON path is correct relative to your installed module or copy MRSystems.json alongside the Nim module when packaging.
## Credits and disclaimer
The JSON file has been compiled with the help of a large language model (Claude Opus 4.6) from publicly available sources and may not be exhaustive or perfectly accurate. No warranty is implied or explicitly granted. Please verify the specifications with official sources if you intend to use them for critical applications.
## References
- pulseq - open format for MR sequences: https://github.com/pulseq/pulseq/
- pypulseq — pulse sequence toolbox for Python: https://github.com/imr-framework/pypulseq
## Contributing
Contributions (additional systems, corrections) are welcome. Please open an issue or a pull request with changes to the JSON or helper code.
## License
This package is released under a MIT license.
Please refer to the repository LICENSE file for licensing details.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"importlib-resources>=5.0; python_version < \"3.9\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.0 | 2026-02-18T15:51:26.345643 | pulseq_systems-0.1.3.tar.gz | 12,149 | 4c/9f/0287d3cbd6455f9374f9fb131b06ba3cc620decbf0591c7cfd2a45307f89/pulseq_systems-0.1.3.tar.gz | source | sdist | null | false | 6bd7cb192b8ce93e7becf16cc28e9603 | e6b912d1b591a690082d6d102149911f660068eb016982c98769580ebdbaded0 | 4c9f0287d3cbd6455f9374f9fb131b06ba3cc620decbf0591c7cfd2a45307f89 | null | [
"LICENSE"
] | 215 |
2.4 | cyber-find | 0.3.4 | Advanced OSINT tool for searching users across 200+ platforms | # 🕵️ CyberFind - Advanced OSINT Search Tool
<p align="center">
<img src="https://img.shields.io/badge/Version-0.3.4-blue?style=for-the-badge&logo=github" alt="Version">
<img src="https://img.shields.io/badge/Python-3.9+-green?style=for-the-badge&logo=python" alt="Python">
<img src="https://img.shields.io/badge/Platform-Linux%20%7C%20macOS%20%7C%20Windows-lightgrey?style=for-the-badge" alt="Platform">
<img src="https://img.shields.io/badge/License-MIT-red?style=for-the-badge&logo=opensourceinitiative" alt="License">
<img src="https://img.shields.io/badge/Tests-36%20Passing-brightgreen?style=for-the-badge&logo=pytest" alt="Tests">
<img src="https://img.shields.io/badge/Code%20Style-Black-black?style=for-the-badge&logo=python" alt="Code Style">
<img src="https://img.shields.io/pypi/v/cyber-find?color=blue&label=PyPI&logo=pypi&style=for-the-badge" alt="PyPI Version">
</p>
<p align="center">
<b>Find user accounts across 200+ platforms in seconds</b>
</p>
<p align="center">
<img src="https://readme-typing-svg.demolab.com?font=Fira+Code&size=30&duration=3000&pause=1000&color=00FF00¢er=true&vCenter=true&width=800&height=80&lines=Find+Everything.;Track+Everyone.;Stay+Anonymous." alt="CyberFind Slogan">
</p>
## ✨ Features
### 🔍 **Comprehensive Search**
- **200+ built-in sites** across multiple categories
- **Smart detection** using status codes and content analysis
- **Metadata extraction** from found profiles
### ⚡ **High Performance**
- **Async/await architecture** for maximum speed
- **Concurrent requests** with configurable thread count
- **Intelligent rate limiting** to avoid blocks
### 🛡️ **Privacy & Security**
- **Random User-Agents** for each request
- **Multiple search modes** (Standard, Deep, Stealth, Aggressive)
- **No data storage** unless explicitly configured
### 📊 **Multiple Output Formats**
- **JSON** - Structured data for APIs
- **CSV** - Spreadsheet compatible format
- **HTML** - Beautiful visual reports
- **Excel** - Professional multi-sheet workbooks
- **SQLite** - Database storage for large datasets
### 🎯 **Smart Features**
- **Risk assessment** based on found accounts
- **Personalized recommendations**
- **Statistical analysis** of results
- **Category grouping** of found accounts
### 🔬 **Advanced OSINT Capabilities**
- **DNS Enumeration**: Retrieve A, AAAA, MX, TXT, NS, SOA, and CNAME records for domains.
- **WHOIS Lookup**: Get registration details, owner information, and name servers for domains.
- **Shodan Integration**: Search for exposed devices and services (requires Shodan API key).
- **VirusTotal Scan**: Check URLs for malicious content (requires VirusTotal API key).
- **Wayback Machine Search**: Find archived versions of web pages.
- **Selenium Scraping**: Analyze JavaScript-heavy websites that standard requests might miss.
- **Advanced Combined Search**: Perform a standard username search and then run additional checks (DNS, WHOIS, Shodan, VT, Wayback) based on the results and provided API keys.
- **Detailed Reporting**: Generate comprehensive text reports summarizing all findings from standard and advanced checks.
## 🚀 Quick Start
### Installation
Install directly from PyPI (recommended):
```bash
pip install cyber-find
```
Or, if you have multiple Python versions:
```bash
python3 -m pip install cyber-find
```
> 💡 After installation, the `cyberfind` command is available globally in your terminal.
#### Alternative: From source (for developers)
```bash
git clone https://github.com/VAZlabs/cyber-find.git
cd cyber-find
pip install -e .
```
---
### Basic Usage
```bash
# Quick search (25 most popular sites)
cyberfind username
# Search with specific category
cyberfind username --list social_media
cyberfind username --list programming
cyberfind username --list gaming
# Comprehensive search (200+ sites)
cyberfind username --list all
# Multiple users
cyberfind user1 user2 user3 --list quick
```
## 📚 Usage Examples
### 🔎 Basic Searches
```bash
# Quick check on popular platforms
cyberfind john_doe
# Russian-language platforms only
cyberfind username --list russian
# Gaming platforms only
cyberfind username --list gaming
# Blogs and publications
cyberfind username --list blogs
```
### ⚙️ Advanced Options
```bash
# Deep search with HTML report
cyberfind target --mode deep --format html -o report
# Stealth mode for sensitive searches
cyberfind target --mode stealth --timeout 15
# Maximum speed (use with caution)
cyberfind target --mode aggressive --threads 100
# Custom sites file
cyberfind target -f custom_sites.txt
```
### 📊 Output Management
```bash
# Save as JSON (default)
cyberfind username -o results
# Save as CSV for Excel
cyberfind username --format csv -o results
# Save as HTML report
cyberfind username --format html -o report
# Save to database
cyberfind username --format sqlite
```
### 🧪 Advanced Search (v0.3.4) - CLI (Conceptual)
*Note: Direct CLI integration for advanced features might require specific implementation in `cyberfind_cli.py`. Currently, they are primarily accessible via the Python API.*
## 📋 Available Site Lists
| List Name | Sites Count | Description |
|-----------|-------------|-------------|
| **quick** | 25 | Most popular platforms (default) |
| **social_media** | 70+ | All social networks |
| **programming** | 25+ | IT and development platforms |
| **gaming** | 20+ | Gaming platforms and communities |
| **blogs** | 20+ | Blogs and publication platforms |
| **ecommerce** | 20+ | Shopping and commerce sites |
| **forums** | 12+ | Discussion forums |
| **russian** | 18+ | Russian-language platforms |
| **all** | 200+ | All available platforms |
View all available lists:
```bash
cyberfind --show-lists
```
## 🎛️ Configuration
Create a `config.yaml` file for custom settings:
```yaml
# config.yaml
general:
timeout: 30 # Request timeout in seconds
max_threads: 50 # Maximum concurrent requests
retry_attempts: 3 # Retry attempts on failure
retry_delay: 2 # Delay between retries
user_agents_rotation: true # Rotate User-Agents
rate_limit_delay: 0.5 # Delay between requests
proxy:
enabled: false # Enable proxy support
list: [] # List of proxies
rotation: true # Rotate proxies
database:
sqlite_path: 'cyberfind.db' # SQLite database path
output:
default_format: 'json' # Default output format
save_all_results: true # Save all results to DB
advanced:
metadata_extraction: true # Extract metadata from pages
cache_results: true # Cache results
verify_ssl: true # Verify SSL certificates
```
## 📁 Project Structure
```
cyberfind/
├── cyberfind_cli.py # Main CLI interface
├── core.py # Core search engine
├── gui.py # Graphical interface
├── api.py # REST API server
├── config.yaml # Configuration template
├── requirements.txt # Python dependencies
├── README.md # This file
└── sites/ # Site definition files
├── social_media.txt
├── programming.txt
├── gaming.txt
└── ...
```
## 🔧 Development
### Code Style & Quality
```bash
# Install development tools
pip install -r requirements-dev.txt
# Format code with black
black cyberfind --line-length 120
# Check code style with flake8
flake8 cyberfind --max-line-length 120
# Sort imports with isort
isort cyberfind --profile black
# Type checking with mypy
mypy cyberfind --ignore-missing-imports
```
### 🧪 Testing & CI/CD
CyberFind has comprehensive testing infrastructure to ensure code quality and reliability:
#### Running Tests
```bash
# Install test dependencies
pip install -r requirements-dev.txt
# Run all tests
pytest tests/ -v
# Run specific test file
pytest tests/test_rate_limiting.py -v
# Run with coverage report
pytest tests/ --cov=cyberfind --cov-report=html
# Run only fast tests
pytest tests/ -m "not slow"
# Run async tests only
pytest tests/ -m asyncio
```
#### Test Coverage
Current test infrastructure includes:
- **36 unit tests** covering core modules
- **2 test modules**: `test_rate_limiting.py` (17 tests), `test_proxy_support.py` (15 tests)
- **8 pytest fixtures** for reusable test data
- **Branch coverage tracking** enabled in `.coveragerc`
- **Async test support** with `@pytest.mark.asyncio`
#### Code Quality Checks
All commits are validated against:
- ✅ **flake8** - PEP8 style compliance (0 errors)
- ✅ **black** - Code formatting (120 char lines)
- ✅ **isort** - Import sorting (black-compatible)
- ✅ **mypy** - Type checking (Python 3.9+)
- ✅ **pytest** - Unit tests (36 tests passing)
- ✅ **bandit** - Security scanning
#### GitHub Actions CI/CD
Automated testing runs on:
- Python 3.9, 3.10, 3.11
- Linux, Windows, macOS
- Every push and pull request
See `.github/workflows/tests.yml` for workflow configuration.
#### Pre-commit Hooks
Setup local git hooks for instant validation:
```bash
# Install pre-commit
pip install pre-commit
# Setup git hooks
pre-commit install
# Run hooks on all files
pre-commit run --all-files
```
For detailed testing documentation, see [TESTING.md](TESTING.md)
## 🌐 API Usage
Start the API server:
```bash
cyberfind --api
# Server starts at http://localhost:8080
```
Example API request:
```python
import requests
import json
response = requests.post('http://localhost:8080/api/search', json={
'usernames': ['target_user'],
'list': 'social_media',
'mode': 'standard'
})
results = response.json()
```
## 🖥️ Graphical Interface
```bash
# Launch the GUI
cyberfind --gui
```
The GUI provides:
- Visual search interface
- Real-time progress tracking
- Interactive results display
- One-click report generation
## 📊 Sample Output
```bash
$ cyberfind john_doe --list quick
🔍 CyberFind v0.3.4
Searching for: john_doe
📋 Using built-in list: quick (25 sites)
🔍 Searching: john_doe
Checking 25 sites...
✓ Found: GitHub
✓ Found: Twitter
✓ Found: LinkedIn
Done: 3 found, 2 errors
✅ SEARCH COMPLETED in 12.5 seconds
============================================================
📊 STATISTICS:
Total checks: 25
Accounts found: 3
Errors: 2
👤 USER: john_doe
✅ FOUND 3 accounts:
📁 PROGRAMMING:
1. GitHub
URL: https://github.com/john_doe
Status: 200, Time: 1.23s
📁 SOCIAL_MEDIA:
2. Twitter
URL: https://twitter.com/john_doe
Status: 200, Time: 0.89s
3. LinkedIn
URL: https://www.linkedin.com/in/john_doe
Status: 200, Time: 1.45s
💡 RECOMMENDATIONS:
1. LinkedIn profile found - check contacts and connections
2. GitHub profile found - review public repositories
💾 Results saved to: results.json
```
## 🚨 Legal & Ethical Usage
### ✅ **Permitted Uses:**
- Security research and penetration testing (with permission)
- Personal digital footprint analysis
- Academic research on social media presence
- Bug bounty hunting and security audits
- Investigating your own online presence
### ❌ **Prohibited Uses:**
- Harassment, stalking, or doxxing
- Unauthorized surveillance
- Privacy violations
- Commercial data scraping without permission
- Any illegal activities
**By using this tool, you agree to use it responsibly and legally. The developers are not responsible for misuse.**
## 🤝 Contributing
We welcome contributions! Here's how:
1. **Fork** the repository
2. **Create** a feature branch:
```bash
git checkout -b feature/amazing-feature
```
3. **Commit** your changes:
```bash
git commit -m 'Add amazing feature'
```
4. **Push** to the branch:
```bash
git push origin feature/amazing-feature
```
5. **Open** a Pull Request
### Code Quality Standards
All contributions must pass our quality gates:
**Required Checks:**
- ✅ **Black** code formatting (`black --line-length 120`)
- ✅ **flake8** PEP8 linting (max line 120, 0 errors)
- ✅ **isort** import sorting (black-compatible profile)
- ✅ **mypy** type checking (Python 3.9+ strict mode)
- ✅ **pytest** unit tests (all passing)
- ✅ **bandit** security scanning
**Before submitting a PR, run locally:**
```bash
# Format code
black cyberfind --line-length 120
# Sort imports
isort cyberfind --profile black --line-length 120
# Check style
flake8 cyberfind --max-line-length 120 --ignore=E203,E266,E501,W503,E741
# Type checking
mypy cyberfind --ignore-missing-imports
# Run tests
pytest tests/ -v
```
**Test Requirements:**
- New features must include unit tests
- Maintain minimum 80% code coverage
- Use pytest fixtures from `tests/conftest.py`
- Add `@pytest.mark.unit` to unit tests
- Add `@pytest.mark.asyncio` to async tests
- See [TESTING.md](TESTING.md) for detailed testing guidelines
**Automated Checks:**
- GitHub Actions runs tests on Python 3.9, 3.10, 3.11
- Pre-commit hooks available (run `pre-commit install`)
- All checks must pass before merging
### Areas for Contribution:
- Adding new site definitions
- Improving detection algorithms
- Enhancing the GUI
- Writing documentation
- Performance optimizations
- Bug fixes
- Integrating advanced features into CLI/API
## 📈 Performance Tips
1. **For speed**: Use `--mode aggressive --threads 50`
2. **For stealth**: Use `--mode stealth --timeout 30`
3. **For reliability**: Use `--mode standard --retry 3`
4. **For specific needs**: Create custom site lists
## 🐛 Troubleshooting
### Common Issues:
1. **"No sites loaded" error**
- Ensure you have internet connection
- Check if the sites directory exists
2. **Slow performance**
- Reduce thread count: `--threads 20`
- Increase timeout: `--timeout 30`
- Use a faster internet connection
3. **Many errors**
- The target platforms may be blocking requests
- Try using stealth mode
- Consider using proxies
### Getting Help:
- Check the [GitHub Issues](https://github.com/vazor-code/cyber-find/issues)
- Review the example configurations
- Test with a simple search first
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built with [aiohttp](https://docs.aiohttp.org/) for async HTTP requests
- Uses [BeautifulSoup4](https://www.crummy.com/software/BeautifulSoup/) for HTML parsing
- Inspired by various OSINT tools in the security community
- Thanks to the contributors for making CyberFind better!
## 📬 Contact
- **GitHub**: [vazor-code](https://github.com/vazor-code)
- **Project**: [CyberFind](https://github.com/vazor-code/cyber-find)
- **Issues**: [Report a bug](https://github.com/vazor-code/cyber-find/issues)
---
<p align="center">
<b>CyberFind</b> · Find accounts · Analyze presence · Stay informed
<br>
<sub>Remember: With great power comes great responsibility</sub>
</p>
<div align="center">
### ⭐ If you find this useful, please give it a star!
[](https://github.com/vazor-code/cyber-find/stargazers)
</div>
| text/markdown | null | vazor <vazorcode@gmail.com> | null | null | MIT | osint, cybersecurity, search, social-media, reconnaissance | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohttp>=3.9.0",
"beautifulsoup4>=4.12.0",
"cloudscraper>=1.2.71",
"fake-useragent>=1.4.0",
"pandas>=2.0.0",
"PyYAML>=6.0",
"requests>=2.31.0",
"openpyxl>=3.1.0",
"customtkinter>=5.2.0",
"fastapi>=0.100.0",
"uvicorn[standard]>=0.23.0",
"pydantic>=2.0.0",
"lxml>=4.9.0",
"dnspython>=2.4.0",... | [] | [] | [] | [
"Homepage, https://github.com/VAZlabs/cyber-find",
"Documentation, https://github.com/VAZlabs/cyber-find/wiki",
"Repository, https://github.com/VAZlabs/cyber-find.git",
"Issues, https://github.com/VAZlabs/cyber-find/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:51:09.553397 | cyber_find-0.3.4.tar.gz | 69,351 | 3f/85/6c308c7d6a3f0299223a20b4e9064008ed380b0a9516dbd54f955f4d03df/cyber_find-0.3.4.tar.gz | source | sdist | null | false | 1efaaa9ced456d15b7d0ee7feda75ec6 | ad3a6e7e3c66dc90d6e23ff8a69137027f23170b9cb793a0d1806d764637d184 | 3f856c308c7d6a3f0299223a20b4e9064008ed380b0a9516dbd54f955f4d03df | null | [
"LICENSE"
] | 240 |
2.4 | netboxlabs-device-discovery | 1.15.0 | NetBox Labs, Device Discovery backend for Orb Agent, part of NetBox Discovery | # device-discovery
Orb device discovery backend
### Usage
```bash
usage: device-discovery [-h] [-V] [-s HOST] [-p PORT] -t DIODE_TARGET -c DIODE_CLIENT_ID -k DIODE_CLIENT_SECRET [-a DIODE_APP_NAME_PREFIX] [-d] [-o DRY_RUN_OUTPUT_DIR]
[--otel-endpoint OTEL_ENDPOINT] [--otel-export-period OTEL_EXPORT_PERIOD]
Orb Device Discovery Backend
options:
-h, --help show this help message and exit
-V, --version Display Device Discovery, NAPALM and Diode SDK versions
-s HOST, --host HOST Server host
-p PORT, --port PORT Server port
-t DIODE_TARGET, --diode-target DIODE_TARGET
Diode target. Environment variable can be used by wrapping it in ${} (e.g. ${TARGET})
-c DIODE_CLIENT_ID, --diode-client-id DIODE_CLIENT_ID
Diode Client ID. Environment variable can be used by wrapping it in ${} (e.g. ${MY_CLIENT_ID})
-k DIODE_CLIENT_SECRET, --diode-client-secret DIODE_CLIENT_SECRET
Diode Client Secret. Environment variable can be used by wrapping it in ${} (e.g. ${MY_CLIENT_SECRET})
-a DIODE_APP_NAME_PREFIX, --diode-app-name-prefix DIODE_APP_NAME_PREFIX
Diode producer_app_name prefix
-d, --dry-run Run in dry-run mode, do not ingest data
-o DRY_RUN_OUTPUT_DIR, --dry-run-output-dir DRY_RUN_OUTPUT_DIR
Output dir for dry-run mode. Environment variable can be used by wrapping it in ${} (e.g. ${OUTPUT_DIR})
--otel-endpoint OTEL_ENDPOINT
OpenTelemetry exporter endpoint
--otel-export-period OTEL_EXPORT_PERIOD
Period in seconds between OpenTelemetry exports (default: 60)
```
### Policy RFC
```yaml
policies:
discovery_1:
config:
schedule: "* * * * *" #Cron expression
defaults:
site: New York NY
role: Router
scope:
- hostname: 192.168.0.32/30 #support range
username: ${USER}
password: admin
- driver: eos
hostname: 127.0.0.1
username: admin
password: ${ARISTA_PASSWORD}
optional_args:
enable_password: ${ARISTA_PASSWORD}
discover_once: # will run only once
scope:
- hostname: 192.168.0.34
username: ${USER}
password: ${PASSWORD}
```
## Run device-discovery
device-discovery can be run by installing it with pip
```sh
git clone https://github.com/netboxlabs/orb-discovery.git
cd orb-discovery/
pip install --no-cache-dir ./device-discovery/
device-discovery -t 'grpc://192.168.0.10:8080/diode' -c '${DIODE_CLIENT_ID}' -k '${DIODE_CLIENT_SECRET}'
```
## Docker Image
device-discovery can be build and run using docker:
```sh
cd device-discovery
docker build --no-cache -t device-discovery:develop -f docker/Dockerfile .
docker run -e DIODE_CLIENT_ID=${YOUR_CLIENT} -e DIODE_CLIENT_SECRET=${YOUR_SECRET} -p 8072:8072 device-discovery:develop \
device-discovery -t 'grpc://192.168.0.10:8080/diode' -c '${DIODE_CLIENT_ID}' -k '${DIODE_CLIENT_SECRET}'
```
### Routes (v1)
#### Get runtime and capabilities information
<details>
<summary><code>GET</code> <code><b>/api/v1/status</b></code> <code>(gets discovery runtime data)</code></summary>
##### Parameters
> None
##### Responses
> | http code | content-type | response |
> |---------------|-----------------------------------|---------------------------------------------------------------------|
> | `200` | `application/json; charset=utf-8` | `{"version": "0.1.0","up_time_seconds": 3678 }` |
##### Example cURL
> ```sh
> curl -X GET -H "Content-Type: application/json" http://localhost:8072/api/v1/status
> ```
</details>
<details>
<summary><code>GET</code> <code><b>/api/v1/capabilities</b></code> <code>(gets device-discovery capabilities)</code></summary>
##### Parameters
> None
##### Responses
> | http code | content-type | response |
> |---------------|-----------------------------------|---------------------------------------------------------------------|
> | `200` | `application/json; charset=utf-8` | `{"supported_drivers":["ios","eos","junos","nxos","cumulus"]}` |
##### Example cURL
> ```sh
> curl -X GET -H "Content-Type: application/json" http://localhost:8072/api/v1/capabilities
> ```
</details>
#### Policies Management
<details>
<summary><code>POST</code> <code><b>/api/v1/policies</b></code> <code>(Creates a new policy)</code></summary>
##### Parameters
> | name | type | data type | description |
> |-----------|-----------|-------------------------|-----------------------------------------------------------------------|
> | None | required | YAML object | yaml format specified in [Policy RFC](#policy-rfc) |
##### Responses
> | http code | content-type | response |
> |---------------|------------------------------------|---------------------------------------------------------------------|
> | `201` | `application/json; charset=UTF-8` | `{"detail":"policy 'policy_name' was started"}` |
> | `400` | `application/json; charset=UTF-8` | `{ "detail": "invalid Content-Type. Only 'application/x-yaml' is supported" }`|
> | `400` | `application/json; charset=UTF-8` | Any other policy error |
> | `403` | `application/json; charset=UTF-8` | `{ "detail": "config field is required" }` |
> | `409` | `application/json; charset=UTF-8` | `{ "detail": "policy 'policy_name' already exists" }` |
##### Example cURL
> ```sh
> curl -X POST -H "Content-Type: application/x-yaml" --data-binary @policy.yaml http://localhost:8072/api/v1/policies
> ```
</details>
<details>
<summary><code>DELETE</code> <code><b>/api/v1/policies/{policy_name}</b></code> <code>(delete a existing policy)</code></summary>
##### Parameters
> | name | type | data type | description |
> |-------------------|-----------|----------------|-------------------------------------|
> | `policy_name` | required | string | The unique policy name |
##### Responses
> | http code | content-type | response |
> |---------------|-----------------------------------|---------------------------------------------------------------------|
> | `200` | `application/json; charset=UTF-8` | `{ "detail": "policy 'policy_name' was deleted" }` |
> | `400` | `application/json; charset=UTF-8` | Any other policy deletion error |
> | `404` | `application/json; charset=UTF-8` | `{ "detail": "policy 'policy_name' not found" }` |
##### Example cURL
> ```sh
> curl -X DELETE http://localhost:8072/api/v1/policies/policy_name
> ```
</details>
| text/markdown | null | NetBox Labs <support@netboxlabs.com> | null | NetBox Labs <support@netboxlabs.com> | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"APScheduler~=3.10",
"croniter~=5.0",
"fastapi~=0.115",
"httpx~=0.27",
"napalm~=5.0",
"netboxlabs-diode-sdk~=1.10",
"pydantic~=2.9",
"python-dotenv~=1.0",
"uvicorn~=0.32",
"opentelemetry-api~=1.32",
"opentelemetry-sdk~=1.32",
"opentelemetry-exporter-otlp~=1.32",
"black; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://netboxlabs.com/"
] | twine/6.0.1 CPython/3.12.8 | 2026-02-18T15:50:31.678854 | netboxlabs_device_discovery-1.15.0.tar.gz | 43,909 | 55/be/626c77597a2625bb2c6a4eb165c0947c525b4a554d735dfdf6614d905e91/netboxlabs_device_discovery-1.15.0.tar.gz | source | sdist | null | false | 52151a2b6c8603c6e562dbe24ddfc6d4 | 286cb91688d135658c7a31798cae4b03a59d31ba5e4f3e1c67e504467397af53 | 55be626c77597a2625bb2c6a4eb165c0947c525b4a554d735dfdf6614d905e91 | null | [] | 171 |
2.4 | yhttp-sqlalchemy | 4.1.0 | SQLAlchemy extension for yhttp. | # yhttp-sqlalchemy
[](https://pypi.python.org/pypi/yhttp-sqlalchemy)
[](https://github.com/yhttp/yhttp-sqlalchemy/actions/workflows/build.yml)
[](https://coveralls.io/github/yhttp/yhttp-sqlalchemy?branch=master)
| text/markdown | Vahid Mardani | vahid.mardani@gmail.com | null | null | MIT | null | [
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Natural Language :: English",
"Development Status :: 5 - Production/Stable",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.6",
"Topic :... | [] | http://github.com/yhttp/yhttp-sqlalchemy | null | null | [] | [] | [] | [
"yhttp<8,>=7.2.0",
"yhttp-dbmanager<7,>=6.0.2",
"sqlalchemy>=2.0.32"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:50:29.742557 | yhttp_sqlalchemy-4.1.0.tar.gz | 6,899 | 05/25/ce486895ff9d76537e3c0626f6ba9d6a3675f879e3be3ca3453ebb3049b8/yhttp_sqlalchemy-4.1.0.tar.gz | source | sdist | null | false | 42995931d62c1ca50034b95744c6db5b | 4c766b9668618495936050791bf6ff19b9a80c88306e1c5bcb4ac5338318547e | 0525ce486895ff9d76537e3c0626f6ba9d6a3675f879e3be3ca3453ebb3049b8 | null | [
"LICENSE"
] | 230 |
2.4 | mcpl-python | 2.2.8 | Utilities and API for accessing MCPL (.mcpl) files | MCPL - Monte Carlo Particle Lists
=================================
MCPL files, with extensions `.mcpl` and `.mcpl.gz` is a binary format for usage
in physics particle simulations. It contains lists of particle state
information, and can be used to interchange or reuse particles between various
Monte Carlo simulation applications. The format itself is formally described in:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
All MCPL code is provided under the highly liberal open source Apache 2.0
license (http://www.apache.org/licenses/LICENSE-2.0), and further instructions
and documentation can be found at https://mctools.github.io/mcpl/.
The mcpl-python package
-----------------------
The `mcpl-python` package provides a Python API for working with MCPL
files. More details about the Python API and how to use it can be found at the
https://mctools.github.io/mcpl/usage_python page.
Additionally, the package also provides the command-line tool `pymcpltool`,
which has similar capabilities as the binary `mcpltool` from the `mcpl-core`
package. The main difference being an ability to extract statistics and plots
from MCPL files, and that the `pymcpltool` (unlike the `mcpltool`) only provides
read-only capabilities.
Note that it is recommmended for most users to simply install the package named
`mcpl`, rather than referring to the package named `mcpl-python` directly.
Scientific reference
--------------------
Copyright 2015-2026 MCPL developers.
This software was mainly developed at the European Spallation Source ERIC (ESS)
and the Technical University of Denmark (DTU). This work was supported in part
by the European Union's Horizon 2020 research and innovation programme under
grant agreement No 676548 (the BrightnESS project).
All MCPL files are distributed under the Apache 2.0 license, available at
http://www.apache.org/licenses/LICENSE-2.0, as well as in the LICENSE file found
in the source distribution.
A substantial effort went into developing MCPL. If you use it for your work, we
would appreciate it if you would use the following reference in your work:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
Support for specific third party applications
---------------------------------------------
Note that some users might also wish to additionally install the `mcpl-extra`
package, which contains cmdline tools for converting between the binary data
files native to some third-party Monte Carlo applications (currently PHITS and
MCNP[X/5/6]). Users of Geant4 might wish to install the `mcpl-geant4` package,
which provides C++ classes (and CMake configuration code) for integrating MCPL
I/O into Geant4 simulations. Finally, many Monte Carlo applications have
directly integrated support for MCPL I/O into their codes. At the time of
writing, the list of applications with known support from MCPL I/O includes:
* McStas (built in)
* McXtrace (built in)
* OpenMC (built in)
* Cinema/Prompt (built in)
* VITESS (built in)
* RESTRAX/SIMRES (built in)
* McVine (built in)
* MCNPX, MCNP5, MCNP6 (based on `ssw2mcpl`/`mcpl2ssw` from the `mcpl-extra` package)
* PHITS (based on `phits2mcpl`/`mcpl2phits` from the `mcpl-extra` package)
* Geant4 (based on C++/CMake code from the `mcpl-geant4` package)
Note that instructions for installation and setup of third-party products like
those listed above are beyond the scope of the MCPL project. Please refer to the
products own instructions for more information.
| text/markdown | MCPL developers (Thomas Kittelmann, et. al.) | null | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.22"
] | [] | [] | [] | [
"Homepage, https://mctools.github.io/mcpl/",
"Bug Tracker, https://github.com/mctools/mcpl/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:49:58.562291 | mcpl_python-2.2.8.tar.gz | 29,336 | 93/af/974cd20fef547de2f7e4c1649df0db2b33c260b9be9bb3b7df331352ed30/mcpl_python-2.2.8.tar.gz | source | sdist | null | false | 72ce68cb11307db067db0fe0bb596dc7 | b79cabf96a8fc90c38a7cbd275d80e757d0a3ee7420f32bff4b057accd8e9315 | 93af974cd20fef547de2f7e4c1649df0db2b33c260b9be9bb3b7df331352ed30 | null | [
"LICENSE"
] | 2,259 |
2.2 | mcpl-extra | 2.2.8 | Various tools and conversion utilities related to MCPL files. | MCPL - Monte Carlo Particle Lists
=================================
MCPL files, with extensions `.mcpl` and `.mcpl.gz` is a binary format for usage
in physics particle simulations. It contains lists of particle state
information, and can be used to interchange or reuse particles between various
Monte Carlo simulation applications. The format itself is formally described in:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
All MCPL code is provided under the highly liberal open source Apache 2.0
license (http://www.apache.org/licenses/LICENSE-2.0), and further instructions
and documentation can be found at https://mctools.github.io/mcpl/.
The mcpl-extra package
----------------------
The `mcpl-extra` package is intended to provide tools and conversion utilities
related to MCPL files, beyond what is available in the `mcpl-core` package. This
currently includes converters to and from file formats related to PHITS and
MCNP(5/X/6).
For more details about how to use these converters, refer to the
https://mctools.github.io/mcpl/hooks_mcnp and
https://mctools.github.io/mcpl/hooks_phits pages.
Scientific reference
--------------------
Copyright 2015-2026 MCPL developers.
This software was mainly developed at the European Spallation Source ERIC (ESS)
and the Technical University of Denmark (DTU). This work was supported in part
by the European Union's Horizon 2020 research and innovation programme under
grant agreement No 676548 (the BrightnESS project).
All MCPL files are distributed under the Apache 2.0 license, available at
http://www.apache.org/licenses/LICENSE-2.0, as well as in the LICENSE file found
in the source distribution.
A substantial effort went into developing MCPL. If you use it for your work, we
would appreciate it if you would use the following reference in your work:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
Support for specific third party applications
---------------------------------------------
Note that some users might also wish to additionally install the `mcpl-extra`
package, which contains cmdline tools for converting between the binary data
files native to some third-party Monte Carlo applications (currently PHITS and
MCNP[X/5/6]). Users of Geant4 might wish to install the `mcpl-geant4` package,
which provides C++ classes (and CMake configuration code) for integrating MCPL
I/O into Geant4 simulations. Finally, many Monte Carlo applications have
directly integrated support for MCPL I/O into their codes. At the time of
writing, the list of applications with known support from MCPL I/O includes:
* McStas (built in)
* McXtrace (built in)
* OpenMC (built in)
* Cinema/Prompt (built in)
* VITESS (built in)
* RESTRAX/SIMRES (built in)
* McVine (built in)
* MCNPX, MCNP5, MCNP6 (based on `ssw2mcpl`/`mcpl2ssw` from the `mcpl-extra` package)
* PHITS (based on `phits2mcpl`/`mcpl2phits` from the `mcpl-extra` package)
* Geant4 (based on C++/CMake code from the `mcpl-geant4` package)
Note that instructions for installation and setup of third-party products like
those listed above are beyond the scope of the MCPL project. Please refer to the
products own instructions for more information.
| text/markdown | MCPL developers (Thomas Kittelmann, et. al.) | null | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"mcpl-core==2.2.8"
] | [] | [] | [] | [
"Homepage, https://mctools.github.io/mcpl/",
"Bug Tracker, https://github.com/mctools/mcpl/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:49:57.755244 | mcpl_extra-2.2.8.tar.gz | 38,508 | 7b/2c/da429721ba8f82bf430e1b877b9196ba208c882d907d1219d01c2d5434d1/mcpl_extra-2.2.8.tar.gz | source | sdist | null | false | dc4848213d27c951832f213befd9753c | 6e219886b20f7cf063416897b935c35ac9fe868a5e5f67c6aaa73b0fee77667c | 7b2cda429721ba8f82bf430e1b877b9196ba208c882d907d1219d01c2d5434d1 | null | [] | 1,567 |
2.2 | mcpl-core | 2.2.8 | Utilities and API for accessing MCPL (.mcpl) files | MCPL - Monte Carlo Particle Lists
=================================
MCPL files, with extensions `.mcpl` and `.mcpl.gz` is a binary format for usage
in physics particle simulations. It contains lists of particle state
information, and can be used to interchange or reuse particles between various
Monte Carlo simulation applications. The format itself is formally described in:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
All MCPL code is provided under the highly liberal open source Apache 2.0
license (http://www.apache.org/licenses/LICENSE-2.0), and further instructions
and documentation can be found at https://mctools.github.io/mcpl/.
The mcpl-core package
---------------------
The `mcpl-core` package provides:
* The `mcpltool`, a command-line utility for working with MCPL files. For more
information about this tool, refer to the
https://mctools.github.io/mcpl/usage_cmdline page.
* The C/C++ API in the form of the `mcpl.h` header file and associated shared
library. For more information about this API, refer to the
https://mctools.github.io/mcpl/usage_c page.
* Configuration utilities for working with the C/C++ API in downstream
projects. Specifically, CMake configuration code and the `mcpl-config`
command-line utility are provided.
In addition to the links above, several examples of how to use the C/C++ API,
including how to configure a downstream CMake-based project, is provided in the
https://github.com/mctools/mcpl/tree/HEAD/examples directory.
Note that it is recommmended for most users to simply install the package named
`mcpl`, rather than referring to the package named `mcpl-core` directly.
Scientific reference
--------------------
Copyright 2015-2026 MCPL developers.
This software was mainly developed at the European Spallation Source ERIC (ESS)
and the Technical University of Denmark (DTU). This work was supported in part
by the European Union's Horizon 2020 research and innovation programme under
grant agreement No 676548 (the BrightnESS project).
All MCPL files are distributed under the Apache 2.0 license, available at
http://www.apache.org/licenses/LICENSE-2.0, as well as in the LICENSE file found
in the source distribution.
A substantial effort went into developing MCPL. If you use it for your work, we
would appreciate it if you would use the following reference in your work:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
Support for specific third party applications
---------------------------------------------
Note that some users might also wish to additionally install the `mcpl-extra`
package, which contains cmdline tools for converting between the binary data
files native to some third-party Monte Carlo applications (currently PHITS and
MCNP[X/5/6]). Users of Geant4 might wish to install the `mcpl-geant4` package,
which provides C++ classes (and CMake configuration code) for integrating MCPL
I/O into Geant4 simulations. Finally, many Monte Carlo applications have
directly integrated support for MCPL I/O into their codes. At the time of
writing, the list of applications with known support from MCPL I/O includes:
* McStas (built in)
* McXtrace (built in)
* OpenMC (built in)
* Cinema/Prompt (built in)
* VITESS (built in)
* RESTRAX/SIMRES (built in)
* McVine (built in)
* MCNPX, MCNP5, MCNP6 (based on `ssw2mcpl`/`mcpl2ssw` from the `mcpl-extra` package)
* PHITS (based on `phits2mcpl`/`mcpl2phits` from the `mcpl-extra` package)
* Geant4 (based on C++/CMake code from the `mcpl-geant4` package)
Note that instructions for installation and setup of third-party products like
those listed above are beyond the scope of the MCPL project. Please refer to the
products own instructions for more information.
| text/markdown | MCPL developers (Thomas Kittelmann, et. al.) | null | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://mctools.github.io/mcpl/",
"Bug Tracker, https://github.com/mctools/mcpl/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:49:56.829404 | mcpl_core-2.2.8.tar.gz | 89,155 | 2f/4f/2cafc9958b2861a6cdde968ef4026dab86c2ac961cf9e204c4c0a9ed4b07/mcpl_core-2.2.8.tar.gz | source | sdist | null | false | 657d7ab0247307fa4d9b136eb657e8f2 | 9dea5b722aee98f456690fc6b66d2cb785291ca020f3b5d9d30ec236ee45803f | 2f4f2cafc9958b2861a6cdde968ef4026dab86c2ac961cf9e204c4c0a9ed4b07 | null | [] | 2,897 |
2.4 | mcpl | 2.2.8 | Utilities and API for accessing MCPL (.mcpl) files | MCPL - Monte Carlo Particle Lists
=================================
MCPL files, with extensions `.mcpl` and `.mcpl.gz` is a binary format for usage
in physics particle simulations. It contains lists of particle state
information, and can be used to interchange or reuse particles between various
Monte Carlo simulation applications. The format itself is formally described in:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
All MCPL code is provided under the highly liberal open source Apache 2.0
license (http://www.apache.org/licenses/LICENSE-2.0), and further instructions
and documentation can be found at https://mctools.github.io/mcpl/.
The mcpl package
----------------
Technically, the `mcpl` package is a meta-package which pulls in both
`mcpl-core` and `mcpl-python` packages for installation. Advanced users needing
only a subset of functionality might elect to install only one of those packages
instead, however most users are simply recommended to install the `mcpl` package
for convenience.
The utilities provided by this package thus include utilities for working with
MCPL files, either via the command-line (the `mcpltool` and `pymcpltool`
commands), or via dedicated APIs in C, C++, and python.
Scientific reference
--------------------
Copyright 2015-2026 MCPL developers.
This software was mainly developed at the European Spallation Source ERIC (ESS)
and the Technical University of Denmark (DTU). This work was supported in part
by the European Union's Horizon 2020 research and innovation programme under
grant agreement No 676548 (the BrightnESS project).
All MCPL files are distributed under the Apache 2.0 license, available at
http://www.apache.org/licenses/LICENSE-2.0, as well as in the LICENSE file found
in the source distribution.
A substantial effort went into developing MCPL. If you use it for your work, we
would appreciate it if you would use the following reference in your work:
T. Kittelmann, et al., Monte Carlo Particle Lists: MCPL, Computer Physics
Communications 218, 17-42 (2017), https://doi.org/10.1016/j.cpc.2017.04.012
Support for specific third party applications
---------------------------------------------
Note that some users might also wish to additionally install the `mcpl-extra`
package, which contains cmdline tools for converting between the binary data
files native to some third-party Monte Carlo applications (currently PHITS and
MCNP[X/5/6]). Users of Geant4 might wish to install the `mcpl-geant4` package,
which provides C++ classes (and CMake configuration code) for integrating MCPL
I/O into Geant4 simulations. Finally, many Monte Carlo applications have
directly integrated support for MCPL I/O into their codes. At the time of
writing, the list of applications with known support from MCPL I/O includes:
* McStas (built in)
* McXtrace (built in)
* OpenMC (built in)
* Cinema/Prompt (built in)
* VITESS (built in)
* RESTRAX/SIMRES (built in)
* McVine (built in)
* MCNPX, MCNP5, MCNP6 (based on `ssw2mcpl`/`mcpl2ssw` from the `mcpl-extra` package)
* PHITS (based on `phits2mcpl`/`mcpl2phits` from the `mcpl-extra` package)
* Geant4 (based on C++/CMake code from the `mcpl-geant4` package)
Note that instructions for installation and setup of third-party products like
those listed above are beyond the scope of the MCPL project. Please refer to the
products own instructions for more information.
| text/markdown | MCPL developers (Thomas Kittelmann, et. al.) | null | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"mcpl-core==2.2.8",
"mcpl-python==2.2.8"
] | [] | [] | [] | [
"Homepage, https://mctools.github.io/mcpl/",
"Bug Tracker, https://github.com/mctools/mcpl/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:49:55.954229 | mcpl-2.2.8.tar.gz | 6,791 | 79/bb/d55aa1db89c397a94ae0d24c3ebdb40b4f611a38bd9196bcdeb831a859ba/mcpl-2.2.8.tar.gz | source | sdist | null | false | 45142f4e902e2bc50487c07b143246ed | 1362696b4c6f3c23cfe8b56f1eeb850b0e9889850167a1eec128f1449d3c8a52 | 79bbd55aa1db89c397a94ae0d24c3ebdb40b4f611a38bd9196bcdeb831a859ba | null | [
"LICENSE"
] | 2,254 |
2.4 | nomad-hpc | 1.2.2 | A lightweight HPC monitoring and predictive analytics tool | # NØMAD-HPC
**NØde Monitoring And Diagnostics** — Lightweight HPC monitoring, visualization, and predictive analytics.
> *"Travels light, adapts to its environment, and doesn't need permanent infrastructure."*
[](https://pypi.org/project/nomad-hpc/)
[](https://www.gnu.org/licenses/agpl-3.0)
[](https://www.python.org/downloads/)
[](https://doi.org/10.5281/zenodo.18614517)
---
📖 **[Full Documentation](https://jtonini.github.io/nomad-hpc/)** — Installation guides, configuration, CLI reference, network methodology, ML framework, and more.
---
## Quick Start
```bash
pip install nomad-hpc
nomad demo # Try with synthetic data
```
For production:
```bash
nomad init # Configure for your cluster
nomad collect # Start data collection
nomad dashboard # Launch web interface
```
---
## Features
| Feature | Description | Command |
|---------|-------------|---------|
| **Dashboard** | Real-time multi-cluster monitoring with partition views | `nomad dashboard` |
| **Educational Analytics** | Track computational proficiency development | `nomad edu explain <job>` |
| **Alerts** | Threshold + predictive alerts (email, Slack, webhook) | `nomad alerts` |
| **ML Prediction** | Job failure prediction using similarity networks | `nomad predict` |
| **Community Export** | Anonymized datasets for cross-institutional research | `nomad community export` |
| **Interactive Sessions** | Monitor RStudio/Jupyter sessions | `nomad report-interactive` |
| **Derivative Analysis** | Detect accelerating trends before thresholds | Built into alerts |
---
## Architecture
```
┌────────────────────────────────────────────────────────────┐
│ NØMAD │
├──────────────┬──────────────┬──────────────┬───────────────┤
│ Collectors │ Analysis │ Viz │ Alerts │
├──────────────┼──────────────┼──────────────┼───────────────┤
│ disk │ derivatives │ dashboard │ thresholds │
│ iostat │ similarity │ network 3D │ predictive │
│ slurm │ ML ensemble │ partitions │ email/slack │
│ gpu │ edu scoring │ edu views │ webhooks │
│ nfs │ │ │ │
└──────────────┴──────────────┴──────────────┴───────────────┘
│
┌─────────┴─────────┐
│ SQLite Database │
└───────────────────┘
```
---
## CLI Reference
### Core Commands
```bash
nomad init # Setup wizard
nomad collect # Start collectors
nomad dashboard # Web interface
nomad demo # Demo mode
nomad status # System status
```
### Educational Analytics
```bash
nomad edu explain <job_id> # Job analysis with recommendations
nomad edu trajectory <user> # User proficiency over time
nomad edu report <group> # Course/group report
```
### Analysis & Prediction
```bash
nomad disk /path # Filesystem trends
nomad jobs --user <user> # Job history
nomad similarity # Network analysis
nomad train # Train ML models
nomad predict # Run predictions
```
### Community & Alerts
```bash
nomad community export # Export anonymized data
nomad community preview # Preview export
nomad alerts # View alerts
nomad alerts --unresolved # Unresolved only
```
---
## Installation
### From PyPI
```bash
pip install nomad-hpc
```
### From Source
```bash
git clone https://github.com/jtonini/nomad.git
cd nomad && pip install -e .
```
### Requirements
- Python 3.9+
- SQLite 3.35+
- sysstat package (`iostat`, `mpstat`)
- Optional: SLURM, nvidia-smi, nfsiostat
### System Check
```bash
nomad syscheck
```
---
## Documentation
📖 **[jtonini.github.io/nomad-hpc](https://jtonini.github.io/nomad-hpc/)**
- [Installation & Configuration](https://jtonini.github.io/nomad-hpc/installation/)
- [System Install (`--system`)](https://jtonini.github.io/nomad-hpc/system-install/)
- [Dashboard Guide](https://jtonini.github.io/nomad-hpc/dashboard/)
- [Educational Analytics](https://jtonini.github.io/nomad-hpc/edu/)
- [Network Methodology](https://jtonini.github.io/nomad-hpc/network/)
- [ML Framework](https://jtonini.github.io/nomad-hpc/ml/)
- [Proficiency Scoring](https://jtonini.github.io/nomad-hpc/proficiency/)
- [CLI Reference](https://jtonini.github.io/nomad-hpc/cli/)
- [Configuration Options](https://jtonini.github.io/nomad-hpc/config/)
---
## License
Dual-licensed:
- **AGPL v3** — Free for academic, educational, and open-source use
- **Commercial License** — Available for proprietary deployments
---
## Citation
```bibtex
@software{nomad2026,
author = {Tonini, João Filipe Riva},
title = {NØMAD: Lightweight HPC Monitoring with Machine Learning-Based Failure Prediction},
year = {2026},
url = {https://github.com/jtonini/nomad},
doi = {10.5281/zenodo.18614517}
}
```
---
## Contributing
See [CONTRIBUTING.md](docs/CONTRIBUTING.md) for guidelines.
---
## Contact
- **Author**: João Tonini
- **Email**: jtonini@richmond.edu
- **Issues**: [GitHub Issues](https://github.com/jtonini/nomad/issues)
| text/markdown | null | Joao Tonini <jtonini@richmond.edu> | null | Joao Tonini <jtonini@richmond.edu> | null | hpc, monitoring, slurm, cluster, predictive-analytics, machine-learning, anomaly-detection, graph-neural-network | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: System Administrators",
"Intended Audience :: Science/Research",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.... | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"toml>=0.10",
"numpy>=1.21",
"pandas>=1.3",
"scipy>=1.7",
"scikit-learn>=1.0; extra == \"ml\"",
"torch>=2.0; extra == \"ml\"",
"torch-geometric>=2.0; extra == \"ml\"",
"jinja2>=3.0; extra == \"dashboard\"",
"nomad[dashboard,ml]; extra == \"all\"",
"pytest>=7.0; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://nomad-hpc.com",
"Documentation, https://jtonini.github.io/nomad-hpc/",
"Repository, https://github.com/jtonini/nomad-hpc",
"Issues, https://github.com/jtonini/nomad-hpc/issues"
] | twine/6.2.0 CPython/3.9.21 | 2026-02-18T15:49:11.954881 | nomad_hpc-1.2.2-py3-none-any.whl | 328,037 | 52/40/bf0d65ff84f060b330cb8c823c7d25fcb2ac3df0c33b399f3117b90ccb0c/nomad_hpc-1.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 217a1e0560079d74415158ca1c14a5b3 | 68e1cef19120281b491d17a8b53ca9ee2b99cc7089e6a8581f127ee70bd6f084 | 5240bf0d65ff84f060b330cb8c823c7d25fcb2ac3df0c33b399f3117b90ccb0c | null | [] | 106 |
2.4 | gemini-receipt-ocr | 0.1.0 | Extract structured data from receipt images using Gemini AI | # receipt-ocr
Extract structured data from receipt images using Gemini AI.
## Features
- 📷 Extract date, amount, vendor, category from receipt images
- 🚀 Fast and cheap with Gemini Flash
- 🎯 ~95% accuracy on common receipt formats
- 🔧 CLI and Python API
## Installation
```bash
pip install receipt-ocr
```
## Quick Start
### CLI
```bash
# Set API key
export GEMINI_API_KEY=your_api_key
# Extract from image
receipt-ocr receipt.jpg
# Pretty print
receipt-ocr receipt.jpg --pretty
# From URL
receipt-ocr https://example.com/receipt.jpg
```
Output:
```json
{
"receipt_date": "2025-01-15",
"amount": 4599,
"amount_dollars": 45.99,
"category": 0,
"category_name": "grocery",
"vendor_name": "Whole Foods Market",
"payment_method": 0
}
```
### Python API
```python
from receipt_ocr import extract, set_api_key
# Set API key (or use GEMINI_API_KEY env var)
set_api_key("your_api_key")
# Extract from file
result = extract("receipt.jpg")
print(result.amount_dollars) # 45.99
print(result.vendor_name) # "Whole Foods Market"
print(result.receipt_date) # "2025-01-15"
# Extract from URL
result = extract("https://example.com/receipt.jpg")
# Extract from bytes
with open("receipt.jpg", "rb") as f:
result = extract(f.read())
# With date context (helps infer year)
result = extract("receipt.jpg", reference_date="2025-01")
```
## Output Fields
| Field | Type | Description |
|-------|------|-------------|
| `receipt_date` | str | Date in YYYY-MM-DD format |
| `amount` | int | Total amount in cents |
| `amount_dollars` | float | Total amount in dollars |
| `category` | int | 0=grocery, 1=gas station, 2=other |
| `category_name` | str | Human-readable category |
| `vendor_name` | str | Merchant/store name |
| `payment_method` | int | 0=credit, 1=debit, null=unknown |
## CLI Options
```
receipt-ocr [OPTIONS] IMAGE
Arguments:
IMAGE Path to receipt image or URL
Options:
--api-key TEXT Gemini API key
--reference-date TEXT Expected date (YYYY-MM) for year inference
--model TEXT Gemini model (default: gemini-2.0-flash)
--raw Include raw AI response
--pretty Pretty print JSON
```
## Accuracy
Tested on ~1000 receipts:
| Field | Accuracy |
|-------|----------|
| Amount | ~98% |
| Date | ~95% |
| Vendor | ~90% |
Tips for better accuracy:
- Clear, well-lit photos
- Include the total amount in frame
- Avoid heavy shadows/glare
## Cost
Using Gemini Flash: ~$0.001 per receipt
## License
MIT
| text/markdown | null | IndieKit <hi@indiekit.ai> | null | null | MIT | receipt, ocr, gemini, ai, invoice, extraction | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"google-genai>=1.0.0",
"requests>=2.25.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/indiekitai/receipt-ocr",
"Repository, https://github.com/indiekitai/receipt-ocr",
"Issues, https://github.com/indiekitai/receipt-ocr/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T15:49:10.782246 | gemini_receipt_ocr-0.1.0.tar.gz | 6,549 | da/c4/e329750a985b3289c156c0a1da848387c854160a85cd1c1d5d09bf230913/gemini_receipt_ocr-0.1.0.tar.gz | source | sdist | null | false | 499ed8f86cb7ef38d1e53f8efb8004dd | 8478a4592b00b6c33cfb95d96eac339d1c39fb4e793331e124e98af083d85bf9 | dac4e329750a985b3289c156c0a1da848387c854160a85cd1c1d5d09bf230913 | null | [
"LICENSE"
] | 249 |
2.4 | gladlang | 0.1.9 | The GladLang Interpreter | # GladLang
**GladLang is a dynamic, interpreted, object-oriented programming language.** This is a full interpreter built from scratch in Python, complete with a lexer, parser, and runtime environment. It supports modern programming features like closures, classes, inheritance, and robust error handling.
GladLang source files use the `.glad` file extension.

This is the full overview of the GladLang language, its features, and how to run the interpreter.
## Table of Contents
- [About The Language](#about-the-language)
- [Key Features](#key-features)
- [Getting Started](#getting-started)
- [1. Installation](#1-installation)
- [2. Usage](#2-usage)
- [3. Running Without Installation (Source)](#3-running-without-installation-source)
- [4. Building the Executable](#4-building-the-executable)
- [Language Tour (Syntax Reference)](#language-tour-syntax-reference)
- [1. Comments](#1-comments)
- [2. Variables and Data Types](#2-variables-and-data-types)
- [Variables](#variables)
- [Numbers](#numbers)
- [Strings](#strings)
- [Lists, Slicing & Comprehensions](#lists-slicing--comprehensions)
- [Dictionaries](#dictionaries)
- [Booleans](#booleans)
- [Null](#null)
- [Enums](#enums)
- [3. Operators](#3-operators)
- [Math Operations](#math-operations)
- [Compound Assignments](#compound-assignments)
- [Bitwise Operators](#bitwise-operators)
- [Comparisons, Logic & Type Checking](#comparisons-logic--type-checking)
- [Conditional (Ternary) Operator](#conditional-ternary-operator)
- [Increment / Decrement](#increment--decrement)
- [4. Control Flow](#4-control-flow)
- [IF Statements](#if-statements)
- [Switch Statements](#switch-statements)
- [WHILE Loops](#while-loops)
- [FOR Loops](#for-loops)
- [5. Functions](#5-functions)
- [Named Functions](#named-functions)
- [Anonymous Functions](#anonymous-functions)
- [Closures](#closures)
- [Recursion](#recursion)
- [Function Overloading](#function-overloading)
- [6. Object-Oriented Programming (OOP)](#6-object-oriented-programming-oop)
- [Classes and Instantiation](#classes-and-instantiation)
- [The `THIS` Keyword](#the-this-keyword)
- [Inheritance & The SUPER Keyword](#inheritance--the-super-keyword)
- [Multiple Inheritance & MRO](#multiple-inheritance--mro)
- [Method & Constructor Overloading](#method--constructor-overloading)
- [Polymorphism](#polymorphism)
- [Access Modifiers](#access-modifiers)
- [Static Members](#static-members)
- [7. Built-in Functions](#7-built-in-functions)
- [Error Handling](#error-handling)
- [Running Tests](#running-tests)
- [License](#license)
-----
## About The Language
GladLang is an interpreter for a custom scripting language. It was built as a complete system, demonstrating the core components of a programming language:
* **Lexer:** A tokenizer that scans source code and converts it into a stream of tokens (e.g., `NUMBER`, `STRING`, `IDENTIFIER`, `KEYWORD`, `PLUS`).
* **Parser:** A parser that takes the token stream and builds an Abstract Syntax Tree (AST), representing the code's structure.
* **AST Nodes:** A comprehensive set of nodes that define every syntactic structure in the language (e.g., `BinOpNode`, `IfNode`, `FunDefNode`, `ClassNode`).
* **Runtime:** Defines the `Context` and `SymbolTable` for managing variable scope, context (for tracebacks), and closures.
* **Values:** Defines the language's internal data types (`Number`, `String`, `List`, `Dict`, `Function`, `Class`, `Instance`).
* **Interpreter:** The core engine that walks the AST. It uses a "Zero-Copy" architecture with Dependency Injection for high-performance execution and low memory overhead.
* **Entry Point:** The main file that ties everything together. It handles command-line arguments, runs files, and starts the interactive shell.
-----
## Key Features
GladLang supports a rich, modern feature set:
* **Data Types:** Numbers (int/float, plus **Hex/Octal/Binary** literals), Strings, Lists, Dictionaries, Booleans, and Null.
* **Variables:** Dynamic variable assignment with `LET`.
* **Advanced Assignments:**
* **Destructuring:** Unpack lists in assignments (`LET [x, y] = [1, 2]`) and loops (`FOR [x, y] IN points`).
* **Slicing:** Access sub-lists or substrings easily (`list[0:3]`).
* **String Manipulation:**
* **Interpolation:** JavaScript-style template strings (`` `Hello ${name}` ``).
* **Multi-line Strings:** Triple-quoted strings (`"""..."""`) for large text blocks.
* **Comprehensions:**
* **List Comprehensions:** Supports nesting (`[x+y FOR x IN A FOR y IN B]`).
* **Dictionary Comprehensions:** Create dicts programmatically (`{k: v FOR k IN list}`).
* **Dictionaries:** Key-value data structures (`{'key': 'value'}`).
* **Control Flow:**
* Full support for `IF` / `ELSE IF`, `SWITCH` / `CASE`.
* **Universal Iteration:** `FOR` loops over Lists, Strings (chars), and Dictionaries (keys).
* **Functions:** First-class citizens, Closures, Recursion, Named/Anonymous support, and **Overloading** (by argument count).
* **Object-Oriented:** Full OOP support with `CLASS`, `INHERITS`, Access Modifiers, and **Method/Constructor Overloading**. Object instantiation is **$O(1)$** due to constructor caching.
* **Advanced Inheritance:** Support for **Multiple** and **Hybrid** inheritance with strict C3-style **Method Resolution Order (MRO)**.
* **Parent Delegation:** Full support for `SUPER` in both constructors and overridden methods, plus explicit parent targeting.
* **Static Members:** Java-style `STATIC` fields, methods, and constants (`STATIC FINAL`).
* **Operators:** Ternary Operator (`condition ? true : false`) for concise conditional logic.
* **Enums:** Fully encapsulated, immutable `ENUM` types with auto-incrementing values and explicit assignments.
* **OOP Safety:** Runtime checks for circular inheritance, LSP violations, strict unbound method type-checking, and secure encapsulation.
* **Error Management:** Gracefully handle errors with `TRY`, `CATCH`, and `FINALLY`.
* **Constants:** Declare immutable values using `FINAL`. These are fully protected from shadowing, reassignment, and modification via loops or increment operators.
* **Built-ins:** `PRINTLN`, `PRINT`, `INPUT`, `STR`, `INT`, `FLOAT`, `BOOL`, `LEN`.
* **Error Handling:** Robust, user-friendly runtime error reporting with full tracebacks.
* **Advanced Math:** Compound assignments (`+=`, `*=`), Power (`**`), Modulo (`%`), and automatic float division.
* **Rich Comparisons:** Chained comparisons (`1 < x < 10`), Identity checks (`IS`), and runtime type-checking (`INSTANCEOF`).
* **Boolean Logic:** Strict support for `AND` / `OR` / `NOT`.
-----
## Getting Started
There are several ways to install and run GladLang.
### 1. Installation
#### Option A: Install via Pip (Recommended)
If you just want to use the language, install it via pip:
```bash
pip install gladlang
```
#### Option B: Install from Source (For Developers)
If you want to modify the codebase, clone the repository and install it in **editable mode**:
```bash
git clone --depth 1 https://github.com/gladw-in/gladlang.git
cd gladlang
pip install -e .
```
---
### 2. Usage
Once installed, you can use the global `gladlang` command.
#### Interactive Shell (REPL)
Run the interpreter without arguments to start the shell:
```bash
gladlang
```
#### Running a Script
Pass a file path to execute a script:
```bash
gladlang "tests/test.glad"
```
---
### 3. Running Without Installation (Source)
You can run the interpreter directly from the source code without installing it via pip:
```bash
python run.py "tests/test.glad"
```
---
### 4. Building the Executable
You can build a **standalone executable** (no Python required) using **PyInstaller**:
```bash
pip install pyinstaller
pyinstaller run.py --paths src -F --name gladlang --icon=favicon.ico
```
This will create a single-file executable at `dist/gladlang` (or `gladlang.exe` on Windows).
**Adding to PATH (Optional):**
To run the standalone executable from anywhere:
* **Windows:** Move it to a folder and add that folder to your System PATH variables.
* **Mac/Linux:** Move it to `/usr/local/bin`: `sudo mv dist/gladlang /usr/local/bin/`
-----
## Language Tour (Syntax Reference)
Here is a guide to the GladLang syntax, with examples from the `tests/` directory.
### 1\. Comments
Comments start with `#` and last for the entire line.
```glad
# This is a comment.
LET a = 10 # This is an inline comment
```
### 2\. Variables and Data Types
#### Variables
Variables are assigned using the `LET` keyword. You can also unpack lists directly into variables using **Destructuring**.
```glad
# Immutable Constants
FINAL PI = 3.14159
# Variable Assignment
LET a = 10
LET b = "Hello"
LET my_list = [a, b, 123]
# Destructuring Assignment
LET point = [10, 20]
LET [x, y] = point
PRINTLN x # 10
PRINTLN y # 20
```
#### Numbers
Numbers can be integers or floats. You can also use **Hexadecimal**, **Octal**, and **Binary** literals.
```glad
LET math_result = (1 + 2) * 3 # 9
LET float_result = 10 / 4 # 2.5
# Number Bases
LET hex_val = 0xFF # 255
LET oct_val = 0o77 # 63
LET bin_val = 0b101 # 5
```
#### Strings
Strings can be defined in three ways:
1. **Double Quotes:** Standard strings.
2. **Triple Quotes:** Multi-line strings that preserve formatting.
3. **Backticks:** Template strings supporting interpolation.
```glad
# Standard
LET s = "Hello\nWorld"
# Multi-line
LET menu = """
1. Start
2. Settings
3. Exit
"""
# Indexing
LET char = "GladLang"[0] # "G"
PRINTLN "Hello"[1] # "e"
# Escapes (work in "..." and `...`)
PRINTLN "Line 1\nLine 2"
PRINTLN `Column 1\tColumn 2`
# Interpolation (Template Strings)
LET name = "Glad"
PRINTLN `Welcome back, ${name}!`
PRINTLN `5 + 10 = ${5 + 10}`
```
#### Lists, Slicing & Comprehensions
Lists are ordered collections. You can access elements, slice them, or create new lists dynamically using comprehensions.
```glad
LET nums = [0, 1, 2, 3, 4, 5]
# Indexing & Assignment
PRINTLN nums[1] # 1
LET nums[1] = 100
# Slicing [start:end]
PRINTLN nums[0:3] # [0, 1, 2]
PRINTLN nums[3:] # [3, 4, 5]
# List Comprehension
LET squares = [n ** 2 FOR n IN nums]
PRINTLN squares # [0, 1, 4, 9, 16, 25]
# Nested List Comprehension
LET pairs = [[x, y] FOR x IN [1, 2] FOR y IN [3, 4]]
# Result: [[1, 3], [1, 4], [2, 3], [2, 4]]
```
#### Dictionaries
Dictionaries are key-value pairs enclosed in `{}`. Keys must be Strings or Numbers.
```glad
LET person = {
"name": "Glad",
"age": 25,
"is_admin": TRUE
}
PRINTLN person["name"] # Access: "Glad"
LET person["age"] = 26 # Modify
LET person["city"] = "NYC" # Add new key
# Dictionary Comprehension
LET keys = ["a", "b", "c"]
LET d = {k: 0 FOR k IN keys}
PRINTLN d # {'a': 0, 'b': 0, 'c': 0}
```
#### Booleans
Booleans are `TRUE` and `FALSE`. They are the result of comparisons and logical operations.
```glad
LET t = TRUE
LET f = FALSE
PRINTLN t AND f # 0 (False)
PRINTLN t OR f # 1 (True)
PRINTLN NOT t # 0 (False)
```
**Truthiness:** `0`, `0.0`, `""`, `NULL`, and `FALSE` are "falsy." All other values (including non-empty strings, non-zero numbers, lists, functions, and classes) are "truthy."
#### Null
The `NULL` keyword represents a null or "nothing" value. It is falsy and prints as `0`. Functions with no `RETURN` statement implicitly return `NULL`.
#### Enums
GladLang supports strict, immutable `ENUM` types. Enums can be zero-indexed implicitly, or you can assign explicit values. They also support comma-separated cases.
```glad
# Basic Enum (Implicit 0-indexing)
ENUM Colors
RED
GREEN
BLUE
ENDENUM
PRINTLN Colors.RED # 0
PRINTLN Colors.GREEN # 1
# Explicit & Auto-Incrementing Values
ENUM HTTPStatus
OK = 200
NOT_FOUND = 404
CUSTOM_ERROR # Implicitly becomes 405
ENDENUM
# Comma-Separated
ENUM Days
MON, TUE, WED, THU, FRI
ENDENUM
```
-----
### 3\. Operators
#### Math Operations
GladLang supports standard arithmetic plus advanced operators like Modulo, Floor Division, and Power.
```glad
LET sum = 10 + 5 # 15
LET diff = 20 - 8 # 12
LET prod = 5 * 4 # 20
LET quot = 100 / 2 # 50.0 (Always Float)
PRINTLN 2 ** 3 # Power: 8
PRINTLN 10 // 3 # Floor Division: 3
PRINTLN 10 % 3 # Modulo: 1
# Standard precedence rules apply
PRINTLN 2 + 3 * 4 # 14
PRINTLN 1 + 2 * 3 # 7
PRINTLN (1 + 2) * 3 # 9
```
#### Compound Assignments
GladLang supports syntactic sugar for updating variables in place.
```glad
LET score = 10
score += 5 # score is now 15
score -= 2 # score is now 13
score *= 2 # score is now 26
score /= 2 # score is now 13.0
score %= 5 # score is now 3.0
```
#### Bitwise Operators
Perform binary manipulation on integers.
```glad
LET a = 5 # Binary 101
LET b = 3 # Binary 011
PRINTLN a & b # 1 (AND)
PRINTLN a | b # 7 (OR)
PRINTLN a ^ b # 6 (XOR)
PRINTLN ~a # -6 (NOT)
PRINTLN 1 << 2 # 4 (Left Shift)
PRINTLN 8 >> 2 # 2 (Right Shift)
# Compound Assignment
LET x = 1
x <<= 2 # x is now 4
```
#### Comparisons, Logic & Type Checking
You can compare values, chain comparisons for ranges, check object identity, and perform runtime type-checking.
```glad
# Equality & Inequality
PRINTLN 1 == 1 # True
PRINTLN 1 != 2 # True
# Chained Comparisons (Ranges)
LET age = 25
IF 18 <= age < 30 THEN
PRINTLN "Young Adult"
ENDIF
PRINTLN (10 < 20) AND (10 != 5) # 1 (True)
# Identity ('IS' checks if variables refer to the same object)
LET a = [1, 2]
LET b = a
PRINTLN b IS a # True
# Type Checking ('INSTANCEOF' checks the entire inheritance chain)
CLASS Animal ENDCLASS
CLASS Dog INHERITS Animal ENDCLASS
LET d = NEW Dog()
PRINTLN d INSTANCEOF Dog # 1 (True)
PRINTLN d INSTANCEOF Animal # 1 (True)
# Boolean Operators
IF a AND b THEN
PRINTLN "Both exist"
ENDIF
```
#### Conditional (Ternary) Operator
A concise way to write `IF...ELSE` statements in a single line. It supports nesting and arbitrary expressions.
```glad
LET age = 20
LET type = age >= 18 ? "Adult" : "Minor"
PRINTLN type # "Adult"
# Nested Ternary
LET score = 85
LET grade = score > 90 ? "A" : score > 80 ? "B" : "C"
PRINTLN grade # "B"
```
#### Increment / Decrement
Supports C-style pre- and post-increment/decrement operators on variables and list elements.
```glad
LET i = 5
PRINTLN i++ # 5
PRINTLN i # 6
PRINTLN ++i # 7
PRINTLN i # 7
LET my_list = [10, 20]
PRINTLN my_list[1]++ # 20
PRINTLN my_list[1] # 21
```
-----
### 4\. Control Flow
#### IF Statements
Uses `IF...THEN...ENDIF` syntax.
```glad
IF x > 10 THEN
PRINTLN "Large"
ELSE IF x > 5 THEN
PRINTLN "Medium"
ELSE
PRINTLN "Small"
ENDIF
```
#### Switch Statements
Use `SWITCH` to match a value against multiple possibilities. It supports single values, comma-separated lists for multiple matches, and expressions.
```glad
LET status = 200
SWITCH status
CASE 200:
PRINTLN "OK"
CASE 404, 500:
PRINTLN "Error"
DEFAULT:
PRINTLN "Unknown Status"
ENDSWITCH
```
#### WHILE Loops
Loops while a condition is `TRUE`.
```glad
LET i = 3
WHILE i > 0
PRINTLN "i = " + i
LET i = i - 1
ENDWHILE
# Prints:
# i = 3
# i = 2
# i = 1
```
#### FOR Loops
Iterates over the elements of a list.
```glad
LET my_list = ["apple", "banana", "cherry"]
FOR item IN my_list
PRINTLN "Item: " + item
ENDFOR
# Iterate over Strings (Characters)
FOR char IN "Hi"
PRINTLN char
ENDFOR
# Iterate over Dictionaries (Keys)
LET data = {"x": 10, "y": 20}
FOR key IN data
PRINTLN key + ": " + data[key]
ENDFOR
# Loop Destructuring (Unpacking)
LET points = [[1, 2], [3, 4]]
FOR [x, y] IN points
PRINTLN "x: " + x + ", y: " + y
ENDFOR
```
**`BREAK` and `CONTINUE`** are supported in both `WHILE` and `FOR` loops.
---
### 5. Functions
#### Named Functions
Defined with `DEF...ENDDEF`. Arguments are passed by value. `RETURN` sends a value back.
```glad
DEF add(a, b)
RETURN a + b
ENDDEF
LET sum = add(10, 5)
PRINTLN sum # 15
```
#### Anonymous Functions
Functions can be defined without a name, perfect for assigning to variables.
```glad
LET double = DEF(x)
RETURN x * 2
ENDDEF
PRINTLN double(5) # 10
```
#### Closures
Functions capture variables from their parent scope.
```glad
DEF create_greeter(greeting)
DEF greeter_func(name)
# 'greeting' is "closed over" from the parent
RETURN greeting + ", " + name + "!"
ENDDEF
RETURN greeter_func
ENDDEF
LET say_hello = create_greeter("Hello")
PRINTLN say_hello("Alex") # "Hello, Alex!"
```
#### Recursion
Functions can call themselves.
```glad
DEF fib(n)
IF n <= 1 THEN
RETURN n
ENDIF
RETURN fib(n - 1) + fib(n - 2)
ENDDEF
PRINTLN fib(7) # 13
```
#### Function Overloading
You can define multiple functions with the same name, as long as they accept a different number of arguments (arity).
```glad
DEF add(a, b)
RETURN a + b
ENDDEF
DEF add(a, b, c)
RETURN a + b + c
ENDDEF
PRINTLN add(10, 20) # Calls 2-arg version: 30
PRINTLN add(10, 20, 30) # Calls 3-arg version: 60
```
-----
### 6\. Object-Oriented Programming (OOP)
#### Classes and Instantiation
Use `CLASS...ENDCLASS` to define classes and `NEW` to create instances. The constructor is a method named exactly after the class.
```glad
CLASS Counter
DEF Counter()
THIS.count = 0 # 'THIS' is the instance
ENDDEF
DEF increment()
THIS.count = THIS.count + 1
ENDDEF
DEF get_count()
RETURN THIS.count
ENDDEF
ENDCLASS
LET c = NEW Counter()
c.increment()
PRINTLN c.get_count() # 1
```
#### The `THIS` Keyword
`THIS` is used to access instance attributes and methods. It is automatically available inside all non-static methods; you do not need to pass it as an argument.
#### Inheritance & The SUPER Keyword
Use the `INHERITS` keyword to inherit from parent classes. You can use the `SUPER` keyword to seamlessly call parent constructors and overridden methods. GladLang enforces strict visibility rules (LSP) and prevents circular inheritance loops.
```glad
CLASS Pet
DEF Pet(name)
THIS.name = name
ENDDEF
DEF speak()
RETURN "makes a generic pet sound."
ENDDEF
ENDCLASS
CLASS Dog INHERITS Pet
DEF Dog(name)
# Automatically delegates to the parent constructor
SUPER(name)
ENDDEF
# Override the 'speak' method and extend parent functionality
DEF speak()
PRINTLN THIS.name + " says: Woof, and " + SUPER.speak()
ENDDEF
ENDCLASS
LET my_dog = NEW Dog("Buddy")
my_dog.speak() # "Buddy says: Woof, and makes a generic pet sound."
```
#### Multiple Inheritance & MRO
GladLang supports multiple and hybrid inheritance (solving the Diamond Problem). When inheriting from multiple classes, GladLang establishes a **Method Resolution Order (MRO)** that prioritizes parents from left to right.
If you want to bypass the default `SUPER()` MRO (for example, to initialize multiple parent classes explicitly), you can call parent constructors or methods directly using the Class name.
```glad
CLASS Animal
DEF Animal()
PRINTLN("Animal Constructor")
ENDDEF
DEF speak()
RETURN "Generic Sound"
ENDDEF
ENDCLASS
CLASS Human
DEF Human()
PRINTLN("Human Constructor")
ENDDEF
DEF speak()
RETURN "Hello"
ENDDEF
ENDCLASS
CLASS Dog INHERITS Animal, Human
DEF Dog()
PRINTLN("--- Initializing Dog ---")
# STYLE 1: Explicit Calls (Great for Multiple Inheritance)
Animal.Animal()
Human.Human()
# STYLE 2: SUPER Call (Great for Single Inheritance / MRO)
# This will call 'Animal' again because it's first in MRO
PRINTLN("--- Calling SUPER() ---")
SUPER()
ENDDEF
DEF speak()
# Mix both styles in methods too
RETURN "Woof! " + SUPER.speak() + " " + Human.speak()
ENDDEF
ENDCLASS
LET d = NEW Dog()
# Expected:
# --- Initializing Dog ---
# Animal Constructor
# Human Constructor
# --- Calling SUPER() ---
# Animal Constructor
PRINTLN("\n[Speaking]")
PRINTLN(d.speak())
# Expected: Woof! Generic Sound Hello
```
#### Method & Constructor Overloading
Classes support overloading for both regular methods and constructors. This allows for flexible object creation (e.g., Copy Constructors).
```glad
CLASS Vector
# Default Constructor
DEF Vector()
THIS.x = 0
THIS.y = 0
ENDDEF
# Overloaded Constructor
DEF Vector(x, y)
THIS.x = x
THIS.y = y
ENDDEF
# Copy Constructor
DEF Vector(other)
THIS.x = other.x
THIS.y = other.y
ENDDEF
ENDCLASS
LET v1 = NEW Vector() # [0, 0]
LET v2 = NEW Vector(10, 20) # [10, 20]
LET v3 = NEW Vector(v2) # [10, 20] (Copy of v2)
```
#### Polymorphism
When a base class method calls another method on `THIS`, it will correctly use the **child's overridden version**.
```glad
CLASS Pet
DEF introduce()
PRINTLN "I am a pet and I say:"
THIS.speak() # This will call the child's 'speak'
ENDDEF
DEF speak()
PRINTLN "(Generic pet sound)"
ENDDEF
ENDCLASS
CLASS Cat INHERITS Pet
DEF speak()
PRINTLN "Meow!"
ENDDEF
ENDCLASS
LET my_cat = NEW Cat("Whiskers")
my_cat.introduce()
# Prints:
# I am a pet and I say:
# Meow!
```
#### Access Modifiers
You can control the visibility of methods and attributes using `PUBLIC`, `PRIVATE`, and `PROTECTED`.
* **Encapsulation:** Private attributes are name-mangled to prevent collisions.
* **Singleton Support:** Constructors can be private to force factory usage.
```glad
CLASS SecureData
DEF SecureData(data)
PRIVATE THIS.data = data
ENDDEF
PUBLIC DEF get_data()
RETURN THIS.data
ENDDEF
ENDCLASS
# External access to 'data' will raise a Runtime Error.
```
#### Static Members
GladLang supports Java-style static fields and methods. These belong to the class itself rather than instances.
* **Static Fields:** Shared across all instances.
* **Static Constants:** `STATIC FINAL` creates class-level constants.
* **Static Privacy:** `STATIC PRIVATE` fields are only visible within the class.
```glad
CLASS Config
# A constant shared by everyone
STATIC FINAL MAX_USERS = 100
# A private static variable
STATIC PRIVATE LET internal_count = 0
STATIC PUBLIC DEF increment()
Config.internal_count = Config.internal_count + 1
RETURN Config.internal_count
ENDDEF
ENDCLASS
# Access directly via the Class name
PRINTLN Config.MAX_USERS # 100
PRINTLN Config.increment() # 1
```
-----
### 7\. Built-in Functions
* `PRINTLN(value)`: Prints a value to the console **with** a new line (Standard output).
* `PRINT(value)`: Prints a value **without** a new line (Useful for prompts).
* `INPUT()`: Reads a line of text from the user as a String.
* `STR(value)`: Casts a value to a String.
* `INT(value)`: Casts a String or Float to an Integer.
* `FLOAT(value)`: Casts a String or Integer to a Float.
* `BOOL(value)`: Casts a value to its Boolean representation (`TRUE` or `FALSE`).
* `LEN(value)`: Returns the length of a String, List, Dict, or Number. Alias: `LENGTH()`.
-----
## Error Handling
You can handle runtime errors gracefully or throw your own exceptions.
```glad
TRY
# Attempt dangerous code
LET result = 10 / 0
PRINTLN result
CATCH error
# Handle the error
PRINTLN "Caught an error: " + error
FINALLY
# Always runs
PRINTLN "Cleanup complete."
ENDTRY
# Manually throwing errors
IF age < 0 THEN
THROW "Age cannot be negative!"
ENDIF
```
GladLang features detailed error handling and prints full tracebacks for runtime errors, making debugging easy.
**Example: Name Error** (`test_name_error.glad`)
```
Traceback (most recent call last):
File test_name_error.glad, line 6, in <program>
Runtime Error: 'b' is not defined
```
**Example: Type Error** (`test_type_error.glad` with input "5")
```
Traceback (most recent call last):
File test_type_error.glad, line 6, in <program>
Runtime Error: Illegal operation
```
**Example: Argument Error** (`test_arg_error.glad`)
```
Traceback (most recent call last):
File test_arg_error.glad, line 7, in <program>
File test_arg_error.glad, line 4, in add
Runtime Error: Incorrect argument count for 'add'. Expected 2, got 3
```
-----
## Running Tests
The `tests/` directory contains a comprehensive suite of `.glad` files to test every feature of the language. You can run any test by executing it with the interpreter:
```bash
gladlang "test_closures.glad"
gladlang "test_lists.glad"
gladlang "test_polymorphism.glad"
```
## License
You can use this under the MIT License. See [LICENSE](LICENSE) for more details.
| text/markdown | Glad432 | null | null | null | Copyright (c) 2025 - present GLAD432
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | interpreter, language, compiler, educational | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://github.com/gladw-in/gladlang",
"documentation, https://gladlang.pages.dev/",
"source, https://github.com/gladw-in/gladlang"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T15:48:50.633776 | gladlang-0.1.9.tar.gz | 49,839 | 2e/78/e242ceffcae3350b72af38bd2baaa17ada8eab3e5fd0c9d7ca843bdeaffc/gladlang-0.1.9.tar.gz | source | sdist | null | false | 28786931d63f69792f96c3868363a52e | e9c222b2d649fd6cf44b5fb387d972da94a8a0be890e3c001344aea9ca93bd83 | 2e78e242ceffcae3350b72af38bd2baaa17ada8eab3e5fd0c9d7ca843bdeaffc | null | [
"LICENSE"
] | 233 |
2.4 | graph-universe | 0.1.2 | A library for generating synthetic graph families for inductive generalization experiments of graph learning models. | # GraphUniverse: Enabling Systematic Evaluation of Inductive Generalization
[](https://pypi.org/project/graph-universe/)
[](https://pypi.org/project/graph-universe/)
[](https://github.com/LouisVanLangendonck/GraphUniverse/blob/main/LICENSE)
[]()
**Generate families of graphs with finely controllable properties for systematic evaluation of inductive graph learning models.**
[Quick Start](#quick-start) | [Interactive UI](#interactive-ui) | [Validation](#validation--analysis) | [Paper Experiments](#for-researchers--contributors)
![Example Graph Family][graphplot]
[graphplot]: https://raw.githubusercontent.com/LouisVanLangendonck/GraphUniverse/main/assets/ExampleGraphFamily.png "Example Graph Family Visualization"
## Key Features
Synthetic graph learning benchmarks are limited to **single-graph, transductive settings**. GraphUniverse enables the first systematic evaluation of **inductive generalization** by generating entire families of graphs with:
- **Consistent Semantics**: Communities maintain stable identities across graphs
- **Fine-grained Control**: Tune homophily, degree distributions, community structure
- **Scalable Generation**: Linear scaling, thousands of graphs per minute
- **Validated Framework**: Comprehensive parameter sensitivity analysis
- **Interactive Tool**: Web-based exploration and visualization and Downloadable Pyg-dataset object ready to train!
![GraphUniverse Methodology Graphical Overview][logo]
[logo]: https://raw.githubusercontent.com/LouisVanLangendonck/GraphUniverse/main/assets/GraphUniverseMethodologyClean.png "Methodology Overview"
---
## Installation
Install from PyPI:
```bash
pip install graph-universe
```
**For the interactive UI (streamlit) and visualization tools:**
```bash
pip install graph-universe[viz]
```
**Optional extras:**
- `[viz]` - Streamlit UI + seaborn visualization tools
- `[dev]` - Development dependencies (testing, linting)
- `[all]` - Everything (includes documentation tools)
**Install from source:**
```bash
git clone https://github.com/LouisVanLangendonck/GraphUniverse.git
cd GraphUniverse
pip install -e ".[dev]"
```
---
## Interactive UI
After installing with `[viz]`, launch the interactive dashboard:
```bash
graph-universe-ui
```
**Hosted demo:** Try it online at [graphuniverse.streamlit.app](https://graphuniverse.streamlit.app/)
**Launch from Python:**
```python
from graph_universe import launch_ui
launch_ui() # Opens browser, press Ctrl+C to stop
```
---
## Quick Start
### Option 1: Python API with Individual Classes
```python
from graph_universe import GraphUniverse, GraphFamilyGenerator
# Create universe with 8 communities and 10-dimensional features
universe = GraphUniverse(K=8, edge_propensity_variance=0.3, feature_dim=10)
# Generate family with full parameter control
family = GraphFamilyGenerator(
universe=universe,
n_nodes_range=(35, 50),
n_communities_range=(2, 6),
homophily_range=(0.2, 0.8),
avg_degree_range=(2.0, 10.0),
power_law_exponent_range=(2.0, 5.0),
degree_separation_range=(0.1, 0.7),
seed=42
)
# Generate 30 graphs
family.generate_family(n_graphs=30, show_progress=True)
print(f"Generated {len(family.graphs)} graphs!")
# Convert to PyTorch Geometric format for training
pyg_graphs = family.to_pyg_graphs(task="community_detection")
```
### Option 2: Config-Driven Workflow
Create `config.yaml`:
```yaml
universe_parameters:
K: 10
edge_propensity_variance: 0.5
feature_dim: 16
center_variance: 1.0
cluster_variance: 0.3
seed: 42
family_parameters:
n_graphs: 100
n_nodes_range: [25, 200]
n_communities_range: [3, 7]
homophily_range: [0.1, 0.9]
avg_degree_range: [2.0, 8.0]
power_law_exponent_range: [2.0, 3.0]
degree_separation_range: [0.4, 0.8]
seed: 42
task: "community_detection"
```
Then load and generate:
```python
import yaml
from graph_universe import GraphUniverseDataset
with open("config.yaml") as f:
config = yaml.safe_load(f)
dataset = GraphUniverseDataset(root="./data", parameters=config)
print(f"Generated dataset with {len(dataset)} graphs!")
```
---
## Validation & Analysis
GraphUniverse includes built-in validation to ensure generated graphs match target properties:
```python
# Validate standard graph properties
family_properties = family.analyze_graph_family_properties()
for property_name in ['node_counts', 'avg_degrees', 'homophily_levels']:
values = family_properties[property_name]
print(f"{property_name}: mean={np.mean(values):.3f}")
# Analyze within-graph community signals (fits Random Forest per graph)
family_signals = family.analyze_graph_family_signals()
for signal in ['structure_signal', 'feature_signal', 'degree_signal']:
values = family_signals[signal]
print(f"{signal}: mean={np.mean(values):.3f}")
# Measure between-graph consistency
family_consistency = family.analyze_graph_family_consistency()
for metric in ['structure_consistency', 'feature_consistency', 'degree_consistency']:
value = family_consistency[metric]
print(f"{metric}: {value:.3f}")
```
---
## Documentation & Support
- **GitHub Repository**: https://github.com/LouisVanLangendonck/GraphUniverse
- **PyPI Package**: https://pypi.org/project/graph-universe/
- **Issue Tracker**: https://github.com/LouisVanLangendonck/GraphUniverse/issues
- **Changelog**: https://github.com/LouisVanLangendonck/GraphUniverse/blob/main/CHANGELOG.md
---
## Citation
If you use GraphUniverse in your research, please cite:
```bibtex
@article{van2025graphuniverse,
title={GraphUniverse: Enabling Systematic Evaluation of Inductive Generalization},
author={Van Langendonck, Louis and Bern{\'a}rdez, Guillermo and Miolane, Nina and Barlet-Ros, Pere},
journal={arXiv preprint arXiv:2509.21097},
year={2025}
}
```
---
## For Researchers & Contributors
The sections below contain resources for reproducing paper experiments and contributing to development.
### Reproducing Paper Experiments
Clone the repository to access validation and experiment scripts:
```bash
git clone https://github.com/LouisVanLangendonck/GraphUniverse.git
cd GraphUniverse
pip install -e ".[dev]"
```
**Run parameter sensitivity validation (reproduces paper results):**
```bash
python experiments/validate_parameter_sensitivity.py --n-random-samples 100 --n-graphs 30
```
**Run scalability experiments:**
```bash
python experiments/scalability_experiment.py
```
---
## License
MIT License - see [LICENSE](https://github.com/LouisVanLangendonck/GraphUniverse/blob/main/LICENSE) for details.
Copyright (c) 2025 Louis Van Langendonck and Guillermo Bernardez
| text/markdown | null | Louis Van Langendonck <louis.van.langendonck@upc.edu>, Guillermo Bernardez <guillermo_bernardez@ucsb.edu> | null | null | MIT License
Copyright (c) 2025-2026 Louis Van Langendonck, Guillermo Bernardez
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| graph, neural-networks, pytorch, graph-generation, community-detection, synthetic-data, graph-foundation-models | [
"License :: OSI Approved :: MIT License",
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Na... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy<3.0,>=1.21.0",
"scipy<2.0,>=1.7.0",
"networkx<4.0,>=2.6.0",
"torch<3.0,>=2.0.0",
"torch-geometric>=2.3.0",
"scikit-learn<2.0,>=1.0.0",
"matplotlib<4.0,>=3.5.0",
"tqdm>=4.62.0",
"pyyaml<7.0,>=6.0",
"seaborn>=0.12.0; extra == \"viz\"",
"streamlit>=1.28.0; extra == \"viz\"",
"ipykernel>=6.... | [] | [] | [] | [
"Repository, https://github.com/LouisVanLangendonck/GraphUniverse",
"Homepage, https://github.com/LouisVanLangendonck/GraphUniverse",
"Bug Tracker, https://github.com/LouisVanLangendonck/GraphUniverse/issues",
"Changelog, https://github.com/LouisVanLangendonck/GraphUniverse/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.11.3 | 2026-02-18T15:48:32.219404 | graph_universe-0.1.2.tar.gz | 51,335 | 52/5b/381f556e92450086f55df8e211c518541be05662e5a10886bb409cc93939/graph_universe-0.1.2.tar.gz | source | sdist | null | false | 0f4dc4de22021879c8409ce6c49313a0 | 70e14d1c163fb6411e985413597bdd1400cd37c604e38f0216d35f45d36f4f40 | 525b381f556e92450086f55df8e211c518541be05662e5a10886bb409cc93939 | null | [
"LICENSE"
] | 256 |
2.4 | pytorch-ir | 0.2.2 | PyTorch IR extraction framework for compiler backends | [한국어](README.ko.md)
# IR Extraction Framework
[](https://pypi.org/project/pytorch-ir/)
[](https://pypi.org/project/pytorch-ir/)
[](LICENSE)
[](https://sweetcocoa.github.io/pytorch-ir/)
[](https://github.com/sweetcocoa/pytorch-ir/actions/workflows/publish.yml)
A framework for extracting compiler-backend IR (Intermediate Representation) from PyTorch models.
## Quick Start
### Installation
```bash
# Using uv (recommended)
uv sync
# Or using pip
pip install -e .
```
### Basic Usage
```python
import torch
import torch.nn as nn
from torch_ir import extract_ir, ir_to_mermaid
class SimpleMLP(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(4, 8)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(8, 2)
def forward(self, x):
return self.fc2(self.relu(self.fc1(x)))
# 1. Create model on meta device (no actual weights loaded)
with torch.device('meta'):
model = SimpleMLP()
model.eval()
# 2. Extract IR
example_inputs = (torch.randn(1, 4, device='meta'),)
ir = extract_ir(model, example_inputs)
# 3. Save IR
ir.save("model_ir.json")
# 4. Visualize IR
print(ir_to_mermaid(ir))
```
### Extracted IR
The IR above produces the following JSON. Each node records its ATen op type, input/output tensor metadata, and producer-consumer relationships — weight values are not included.
```json
{
"model_name": "SimpleMLP",
"graph_inputs": [{"name": "x", "shape": [1, 4], "dtype": "float32"}],
"graph_outputs": [{"name": "linear_1", "shape": [1, 2], "dtype": "float32"}],
"weights": [
{"name": "fc1.weight", "shape": [8, 4], "dtype": "float32"},
{"name": "fc1.bias", "shape": [8], "dtype": "float32"},
{"name": "fc2.weight", "shape": [2, 8], "dtype": "float32"},
{"name": "fc2.bias", "shape": [2], "dtype": "float32"}
],
"nodes": [
{
"name": "linear", "op_type": "aten.linear.default",
"inputs": [{"name": "x", "shape": [1, 4]}, {"name": "p_fc1_weight", "shape": [8, 4]}, {"name": "p_fc1_bias", "shape": [8]}],
"outputs": [{"name": "linear", "shape": [1, 8]}]
},
{
"name": "relu", "op_type": "aten.relu.default",
"inputs": [{"name": "linear", "shape": [1, 8]}],
"outputs": [{"name": "relu", "shape": [1, 8]}]
},
{
"name": "linear_1", "op_type": "aten.linear.default",
"inputs": [{"name": "relu", "shape": [1, 8]}, {"name": "p_fc2_weight", "shape": [2, 8]}, {"name": "p_fc2_bias", "shape": [2]}],
"outputs": [{"name": "linear_1", "shape": [1, 2]}]
}
]
}
```
### IR Visualization
`ir_to_mermaid()` renders the IR as a Mermaid flowchart. Weight inputs are shown as dashed edges:
```mermaid
flowchart TD
input_x[/"Input: x<br/>1x4"/]
op_linear["linear<br/>1x8"]
input_x -->|"1x4"| op_linear
w_p_fc1_weight[/"p_fc1_weight<br/>8x4"/]
w_p_fc1_weight -.->|"8x4"| op_linear
w_p_fc1_bias[/"p_fc1_bias<br/>8"/]
w_p_fc1_bias -.->|"8"| op_linear
op_relu["relu<br/>1x8"]
op_linear -->|"1x8"| op_relu
op_linear_1["linear<br/>1x2"]
op_relu -->|"1x8"| op_linear_1
w_p_fc2_weight[/"p_fc2_weight<br/>2x8"/]
w_p_fc2_weight -.->|"2x8"| op_linear_1
w_p_fc2_bias[/"p_fc2_bias<br/>2"/]
w_p_fc2_bias -.->|"2"| op_linear_1
output_0[\"Output<br/>1x2"/]
op_linear_1 --> output_0
```
### Verification
```python
# Compare original model output with IR execution result
original_model = SimpleMLP()
original_model.load_state_dict(torch.load('weights.pt'))
original_model.eval()
test_input = torch.randn(1, 4)
is_valid, report = verify_ir_with_state_dict(
ir=ir,
state_dict=original_model.state_dict(),
original_model=original_model,
test_inputs=(test_input,),
)
print(f"Verification: {'PASSED' if is_valid else 'FAILED'}")
```
## Documentation
- [Concepts & Architecture](docs/concepts.md) - Core concepts and design of the framework
- [Setup](docs/setup.md) - Installation and development environment configuration
- [Usage Guide](docs/usage.md) - Detailed usage and examples
- [API Reference](docs/api/index.md) - Public API documentation
- [Operator Support](docs/operators.md) - Supported ATen operators
- [Extension Guide](docs/extending.md) - How to add custom operators
## Dependencies
- Python >= 3.10
- PyTorch >= 2.1
## Running Tests
```bash
# Basic tests
uv run pytest tests/ -v
# Comprehensive tests (all test models)
uv run pytest tests/test_comprehensive.py -v
# Generate reports
uv run pytest tests/test_comprehensive.py --generate-reports --output reports/
# Filter by category
uv run pytest tests/test_comprehensive.py -k "attention" -v
# Run via CLI
uv run python -m tests --output reports/
uv run python -m tests --list-models
uv run python -m tests --category attention
```
## Features
- **Weight-free extraction**: Uses meta tensors to extract only graph structure without loading actual weights into memory
- **torch.export based**: Uses TorchDynamo-based tracing, the officially recommended PyTorch approach
- **Complete metadata**: Automatically extracts shape and dtype information for all tensors
- **IR execution & verification**: Execute the extracted IR and verify results match the original model
- **Extensible design**: Provides a custom operator registration mechanism
## License
MIT License
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.1",
"pre-commit>=4.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"torchvision>=0.16; extra == \"dev\"",
"ty>=0.0.15; extra == \"dev\"",
"mkdocs-material>=9.5; extra == \"docs\"",
"mkdocs-static-i18n>=1.0; extra == \"docs\"",
"mkdocs>=1.6; extra == \"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:48:16.542637 | pytorch_ir-0.2.2.tar.gz | 170,232 | 22/dc/5ac8393dab10bb153207f0bce818e74cb836218b4c0c3359e7ec7cb2e8b4/pytorch_ir-0.2.2.tar.gz | source | sdist | null | false | 3a136c556e0b9ef7aa5512eb4f6b10ac | 46e6faaf61e7e988d13b6ff79b3328e752d575abfd07373772cd6cecd2c4b7f0 | 22dc5ac8393dab10bb153207f0bce818e74cb836218b4c0c3359e7ec7cb2e8b4 | MIT | [
"LICENSE"
] | 213 |
2.4 | docling-graph | 1.4.4 | A tool to convert documents into knowledge graphs using Docling. | <p align="center"><br>
<a href="https://github.com/docling-project/docling-graph">
<img loading="lazy" alt="Docling Graph" src="docs/assets/logo.png" width="280"/>
</a>
</p>
# Docling Graph
[](https://docling-project.github.io/docling-graph/)
[](https://pypi.org/project/docling-graph/)
[](https://www.python.org/downloads/)
[](https://github.com/astral-sh/uv)
[](https://github.com/astral-sh/ruff)
[](https://opensource.org/licenses/MIT)
[](https://pydantic.dev)
[](https://github.com/docling-project/docling)
[](https://networkx.org/)
[](https://typer.tiangolo.com/)
[](https://github.com/Textualize/rich)
[](https://vllm.ai/)
[](https://ollama.ai/)
[](https://lfaidata.foundation/projects/)
[](https://www.bestpractices.dev/projects/11598)
Docling-Graph turns documents into validated **Pydantic** objects, then builds a **directed knowledge graph** with explicit semantic relationships.
This transformation enables high-precision use cases in **chemistry, finance, and legal** domains, where AI must capture exact entity connections (compounds and reactions, instruments and dependencies, properties and measurements) **rather than rely on approximate text embeddings**.
This toolkit supports two extraction paths: **local VLM extraction** via Docling, and **LLM-based extraction** routed through **LiteLLM** for local runtimes (vLLM, Ollama) and API providers (Mistral, OpenAI, Gemini, IBM WatsonX), all orchestrated through a flexible, config-driven pipeline.
## Key Capabilities
- **✍🏻 Input formats:** [Docling](https://docling-project.github.io/docling/usage/supported_formats/)’s supported inputs: PDF, images, markdown, Office, HTML, and more.
- **🧠 Extraction:** [LLM](docs/fundamentals/pipeline-configuration/backend-selection.md) or [VLM](docs/fundamentals/pipeline-configuration/backend-selection.md) backends, with [chunking](docs/fundamentals/extraction-process/chunking-strategies.md) and [processing modes](docs/fundamentals/pipeline-configuration/processing-modes.md).
- **💎 Graphs:** Pydantic → [NetworkX](docs/fundamentals/graph-management/graph-conversion.md) directed graphs with stable IDs and edge metadata.
- **📦 Export:** [CSV](docs/fundamentals/graph-management/export-formats.md#csv-export), [Cypher](docs/fundamentals/graph-management/export-formats.md#cypher-export), and other KG-friendly formats.
- **🔍 Visualization:** [Interactive HTML](docs/fundamentals/graph-management/visualization.md) and Markdown reports.
### Latest Changes
- **🪜 Multi-pass extraction:** [Delta](docs/fundamentals/extraction-process/delta-extraction.md) and [staged](docs/fundamentals/extraction-process/staged-extraction.md) contracts (experimental).
- **📐 Structured extraction:** LLM output is schema-enforced by default; see [CLI](docs/usage/cli/convert-command.md#structured-output-mode) and [API](docs/usage/api/llm-model-config.md) to disable.
- **✨ LiteLLM:** Single [interface](docs/reference/llm-clients.md) for vLLM, OpenAI, Mistral, WatsonX, and more.
- **🐛 Trace capture:** [Debug exports](docs/usage/advanced/trace-data-debugging.md) for extraction and fallback diagnostics.
### Coming Soon
* 🧩 **Interactive Template Builder:** Guided workflows for building Pydantic templates.
* 🧲 **Ontology-Based Templates:** Match content to the best Pydantic template using semantic similarity.
* 💾 **Graph Database Integration:** Export data straight into `Neo4j`, `ArangoDB`, and similar databases.
## Quick Start
### Requirements
- Python 3.10 or higher
### Installation
```bash
pip install docling-graph
```
This installs the core package with VLM support and LiteLLM for LLM providers. For detailed installation instructions (including optional extras and GPU setup), see [Installation Guide](docs/fundamentals/installation/index.md).
### API Key Setup (Remote Inference)
```bash
export OPENAI_API_KEY="..." # OpenAI
export MISTRAL_API_KEY="..." # Mistral
export GEMINI_API_KEY="..." # Google Gemini
# IBM WatsonX
export WATSONX_API_KEY="..." # IBM WatsonX API Key
export WATSONX_PROJECT_ID="..." # IBM WatsonX Project ID
export WATSONX_URL="..." # IBM WatsonX URL (optional)
```
### Basic Usage
#### CLI
```bash
# Initialize configuration
docling-graph init
# Convert document from URL (each line except the last must end with \)
docling-graph convert "https://arxiv.org/pdf/2207.02720" \
--template "docs.examples.templates.rheology_research.ScholarlyRheologyPaper" \
--processing-mode "many-to-one" \
--extraction-contract "staged" \
--debug
# Visualize results
docling-graph inspect outputs
```
#### Python API - Default Behavior
```python
from docling_graph import run_pipeline, PipelineContext
from docs.examples.templates.rheology_research import ScholarlyRheologyPaper
# Create configuration
config = {
"source": "https://arxiv.org/pdf/2207.02720",
"template": ScholarlyRheologyPaper,
"backend": "llm",
"inference": "remote",
"processing_mode": "many-to-one",
"extraction_contract": "staged", # robust for smaller models
"provider_override": "mistral",
"model_override": "mistral-medium-latest",
"structured_output": True, # default
"use_chunking": True,
}
# Run pipeline - returns data directly, no files written to disk
context: PipelineContext = run_pipeline(config)
# Access results
graph = context.knowledge_graph
models = context.extracted_models
metadata = context.graph_metadata
print(f"Extracted {len(models)} model(s)")
print(f"Graph: {graph.number_of_nodes()} nodes, {graph.number_of_edges()} edges")
```
For debugging, use `--debug` with the CLI to save intermediate artifacts to disk; see [Trace Data & Debugging](docs/usage/advanced/trace-data-debugging.md). For more examples, see [Examples](docs/usage/examples/index.md).
## Pydantic Templates
Templates define both the **extraction schema** and the resulting **graph structure**.
```python
from pydantic import BaseModel, Field
from docling_graph.utils import edge
class Person(BaseModel):
"""Person entity with stable ID."""
model_config = {
'is_entity': True,
'graph_id_fields': ['last_name', 'date_of_birth']
}
first_name: str = Field(description="Person's first name")
last_name: str = Field(description="Person's last name")
date_of_birth: str = Field(description="Date of birth (YYYY-MM-DD)")
class Organization(BaseModel):
"""Organization entity."""
model_config = {'is_entity': True}
name: str = Field(description="Organization name")
employees: list[Person] = edge("EMPLOYS", description="List of employees")
```
For complete guidance, see:
- [Schema Definition Guide](docs/fundamentals/schema-definition/index.md)
- [Template Basics](docs/fundamentals/schema-definition/template-basics.md)
- [Example Templates](docs/examples/README.md)
## Documentation
Comprehensive documentation can be found on [Docling Graph's Page](https://ibm.github.io/docling-graph/).
### Documentation Structure
The documentation follows the docling-graph pipeline stages:
1. [Introduction](docs/introduction/index.md) - Overview and core concepts
2. [Installation](docs/fundamentals/installation/index.md) - Setup and environment configuration
3. [Schema Definition](docs/fundamentals/schema-definition/index.md) - Creating Pydantic templates
4. [Pipeline Configuration](docs/fundamentals/pipeline-configuration/index.md) - Configuring the extraction pipeline
5. [Extraction Process](docs/fundamentals/extraction-process/index.md) - Document conversion and extraction
6. [Graph Management](docs/fundamentals/graph-management/index.md) - Exporting and visualizing graphs
7. [CLI Reference](docs/usage/cli/index.md) - Command-line interface guide
8. [Python API](docs/usage/api/index.md) - Programmatic usage
9. [Examples](docs/usage/examples/index.md) - Working code examples
10. [Advanced Topics](docs/usage/advanced/index.md) - Performance, testing, error handling
11. [API Reference](docs/reference/index.md) - Detailed API documentation
12. [Community](docs/community/index.md) - Contributing and development guide
## Contributing
We welcome contributions! Please see:
- [Contributing Guidelines](.github/CONTRIBUTING.md) - How to contribute
- [Development Guide](docs/community/index.md) - Development setup
### Development Setup
```bash
# Clone and setup
git clone https://github.com/docling-project/docling-graph
cd docling-graph
# Install with dev dependencies
uv sync --extra dev
# Run Execute pre-commit checks
uv run pre-commit run --all-files
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Acknowledgments
Docling Graph builds on outstanding open-source projects:
- [Docling](https://github.com/docling-project/docling) - document conversion and VLM extraction
- [Pydantic](https://pydantic.dev) - schema definition and validation
- [NetworkX](https://networkx.org/) - graph construction and analysis
- [LiteLLM](https://github.com/BerriAI/litellm) - unified LLM provider interface
- [SpaCy](https://spacy.io/) - semantic entity resolution in delta extraction
- [Cytoscape](https://js.cytoscape.org/) - interactive graph visualization
## IBM ❤️ Open Source AI
Docling Graph has been brought to you by IBM.
| text/markdown | null | Ayoub El Bouchtili <ayoub.elbouchtili@fr.ibm.com>, Michele Dolfi <dol@zurich.ibm.com>, Maxime Gillot <Maxime.Gillot@ibm.com>, Sophie Lang <sophie.lang@de.ibm.com>, Guilhaume Leroy Meline <guilhaume@fr.ibm.com>, Peter Staar <taa@zurich.ibm.com> | null | null | MIT License | docling, knowledge-graph, nlp, pdf, graph | [
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"docling[vlm]<3.0.0,>=2.70.0",
"docling-core[chunking,chunking-openai]<3.0.0,>=2.50.0",
"pydantic<3.0.0,>=2.0.0",
"networkx<4.0.0,>=3.0.0",
"rich<15,>=13",
"typer[all]<1.0.0,>=0.12",
"python-dotenv<2.0,>=1.0",
"litellm<2.0.0,>=1.0.0",
"pyyaml<7.0,>=6.0",
"aiofiles<26.0.0,>=24.0.0",
"spacy; extra... | [] | [] | [] | [
"homepage, https://github.com/docling-project/docling-graph",
"repository, https://github.com/docling-project/docling-graph",
"issues, https://github.com/docling-project/docling-graph/issues",
"changelog, https://github.com/docling-project/docling-graph/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:48:06.879474 | docling_graph-1.4.4.tar.gz | 190,202 | 49/2f/ff7fce674acdd71feca86c0ce9c547c5b4a8800085807af9b82ffc1d67e7/docling_graph-1.4.4.tar.gz | source | sdist | null | false | b140fd5b21f902a68004d07e7eda5aae | c8f0572db2b340a12fe8389b5098100e57730777d8d841a61871f27219fa7605 | 492fff7fce674acdd71feca86c0ce9c547c5b4a8800085807af9b82ffc1d67e7 | null | [
"LICENSE"
] | 240 |
2.4 | hhcli | 0.6.3 | Неофициальный CLI-клиент для поиска работы и откликов на hh.ru. | # hhcli
[](https://pypi.org/project/hhcli/)
[](https://opensource.org/licenses/MIT)
[](https://t.me/hhcli)
[](https://pepy.tech/projects/hhcli)
hhcli — это неофициальный CLI-клиент для поиска работы и откликов на hh.ru, позволяющий искать вакансии, просматривать их, отмечать понравившиеся и откликаться на них в интерфейсе терминала.
> У приложения есть [канал в Telegram](https://t.me/hhcli), где публикуются основные новости проекта.
.
## Ключевые возможности
- Локальная база данных SQLite, хранящая профили, историю, кэш вакансий, справочники и т.д.
- Кроссплатформенный TUI-интерфейс (Linux, Windows).
- Профили для разных аккаунтов с поддержкой нескольких резюме внутри одного аккаунта.
- Два режима поиска: автоматический по рекомендациям hh.ru и ручной с настраиваемыми фильтрами.
- Отклик на несколько выбранных вакансий с отправкой сопроводительного письма.
- Хранение истории всех откликов и переписок с работодателями.
- Переписка с работодателями и форматирование текста сообщений прямо внутри приложения.
- Фильтры и отсев дубликатов вакансий (спама по городам).
- Подсветка компаний, в которые ранее был отклик.
- Подсветка вакансий, на которые был отклик (по названию+компания или по id вакансии).
- Приложение чистит базу данных от устаревшего кэша вакансий (старше 5 дней) и логов (старше 20 дней).
- Выбор и возможность создания собственных тем оформления.
## Установка
<details markdown="1" style="margin-bottom: 1.5rem;">
<summary><h3 style="display:inline">Linux</h3></summary>
#### Ubuntu / Debian / Mint (apt)
```bash
sudo apt update && sudo apt install -y
python3 python3-pip pipx git
python3-gi gir1.2-webkit2-4.1 gir1.2-gtk-3.0 libwebkit2gtk-4.1-0
pipx install hhcli --system-site-packages
python3 -m pipx ensurepath
# Перезапустите терминал прежде, чем запускать программу
```
#### Arch / Manjaro (pacman)
```bash
sudo pacman -Syu python python-pip pipx git webkit2gtk python-gobject gtk3
pipx install hhcli --system-site-packages
python3 -m pipx ensurepath
# Перезапустите терминал прежде, чем запускать программу
```
#### Fedora / RHEL / Rocky (dnf / yum)
```bash
sudo dnf install python3 python3-pip pipx git # либо sudo yum install ...
# Пакеты WebKit2GTK могут называться webkit2gtk4.1 / webkit2gtk3 / pywebkitgtk
sudo dnf install webkit2gtk4.1 gtk3 gobject-introspection
pipx install hhcli --system-site-packages
python3 -m pipx ensurepath
# Перезапустите терминал прежде, чем запускать программу
```
#### Другие дистрибутивы
- Установите Python ≥3.9 и `pipx` из стандартного репозитория.
- Установите WebKit2GTK+ и Python GObject bindings (названия пакетов зависят от дистрибутива).
- Выполните `pipx install hhcli --system-site-packages`.
- Если `pipx` отсутствует, можно поставить локально: `pip install --user pipx && pipx ensurepath`.
</details>
<details markdown="1">
<summary><h3 style="display:inline">Windows</h3></summary>
#### Установка Python и pipx
1. Скачайте Python 3.9+ с [python.org](https://www.python.org/downloads/windows/) и поставьте галочку “Add Python to PATH”.
2. Установите `pipx` (PowerShell или CMD, права администратора не нужны):
```powershell
python -m pip install --upgrade pip
python -m pip install pipx
python -m pipx ensurepath
```
#### Установка hhcli
Перезапустите PowerShell (или CMD) и выполните:
```powershell
pipx install hhcli
```
**После установки** откройте новое окно PowerShell/Command Prompt, чтобы PATH подхватил `C:\Users\<имя>\.local\bin`. Если команда `hhcli` всё ещё не находится, убедитесь, что этот путь внесён в переменные среды (Параметры → Система → Дополнительные параметры → Переменные среды) и перезапустите терминал.
**Для рендеринга окна авторизации** нужен WebView2 Runtime. Обычно он уже предустановлен в Windows 10/11. Если нет — откройте [страницу Microsoft](https://developer.microsoft.com/nl-nl/microsoft-edge/webview2?form=MA13LH) и скачайте **Evergreen Bootstrapper** (x64 для большинства ПК). При закрытом интернете — берите **Evergreen Standalone Installer** под свою архитектуру (x64/x86/ARM64). Fixed Version не требуется.
</details>
## Обновление / удаление
**Обновить**:
```
pipx install hhcli --force --system-site-packages
```
**Удалить**:
```
pipx uninstall hhcli
```
Если ставили из исходников, удалите виртуальное окружение и данные по пути:
- Linux: `~/.local/share/hhcli`
- Windows: `%LOCALAPPDATA%\hhcli`
## Запуск и авторизация
После установки запустите программу.
```bash
hhcli
```
Будет предложено создать новый профиль. Придумать короткое имя для вашего профиля (go, python, pm, analyst и т.д). В открывшемся мини-браузере загрузится страница hh.ru для аутентификации на сайте. После успешной аутентификации программа предложит выбрать способ поиска вакансий. Если в аккаунте несколько резюме, сначала будет предложено выбрать, какое из них использовать для поиска.
Если окно с аутентификацией на сайте не открывается или после ввода пароля ничего не происходит:
- В Linux: переустановите с доступом к системным пакетам и убедитесь, что WebKit2GTK на месте. Пример для Ubuntu:
```
sudo apt install python3-gi gir1.2-webkit2-4.1 gir1.2-gtk-3.0 libwebkit2gtk-4.1-0
pipx install hhcli --force --system-site-packages
```
- В Windows 10/11: установите или обновите [Microsoft Edge WebView2 Runtime](https://developer.microsoft.com/microsoft-edge/webview2/) (рекомендуется Evergreen Bootstrapper; при оффлайн-доступе берите Evergreen Standalone Installer под x64/x86/ARM64) и перезапустите терминал.
## Использование
Основное взаимодействие с приложением происходит через TUI-интерфейс.
### Настройка
Настройка приложения (ключевые слова для поиска, шаблон сопроводительного письма, внешний вид) производится внутри приложения. Нажмите клавишу `c` на любом из основных экранов, чтобы перейти в меню настроек.
### Горячие клавиши
| Клавиша | Действие |
| :--- | :--- |
| `Пробел` | Выбрать/снять выбор с текущей вакансии. |
| `A` | Откликнуться на все выбранные вакансии. |
| `H` | Открыть экран с историей откликов для текущего резюме. |
| `C` | Открыть экран настроек профиля. |
| `Q` / `Esc` | Вернуться на предыдущий экран или выйти из приложения. |
| `←` / `→` | Переключение между страницами в списке поиска вакансий. |
### Темы оформления
У приложения есть своя дизайн-система для переключения тем оформления. Чтобы создать новую тему, скопируйте содержимое любого существующего файла `.tcss` из каталога `hhcli/ui/themes` в новый файл и настройте палитру. Новая тема будет доступна на экране настроек.
**Переменные стилей:** базовые переменные отвечают за основные цвета темы (остальные значения собираются автоматически в `hhcli/ui/themes/design_system.tcss`):
- `background1` — основной фон приложения.
- `background2` — фон панелей, карточек, списков.
- `background3` — фон шапок, рамок и выделений.
- `foreground1` — вторичный текст (подписи, подсказки).
- `foreground2` — основной текст.
- `foreground3` — акцентный текст/заголовки.
- `primary` — главный акцент (кнопки, ссылки, выделения).
- `secondary` — дополнительный акцент и ховеры.
- `red`, `orange`, `yellow`, `green`, `blue`, `purple`, `magenta`, `cyan` — цвета статусов и вспомогательных подсветок.
- `scrim` — полупрозрачная подложка для модальных окон.
### Основные команды (CLI)
| Команда | Описание |
| :--- | :--- |
| `hhcli` | Запускает основной TUI-интерфейс. |
| `hhcli -v` / `hhcli --version` | Показывает текущую версию (из PyPI). |
| `hhcli -i` / `hhcli --info` | Выводит информацию о версии, пути к локальной базе и доступных профилях. |
## TO DO
Дальнейшие планы:
- Поддержка macOS.
- Расширение возможностей фильтрации и аналитики по истории откликов.
- Добавление экрана с дашбордом на основе истории откликов.
- Нотификация и уведомление о непрочитанных сообщениях работодателей.
- Возможность изменения отправленного ранее сообщения работодателю.
## Предыстория
Изначально hhcli не планировался как большой и долго поддерживаемый проект. Но текущая ситуация на рынке труда: глупые алгоритмы отбора, фейковые вакансии, некомпетентные HR'ы и в целом низкая эффективность ручного поиска и откликов через сайт мотивируют меня развивать этот инструмент дальше.
Прежняя версия hhcli делегировала практически всю работу с API утилите [hh-applicant-tool](https://github.com/s3rgeym/hh-applicant-tool), отчасти поэтому была полностью переписана в текущее исполнение. Подробнее можно ознакомиться в ветке [legacy](https://github.com/fovendor/hhcli/tree/legacy).
Legacy-версия перестала поддерживаться 26.10.2025, её дальнейшая работоспособность не гарантирована и полностью зависит от `hh-applicant-tool`.
## Лицензия
Проект распространяется под лицензией MIT. Смотрите файл `LICENSE` для подробностей.
| text/markdown | fovendor | fovendor@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming... | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"flask<4.0.0,>=3.0.0",
"html2text<2026.0.0,>=2025.4.15",
"platformdirs<5.0.0,>=4.0.0",
"pywebview<6,>=5",
"requests<3.0.0,>=2.31.0",
"sqlalchemy<3.0.0,>=2.0.23",
"textual"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:47:44.787048 | hhcli-0.6.3.tar.gz | 84,429 | ee/08/642af83e614c7033f60651ec4e52b1bebfef92c36248860cfa3715701557/hhcli-0.6.3.tar.gz | source | sdist | null | false | f72d6bc6fa2ee6f5643f314a367510e5 | 5617167b056c66138f80a344f61e023ab931c12a98d615ff22f079db4099b2a8 | ee08642af83e614c7033f60651ec4e52b1bebfef92c36248860cfa3715701557 | null | [
"LICENSE"
] | 220 |
2.4 | pyptp | 0.0.24 | Open-source Python SDK for electrical grid calculations and modelling | # [PyPtP](https://github.com/phasetophase/pyptp)
Open-source Python SDK for electrical grid calculations and modelling.
PyPtP enables Distribution System Operators (DSOs) and developers to integrate with Phase to Phase's electrical network modeling ecosystem. Access electrical network data in the native formats used by Gaia (LV networks) and Vision (MV networks) software.
> **Alpha status**: PyPtP is currently in alpha. The library provides full coverage of VNF and GNF data models and we aim for production-quality code, but documentation is still limited and the API may change between releases. We make every effort to minimize disruption, but reserve the right to make breaking changes as we refine the library based on real-world usage.
>
> Moving to beta is contingent on API stability. The best way to support the library right now is to share feedback on developer experience, usage patterns, and API design—via [email](mailto:pyptp@phasetophase.com), [GitHub Discussions](https://github.com/phasetophase/pyptp/discussions), or [Issues](https://github.com/phasetophase/pyptp/issues).
## Installation
```bash
pip install pyptp
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv add pyptp
```
## Documentation
- **Docs**: [pyptp.com](https://pyptp.com) — guides, API reference, and samples
- **Examples**: [`docs/samples/`](docs/samples/) — runnable code snippets
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for:
- How to report bugs and request features
- Development setup and coding standards
- Pull request process
- Contributor License Agreement (CLA)
## License
This project is licensed under the GNU General Public License v3.0 or later (GPL-3.0-or-later).
See [LICENSE](LICENSE) for the full license text.
## Support
- **Issues & Features**: [GitHub Issues](https://github.com/phasetophase/pyptp/issues)
- **Questions**: [GitHub Discussions](https://github.com/phasetophase/pyptp/discussions)
- **Email**: pyptp@phasetophase.com
---
**Developed by [Phase to Phase](https://phasetophase.com)**
| text/markdown | null | Phase to Phase <pyptp@phasetophase.com> | null | null | GPL-3.0-or-later | distribution-networks, dso, electrical-engineering, gaia, grid-calculations, low-voltage, medium-voltage, network-modeling, power-systems, vision | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python... | [] | null | null | >=3.11 | [] | [] | [] | [
"dataclasses-json<1,>=0.6.7",
"loguru<1,>=0.7",
"networkx<4,>=3.4.2",
"openpyxl<4,>=3.1.5",
"pandas<3,>=2.2.3",
"pydantic-settings<3,>=2.0",
"requests<3,>=2.31.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:46:35.190579 | pyptp-0.0.24.tar.gz | 14,149,359 | 97/0d/4133ee6b32060acf75e0493638134ebd5da35b2e5482f5912b14fa23ea89/pyptp-0.0.24.tar.gz | source | sdist | null | false | a97724c851390f154d7608787a2dc335 | adf070ed23665cb084580e691c35dd70f4cb22ca0d1bb8a603a03f7c2d320199 | 970d4133ee6b32060acf75e0493638134ebd5da35b2e5482f5912b14fa23ea89 | null | [
"LICENSE"
] | 407 |
2.4 | blueapi | 1.11.5a3 | Lightweight bluesky-as-a-service wrapper application. Also usable as a library. | <img src="https://raw.githubusercontent.com/DiamondLightSource/blueapi/main/docs/images/blueapi-logo.svg"
style="background: none" width="120px" height="120px" align="right">
[](https://github.com/DiamondLightSource/blueapi/actions/workflows/ci.yml)
[](https://codecov.io/gh/DiamondLightSource/blueapi)
[](https://pypi.org/project/blueapi)
[](https://www.apache.org/licenses/LICENSE-2.0)
# blueapi
Lightweight bluesky-as-a-service wrapper application. Also usable as a library.
Source | <https://github.com/DiamondLightSource/blueapi>
:---: | :---:
PyPI | `pip install blueapi`
Docker | `docker run ghcr.io/diamondlightsource/blueapi:latest`
Documentation | <https://diamondlightsource.github.io/blueapi>
Releases | <https://github.com/DiamondLightSource/blueapi/releases>
This module wraps [bluesky](https://blueskyproject.io/bluesky) plans and devices
inside a server and exposes endpoints to send commands/receive data. Useful for
installation at labs where multiple people may control equipment, possibly from
remote locations.
The main premise of blueapi is to minimize the boilerplate required to get plans
and devices up and running by generating an API for your lab out of
type-annotated plans. For example, take the following plan:
```python
import bluesky.plans as bp
from blueapi.core import MsgGenerator
def my_plan(foo: str, bar: int) -> MsgGenerator:
yield from bp.scan(...)
```
Blueapi's job is to detect this plan and automatically add it to the lab's API
so it can be invoked easily with a few REST calls.
<!-- README only content. Anything below this line won't be included in index.md -->
See https://diamondlightsource.github.io/blueapi for more detailed documentation.
[concept]: https://raw.githubusercontent.com/DiamondLightSource/blueapi/main/docs/images/blueapi.png
| text/markdown | null | Callum Forrester <callum.forrester@diamond.ac.uk> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"tiled[client]>=0.2.3",
"bluesky[plotting]>=1.14.0",
"ophyd-async>=0.13.5",
"aioca",
"pydantic>=2.0",
"pydantic-settings",
"stomp-py",
"PyYAML>=6.0.2",
"click>=8.2.0",
"fastapi>=0.112.0",
"uvicorn",
"requests",
"dls-dodal>=1.69.0",
"super-state-machine",
"GitPython",
"event-model==1.23... | [] | [] | [] | [
"GitHub, https://github.com/DiamondLightSource/blueapi"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:46:24.108243 | blueapi-1.11.5a3.tar.gz | 1,813,828 | 11/eb/7318b0bc566191bdb7ed11e67084fa605e5db41eca58a3852d62687c43df/blueapi-1.11.5a3.tar.gz | source | sdist | null | false | 0808b1fafa18b9c7c29ec59745b7a63c | 1dcb4c4e6ffcde6fafa374005b23e020167ebbb2030c3294c4fd76b09b0773e8 | 11eb7318b0bc566191bdb7ed11e67084fa605e5db41eca58a3852d62687c43df | null | [
"LICENSE"
] | 187 |
2.4 | osint-public-records-pkg | 0.1.1 | OSINT tool for public records (CAC Nigeria, OpenSanctions, Wikipedia). | # OSINT Public Records Package
A powerful, asynchronous Open Source Intelligence (OSINT) tool designed to retrieve public records from difficult-to-access sources. This package specializes in Nigerian Corporate Registry (CAC) data, Global Sanctions/PEP lists, and general encyclopedic data.
## Features
| Module | Source | Description |
|:---|:---|:---|
| **CAC Records** | **CAC Nigeria** | Access hidden ICRP & BOR endpoints to find registered companies, directors, and Persons with Significant Control (PSC). |
| **Sanctions Check** | **OpenSanctions** | Screen individuals against global sanctions lists, PEP (Politically Exposed Persons) lists, and criminal databases. |
| **Wiki Intel** | **Wikipedia** | Extract summaries, images, references, and related links for quick entity profiling. |
## Installation
### From Source
Navigate to the root directory of the package and install via pip:
```bash
cd OSINT-PUBLIC-RECORDS-PKG
pip install osint-public-records-pkg
```
### Dependencies
httpx (for asynchronous API requests)
beautifulsoup4 (for HTML parsing)
lxml (for fast XML/HTML processing)
requests (for synchronous operations)
### Configuration
To use the OpenSanctions module, you need an API Key. You can configure this in two ways:
Method 1: Environment Variable (Recommended)
Set the variable in your terminal or .env file. The package will automatically detect it.
Linux/macOS:
```bash
export OPEN_SANCTIONS_API_KEY="your_api_key_here"
```
Windows (PowerShell):
```PowerShell
$env:OPEN_SANCTIONS_API_KEY="your_api_key_here"
```
###Method 2: Direct Initialization
Pass the key directly when initializing the class in Python.
```bash
sanctions = OpenSanctionsAPI(api_key="your_api_key_here")
```
### Usage Examples
1. Searching CAC Nigeria (Corporate Affairs Commission)
Find companies and retrieve Person with Significant Control (PSC) details using hidden API endpoints.
```bash
import asyncio
from osint_public_records_pkg import CACRecordsAPI
async def search_cac():
cac = CACRecordsAPI()
# Step 1: Search for a company name
print("--- Searching ICRP ---")
company_name = "Dangote Cement"
results = await cac.search_name(company_name)
if results["success"]:
top_match = results["records"][0]
print(f"Found: {top_match['name']} (RC: {top_match['rc_number']})")
# Step 2: Get Directors/PSC details (BOR)
# Note: This searches the Beneficial Ownership Register
print("\n--- Fetching PSC Details ---")
psc_data = await cac.get_company_psc_details(
company_name=top_match['name'],
rc_number=top_match['rc_number']
)
if psc_data["success"]:
for person in psc_data["psc_records"]:
print(f"Director/Owner: {person['name']}")
print(f"Address: {person['address']}")
print(f"Nationality: {person['nationality']}\n")
if __name__ == "__main__":
asyncio.run(search_cac())
```
2. Screening for Sanctions & PEPs
Check if an individual appears on international sanctions lists (OFAC, UN, EU) or is a Politically Exposed Person.
``` bash
import asyncio
from osint_public_records_pkg import OpenSanctionsAPI
async def check_sanctions():
# Ensure you have set OPEN_SANCTIONS_API_KEY in your env
api = OpenSanctionsAPI()
target = "Vladimir Putin"
print(f"--- Screening {target} ---")
result = await api.search_entity(target)
if result["success"]:
record = result["records"][0]
print(f"Name: {record['name']}")
print(f"Is PEP: {record['is_pep']}")
print(f"Is Sanctioned: {record['is_sanctioned']}")
print(f"Reason: {record.get('designation_reason', 'N/A')}")
print(f"Countries: {record['country']}")
else:
print("No records found or API error.")
if __name__ == "__main__":
asyncio.run(check_sanctions())
```
3. General Intelligence (Wikipedia)
Quickly gather background information, images, and references.
``` bash
from osint_public_records_pkg import WikipediaScraper
def get_wiki_info():
wiki = WikipediaScraper()
query = "Central Bank of Nigeria"
print(f"--- Wiki Lookup: {query} ---")
data = wiki.search(query)
if "error" not in data:
print(f"Title: {data['title']}")
print(f"Summary: {data['summary'][:200]}...") # First 200 chars
print("\nSections:")
for section in data['sections'][:3]:
print(f"- {section}")
print("\nReferences Found:", len(data['references']))
get_wiki_info()
```
### Disclaimer
This tool is intended for legitimate Open Source Intelligence (OSINT) research, compliance checking, and investigative journalism.
CAC Data: Accesses public endpoints (icrp.cac.gov.ng and bor.cac.gov.ng). While these are public, automated scraping should be done responsibly and in accordance with local regulations.
OpenSanctions: Usage is subject to the OpenSanctions API Terms of Service.
| text/markdown | null | Your Name <your.email@example.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.24.0",
"requests>=2.28.0",
"beautifulsoup4>=4.11.0",
"lxml>=4.9.0"
] | [] | [] | [] | [
"Homepage, https://bitbucket.org/yourusername/osint-public-records-pkg"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T15:46:19.300077 | osint_public_records_pkg-0.1.1.tar.gz | 10,524 | 23/88/e0516b7180b43e23a537a70f78af433820d18a686936d6862ea527fb2dbd/osint_public_records_pkg-0.1.1.tar.gz | source | sdist | null | false | bc4d49e0237f27b06b7690fa2170090b | eae792a57ee6c9bc51732f02ffc479d3240eed4c74994fb203d250e452ee58d7 | 2388e0516b7180b43e23a537a70f78af433820d18a686936d6862ea527fb2dbd | null | [] | 232 |
2.4 | bigframes | 2.36.0 | BigQuery DataFrames -- scalable analytics and machine learning with BigQuery | :orphan:
BigQuery DataFrames (BigFrames)
===============================
|GA| |pypi| |versions|
BigQuery DataFrames (also known as BigFrames) provides a Pythonic DataFrame
and machine learning (ML) API powered by the BigQuery engine. It provides modules
for many use cases, including:
* `bigframes.pandas <https://dataframes.bigquery.dev/reference/api/bigframes.pandas.html>`_
is a pandas API for analytics. Many workloads can be
migrated from pandas to bigframes by just changing a few imports.
* `bigframes.ml <https://dataframes.bigquery.dev/reference/index.html#ml-apis>`_
is a scikit-learn-like API for ML.
* `bigframes.bigquery.ai <https://dataframes.bigquery.dev/reference/api/bigframes.bigquery.ai.html>`_
are a collection of powerful AI methods, powered by Gemini.
BigQuery DataFrames is an `open-source package <https://github.com/googleapis/python-bigquery-dataframes>`_.
.. |GA| image:: https://img.shields.io/badge/support-GA-gold.svg
:target: https://github.com/googleapis/google-cloud-python/blob/main/README.rst#general-availability
.. |pypi| image:: https://img.shields.io/pypi/v/bigframes.svg
:target: https://pypi.org/project/bigframes/
.. |versions| image:: https://img.shields.io/pypi/pyversions/bigframes.svg
:target: https://pypi.org/project/bigframes/
Getting started with BigQuery DataFrames
----------------------------------------
The easiest way to get started is to try the
`BigFrames quickstart <https://cloud.google.com/bigquery/docs/dataframes-quickstart>`_
in a `notebook in BigQuery Studio <https://cloud.google.com/bigquery/docs/notebooks-introduction>`_.
To use BigFrames in your local development environment,
1. Run ``pip install --upgrade bigframes`` to install the latest version.
2. Setup `Application default credentials <https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment>`_
for your local development environment enviroment.
3. Create a `GCP project with the BigQuery API enabled <https://cloud.google.com/bigquery/docs/sandbox>`_.
4. Use the ``bigframes`` package to query data.
.. code-block:: python
import bigframes.pandas as bpd
bpd.options.bigquery.project = your_gcp_project_id # Optional in BQ Studio.
bpd.options.bigquery.ordering_mode = "partial" # Recommended for performance.
df = bpd.read_gbq("bigquery-public-data.usa_names.usa_1910_2013")
print(
df.groupby("name")
.agg({"number": "sum"})
.sort_values("number", ascending=False)
.head(10)
.to_pandas()
)
Documentation
-------------
To learn more about BigQuery DataFrames, visit these pages
* `Introduction to BigQuery DataFrames (BigFrames) <https://cloud.google.com/bigquery/docs/bigquery-dataframes-introduction>`_
* `Sample notebooks <https://github.com/googleapis/python-bigquery-dataframes/tree/main/notebooks>`_
* `API reference <https://dataframes.bigquery.dev/>`_
* `Source code (GitHub) <https://github.com/googleapis/python-bigquery-dataframes>`_
License
-------
BigQuery DataFrames is distributed with the `Apache-2.0 license
<https://github.com/googleapis/python-bigquery-dataframes/blob/main/LICENSE>`_.
It also contains code derived from the following third-party packages:
* `Ibis <https://ibis-project.org/>`_
* `pandas <https://pandas.pydata.org/>`_
* `Python <https://www.python.org/>`_
* `scikit-learn <https://scikit-learn.org/>`_
* `XGBoost <https://xgboost.readthedocs.io/en/stable/>`_
* `SQLGlot <https://sqlglot.com/sqlglot.html>`_
For details, see the `third_party
<https://github.com/googleapis/python-bigquery-dataframes/tree/main/third_party/bigframes_vendored>`_
directory.
Contact Us
----------
For further help and provide feedback, you can email us at `bigframes-feedback@google.com <https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=bigframes-feedback@google.com>`_.
| text/x-rst | Google LLC | bigframes-feedback@google.com | null | null | Apache 2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programm... | [
"Posix; MacOS X; Windows"
] | https://dataframes.bigquery.dev | https://github.com/googleapis/python-bigquery-dataframes/releases | >=3.10 | [] | [] | [] | [
"cloudpickle>=2.0.0",
"fsspec>=2023.3.0",
"gcsfs!=2025.5.0,!=2026.2.0,>=2023.3.0",
"geopandas>=0.12.2",
"google-auth<3.0,>=2.15.0",
"google-cloud-bigquery[bqstorage,pandas]>=3.36.0",
"google-cloud-bigquery-storage<3.0.0,>=2.30.0",
"google-cloud-functions>=1.12.0",
"google-cloud-bigquery-connection>=... | [] | [] | [] | [
"Source, https://github.com/googleapis/python-bigquery-dataframes",
"Changelog, https://dataframes.bigquery.dev/changelog.html",
"Issues, https://github.com/googleapis/python-bigquery-dataframes/issues"
] | twine/6.2.0 CPython/3.11.2 | 2026-02-18T15:46:11.352860 | bigframes-2.36.0.tar.gz | 2,940,603 | ea/fe/d429e611c476f0fe306b312b29b253ed67e75c0feac7cad9ce7c24d096f2/bigframes-2.36.0.tar.gz | source | sdist | null | false | 9f23a7102fd82b890145b51eb16f786c | d4abfc31ccfdc03ef576d4c0ca1e6c89572443c4c46f88fb487aa67c6c16f04a | eafed429e611c476f0fe306b312b29b253ed67e75c0feac7cad9ce7c24d096f2 | null | [
"LICENSE"
] | 24,349 |
2.4 | vunnel | 0.55.1 | vunnel ~= 'vulnerability data funnel' | # vunnel
**A tool for fetching, transforming, and storing vulnerability data from a variety of sources.**
[](https://github.com/anchore/vunnel/releases/latest)
[](https://github.com/anchore/vunnel/blob/main/LICENSE)
[](https://anchore.com/discourse)

Supported data sources:
- Alpine (https://secdb.alpinelinux.org)
- Amazon (https://alas.aws.amazon.com/AL2/alas.rss & https://alas.aws.amazon.com/AL2022/alas.rss)
- Azure (https://github.com/microsoft/AzureLinuxVulnerabilityData)
- Debian (https://security-tracker.debian.org/tracker/data/json & https://salsa.debian.org/security-tracker-team/security-tracker/raw/master/data/DSA/list)
- Echo (https://advisory.echohq.com/data.json)
- GitHub Security Advisories (https://api.github.com/graphql)
- NVD (https://services.nvd.nist.gov/rest/json/cves/2.0)
- Oracle (https://linux.oracle.com/security/oval)
- RedHat (https://www.redhat.com/security/data/oval)
- SLES (https://ftp.suse.com/pub/projects/security/oval)
- Ubuntu (https://launchpad.net/ubuntu-cve-tracker)
- Wolfi (https://packages.wolfi.dev)
## Prerequisites
The following system tools must be available on your PATH:
- **git** - Required by some providers that fetch data from git repositories
## Installation
With pip:
```bash
pip install vunnel
```
With docker:
```bash
docker run \
--rm -it \
-v $(pwd)/data:/data \
-v $(pwd)/.vunnel.yaml:/.vunnel.yaml \
ghcr.io/anchore/vunnel:latest \
run nvd
```
Where:
- the `data` volume keeps the processed data on the host
- the `.vunnel.yaml` uses the host application config (if present)
- you can swap `latest` for a specific version (same as the git tags)
See [the vunnel package](https://github.com/anchore/vunnel/pkgs/container/vunnel) for a full listing of available tags.
## Getting Started
List the available vulnerability data providers:
```
$ vunnel list
alpine
amazon
chainguard
debian
echo
github
mariner
minimos
nvd
oracle
rhel
sles
ubuntu
wolfi
```
Download and process a provider:
```
$ vunnel run wolfi
2023-01-04 13:42:58 root [INFO] running wolfi provider
2023-01-04 13:42:58 wolfi [INFO] downloading Wolfi secdb https://packages.wolfi.dev/os/security.json
2023-01-04 13:42:59 wolfi [INFO] wrote 56 entries
2023-01-04 13:42:59 wolfi [INFO] recording workspace state
```
You will see the processed vulnerability data in the local `./data` directory
```
$ tree data
data
└── wolfi
├── checksums
├── metadata.json
├── input
│ └── secdb
│ └── os
│ └── security.json
└── results
└── wolfi:rolling
├── CVE-2016-2781.json
├── CVE-2017-8806.json
├── CVE-2018-1000156.json
└── ...
```
*Note: to get more verbose output, use `-v`, `-vv`, or `-vvv` (e.g. `vunnel -vv run wolfi`)*
Delete existing input and result data for one or more providers:
```
$ vunnel clear wolfi
2023-01-04 13:48:31 root [INFO] clearing wolfi provider state
```
Example config file for changing application behavior:
```yaml
# .vunnel.yaml
root: ./processed-data
log:
level: trace
providers:
wolfi:
request_timeout: 125
runtime:
existing_input: keep
existing_results: delete-before-write
on_error:
action: fail
input: keep
results: keep
retry_count: 3
retry_delay: 10
```
Use `vunnel config` to get a better idea of all of the possible configuration options.
## FAQ
### Can I implement a new provider?
Yes you can! See [the provider docs](https://github.com/anchore/vunnel/blob/main/DEVELOPING.md#adding-a-new-provider) for more information.
### Why is it called "vunnel"?
This tool "funnels" vulnerability data into a single spot for easy processing... say "vulnerability data funnel" 100x fast enough and eventually it'll slur to "vunnel" :).
| text/markdown | null | Alex Goodman <alex.goodman@anchore.com> | null | null | Apache-2.0 | aggregator, data, grype, vulnerability, vulnerability-data | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"Natural Language :: English",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Topic :: Security",
"Topic... | [] | null | null | <3.15,>=3.13 | [] | [] | [] | [
"click<9.0.0,>=8.1.3",
"colorlog<7.0.0,>=6.7.0",
"cvss<4.0,>=2.6",
"defusedxml<1.0.0,>=0.7.1",
"ijson<4.0,>=2.5.1",
"importlib-metadata<9.0.0,>=7.0.1",
"iso8601<3.0.0,>=2.1.0",
"mashumaro<4.0,>=3.10",
"mergedeep<2.0.0,>=1.3.4",
"oras<1.0.0,>=0.1.0",
"orjson<4.0.0,>=3.8.6",
"packageurl-python<1... | [] | [] | [] | [
"repository, https://github.com/anchore/vunnel"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T15:43:39.767868 | vunnel-0.55.1.tar.gz | 635,396 | 3b/ca/db3a354e216b237a2ef895e2e354da0cf250398c25859487b726d329177e/vunnel-0.55.1.tar.gz | source | sdist | null | false | b55da6e4df94634e4822241b7150cb5a | f7722e317e2c65195500ef8dfe94eb30c1b97518c49c11f7446d8b8d5b960857 | 3bcadb3a354e216b237a2ef895e2e354da0cf250398c25859487b726d329177e | null | [
"LICENSE"
] | 726 |
2.4 | fennil | 1.3.0 | Viewer for kinematic earthquake simulations | ## fennil
Attempt at a much faster rebuild of
[`result_manager`](https://github.com/brendanjmeade/result_manager) for larger
[`celeri`](https://github.com/brendanjmeade/celeri) models.
Viewer for kinematic earthquake simulations
## License
This library is OpenSource and follows the MIT License
## Installation
Install the application/library
```console
pip install fennil
```
Run the application
```console
fennil
```
## Mapbox token
Set a Mapbox access token so base maps render with Mapbox styles. Either export
it in your shell or place it in a local `.env` (already gitignored):
```console
export FENNIL_MAP_BOX_TOKEN="YOUR_TOKEN_HERE"
```
Or create a `.env` file in the project root containing:
```
FENNIL_MAP_BOX_TOKEN=YOUR_TOKEN_HERE
```
## Development setup
We recommend using uv for setting up and managing a virtual environment for your
development.
```console
# Create venv and install all dependencies
uv sync --all-extras --dev
# Activate environment
source .venv/bin/activate
# Install commit analysis
pre-commit install
pre-commit install --hook-type commit-msg
# Allow live code edit
uv pip install -e .
```
For running tests and checks, you can run `nox`.
```console
# run all
nox
# lint
nox -s lint
# tests
nox -s tests
```
## Commit message convention
Semantic release rely on
[conventional commits](https://www.conventionalcommits.org/) to generate new
releases and changelog.
| text/markdown | Kitware, Inc. | Brendan Meade <brendanjmeade@gmail.com> | null | null | MIT License | Application, Framework, Interactive, Python, Web | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Top... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"pandas<3",
"pydeck",
"trame-dataclass",
"trame-deckgl>=2.0.4",
"trame-vuetify",
"trame>=3.12",
"pywebview; extra == \"app\"",
"nox; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest-cov>=3; extra == \"dev\"",
"pytest>=6; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:43:32.619970 | fennil-1.3.0.tar.gz | 17,749 | f1/19/0495129aee5d32541311cdd3680df8c75cdbd3bd298747091da676ec3c48/fennil-1.3.0.tar.gz | source | sdist | null | false | 2dae9b3ce1d709d411ecb1622bcb44f5 | 68030c784381f7dd2f623d02cc41df26a1ff419fc82c6c6d1862a86d38fc1681 | f1190495129aee5d32541311cdd3680df8c75cdbd3bd298747091da676ec3c48 | null | [
"LICENSE"
] | 258 |
2.4 | notion-sync-lib | 1.2.3 | Sync Notion pages like Git commits. Smart content-based diffing, automatic rate limiting, zero headaches. | # notion-sync-lib
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
**Sync Notion pages like Git commits.** Smart content-based diffing, automatic rate limiting, zero headaches.
Not another CRUD wrapper—this is a **sync engine** that understands your content and makes minimal changes automatically.
```python
from notion_sync import get_notion_client, generate_diff, execute_diff
client = get_notion_client()
ops = generate_diff(current_blocks, new_blocks)
execute_diff(client, ops, page_id) # Magic happens ✨
```
---
## What Can You Build With This?
### 📝 Keep Your Docs in Sync
Sync your GitHub README to Notion automatically. No more copy-paste. Update once, sync everywhere.
```python
# Your CI pipeline
markdown = fetch_github_readme()
blocks = markdown_to_notion(markdown)
sync_to_notion(page_id, blocks) # Only updates what changed
```
### 🌍 Translation Workflows
Maintain 20 language versions of your docs. Update master → slaves sync in seconds, not hours.
```python
# Example: sync NL master to EN/DE/FR translations
for lang in ["EN", "DE", "FR"]:
ops = generate_recursive_diff(master, translate(master, lang))
execute_recursive_diff(client, ops) # 10x faster than full sync
```
### 🏢 Workspace Migration
Moving 500 pages to a new workspace? Clone everything with preserved structure—toggles, columns, nested content, all intact.
```python
# Clone entire workspace
for page_id in source_pages:
content = fetch_blocks_recursive(client_A, page_id)
clone_to_workspace_B(content) # All nested content preserved
```
### 📋 Template System
Generate 100 project pages from one template. Replace placeholders, customize layouts, done.
```python
# Create project pages from template
template = fetch_blocks_recursive(client, template_page)
for project in projects:
customized = replace_placeholders(template, project)
create_page(project.name, customized)
```
### 📓 Obsidian/Markdown Sync
Daily sync from your markdown notes to Notion. Only changed files get updated.
```python
# Sync markdown vault
for note in obsidian_vault:
if note.changed_today():
sync_to_notion(note) # Smart diff = minimal API calls
```
### 🤖 Automated Reports
Generate weekly reports with 3-column layouts, charts, and metrics—all programmatically.
```python
# Build complex layouts
columns = [
{"children": [make_heading(2, "Summary"), *summary_blocks], "width_ratio": 0.5},
{"children": [make_heading(2, "Metrics"), *metrics], "width_ratio": 0.25},
{"children": [make_heading(2, "Charts"), chart], "width_ratio": 0.25}
]
create_column_list(client, report_page, columns)
```
---
## Why This Library?
### 🧠 Smart Diff Engine (Like Git for Notion)
Traditional approach: Match blocks by position → Everything breaks when you add/remove a block.
**Our approach:** Match blocks by content → Robust to any structural change.
```python
# You have: [A, B, C]
# You want: [A, X, B, C, D]
# Traditional: "Replace B→X, C→B, add C, add D" (4 operations)
# Smart diff: "Insert X after A, append D" (2 operations)
```
**Result:** Fewer API calls = faster syncs + lower rate limit risk.
### ⚡ Two Sync Modes for Different Needs
**Structural Sync** (`generate_diff`)
- Add, remove, reorder blocks freely
- Content-based matching with SequenceMatcher
- Use for: Documentation sync, markdown conversion, testing
**Content-Only Sync** (`generate_recursive_diff`)
- Update text in identical structures
- 10x faster (only UPDATE operations)
- Use for: Translation workflows, bulk text changes
### 🛡️ Production-Ready from Day One
- **Automatic rate limiting**: 3 req/sec with exponential backoff on 429 errors
- **Smart batching**: Handles 1000+ blocks automatically (100-block API limit)
- **Resilient execution**: Skips archived blocks, handles edge cases
- **Request tracking**: Monitor API usage with `client.request_count`
### 🏗️ Build Complex Layouts Easily
```python
from notion_sync import make_paragraph, make_heading, make_toggle, create_column_list
# Create nested structures
page_content = [
make_heading(1, "Project Overview"),
make_toggle("Details", children=[
make_paragraph("Hidden content..."),
make_bulleted_list_item("Nested item")
])
]
# Create column layouts with width ratios
columns = [
{"children": [make_paragraph("Left")], "width_ratio": 0.7},
{"children": [make_paragraph("Right")], "width_ratio": 0.3}
]
create_column_list(client, page_id, columns)
```
---
## Installation
```bash
pip install git+https://github.com/mvletter/notion-sync-lib.git
```
Set your Notion API token:
```bash
export NOTION_API_TOKEN=secret_xxx
```
Or create a `.env` file:
```
NOTION_API_TOKEN=secret_xxx
```
---
## Quick Examples
### Sync Markdown to Notion
```python
from notion_sync import get_notion_client, fetch_page_blocks, generate_diff, execute_diff
def sync_markdown_to_notion(markdown: str, page_id: str):
"""Convert markdown and sync to Notion in one go."""
client = get_notion_client()
# Convert markdown to Notion blocks (your converter)
new_blocks = markdown_to_notion_blocks(markdown)
# Fetch current state and generate diff
current_blocks = fetch_page_blocks(client, page_id)
ops = generate_diff(current_blocks, new_blocks)
# Execute minimal changes
stats = execute_diff(client, ops, page_id)
print(f"✅ Synced: {stats['inserted']} added, {stats['updated']} updated, {stats['deleted']} removed")
```
### Translation Workflow (Content-Only Updates)
Perfect for maintaining translated pages:
```python
from notion_sync import get_notion_client, fetch_blocks_recursive, generate_recursive_diff, execute_recursive_diff
client = get_notion_client()
# Fetch master page structure
master = fetch_blocks_recursive(client, master_page_id)
# Apply translations (preserve structure!)
translated = apply_translations(master, translations)
# Update only changed text (10x faster)
ops = generate_recursive_diff(master, translated)
stats = execute_recursive_diff(client, ops)
print(f"✅ Updated {stats['updated']} blocks")
```
### Clone Page to Another Workspace
```python
from notion_sync import get_notion_client, fetch_blocks_recursive, append_blocks
# Fetch from source workspace
client_A = get_notion_client() # Uses token from workspace A
content = fetch_blocks_recursive(client_A, source_page_id)
# Clone to target workspace
client_B = get_notion_client() # Uses token from workspace B
new_page_id = create_page_in_workspace_B(title)
append_blocks(client_B, new_page_id, content)
print(f"✅ Cloned page with {len(content)} blocks")
```
### Preview Changes Before Applying
```python
from notion_sync import generate_diff, format_diff_preview, execute_diff
ops = generate_diff(current_blocks, new_blocks)
# Show human-readable preview
print(format_diff_preview(ops))
# Output:
# ============================================================
# Diff Preview
# ============================================================
# Summary: 2 new, 1 modified, 0 replaced, 1 deleted, 5 unchanged
# ------------------------------------------------------------
#
# Changes:
#
# + [NEW] paragraph
# "This is new content"
# -> Will be inserted at position 3
#
# ~ [MODIFIED] heading_1
# "Old Title" -> "New Title"
# -> Will update block abc123...
# Execute after confirmation
if confirm():
stats = execute_diff(client, ops, page_id)
```
---
## When to Use Which Diff?
| Your Situation | Use This | Why |
|---------------|----------|-----|
| Syncing markdown/docs to Notion | `generate_diff` | Content may be added/removed/reordered |
| Translating existing pages | `generate_recursive_diff` | Structure identical, only text changes |
| Migrating workspaces | `generate_diff` | Flexible, handles any changes |
| Bulk text updates (find/replace) | `generate_recursive_diff` | 10x faster, updates only changed blocks |
| Building pages programmatically | Block builders + `append_blocks` | Direct construction |
| Testing/prototyping | `generate_diff` + `dry_run=True` | Preview mode |
**Pro tip:** When in doubt, use `generate_diff`. It handles everything.
---
## Features
### Core Capabilities
- ✅ **Smart content-based diffing** - Minimal API calls, like Git for Notion
- ✅ **Two sync modes** - Structural (flexible) + Content-only (fast)
- ✅ **Automatic rate limiting** - 3 req/sec with exponential backoff
- ✅ **Recursive fetching** - Get entire page trees with nested content
- ✅ **Smart batching** - Handles 1000+ blocks automatically
### Advanced Features
- ✅ **Column layout support** - Create/read/unwrap with width ratios
- ✅ **Block builders** - 10+ block types (paragraphs, headings, toggles, code, etc.)
- ✅ **Text extraction** - 30+ block types → plain text
- ✅ **TypedDict returns** - Full IDE autocomplete
- ✅ **Dry-run mode** - Preview changes before applying
- ✅ **Request tracking** - Monitor API usage
### Production-Ready
- ✅ **Error resilience** - Handles archived blocks, API errors gracefully
- ✅ **Type safety** - Full type hints (passes mypy strict mode)
- ✅ **Comprehensive tests** - 25 integration tests
- ✅ **Well documented** - Usage guide + API reference + pitfalls doc
---
## Real-World Use Cases
| Use Case | Complexity | Demand | Key Feature |
|----------|-----------|--------|-------------|
| 📝 Documentation sync (GitHub → Notion) | Medium | 🔥🔥🔥 Very High | Smart diff |
| 🌍 Multi-language content management | High | 🔥🔥🔥 Very High | Recursive diff |
| 🏢 Workspace migration | Medium | 🔥🔥 High | Recursive fetch |
| 📋 Template system | Low | 🔥🔥 High | Block builders |
| 📓 Markdown sync (Obsidian/Roam) | Medium | 🔥🔥 High | Smart diff |
| 🔄 Bulk content transformation | High | 🔥 Medium | Recursive diff |
| 🤖 Automated page layouts | Low | 🔥 Medium | Column builders |
| 💾 Backup system | Low | 🔥 Medium | Text extraction |
| 📅 Meeting notes automation | Low | 🔥🔥 High | Block builders |
---
## What Makes This Different?
**Other Notion libraries:**
```python
# Manual position tracking, full page replacement
for i, block in enumerate(new_blocks):
client.update_block(old_blocks[i].id, block) # Breaks if count differs
```
**This library:**
```python
# Content-based matching, minimal operations
ops = generate_diff(old_blocks, new_blocks)
execute_diff(client, ops, page_id) # Handles add/remove/reorder automatically
```
**Result:** Your code works when blocks are added/removed/reordered. No manual tracking.
---
## Documentation
📖 **[Usage Guide](docs/usage-guide.md)** - Complete examples and patterns
📚 **[API Reference](docs/api-reference.md)** - Full API documentation
⚠️ **[Common Pitfalls](docs/pitfalls.md)** - Mistakes to avoid
🛠️ **[Development Guide](docs/development.md)** - Contributing and testing
---
## Requirements
- Python 3.10+
- Notion API token ([get one here](https://developers.notion.com/))
---
## Contributing
We welcome contributions! See [Development Guide](docs/development.md) for setup and testing.
Quick start:
```bash
git clone https://github.com/mvletter/notion-sync-lib.git
cd notion-sync-lib
pip install -e ".[dev]"
pytest -v
```
---
## License
MIT License - see [LICENSE](LICENSE) for details.
---
## Credits
Built by [Mark Vletter](https://github.com/mvletter) for handling large-scale Notion translation workflows at [Voys](https://www.voys.nl/).
Inspired by Git's diff algorithm and the need for a production-ready Notion sync tool.
---
**⭐ Star this repo if it saved you time!**
**🐛 Found a bug?** [Open an issue](https://github.com/mvletter/notion-sync-lib/issues)
**💡 Have a use case?** [Share it in discussions](https://github.com/mvletter/notion-sync-lib/discussions)
| text/markdown | Mark Vletter | null | null | null | null | api, automation, content-sync, diff, diffing, notion, notion-api, rate-limiting, sync, translation | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Lang... | [] | null | null | >=3.10 | [] | [] | [] | [
"notion-client>=2.0.0",
"python-dotenv>=1.0.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/mvletter/notion-sync-lib",
"Documentation, https://github.com/mvletter/notion-sync-lib#readme",
"Repository, https://github.com/mvletter/notion-sync-lib",
"Issues, https://github.com/mvletter/notion-sync-lib/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:43:25.083706 | notion_sync_lib-1.2.3.tar.gz | 32,322 | a0/da/d60e1448d8e5fa9f1bd0d9799a9bcf82d705d0035ca61ab647a15d6558c7/notion_sync_lib-1.2.3.tar.gz | source | sdist | null | false | b29a5fa28f6e1cc65118ef010d4f53e9 | c0acc7f1928fe28895f2396944751e87e0440acd9a5a9264fbcc93db33e36118 | a0dad60e1448d8e5fa9f1bd0d9799a9bcf82d705d0035ca61ab647a15d6558c7 | MIT | [
"LICENSE"
] | 230 |
2.4 | uwuipy | 1.0.0 | Allows the easy implementation of uwuifying words for applications like Discord bots and websites | # uwuipy
`uwuipy` is an advanced uwuifier for Python, designed to transform regular text into a playful and expressive "uwu" style. This whimsical modification of text is often used in online communities for humorous or emotive communication.
Whether you're looking to add a fun twist to a chat application or simply want to explore text manipulation in a lighthearted manner, `uwuipy` offers an easy-to-use interface with customizable options to create unique text transformations.
The library provides control over various aspects of the uwuification process, including stuttering, facial expressions, actions, and exclamations. Whether you want subtle changes or dramatic transformations, `uwuipy` allows you to find the perfect balance through adjustable parameters.
## Key Features:
- Ease of Use: Quickly integrate `uwuipy` into your projects with a simple API.
- Customizable: Tailor the uwuification process to your needs with adjustable parameters.
- CLI Support: Use the tool directly from the command line or integrate it into Python applications.
- Entertainment: A unique way to engage users with lively and animated text transformations.
- Robust Input Handling: `uwuify_segmented` provides advanced handling of various text segments, allowing the application to choose which parts of the text to uwuify while leaving others unchanged.
## Requirements
* Python 3.10 or higher
## Install
To install just use PyPI `pip install uwuipy`
## Usage
### As a library
Integrate `uwuipy` into your Python application to transform ordinary text into playful uwu-styled expressions. Here's a basic example of how to use it:
```python
from uwuipy import Uwuipy
uwu = Uwuipy()
print(uwu.uwuify(input()))
```
#### Constructor parameters
The `Uwuipy` constructor allows fine-tuning of the uwuification process through the following parameters:
- `seed`: An integer seed for the random number generator. Defaults to current time if - not provided.
- `stutterchance`: Probability of stuttering a word (0 to 1.0), default 0.1.
- `facechance`: Probability of adding a face (0 to 1.0), default 0.05.
- `actionchance`: Probability of adding an action (0 to 1.0), default 0.075.
- `exclamationchance`: Probability of adding exclamations (0 to 1.0), default 1.
- `nsfw_actions`: Enables more "explicit" actions if set to true; default is false.
- `power`: The uwuification "level" — higher levels lead to more text transformations being done (1 is core uwu, 2 is nyaification, 3 and 4 are just extra). Using a higher level includes the lower levels.
#### Customized Example:
Adjust the parameters to create a customized uwuification process:
```python
from uwuipy import Uwuipy
uwu = Uwuipy(None, 0.3, 0.3, 0.3, 1, False, 4)
print(uwu.uwuify(input()))
```
This can produce output like:
```
The quick bwown (ᵘʷᵘ) ***glomps*** f-f-fox jyumps uvw the ***screeches*** w-w-w-wazy ***blushes*** dog
The (ᵘﻌᵘ) quick bwown ***smirks smugly*** fox \>w\< ***screeches*** jyumps uvw t-t-t-the (uwu) wazy owo dog ~(˘▾˘~)
The q-q-q-quick ***nuzzles your necky wecky*** b-b-bwown f-f-fox ( ᵘ ꒳ ᵘ ✼) j-j-jyumps (U ﹏ U) u-uvw ***whispers to self*** the owo w-w-w-wazy Uwu d-d-d-dog ***huggles tightly***
```
#### Segmented Uwuification:
For more advanced use cases, `uwuipy` provides the `uwuify_segmented()` method. This function intelligently processes text segments, allowing for selective uwuification while preserving certain parts of the text. Here's how to use it:
```python
from uwuipy import Uwuipy
uwu = Uwuipy(1, 0.3, 0.3, 0.3, 1)
text = "Hello @everyone! Check out https://example.com and http://test.io/page?arg=1 yeah! Also, say hi to <@123456789012345678> and <@!987654321098765432>, they’re in <#112233445566778899> with role <@&998877665544332211>."
print(uwu.uwuify_segmented(text))
```
Output:
```
[('Hello', 'H-Hewwo', False), (' ', ' ', False), ... ('https://example.com', 'https://example.com', False), ... (None, 'to', False), (None, ' ', False), ('<@123456789012345678>', '<@123456789012345678>', True), ... ('<@!987654321098765432>', '<@!987654321098765432>', True), ... (None, ' ', False), ('<#112233445566778899>', '<#112233445566778899>', True), (' ', ' ', False), ('with', 'with', False), (' ', ' ', False), ('role', '***huggles', False), (' ', ' ', False), ('', 'tightly***', False), (None, ' ', False), (None, 'wowe', False), (None, ' ', False), ('<@&998877665544332211>', '<@&998877665544332211>', True), ('.', '.', False)]
```
The usecase for this is for checking if a user is trying to bypass uwuification by using mentions, URLs, or emojis. The third element in each tuple indicates whether the application should double check if the segment is a legit role, emoji, mention, or URL. And if not it can take the uwuified version instead.
`uwuify_segmented` has some optional parameters in which you can disable this reporting for some things like URLs or mentions/channel/role IDs and emojis. If set to false then the elements will never be uwuified and always marked as not needing verification.
- `verify_urls`: If ``True``, URL-like tokens are marked as special and require caller verification. If ``False``, URLs are left untouched and not marked as special. Defaults to ``False``.
- `verify_men_chan_role`: If ``True``, Discord mentions (``<@123>``), channels (``<#123>``), and roles (``<@&123>``) are marked as special tokens. If ``False``, they are preserved automatically. Defaults to ``True``.
- `verify_emojis`: If ``True``, both custom Discord emojis (``<:name:id>``, ``<a:name:id>``) and plain-text emoji syntax (``:name:``) are marked as special tokens. If ``False``, they are left untouched and not marked as special. Defaults to ``True``.
If something extra is inserted like an action or face, it will mark the original as `''` or `None` in the first element of the tuple.
#### Time-Based Seeding:
Utilize time-based seeding for unique transformations:
```python
from datetime import datetime
from uwuipy import Uwuipy
message = "Hello this is a message posted in 2017."
seed = datetime(2017, 11, 28, 23, 55, 59, 342380).timestamp()
uwu = Uwuipy(seed)
print(uwu.uwuify(message)) # Hewwo ***blushes*** t-t-t-this is a ***cries*** message posted ***screeches*** in 2017.
```
This method only uses the `uwuify()` function, accepting a string and returning an uwuified string based on the constructor parameters.
### Directly in the terminal
#### CLI
Use `uwuipy` directly from the command line for quick uwuification:
```bash
python3 -m uwuipy The quick brown fox jumps over the lazy dog
```
Output:
```bash
The q-q-quick bwown fox jyumps uvw the wazy dog
```
#### REPL
REPL Mode:
```bash
python3 -m uwuipy
>>> The quick brown fox jumps over the lazy dog
The quick bwown fox jyumps uvw the wazy dog
```
#### Help
Command Line Help:
```bash
python3 -m uwuipy --help
```
## Contributing and Licence
Feel free contribute to the [GitHub repo](https://github.com/Cuprum77/uwuipy) of the project.
Licenced under [MIT](https://github.com/Cuprum77/uwuipy/blob/main/LICENSE)
| text/markdown | Cuprum77 | null | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://github.com/Cuprum77/uwuipy"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T15:42:57.207622 | uwuipy-1.0.0.tar.gz | 9,214 | e3/0f/42182b17a8a25b53a19b59e9e264f94d83aaeed68698ecc783f1f82c4ecd/uwuipy-1.0.0.tar.gz | source | sdist | null | false | fe4622c83ced74fcd409f14ce0f698b7 | d14125753883734a8aebb03355aa3a7614a239ac4b706d2ff38c3751231a81d7 | e30f42182b17a8a25b53a19b59e9e264f94d83aaeed68698ecc783f1f82c4ecd | null | [
"AUTHORS",
"LICENSE"
] | 266 |
2.1 | llama-cpp-python-win | 0.3.21 | Python bindings for the llama.cpp library | <p align="center">
<img src="https://raw.githubusercontent.com/abetlen/llama-cpp-python/main/docs/icon.svg" style="height: 5rem; width: 5rem">
</p>
# Python Bindings for [`llama.cpp`](https://github.com/ggerganov/llama.cpp)
[](https://llama-cpp-python.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/abetlen/llama-cpp-python/actions/workflows/test.yaml)
[](https://pypi.org/project/llama-cpp-python/)
[](https://pypi.org/project/llama-cpp-python/)
[](https://pypi.org/project/llama-cpp-python/)
[](https://pepy.tech/projects/llama-cpp-python)
[]()
Simple Python bindings for **@ggerganov's** [`llama.cpp`](https://github.com/ggerganov/llama.cpp) library.
This package provides:
- Low-level access to C API via `ctypes` interface.
- High-level Python API for text completion
- OpenAI-like API
- [LangChain compatibility](https://python.langchain.com/docs/integrations/llms/llamacpp)
- [LlamaIndex compatibility](https://docs.llamaindex.ai/en/stable/examples/llm/llama_2_llama_cpp.html)
- OpenAI compatible web server
- [Local Copilot replacement](https://llama-cpp-python.readthedocs.io/en/latest/server/#code-completion)
- [Function Calling support](https://llama-cpp-python.readthedocs.io/en/latest/server/#function-calling)
- [Vision API support](https://llama-cpp-python.readthedocs.io/en/latest/server/#multimodal-models)
- [Multiple Models](https://llama-cpp-python.readthedocs.io/en/latest/server/#configuration-and-multi-model-support)
Documentation is available at [https://llama-cpp-python.readthedocs.io/en/latest](https://llama-cpp-python.readthedocs.io/en/latest).
## Installation
Requirements:
- Python 3.8+
- C compiler
- Linux: gcc or clang
- Windows: Visual Studio or MinGW
- MacOS: Xcode
To install the package, run:
```bash
pip install llama-cpp-python-win==0.3.21
```
This will also build `llama.cpp` from source and install it alongside this python package.
If this fails, add `--verbose` to the `pip install` see the full cmake build log.
**Pre-built Wheel (New)**
It is also possible to install a pre-built wheel with basic CPU support.
```bash
pip install llama-cpp-python-win==0.3.21 \
--extra-index-url https://github.com/Srinadhch07/llama-cpp-python-wheels/releases/download/v0.3.21/llama_cpp_python_win-0.3.21-cp314-cp314-win_amd64.whl
```
## High-level API
[API Reference](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#high-level-api)
The high-level API provides a simple managed interface through the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
Below is a short example demonstrating how to use the high-level API to for basic text completion:
```python
from llama_cpp import Llama
llm = Llama(
model_path="./models/7B/llama-model.gguf",
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
)
output = llm(
"Q: Name the planets in the solar system? A: ", # Prompt
max_tokens=32, # Generate up to 32 tokens, set to None to generate up to the end of the context window
stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
echo=True # Echo the prompt back in the output
) # Generate a completion, can also call create_completion
print(output)
```
By default `llama-cpp-python` generates completions in an OpenAI compatible format:
```python
{
"id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"object": "text_completion",
"created": 1679561337,
"model": "./models/7B/llama-model.gguf",
"choices": [
{
"text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
"index": 0,
"logprobs": None,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 14,
"completion_tokens": 28,
"total_tokens": 42
}
}
```
Text completion is available through the [`__call__`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.__call__) and [`create_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_completion) methods of the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
### Pulling models from Hugging Face Hub
You can download `Llama` models in `gguf` format directly from Hugging Face using the [`from_pretrained`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.from_pretrained) method.
You'll need to install the `huggingface-hub` package to use this feature (`pip install huggingface-hub`).
```python
llm = Llama.from_pretrained(
repo_id="Qwen/Qwen2-0.5B-Instruct-GGUF",
filename="*q8_0.gguf",
verbose=False
)
```
By default [`from_pretrained`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.from_pretrained) will download the model to the huggingface cache directory, you can then manage installed model files with the [`huggingface-cli`](https://huggingface.co/docs/huggingface_hub/en/guides/cli) tool.
### Chat Completion
The high-level API also provides a simple interface for chat completion.
Chat completion requires that the model knows how to format the messages into a single prompt.
The `Llama` class does this using pre-registered chat formats (ie. `chatml`, `llama-2`, `gemma`, etc) or by providing a custom chat handler object.
The model will will format the messages into a single prompt using the following order of precedence:
- Use the `chat_handler` if provided
- Use the `chat_format` if provided
- Use the `tokenizer.chat_template` from the `gguf` model's metadata (should work for most new models, older models may not have this)
- else, fallback to the `llama-2` chat format
Set `verbose=True` to see the selected chat format.
```python
from llama_cpp import Llama
llm = Llama(
model_path="path/to/llama-2/llama-model.gguf",
chat_format="llama-2"
)
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are an assistant who perfectly describes images."},
{
"role": "user",
"content": "Describe this image in detail please."
}
]
)
```
Chat completion is available through the [`create_chat_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion) method of the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
For OpenAI API v1 compatibility, you use the [`create_chat_completion_openai_v1`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion_openai_v1) method which will return pydantic models instead of dicts.
### JSON and JSON Schema Mode
To constrain chat responses to only valid JSON or a specific JSON Schema use the `response_format` argument in [`create_chat_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion).
#### JSON Mode
The following example will constrain the response to valid JSON strings only.
```python
from llama_cpp import Llama
llm = Llama(model_path="path/to/model.gguf", chat_format="chatml")
llm.create_chat_completion(
messages=[
{
"role": "system",
"content": "You are a helpful assistant that outputs in JSON.",
},
{"role": "user", "content": "Who won the world series in 2020"},
],
response_format={
"type": "json_object",
},
temperature=0.7,
)
```
#### JSON Schema Mode
To constrain the response further to a specific JSON Schema add the schema to the `schema` property of the `response_format` argument.
```python
from llama_cpp import Llama
llm = Llama(model_path="path/to/model.gguf", chat_format="chatml")
llm.create_chat_completion(
messages=[
{
"role": "system",
"content": "You are a helpful assistant that outputs in JSON.",
},
{"role": "user", "content": "Who won the world series in 2020"},
],
response_format={
"type": "json_object",
"schema": {
"type": "object",
"properties": {"team_name": {"type": "string"}},
"required": ["team_name"],
},
},
temperature=0.7,
)
```
### Function Calling
The high-level API supports OpenAI compatible function and tool calling. This is possible through the `functionary` pre-trained models chat format or through the generic `chatml-function-calling` chat format.
```python
from llama_cpp import Llama
llm = Llama(model_path="path/to/chatml/llama-model.gguf", chat_format="chatml-function-calling")
llm.create_chat_completion(
messages = [
{
"role": "system",
"content": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant calls functions with appropriate input when necessary"
},
{
"role": "user",
"content": "Extract Jason is 25 years old"
}
],
tools=[{
"type": "function",
"function": {
"name": "UserDetail",
"parameters": {
"type": "object",
"title": "UserDetail",
"properties": {
"name": {
"title": "Name",
"type": "string"
},
"age": {
"title": "Age",
"type": "integer"
}
},
"required": [ "name", "age" ]
}
}
}],
tool_choice={
"type": "function",
"function": {
"name": "UserDetail"
}
}
)
```
<details>
<summary>Functionary v2</summary>
The various gguf-converted files for this set of models can be found [here](https://huggingface.co/meetkai). Functionary is able to intelligently call functions and also analyze any provided function outputs to generate coherent responses. All v2 models of functionary supports **parallel function calling**. You can provide either `functionary-v1` or `functionary-v2` for the `chat_format` when initializing the Llama class.
Due to discrepancies between llama.cpp and HuggingFace's tokenizers, it is required to provide HF Tokenizer for functionary. The `LlamaHFTokenizer` class can be initialized and passed into the Llama class. This will override the default llama.cpp tokenizer used in Llama class. The tokenizer files are already included in the respective HF repositories hosting the gguf files.
```python
from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer
llm = Llama.from_pretrained(
repo_id="meetkai/functionary-small-v2.2-GGUF",
filename="functionary-small-v2.2.q4_0.gguf",
chat_format="functionary-v2",
tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.2-GGUF")
)
```
**NOTE**: There is no need to provide the default system messages used in Functionary as they are added automatically in the Functionary chat handler. Thus, the messages should contain just the chat messages and/or system messages that provide additional context for the model (e.g.: datetime, etc.).
</details>
## License
This project is licensed under the terms of the MIT license.
| text/markdown | null | Srinadh chintakindi <srinadhc07@gmail.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"typing-extensions>=4.5.0",
"numpy>=1.20.0",
"diskcache>=5.6.1",
"jinja2>=2.11.3",
"uvicorn>=0.22.0; extra == \"server\"",
"fastapi>=0.100.0; extra == \"server\"",
"pydantic-settings>=2.0.1; extra == \"server\"",
"sse-starlette>=1.6.1; extra == \"server\"",
"starlette-context<0.4,>=0.3.6; extra == \... | [] | [] | [] | [
"Homepage, https://github.com/Srinadhch07/llama-cpp-python-wheels",
"Issues, https://github.com/Srinadhch07/llama-cpp-python-wheels/issues",
"Documentation, https://llama-cpp-python.readthedocs.io/en/latest/",
"Changelog, https://llama-cpp-python.readthedocs.io/en/latest/changelog/"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T15:41:39.093005 | llama_cpp_python_win-0.3.21-cp314-cp314-win_amd64.whl | 6,998,438 | de/1a/37adbdba8727d2156f63a0ff731ad3ef7f7fd94a9fa218f80ba37c4522a2/llama_cpp_python_win-0.3.21-cp314-cp314-win_amd64.whl | cp314 | bdist_wheel | null | false | 8fdfd81023e8c8150610e7c11f6a1517 | 6999c8f9c4cdeecd1feffb41df645980f9f4f298058f833ea3f166b99a158952 | de1a37adbdba8727d2156f63a0ff731ad3ef7f7fd94a9fa218f80ba37c4522a2 | null | [] | 98 |
2.1 | fhenomai | 1.0.22 | Official Python SDK for FHEnom for AI™ - Confidential AI with fully encrypted models and data | # FHEnom AI Python Client Library
**Official Python SDK for FHEnom for AI™** - Confidential AI with fully encrypted models and data.
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## 🚀 Quick Start
### Installation
```bash
pip install fhenomai
```
Or install from source:
<!-- ```bash
git clone https://devops.datakrypto.com/DataKrypto/FHENOMAI_LIB/_git/FHENOMAI_LIB
cd fhenomai
pip install -e .
``` -->
### CLI Configuration
First, configure the CLI with your TEE server details:
```bash
# Initialize configuration (interactive)
fhenomai config init \
--admin-host YOUR_TEE_IP \
--admin-port 9099 \
--user-host YOUR_TEE_IP \
--user-port 9999 \
--sftp-host YOUR_TEE_IP \
--sftp-username admin \
--sftp-password YOUR_PASSWORD
# Verify configuration
fhenomai config show
# Test connectivity
fhenomai test connection
```
### Basic CLI Usage
```bash
# List models
fhenomai model list --show-status
# Upload model via SFTP (upload/ prefix added automatically)
fhenomai sftp upload ./my-model my-model --recursive
# Encrypt model (paths normalized automatically)
fhenomai model encrypt my-model my-model-encrypted \
--encrypted-model-id my-model-encrypted \
--wait --show-progress
# Download encrypted model (download/ prefix added automatically)
fhenomai sftp download my-model-encrypted ./encrypted/my-model --recursive
# Start serving
fhenomai serve start my-model-encrypted \
--server-url http://YOUR_VLLM_SERVER_IP:8000 \
--display-model-name my-model
# Stop serving
fhenomai serve stop my-model-encrypted
```
### Basic Python SDK Usage
```python
from fhenomai import FHEnomClient, FHEnomConfig
# Load configuration from file
config = FHEnomConfig.from_file() # Reads from ~/.fhenomai/config.yaml
# Initialize client
client = FHEnomClient(config)
# List available models
models = client.admin.list_models()
print(f"Available models: {models}")
# Encrypt a model (paths auto-prefixed with /models/upload/ and /models/download/)
job_id = client.admin.encrypt_model(
model_name_or_path="llama-3-8b", # Becomes /models/upload/llama-3-8b
out_encrypted_model_path="llama-3-8b-encrypted", # Becomes /models/download/llama-3-8b-encrypted
encrypted_model_id="llama-3-8b-encrypted"
)
# Wait for completion
result = client.admin.wait_for_job(job_id, timeout=3600)
# Start serving
client.admin.start_serving(
encrypted_model_id="llama-3-8b-encrypted",
server_url="http://YOUR_VLLM_SERVER_IP:8000", # vLLM server IP/hostname
display_model_name="llama-3-8b-instruct" # Optional: for vLLM --served-model-name
)
```
## 📚 Features
### Core Capabilities
- **CLI Tool**: Full-featured command-line interface for all operations
- **Python SDK**: Programmatic access via `FHEnomClient` and `AdminAPI`
- **Model Encryption**: Encrypt models on TEE server with progress tracking
- **Dataset Encryption**: Encrypt datasets using encrypted models
- **SFTP Integration**: Upload/download with automatic path normalization
- **Job Monitoring**: Real-time progress updates and status checking
- **Serving Control**: Start/stop model serving with vLLM integration
### CLI Commands
- **config**: `init`, `show`, `validate`, `test`
- **model**: `list`, `encrypt`, `encrypt-dataset`, `info`, `upload`, `download`, `delete`
- **serve**: `start`, `stop`, `list`
- **sftp**: `upload`, `download`, `list`, `clear`
- **job**: `status`, `wait`
- **health**: `check`, `admin`, `sftp`
- **test**: `connection`, `admin`, `sftp`
### Advanced Features
- **Progress Bars**: Rich terminal UI with real-time progress
- **Auto Path Normalization**: Automatic `upload/` and `download/` prefix handling
- **Duplicate Detection**: Warns about existing model names
- **Directory Management**: Bulk operations on TEE directories
- **Health Monitoring**: Test connectivity to all services
- **Context Manager**: Automatic resource cleanup
- **TEE Attestation**: Generate and verify TEE attestation reports with built-in verification
### TEE Attestation Support (v1.0.7)
!!! info "New in v1.0.7"
Enhanced attestation with automatic file management, format inference, and built-in verification.
Report formatting is now integrated into fhenomai for stability.
FHEnom AI includes integrated TEE attestation with AMD SEV-SNP and Intel TDX support:
```bash
# Install fhenomai (includes dk-tee-attestation for verification)
pip install fhenomai
# Generate attestation report (creates 3 files)
fhenomai admin attestation --output report.html
# Creates: report.html, report.bin, report.nonce
# Verify attestation (nonce auto-loads from report.nonce)
fhenomai admin verify-attestation --report report.bin
# Generate detailed PDF with hex dump
fhenomai admin attestation --format detailed --output analysis.pdf
# Verify with detailed output
fhenomai admin verify-attestation --report report.bin --format detailed
```
**What's New in v1.0.7:**
- ✨ **Triple file output**: All attestation commands create .html/.pdf/.txt + .bin + .nonce
- ✨ **Format inference**: File extension determines output type (.html, .pdf, .txt)
- ✨ **Changed `--format` behavior**: Now controls display style (standard/detailed) not output type
- ✨ **Auto-load nonce**: Verification automatically loads .nonce file if not provided
- ✨ **Built-in verification**: New `verify-attestation` command with color-coded output
- ✨ **Parsed reports**: CPU info, TCB details, and signatures cleanly displayed
- ✨ **Integrated formatter**: Report formatting moved from dk-tee-attestation to fhenomai for API stability
Python SDK usage:
```python
from fhenomai import FHEnomClient, AttestationReportFormatter
client = FHEnomClient.from_config()
# Generate attestation (nonce auto-generated)
report = client.admin.attestation()
# Save report
with open("report.bin", "wb") as f:
f.write(report)
# Verify attestation
result = client.admin.verify_attestation(
report=report,
engine_type="amd_sev_snp"
)
if result['verified']:
print(f"✓ Verified - Platform: {result['platform']}")
print(f" CPU: {result['cpu_info']}")
# Use the formatter directly for custom output
formatter = AttestationReportFormatter()
html_report = formatter.format_html(report)
with open("custom_report.html", "w") as f:
f.write(html_report)
```
**Verification Features:**
- ✅ ECDSA P-384 signature validation
- ✅ Nonce binding verification
- ✅ TCB (Trusted Computing Base) parsing
- ✅ CPU identification
- ✅ Color-coded hex dumps
- ✅ HTML/PDF report generation
- ✅ Platform detection (AMD SEV-SNP, Intel TDX)
## 📖 Documentation
### Admin API Operations
```python
# Model discovery
models = client.admin.list_models()
online_models = client.admin.list_online_models()
model_info = client.admin.get_model_info(model_id)
# Model encryption (paths auto-normalized)
job_id = client.admin.encrypt_model(
model_name_or_path="model-name", # Auto-prefixed with /models/upload/
out_encrypted_model_path="model-name-encrypted", # Auto-prefixed with /models/download/
encrypted_model_id="model-name-encrypted", # Custom model ID
encryption_impl="decoder-only-llm",
dtype="bfloat16",
server_ip="fhenom_ai_server",
server_port=9100
)
# Dataset encryption (paths auto-normalized)
dataset_job = client.admin.encrypt_dataset(
encrypted_model_id="my-encrypted-model",
dataset_name_or_path="my-dataset", # Auto-prefixed with /models/upload/
out_encrypted_dataset_path="my-dataset-encrypted", # Auto-prefixed with /models/download/
dataset_encryption_impl="numeric",
text_fields=["text"],
server_ip="fhenom_ai_server",
server_port=9100
)
# Serving control
client.admin.start_serving(
encrypted_model_id=model_id,
server_url="http://YOUR_VLLM_SERVER_IP:8000", # vLLM server IP/hostname
api_key=None, # Optional
display_model_name="my-model" # Optional: custom name for vLLM
)
client.admin.stop_serving(model_id)
# Job management
status = client.admin.get_job_status(job_id)
result = client.admin.wait_for_job(
job_id,
poll_interval=5,
timeout=3600,
callback=lambda s: print(f"Progress: {s.get('progress', 0)*100:.1f}%")
)
```
### SFTP Operations
```python
# Get SFTP manager
sftp = client.get_sftp_manager()
# Upload model (upload/ prefix added automatically)
sftp.upload_directory(
local_path="./llama-3-8b",
remote_path="llama-3-8b" # Becomes upload/llama-3-8b
)
# Download encrypted model (download/ prefix added automatically)
sftp.download_directory(
remote_path="llama-3-8b-encrypted", # Becomes download/llama-3-8b-encrypted
local_path="./encrypted/llama-3-8b"
)
# List files in upload directory
files = sftp.list_upload_directory()
for file in files:
print(f"{file.name}: {file.size_mb:.2f} MB")
# Clear download directory
sftp.clear_download_directory()
# Get directory size
size_gb = sftp.get_directory_size("upload")
print(f"Upload directory: {size_gb:.2f} GB")
# Check if file exists (via Admin API's SFTP manager)
exists = client.admin.sftp.file_exists("upload/my-model/config.json")
```
### Health & Testing
```python
# Test connectivity (via CLI)
# fhenomai health check
# fhenomai test connection
# In Python - test admin API
try:
models = client.admin.list_models()
print(f"✓ Admin API connected ({len(models)} models)")
except Exception as e:
print(f"✗ Admin API failed: {e}")
# Test SFTP connection
try:
sftp = client.get_sftp_manager()
files = sftp.list_upload_directory()
print(f"✓ SFTP connected ({len(files)} files in upload/)")
except Exception as e:
print(f"✗ SFTP failed: {e}")
```
### User Inference (via OpenAI SDK)
For inference, use the standard OpenAI Python SDK:
```python
from openai import OpenAI
# Connect to FHEnom User API (port 9999)
client = OpenAI(
base_url="http://your-tee-ip:9999/v1",
api_key="not-needed" # TEE doesn't require API key
)
# Standard OpenAI-compatible inference
response = client.chat.completions.create(
model="your-model-name",
messages=[
{"role": "user", "content": "Explain quantum computing"}
],
max_tokens=200
)
print(response.choices[0].message.content)
```
## 🛠️ Advanced Usage
### Context Manager Usage
```python
from fhenomai import FHEnomClient, FHEnomConfig
# Load config
config = FHEnomConfig.from_file()
# Context manager handles connection lifecycle
with FHEnomClient(config) as client:
# SFTP connection auto-managed
sftp = client.get_sftp_manager()
# Upload model (upload/ prefix added automatically)
sftp.upload_directory("./model", "model")
# Encrypt (paths auto-normalized)
job_id = client.admin.encrypt_model(
model_name_or_path="model",
out_encrypted_model_path="model-enc",
encrypted_model_id="model-enc"
)
# Wait for completion
result = client.admin.wait_for_job(job_id)
if result.get('status') == 'done':
# Download encrypted model (download/ prefix added automatically)
sftp.download_directory(
"model-enc",
"./encrypted/model"
)
# Connection automatically closed
```
### Job Monitoring with Callbacks
```python
import time
# Encrypt with progress callback (paths auto-normalized)
job_id = client.admin.encrypt_model(
model_name_or_path="large-model",
out_encrypted_model_path="large-model-enc",
encrypted_model_id="large-model-enc"
)
# Define callback for progress updates
def progress_callback(status):
progress = status.get('progress', 0) * 100
message = status.get('message', 'Processing')
print(f"\r{message}: {progress:.1f}%", end='', flush=True)
# Wait with callback
result = client.admin.wait_for_job(
job_id,
timeout=3600,
poll_interval=5,
callback=progress_callback
)
print(f"\nCompleted: {result.get('status')}")
## 📋 Configuration
### Configuration File
Create `~/.fhenomai/config.yaml`:
```yaml
# Admin API Configuration
admin:
host: "your-tee-ip"
port: 9099
url: "http://your-tee-ip:9099" # Alternative to host+port
# User API Configuration (for inference)
user:
host: "your-tee-ip"
port: 9999
url: "http://your-tee-ip:9999/v1" # Alternative to host+port
# SFTP Configuration
sftp:
host: "your-tee-ip"
port: 22
username: "admin"
password: "your-password" # Or use key_path
# key_path: "~/.ssh/id_rsa" # Alternative to password
base_path: "/var/lib/fhenomai/FHEnomAI-server/admin" # Optional
# Optional settings
timeout: 30
max_retries: 3
verify_ssl: true
auth_token: "default-auth-token-2026" # X-Auth-Token header
```
### Environment Variables
```bash
export FHENOM_ADMIN_HOST="your-tee-ip"
export FHENOM_ADMIN_PORT="9099"
export FHENOM_SFTP_HOST="your-tee-ip"
export FHENOM_SFTP_USERNAME="admin"
export FHENOM_SFTP_PASSWORD="your-password"
```
Then use without parameters:
```python
from fhenomai import FHEnomClient, FHEnomConfig
# Load from environment
config = FHEnomConfig.from_env()
client = FHEnomClient(config)
# Or load from file
config = FHEnomConfig.from_file() # Reads ~/.fhenomai/config.yaml
client = FHEnomClient(config)
```
## 🔧 API Reference
### FHEnomClient
Main client class for FHEnom AI operations.
**Key Methods:**
- `admin` - Access AdminAPI instance for model/serving operations
- `get_sftp_manager()` - Get SFTPManager for file operations
- Context manager support with `__enter__` and `__exit__`
### AdminAPI
Admin operations (accessible via `client.admin`):
**Model Operations:**
- `list_models()` - List all encrypted models
- `list_online_models()` - List currently served models
- `get_model_info(model_id)` - Get model details
- `encrypt_model(...)` - Encrypt a plaintext model
- `encrypt_dataset(...)` - Encrypt a dataset
**Serving Operations:**
- `start_serving(encrypted_model_id, server_url, ...)` - Start serving
- `stop_serving(encrypted_model_id)` - Stop serving
**Job Operations:**
- `get_job_status(job_id)` - Check job status
- `wait_for_job(job_id, timeout, callback)` - Wait for completion
**SFTP Operations (via `admin.sftp`):**
- Access to SFTPManager for TEE directory operations
### SFTPManager
High-level SFTP operations (accessible via `client.get_sftp_manager()` or `client.admin.sftp`):
**Directory Operations:**
- `upload_directory(local_path, remote_path)` - Upload directory
- `download_directory(remote_path, local_path)` - Download directory
- `list_upload_directory()` - List files in upload/
- `list_download_directory()` - List files in download/
- `clear_upload_directory()` - Clear upload directory
- `clear_download_directory()` - Clear download directory
**File Operations:**
- `upload_file(local_file, remote_file)` - Upload single file
- `download_file(remote_file, local_file)` - Download single file
- `file_exists(remote_path)` - Check if file exists
- `get_directory_size(directory)` - Get size in GB
## 🤝 Contributing
Contributions are welcome! Please contact DataKrypto for contribution guidelines.
## 📄 License
This project is licensed under the MIT License - see [LICENSE](LICENSE) file.
## 🔗 Links
- **Repository**: [Azure DevOps](https://devops.datakrypto.com/DataKrypto/FHENOMAI_LIB/_git/FHENOMAI_LIB)
- **Documentation**: [https://docs.datakrypto.ai](https://docs.datakrypto.ai)
- **Website**: [https://datakrypto.ai](https://datakrypto.ai)
- **LinkedIn**: [DataKrypto](https://www.linkedin.com/company/datakrypto/)
- **Support**: [support@datakrypto.ai](mailto:support@datakrypto.ai)
## 📞 Contact
**DataKrypto**
**United States**
533 Airport Blvd. Ste 400
Burlingame, CA 94010
+1 (650) 373-2083
**Italy**
Via Marche, 54
00187 Rome - Italy
+39 (06) 88923849
---
**© 2026 DataKrypto. All rights reserved.**
| text/markdown | null | DataKrypto <support@datakrypto.ai> | null | null | null | fhe, fully-homomorphic-encryption, confidential-ai, encrypted-ai, machine-learning, deep-learning, privacy, security, datakrypto, fhenom | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"paramiko>=3.0.0",
"urllib3>=1.26.0",
"pyyaml>=6.0",
"tqdm>=4.65.0",
"rich>=13.0.0",
"click>=8.0.0",
"dk-tee-attestation>=0.2.4",
"cryptography>=46.0.0",
"reportlab>=4.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0;... | [] | [] | [] | [
"Homepage, https://datakrypto.ai",
"Documentation, https://docs.datakrypto.ai",
"Repository, https://github.com/datakrypto/fhenomai",
"Bug Tracker, https://github.com/datakrypto/fhenomai/issues",
"LinkedIn, https://www.linkedin.com/company/datakrypto/"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T15:41:10.815932 | fhenomai-1.0.22.tar.gz | 72,119 | 3b/eb/dfe8ec7c29f072ab3edc13c298cea49598381f72a09e629a018cd6816d57/fhenomai-1.0.22.tar.gz | source | sdist | null | false | 663d6e9e47269a4b95652b1cbf3045bf | 5949e2a52dbaef56981c832fdb2768f5d552ebda38f0535e430443c5b87e316b | 3bebdfe8ec7c29f072ab3edc13c298cea49598381f72a09e629a018cd6816d57 | null | [] | 236 |
2.4 | wtf-dev | 0.1.4 | What did I work on? A snarky standup generator. | # wtf-dev
A CLI tool that tells you what you worked on - with personality.
## Install
```bash
pip install wtf-dev
```
## Setup
```bash
wtf setup
```
This will prompt for your OpenRouter API key and let you pick a model.
## Usage
```bash
# what did I do today?
wtf
# look back N days
wtf --days 3
# only current repo
wtf --here
# copy to clipboard
wtf --copy
```
## Features
- **Standup summary** - LLM-generated summary of your commits
- **WIP tracking** - Shows uncommitted changes + what you're currently working on
- **Streak counter** - Track your commit streak
- **Late night detection** - Spots those 2am coding sessions
- **Branch context** - Shows which branches you touched
- **History** - View past standups with `wtf --history`
- **Cost tracking** - Track API spending with `wtf --spending`
## Output
```
PREVIOUSLY ON YOUR CODE... Feb 02, 2026
* 5 day streak
────────────────────────────────────────────────────────────
ai-platform (main) ─── 2 commits
├─ feat(sdr): add langsmith tracing
└─ feat(sdr): add automatic follow-up
[wip]
ai-platform ─── 3 files changed
├─ M src/api/routes.py
└─ A src/new_feature.py
────────────────────────────────────────────────────────────
Added LangSmith tracing and automatic follow-up for stale
conversations in the SDR pipeline.
Currently working on: Adding new API routes for validation.
Two features down, infinite bugs to go.
```
## Flags
| Flag | Short | Description |
|------|-------|-------------|
| `--dir PATH` | `-d` | Scan a specific directory |
| `--here` | `-H` | Only current repo |
| `--days N` | `-n` | Look back N days (default: 1) |
| `--author NAME` | `-a` | Filter by author |
| `--copy` | `-c` | Copy to clipboard |
| `--history` | | View past standups |
| `--spending` | | Show API costs |
| `--json` | | Output as JSON |
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"inquirerpy>=0.3.4",
"pydantic-settings>=2.12.0",
"pydantic>=2.12.5",
"pyperclip>=1.11.0",
"python-dotenv>=1.2.1",
"requests>=2.32.5",
"rich>=14.3.1",
"typer>=0.21.1"
] | [] | [] | [] | [] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T15:41:09.465795 | wtf_dev-0.1.4.tar.gz | 38,183 | 01/df/12bb1075b816b353d2603064fcb428f6ccbed2b82e461d34236ace017754/wtf_dev-0.1.4.tar.gz | source | sdist | null | false | fcd8694c44956b37473cbe4c67c42bc5 | 4903f6014937d7e209f97cb463f7f5bbb6da00871603d984a7051fe705ffc1ce | 01df12bb1075b816b353d2603064fcb428f6ccbed2b82e461d34236ace017754 | null | [] | 227 |
2.4 | xwmb | 0.5.5 | Efficient and lazy computation of Water Mass Budgets in arbitrary sub-domains of C-grid ocean models | # xwmb
**xWMB** is a Python package that provides a efficient and lazy computation of Water Mass Budgets in arbitrary sub-domains of C-grid ocean models. Most of the heavy lifting is done by dependency packages by the same team of developers:
- [`sectionate`](https://github.com/MOM6-Community/sectionate): for computing transports normal to a section (open or closed)
- [`regionate`](https://github.com/hdrake/regionate): for converting between gridded masks and the closed sections that bound them
- [`xbudget`](https://github.com/hdrake/xbudget): for model-agnostic wrangling of multi-level tracer budgets
- [`xwmt`](https://github.com/NOAA-GFDL/xwmt): for computing bulk water mass transformations from these budgets
Documentation is not yet available, but the core API is illustrated in the example notebooks here and in each of the dependency packages.
If you use `xwmb`, please cite the companion manuscript: Henri F. Drake, Shanice Bailey, Raphael Dussin, Stephen M. Griffies, John Krasting, Graeme MacGilchrist, Geoffrey Stanley, Jan-Erik Tesdal, Jan D. Zika. Water Mass Transformation Budgets in Finite-Volume Generalized Vertical Coordinate Ocean Models. Journal of Advances in Modeling Earth Systems. 08 March 2025. DOI: [doi.org/10.1029/2024MS004383](https://doi.org/10.1029/2024MS004383)
Quick Start Guide
-----------------
**Minimal installation within an existing environment**
```bash
pip install xwmb
```
**Installing from scratch using `conda`**
This is the recommended mode of installation for developers.
```bash
git clone git@github.com:hdrake/xwmb.git
cd xwmb
conda env create -f docs/environment.yml
conda activate docs_env_xwmb
pip install -e .
```
You can verify that the package was properly installed by confirming it passes all of the tests with:
```bash
pytest -v
```
You can launch a Jupyterlab instance using this environment with:
```bash
python -m ipykernel install --user --name docs_env_xwmb --display-name "docs_env_xwmb"
jupyter-lab
```
| text/markdown | null | "Henri F. Drake" <hfdrake@uci.edu> | null | null | null | ocean mixing, water mass transformation | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"regionate>=0.5.0",
"xwmt>=0.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/hdrake/xwmb",
"Bugs/Issues/Features, https://github.com/hdrake/xwmb/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T15:40:39.421392 | xwmb-0.5.5.tar.gz | 396,443 | 2d/d8/a3dd52573bc1eaacf37ef65694bb59e379acc3af844b482ad96008d1d77f/xwmb-0.5.5.tar.gz | source | sdist | null | false | f337b5c02b81a5838132a9fdcd02c2fd | 14a2b1cf88b857272e4d0af9793bc17e75a4b02247a765230cfd382181773b61 | 2dd8a3dd52573bc1eaacf37ef65694bb59e379acc3af844b482ad96008d1d77f | null | [] | 234 |
2.4 | nightcrawler-mitm | 0.11.0 | A mitmproxy addon for background passive analysis, crawling, and basic active scanning, designed as a security researcher's sidekick. | # nightcrawler-mitm

Version: 0.10.0
A mitmproxy addon for background passive analysis, crawling, and basic active
scanning, designed as a security researcher's sidekick.
**WARNING: BETA Stage - Use with caution, especially active scanning features**
## FEATURES
- Acts as an HTTP/HTTPS proxy.
- Performs passive analysis:
- Security Headers (HSTS, CSP, XCTO, XFO, Referrer-Policy, Permissions-Policy,
COOP, COEP, CORP, basic weakness checks).
- Cookie Attributes (Secure, HttpOnly, SameSite).
- JWT Discovery: Finds and decodes JWTs in headers and JSON responses,
checking for alg:none and expired claims.
- JS Library Identification: Detects common JavaScript libraries and their
versions.
- WebSocket Authentication: Warns if a WebSocket connection is established
without a session cookie.
- Basic Info Disclosure checks (Comments, basic keyword context - Note:
API/Key/Secret checks temporarily disabled).
- Crawls the target application to discover new endpoints.
- Runs basic active scans for low-hanging fruit:
- Reflected XSS (basic reflection check).
- SQL Injection (basic error/time-based checks).
- Stored XSS (basic probe injection and revisit check).
- Configurable target scope, concurrency, payloads, and output via command-line
options.
- Logs findings to console and optionally to a JSONL file.
## INSTALLATION
You can install `nightcrawler-mitm` directly from PyPI using pip (once
published):
```sh
pip install nightcrawler-mitm`
```
It's recommended to install it in a virtual environment. For development/local
testing:
- Navigate to project root directory (containing pyproject.toml)
- Activate your virtual environment (e.g., source .venv/bin/activate)
```sh
pip install -e .
```
## USAGE
Once installed, a new command `nightcrawler` becomes available. This command
wraps `mitmdump`, automatically loading the addon. You MUST specify the target
scope using the `--set nc_scope=...` option.
You can pass any other valid `mitmproxy` arguments (like `--ssl-insecure`, `-p`,
`-v`) AND Nightcrawler-specific options using the `--set name=value` syntax.
1. Configure Browser/Client: Set proxy to 127.0.0.1:8080 (or specified port).
2. Install Mitmproxy CA Certificate: Visit <http://mitm.it> via proxy.
3. Run Nightcrawler:
- Specify Target Scope (REQUIRED!): nightcrawler --set nc_scope=example.com
- Common Options (Combine as needed): nightcrawler -p 8081 --set
nc_scope=example.com nightcrawler --ssl-insecure --set
nc_scope=internal-site.local nightcrawler -v --set nc_scope=example.com #
Use -v or -vv for debug logs nightcrawler --set nc_max_concurrency=10 --set
nc_scope=secure.com nightcrawler --set nc_sqli_payload_file=sqli.txt --set
nc_output_file=findings.jsonl --set nc_scope=test.org
- Show Nightcrawler & Mitmproxy version: nightcrawler --version
- Show all Nightcrawler and Mitmproxy options (look for 'nc\_' prefix):
nightcrawler --options
NOTE: If nc_scope is not set, Nightcrawler will run but remain idle.
4. Browse: Browse the target application(s). Findings appear in the terminal and
optionally in the specified JSONL file.
By default, Nightcrawler runs in a "quiet" mode that suppresses mitmproxy's
standard connection logs, allowing you to focus only on the findings generated
by the addon.
### Recommended Commands
- **Standard Mode (Quiet):** Shows only Nightcrawler's INFO, WARN, and ERROR
logs. `nightcrawler --set nc_scope=nightcrawler.test`
- **Nightcrawler Debug Mode:** Use the `-d` or `--debug` flag to see
Nightcrawler's own DEBUG messages (e.g., `[SCAN WORKER] Starting...`), while
still hiding mitmproxy's connection chatter.
`nightcrawler -d --set nc_scope=nightcrawler.test`
- **Full Verbosity Mode:** Use mitmproxy's standard `-v` flag to see
**everything**, including all low-level connection logs. This is useful for
debugging connection issues.
`nightcrawler -v --set nc_scope=nightcrawler.test`
### On-Demand URL Dumping (Linux/macOS)
While Nightcrawler is running, you can dump all discovered URLs to a file
(`nightcrawler_links.txt`) without stopping the process.
1. When Nightcrawler starts, it will print its **Process ID (PID)**.
`[INFO][Nightcrawler] Process ID (PID): 12345`
`[INFO][Nightcrawler] Send SIGUSR1 signal to dump discovered URLs (kill -USR1 12345)`
2. From **another terminal window**, send the `SIGUSR1` signal to that PID:
`kill -USR1 12345`
3. Nightcrawler will immediately write the URLs to `nightcrawler_links.txt` in
the directory where you started it.
## CONFIGURATION
Nightcrawler configuration follows this precedence:
1. Command-line --set options (highest precedence)
2. Values in configuration file
3. Built-in defaults (lowest precedence)
**Configuration File:**
- By default, Nightcrawler looks for a YAML configuration file at:
- `~/.config/nightcrawler-mitm/config.yaml` (on Linux/macOS, standard)
- `%APPDATA%/nightcrawler-mitm/config.yaml` (on Windows, needs check)
- _Fallback:_ `~/.nightcrawler-mitm/config.yaml` (if XDG path not
found/writable)
- You can specify a different configuration file path using the `--nc-config`
option when running Nightcrawler (passed via `--set`):
`nightcrawler --set nc_config=/path/to/my_config.yaml ...`
- The configuration file uses YAML format. Keys should match the addon option
names (without the `--set`).
_Example `config.yaml`:_
```yaml
# ~/.config/nightcrawler-mitm/config.yaml
# Nightcrawler Configuration Example
# Target scope (REQUIRED if not using --set nc_scope)
nc_scope: example.com,internal.dev
# Worker concurrency
nc_max_concurrency: 10
# Custom User-Agent
nc_user_agent: "My Custom Scanner Bot/1.0"
# Custom payload files (paths relative to config file or absolute)
# nc_sqli_payload_file: payloads/custom_sqli.txt
# nc_xss_reflected_payload_file: /opt/payloads/xss.txt
# Stored XSS settings
nc_xss_stored_prefix: MyProbe
nc_xss_stored_format: "<nc_probe data='{probe_id}'/>"
nc_payload_max_age: 7200 # Track payloads for 2 hours
# Output files (relative paths resolved against default data dir, absolute paths used as is)
# nc_output_file: nightcrawler_results.jsonl # Saved in default data dir
# nc_output_html: /var/www/reports/scan_report.html # Saved to absolute path
# WebSocket inspection
nc_inspect_websocket: false
```
### Command-Line Overrides (--set)
You can always override defaults or config file values using --set. This takes
the highest precedence.
```
nightcrawler --set nc_scope=specific-target.com --set nc_max_concurrency=3
```
To see all available nc*options and their current effective values (after
considering defaults, config file, and --set), run: nightcrawler --options |
grep nc*
### Default Data Directory & Output Paths
- If you specify relative paths for nc_output_file or nc_output_html (either in
the config file or via --set), Nightcrawler will attempt to save them relative
to a default data directory:
- Linux/macOS (XDG): ~/.local/share/nightcrawler-mitm/
- Windows (approx): %LOCALAPPDATA%/nightcrawler-mitm/
- If you specify absolute paths (e.g., /tmp/report.html), they will be used
directly.
- Nightcrawler will attempt to create these directories if they don't exist.
## LIMITATIONS
- Basic Active Scans: Scanners are basic, intended for low-hanging fruit. Cannot
detect complex vulnerabilities. DO NOT rely solely on this tool.
- Stored XSS Detection: Basic implementation, may miss cases and have FPs.
- Info Disclosure: Content checks for keys/secrets are basic and currently
disabled pending refactoring.
- Resource Usage: Tune `--set nc_max_concurrency`.
- False Positives/Negatives: Expected. Manual verification is required.
## LICENSE
This project is licensed under the MIT License. See the LICENSE file for
details.
## CONTRIBUTING & EXTENSIBILITY
Nightcrawler is designed to be extensible. We welcome contributions and have
created a plugin-like architecture for adding new active scanners.
If you are a developer and want to add your own checks (e.g., for SSRF, Command
Injection, etc.), please see our detailed guide in the `CONTRIBUTING.md` file in
the repository.
| text/markdown | null | thesp0nge <your.email@example.com> | null | null | MIT License
Copyright (c) 2020 Paolo Perego - paolo@codiceinsicuro.it
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| mitmproxy, security, scanner, proxy, pentest, xss, sqli, crawler, addon | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.1... | [] | null | null | >=3.9 | [] | [] | [] | [
"mitmproxy>=10.0.0",
"httpx>=0.25.0",
"beautifulsoup4>=4.10.0",
"PyYAML>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/thesp0nge/nightcrawler-mitm",
"Repository, https://github.com/thesp0nge/nightcrawler-mitm",
"Bug Tracker, https://github.com/thesp0nge/nightcrawler-mitm/issues"
] | twine/6.1.0 CPython/3.13.11 | 2026-02-18T15:40:20.439849 | nightcrawler_mitm-0.11.0.tar.gz | 49,434 | 6c/00/d1da2807b96feced5aa1760b2902f8eb6dcf36116a815f5e0c1c9fc9a3e7/nightcrawler_mitm-0.11.0.tar.gz | source | sdist | null | false | 80328f51916a49a9fce6f6d93115d491 | 9dbc00361101d13e08e27f1b4b7c58fea0caf7650961744fc8a6b1f8f55d20ba | 6c00d1da2807b96feced5aa1760b2902f8eb6dcf36116a815f5e0c1c9fc9a3e7 | null | [
"LICENSE"
] | 229 |
2.4 | pyzes | 0.1.0 | Python bindings for Intel Level-Zero Driver Library (Sysman API) | # drivers.gpu.compute.pyzes
pyzes
======
Python bindings to the Intel Level-Zero-Driver Library
------------------------------------------------
Provides a Python interface to GPU management and monitoring functions.
This is a wrapper around the Level-Zero-Driver library.
For information about the Level-Zero-Driver library, see the spec document
https://oneapi-src.github.io/level-zero-spec/level-zero/latest/index.html
Download the latest package from: https://github.com/oneapi-src/level-zero
The level-zero header file contains function documentation that is relevant
to this wrapper. The header file is distributed with driver (ze_api.h and zes_api.h)
This module does not handles allocating structs before returning the desired value.
Non-success codes are returned and respective error is printed.
REQUIREMENTS
------------
- **Python 3.10** (required)
- ctypes module (included in standard Python library)
- Level Zero driver installed on the system
INSTALLATION
------------
```bash
# Ensure you have Python 3.10 installed
python3.10 --version
# Install the package (when available)
pip install pyzes
```
USAGE
-----
```
>>> from pyzes import *
>>> rc = zesInit(0)
>>> driver_count = c_uint32(0)
>>> rc = pyzes.zesDriverGet(byref(driver_count), None)
>>> print(f"Driver Count: {driver_count.value}")
```
## C Structure and its python module class ##
struct zes_process_state_t {
zes_structure_type_t stype
[in] type of this structure
const void *pNext
[in][optional] must be null or a pointer to an extension-specific structure (i.e. contains stype and pNext).
uint32_t processId
[out] Host OS process ID.
uint64_t memSize
[out] Device memory size in bytes allocated by this process (may not necessarily be resident on the device at the time of reading).
uint64_t sharedSize
[out] The size of shared device memory mapped into this process (may not necessarily be resident on the device at the time of reading).
zes_engine_type_flags_t engines
[out] Bitfield of accelerator engine types being used by this process.
}
Python Class
class zes_process_state_t(Structure):
_fields_ = [
("pid", c_uint32),
("command", c_char * ZES_STRING_PROPERTY_SIZE),
("memSize", c_uint64), # in bytes
("sharedMemSize", c_uint64),# in bytes
("engineType", zes_engine_type_flags_t),
("subdeviceId", c_uint32),
]
FUNCTIONS
---------
Python methods wrap Level-Zero-Driver functions, implemented in a C shared library.
Each function's use is the same:
- C function output parameters are filled in with values, and return codes are returned.
```
ze_result_t zesDeviceGetProperties(
zes_device_handle_t hDevice,
zes_device_properties_t* pProperties);
>>> props = zes_device_properties_t()
>>> props.stype = ZES_STRUCTURE_TYPE_DEVICE_PROPERTIES
>>> props.pNext = None
>>> pyzes.zesDeviceGetProperties(devices[i], byref(props))
```
- C structs are converted into Python classes.
```
// C Function and typedef struct
ze_result_t zesDeviceGetProperties(
zes_device_handle_t hDevice,
zes_device_properties_t* pProperties);
typedef struct _zes_device_properties_t
{
zes_structure_type_t stype;
void* pNext;
ze_device_properties_t core;
uint32_t numSubdevices;
char serialNumber[ZES_STRING_PROPERTY_SIZE];
char boardNumber[ZES_STRING_PROPERTY_SIZE];
char brandName[ZES_STRING_PROPERTY_SIZE];
char modelName[ZES_STRING_PROPERTY_SIZE];
char vendorName[ZES_STRING_PROPERTY_SIZE];
char driverVersion[ZES_STRING_PROPERTY_SIZE]
} zes_device_properties_t;
>>>print(f"numSubdevices: {props.numSubdevices}")
>>>print(f"serialNumber: {props.serialNumber}")
>>>print(f"boardNumber: {props.boardNumber}")
>>>print(f"brandName: {props.brandName}")
>>>print(f"modelName: {props.modelName}")
>>>print(f"driverVersion: {props.driverVersion}")
>>>print(f"coreClockMHz: {props.core.coreClockRate}")
```
HOW TO USE STRUCTURE CHAINING
```
>>> props = zes_device_properties_t()
>>> props.stype = ZES_STRUCTURE_TYPE_DEVICE_PROPERTIES
>>> ext = zes_device_ext_properties_t()
>>> ext.stype = ZES_STRUCTURE_TYPE_DEVICE_EXT_PROPERTIES
>>> ext.pNext = None
>>> base.pNext = cast(pointer(ext), c_void_p)
>>> pyzes.zesDeviceGetProperties(devices[i], byref(props))
>>> print(f"Extension properties flags: {ext.flags}")
```
For more information see the Level-Zero-Driver documentation.
VARIABLES
---------
All meaningful constants and enums are exposed in Python module.
SUPPORTED APIs
--------------
| API Function | Module | Since Version | Limitations |
|--------------|--------|---------------|-------------|
| `zesInit` | Device | 0.1.0 | None |
| `zesDriverGet` | Device | 0.1.0 | None |
| `zesDeviceGet` | Device | 0.1.0 | None |
| `zesDeviceGetProperties` | Device | 0.1.0 | None |
| `zesDriverGetDeviceByUuidExp` | Device | 0.1.0 | Experimental API |
| `zesDeviceProcessesGetState` | Device | 0.1.0 | None |
| **Memory Management** |-|-|-|
| `zesDeviceEnumMemoryModules` | Memory | 0.1.0 | None |
| `zesMemoryGetProperties` | Memory | 0.1.0 | None |
| `zesMemoryGetState` | Memory | 0.1.0 | None |
| `zesMemoryGetBandwidth` | Memory | 0.1.0 | Linux: Requires superuser or read permissions for telem nodes |
| **Power Management** |-|-|-|
| `zesDeviceEnumPowerDomains` | Power | 0.1.0 | None |
| `zesPowerGetEnergyCounter` | Power | 0.1.0 | Linux: Requires superuser or read permissions for telem nodes |
| **Frequency Management** |-|-|-|
| `zesDeviceEnumFrequencyDomains` | Frequency | 0.1.0 | None |
| `zesFrequencyGetState` | Frequency | 0.1.0 | None |
| **Temperature Monitoring** |-|-|-|
| `zesDeviceEnumTemperatureSensors` | Temperature | 0.1.0 | None |
| `zesTemperatureGetProperties` | Temperature | 0.1.0 | None |
| `zesTemperatureGetConfig` | Temperature | 0.1.0 | None |
| `zesTemperatureGetState` | Temperature | 0.1.0 | Linux: Requires superuser or read permissions for telem nodes |
| **Engine Management** |-|-|-|
| `zesDeviceEnumEngineGroups` | Engine | 0.1.0 | Linux: Shows "no handles found" error when not in superuser mode |
| `zesEngineGetProperties` | Engine | 0.1.0 | None |
| `zesEngineGetActivity` | Engine | 0.1.0 | None |
RELEASE NOTES
-------------
Version 0.1.0 (Initial Release)
- Initial release of pyzes Python bindings for Intel Level-Zero Driver Library
- Added pyzes.py module with Python binding wrapper functions
- Added pyzes_example.py and pyzes_black_box_test.py as sample applications
- Supported API modules:
- Device Management APIs
- Memory Management APIs
- Power Management APIs
- Frequency Management APIs
- Temperature Monitoring APIs
- Engine Management APIs
Notes:
Linux:
zesPowerGetEnergyCounter
zesTemperatureGetState
zesMemoryGetBandwidth
The above APIs needs user to be in superuser/root mode or have read permissions for telem nodes
Telem Node Directory: /sys/class/intel_pmt/telem(1/2/3/4)/telem
zesDeviceEnumEngineGroups shows no handles found error when not in super user mode.
# Contributing
See [CONTRIBUTING](CONTRIBUTING.md) for more information.
# License
Distributed under the MIT license. See [LICENSE](LICENSE) for more information.
# Security
See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html) for information on how to report a potential security issue or vulnerability.
See also [SECURITY](SECURITY.md).
| text/markdown | null | Intel Corporation <secure@intel.com> | null | null | MIT | level-zero, gpu, intel, monitoring, sysman | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Topic :: Software Developme... | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/oneapi-src/level-zero",
"Documentation, https://oneapi-src.github.io/level-zero-spec/level-zero/latest/index.html",
"Repository, https://github.com/oneapi-src/level-zero",
"Issues, https://github.com/oneapi-src/level-zero/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T15:39:42.585364 | pyzes-0.1.0-py3-none-any.whl | 25,383 | 43/c7/aeae042feeac682d4c6b324738d4dae633f476dde6087d2a666feac532f3/pyzes-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 60cd94af93d3a33aac5a4f09ea0ce2a6 | 64f52e511987124b6ff68ea4cd26767e2373cb4e5f1ef6968fb903d40f768415 | 43c7aeae042feeac682d4c6b324738d4dae633f476dde6087d2a666feac532f3 | null | [
"LICENSE"
] | 117 |
2.4 | netbox-plugin-bind-provisioner | 1.0.7 | A Bind provisioning plugin that uses netbox_dns for its data source | # Netbox Bind Provisioner
The Netbox Bind Provisioner plugin implements a lightweight DNS server inside
Netbox and builds a bridge for BIND and other DNS Servers implementing RFC9432
to retrieve DNS Zones directly from Netbox using DNS native mechanisms.
[](https://pypi.org/project/netbox-plugin-bind-provisioner/)
[](https://github.com/suraxius/netbox-plugin-bind-provisioner/stargazers)
[](https://github.com/suraxius/netbox-plugin-bind-provisioner/network/members)
[](https://github.com/suraxius/netbox-plugin-bind-provisioner/issues)
[](https://github.com/suraxius/netbox-plugin-bind-provisioner/pulls)
[](https://github.com/suraxius/netbox-plugin-bind-provisioner/graphs/contributors)
[](https://github.com/suraxius/netbox-plugin-bind-provisioner/blob/master/LICENSE)
[](https://github.com/psf/black)
[](https://pepy.tech/project/netbox-plugin-bind-provisioner)
[](https://pepy.tech/project/netbox-plugin-bind-provisioner)
[](https://pepy.tech/project/netbox-plugin-bind-provisioner)
## Plugin configuration
While providing Zone transfers via AXFR, the Server also exposes specialized
catalog zones that BIND and other RFC9432 compliant DNS Servers use to
automatically discover newly created zones and remove deleted ones. The plugin
supports views and basic DNS security via TSIG.
The plugin exposes one catalog zone per view. Each catalog zone is made available
under the special zone name **"catz"** and addtionally under **"[viewname].catz"**
and may be queried through the built-in DNS server just like any other dns zone.
For proper operation, each view requires an installed TSIG key, and the
`dns-transfer-endpoint` must be running as a separate background service using
the `manage.py` command. Note that DNSSEC support will be added once BIND9
provides a mechanism to configure it through the Catalog Zones system.
To start the service in the foreground:
```
manage.py dns-transfer-endpoint --port 5354
```
This process needs to be scheduled as a background service for the built-in DNS
Server to work correctly. For Linux users with Systemd (Ubuntu, etc), Matt Kollross
provides a startup unit and instructions [here](docs/install-systemd-service.md).
### Service parameters
Parameter | Description
--------- | -------------------------------------------------------------------
--port | Port to listen on for requests (defaults to 5354)
--address | IP of interface to bind to (defaults to 0.0.0.0)
### Plugin settings
Setting | Description
--------------------| ---------------------------------------------------------
tsig_keys | Maps a TSIG Key to be used for each view.
## Installation guide
This setup provisions a BIND9 server directly with DNS data from NetBox.
BIND9 can optionally run on a separate server. If so, any reference to
127.0.0.1 in step 6 must be replaced with the IP address of the NetBox host.
TCP and UDP traffic from the BIND9 server to the NetBox host must be allowed on
port 5354 (or the port you have configured).
This guide assumes:
- Netbox has been installed under /opt/netbox
- Bind9 is installed on the same host as Netbox
- The Netbox DNS Plugin netbox-plugin-dns is installed
- The following dns views exist in Netbox DNS:
- `public` (the default)
- `private`
1. Preliminaries
- Install Bind9 on the same host that netbox is on.
- Generate a TSIG Key for the `public` and `private` dns views respectively.
2. Adding required package
```
cd netbox
echo netbox-plugin-bind-provisioner >> local_requirements.txt
. venv/bin/activate
pip install -r local_requirements.txt
```
3. Updating netbox plugin configuration (configuration.py)
Change following line from
```
PLUGINS = ['netbox_dns']
```
to
```
PLUGINS = ['netbox_dns', 'netbox_plugin_bind_provisioner']
```
Configure the Bind Exporter Plugin using the PLUGINS_CONFIG dictionary.
Change
```
PLUGINS_CONFIG = {}
```
to
```
PLUGINS_CONFIG = {
"netbox_plugin_bind_provisioner": {
"tsig_keys": {
"public": {
"keyname": "public_view_key",
"algorithm": "hmac-sha256",
"secret": "base64-encoded-secret"
},
"private": {
"keyname": "private_view_key",
"algorithm": "hmac-sha256",
"secret": "base64-encoded-secret"
}
}
}
}
```
Note that the tsig-key attributes keyname, algorithm and secret form a
dictionary in following python structure path:
```
PLUGINS_CONFIG.netbox_plugin_bind_provisioner.tsig_keys.<dns_view_name>
```
This allows the plugin to map requests to the right dns view using the tsig
signature from each request.
4. Run migrations
```
python3 netbox/manage.py migrate
```
5. Start listener
This step runs the DNS endpoint used by bind to configure itself. You may want
to write a service wrapper that runs this in the background.
A guide for setting up a systemd service on Ubuntu is provided by Matt
Kollross [here](docs/install-systemd-service.md). Dont forget to activate
the venv if you do decide to run this service in the background.
Note that `--port 5354` is optional. The listener will bind this port
by default.
```
python3 netbox/manage.py dns-transfer-endpoint --port 5354
```
6. Configuring a Bind9 to interact with Netbox via the dns-transfer-endpoint
endpoint. Note that its not possible to give all the correct details of the
`options` block as it is heavily dependent on the Operating System used.
Please dont forget to adjust as required.
```
########## OPTIONS ##########
options {
allow-update { none; };
allow-query { any; };
allow-recursion { none; };
notify yes;
min-refresh-time 60;
};
########## ACLs ##########
acl public {
!10.0.0.0/8;
!172.16.0.0/12;
!192.168.0.0/16;
any;
};
acl private {
10.0.0.0/8;
172.16.0.0/12;
192.168.0.0/16;
};
########## ZONES ##########
view "public" {
key "public_view_key" {
algorithm hmac-sha256;
secret "base64-encoded-secret";
};
match-clients { public; };
catalog-zones {
zone "catz"
default-masters { 127.0.0.1 port 5354 key "public_view_key"; }
zone-directory "/var/lib/bind/zones"
min-update-interval 1;
};
zone "catz" {
type slave;
file "/var/lib/bind/zones/catz_public";
masters { 127.0.0.1 port 5354 key "public_view_key"; };
notify no;
};
};
view "private" {
key "private_view_key" {
algorithm hmac-sha256;
secret "base64-encoded-secret";
};
match-clients { private; };
catalog-zones {
zone "catz"
default-masters { 127.0.0.1 port 5354 key "private_view_key"; }
zone-directory "/var/lib/bind/zones"
min-update-interval 1;
};
zone "catz" {
type slave;
file "/var/lib/bind/zones/catz_private";
masters { 127.0.0.1 port 5354 key "private_view_key"; };
notify no;
};
};
```
7. Restart bind - Done
| text/markdown | Sven Luethi | null | null | null | GPL-2.0 | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"netbox-plugin-dns"
] | [] | [] | [] | [
"Homepage, https://github.com/Suraxius/netbox-plugin-bind-provisioner",
"Issues, https://github.com/Suraxius/netbox-plugin-bind-provisioner/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:39:17.542687 | netbox_plugin_bind_provisioner-1.0.7.tar.gz | 25,133 | 10/be/cb1a24feda20eec96fecca718256dd91ca34a0682bb553d8c92f598f55c7/netbox_plugin_bind_provisioner-1.0.7.tar.gz | source | sdist | null | false | 9fca520e5ce2f0e789d41fc53af22c1b | 56c447ba9b2766e2b5dc3ef0b638093cd23c331b70e483477264dfea9bcbdee2 | 10becb1a24feda20eec96fecca718256dd91ca34a0682bb553d8c92f598f55c7 | null | [
"LICENSE.md"
] | 232 |
2.1 | MaaDebugger | 1.18.1 | MaaDebugger | <p align="center">
<img alt="LOGO" src="https://cdn.jsdelivr.net/gh/MaaAssistantArknights/design@main/logo/maa-logo_512x512.png" width="256" height="256" />
</p>
<div align="center">
# MaaDebugger
<a href="https://pypi.org/project/MaaDebugger/" target="_blank"><img alt="pypi" src="https://img.shields.io/pypi/dm/MaaDebugger?logo=pypi&label=PyPI"></a>
<a href="https://github.com/MaaXYZ/MaaDebugger/releases/latest" target="_blank"><img alt="release" src="https://img.shields.io/github/v/release/MaaXYZ/MaaDebugger?label=Release"></a>
<a href="https://github.com/MaaXYZ/MaaDebugger/releases" target="_blank"><img alt="pre-release" src="https://img.shields.io/github/v/release/MaaXYZ/MaaDebugger?include_prereleases&label=Pre-Release"></a>
<a href="https://github.com/MaaXYZ/MaaDebugger/commits/main/" target="_blank"><img alt="activity" src="https://img.shields.io/github/commit-activity/m/MaaXYZ/MaaDebugger?color=%23ff69b4&label=Commit+Activity"></a>
**[简体中文](./README.md) | [English](./README-en.md)**
</div>
## 需求版本
- Python >= 3.9,<= 3.13
- nicegui >= 2.21,< 3.0
## 安装
```bash
python -m pip install MaaDebugger
```
## 更新
```bash
python -m pip install MaaDebugger MaaFW --upgrade
```
## 使用
```bash
python -m MaaDebugger
```
### 指定端口
MaaDebugger 默认使用端口 **8011**。你可以通过使用 --port [port] 选项来指定 MaaDebugger 运行的端口。例如,要在端口 **8080** 上运行 MaaDebugger:
```bash
python -m MaaDebugger --port 8080
```
## 开发 MaaDebugger
```bash
cd src
python -m MaaDebugger
```
或者
使用 VSCode,在项目目录中按下 `F5`
| text/markdown | MaaXYZ | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"MaaFw>=5.7.0b3",
"nicegui<3.0,>=2.21",
"asyncify",
"pillow",
"packaging"
] | [] | [] | [] | [
"Homepage, https://github.com/MaaXYZ/MaaDebugger"
] | pdm/2.26.6 CPython/3.9.25 Linux/6.14.0-1017-azure | 2026-02-18T15:39:16.192174 | maadebugger-1.18.1.tar.gz | 57,243 | 9b/f4/d193079d833b2ebfbe79bebbec4d7cd00bab515cbc73b668041dbd20d49b/maadebugger-1.18.1.tar.gz | source | sdist | null | false | 74ec7dabf2e78a0637280d03ef958db3 | ffd1ebba361069b61adc6a57a0e8ae2c67e6f4dc986b4b4d67fcbdda9a80b494 | 9bf4d193079d833b2ebfbe79bebbec4d7cd00bab515cbc73b668041dbd20d49b | null | [] | 0 |
2.4 | findata-api | 0.21 | Common API to access financial data | # findata_api
Common API to access and manipulate financial data.
## Install
The library can be installed using *PyPi*:
```Shell
$ pip install findata_api
```
Or directly from the *Github* repository:
```Shell
$ pip install git+https://github.com/davidel/findata_api.git
```
| text/markdown | Davide Libenzi | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Office/Business :: Financial"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas",
"orjson",
"python_misc_utils",
"pandas_market_calendars",
"websocket-client"
] | [] | [] | [] | [
"Homepage, https://github.com/davidel/findata_api",
"Issues, https://github.com/davidel/findata_api/issues",
"Repository, https://github.com/davidel/findata_api.git"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T15:39:14.606894 | findata_api-0.21.tar.gz | 45,155 | ff/57/8df3d50cfd5606fb7b36b255b8b76df574e091518aff32a28ae1ca5a8a1a/findata_api-0.21.tar.gz | source | sdist | null | false | 84ea194d4bba9a0e169c002b07fe1611 | fa3321e427d90e687987d1070cf1e65a527395daca5fcec9f7ec15b96be35bc1 | ff578df3d50cfd5606fb7b36b255b8b76df574e091518aff32a28ae1ca5a8a1a | Apache-2.0 | [
"LICENSE"
] | 227 |
2.4 | jupyterlab-chat | 0.20.0a1 | A chat extension based on shared documents | # jupyterlab_chat
[](https://github.com/jupyterlab/jupyter-chat/actions/workflows/build.yml)[](https://mybinder.org/v2/gh/jupyterlab/jupyter-chat/main?urlpath=lab)
A chat extension based on shared documents.
This extension is composed of a Python package named `jupyterlab_chat`
for the server extension and a NPM package named `jupyterlab-chat-extension`
for the frontend extension.
This extension registers a `YChat` shared document, and associate the document to a
chat widget in the front end.

## Requirements
- JupyterLab >= 4.0.0
## Install
To install the extension, execute:
```bash
pip install jupyterlab_chat
```
## Uninstall
To remove the extension, execute:
```bash
pip uninstall jupyterlab_chat
```
## Troubleshoot
If you are seeing the frontend extension, but it is not working, check
that the server extension is enabled:
```bash
jupyter server extension list
```
If the server extension is installed and enabled, but you are not seeing
the frontend extension, check the frontend extension is installed:
```bash
jupyter labextension list
```
## Contributing
### Development install
Note: You will need NodeJS to build the extension package.
The `jlpm` command is JupyterLab's pinned version of
[yarn](https://yarnpkg.com/) that is installed with JupyterLab. You may use
`yarn` or `npm` in lieu of `jlpm` below.
```bash
# Clone the repo to your local environment
# Change directory to the jupyterlab_chat directory
# Install package in development mode
pip install -e ".[test]"
# Link your development version of the extension with JupyterLab
jupyter labextension develop . --overwrite
# Rebuild extension Typescript source after making changes
jlpm build
```
You can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.
```bash
# Watch the source directory in one terminal, automatically rebuilding when needed
jlpm watch
# Run JupyterLab in another terminal
jupyter lab
```
With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).
By default, the `jlpm build` command generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command:
```bash
jupyter lab build --minimize=False
```
### Development uninstall
```bash
pip uninstall jupyterlab_chat
```
In development mode, you will also need to remove the symlink created by `jupyter labextension develop`
command. To find its location, you can run `jupyter labextension list` to figure out where the `labextensions`
folder is located. Then you can remove the symlink named `jupyterlab-chat-extension` within that folder.
### Testing the extension
#### Frontend tests
This extension is using [Jest](https://jestjs.io/) for JavaScript code testing.
To execute them, execute:
```sh
jlpm
jlpm test
```
#### Integration tests
This extension uses [Playwright](https://playwright.dev/docs/intro) for the integration tests (aka user level tests).
More precisely, the JupyterLab helper [Galata](https://github.com/jupyterlab/jupyterlab/tree/master/galata) is used to handle testing the extension in JupyterLab.
More information are provided within the [ui-tests](../../ui-tests/README.md) README.
### Packaging the extension
See [RELEASE](RELEASE.md)
| text/markdown | null | Jupyter Development Team <jupyter@googlegroups.com> | null | null | BSD 3-Clause License
Copyright (c) 2024, Jupyter Development Team
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | jupyter, jupyterlab, jupyterlab-extension | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: 4",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Prog... | [] | null | null | >=3.9 | [] | [] | [] | [
"jupyter-collaboration<5,>=4",
"jupyter-server<3,>=2.0.1",
"jupyter-ydoc<4.0.0,>=3.0.0",
"pycrdt<0.13.0,>=0.12.0",
"coverage; extra == \"test\"",
"mypy; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-jupyter[server]>=0... | [] | [] | [] | [
"documentation, https://jupyter-chat.readthedocs.io/",
"homepage, https://github.com/jupyterlab/jupyter-chat",
"Bug Tracker, https://github.com/jupyterlab/jupyter-chat/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T15:38:55.133988 | jupyterlab_chat-0.20.0a1.tar.gz | 320,153 | 91/b8/8ffe4b58849fb43ac25a1d15a6ddc469122a6f3c209d8831e9ab1875b02c/jupyterlab_chat-0.20.0a1.tar.gz | source | sdist | null | false | 07ccd5c62eca183162e765685360e7da | 9f0a33bbda320de65052f1a824592b8d45e9f6268e4dafe00a431892bde56fa4 | 91b88ffe4b58849fb43ac25a1d15a6ddc469122a6f3c209d8831e9ab1875b02c | null | [
"LICENSE"
] | 424 |
2.4 | valyu | 2.6.0 | Deepsearch API for AI. | # Valyu SDK
**Search for AIs**
Valyu's Deepsearch API gives AI the context it needs. Integrate trusted, high-quality public and proprietary sources, with full-text multimodal retrieval.
Get **$10 free credits** for the Valyu API when you sign up at [Valyu](https://platform.valyu.ai)!
_No credit card required._
## How does it work?
We do all the heavy lifting for you - one unified API for all data:
- **Academic & Research Content** - Access millions of scholarly papers and textbooks
- **Real-time Web Search** - Get the latest information from across the internet
- **Structured Financial Data** - Stock prices, market data, and financial metrics
- **Intelligent Reranking** - Results across all sources are automatically sorted by relevance
- **Transparent Pricing** - Pay only for what you use with clear CPM pricing
## Installation
Install the Valyu SDK using pip:
```bash
pip install valyu
```
## Quick Start
Here's what it looks like, make your first query in just 4 lines of code:
```python
from valyu import Valyu
valyu = Valyu(api_key="your-api-key-here")
response = valyu.search(
"Implementation details of agentic search-enhanced large reasoning models",
max_num_results=5, # Limit to top 5 results
max_price=10, # Maximum price per thousand queries (CPM)
fast_mode=True # Enable fast mode for quicker, shorter results
)
print(response)
# Feed the results to your AI agent as you would with other search APIs
```
## API Reference
### DeepResearch Method
The `deepresearch` namespace provides access to Valyu's AI-powered research agent that conducts comprehensive, multi-step research with citations and cost tracking.
```python
# Create a research task
task = valyu.deepresearch.create(
input="What are the latest developments in quantum computing?",
model="standard", # "standard" (fast) or "heavy" (thorough)
output_formats=["markdown", "pdf"] # Output formats
)
# Wait for completion with progress updates
def on_progress(status):
if status.progress:
print(f"Step {status.progress.current_step}/{status.progress.total_steps}")
result = valyu.deepresearch.wait(task.deepresearch_id, on_progress=on_progress)
print(result.output) # Markdown report
print(result.pdf_url) # PDF download URL
```
#### DeepResearch Methods
| Method | Description |
| ----------------------------------- | ----------------------------------------- |
| `create(...)` | Create a new research task |
| `status(task_id)` | Get current status of a task |
| `wait(task_id, ...)` | Wait for task completion with polling |
| `stream(task_id, ...)` | Stream real-time updates |
| `list(api_key_id, limit)` | List all your research tasks |
| `update(task_id, instruction)` | Add follow-up instruction to running task |
| `cancel(task_id)` | Cancel a running task |
| `delete(task_id)` | Delete a task |
| `toggle_public(task_id, is_public)` | Make task publicly accessible |
#### DeepResearch Create Parameters
| Parameter | Type | Default | Description |
| ------------------ | ------------ | -------------- | -------------------------------------------------------- |
| `input` | `str` | _required_ | Research query or task description |
| `model` | `str` | `"standard"` | Research model - "standard" (fast) or "heavy" (thorough) |
| `output_formats` | `List[str]` | `["markdown"]` | Output formats for the report |
| `strategy` | `str` | `None` | Natural language research strategy |
| `search` | `dict` | `None` | Search configuration (type, sources) |
| `urls` | `List[str]` | `None` | URLs to extract and analyze |
| `files` | `List[dict]` | `None` | PDF/image files to analyze |
| `mcp_servers` | `List[dict]` | `None` | MCP tool server configurations |
| `code_execution` | `bool` | `True` | Enable/disable code execution |
| `previous_reports` | `List[str]` | `None` | Previous report IDs for context (max 3) |
| `webhook_url` | `str` | `None` | HTTPS webhook URL for completion notification |
| `metadata` | `dict` | `None` | Custom metadata key-value pairs |
#### DeepResearch Examples
**Basic Research:**
```python
task = valyu.deepresearch.create(
input="Summarize recent AI safety research",
model="standard"
)
result = valyu.deepresearch.wait(task.deepresearch_id)
print(result.output)
```
**With Custom Sources:**
```python
task = valyu.deepresearch.create(
input="Latest transformer architecture improvements",
search={
"search_type": "proprietary",
"included_sources": ["academic"]
},
model="heavy",
output_formats=["markdown", "pdf"]
)
```
**With Date Filters and Source Restrictions:**
```python
from valyu.types.deepresearch import SearchConfig
# Using SearchConfig object
search_config = SearchConfig(
search_type="all",
included_sources=["academic", "web"],
start_date="2024-01-01",
end_date="2024-12-31"
)
task = valyu.deepresearch.create(
input="Recent advances in quantum computing",
search=search_config,
model="standard"
)
# Or using a dict
task = valyu.deepresearch.create(
input="Financial analysis Q1 2024",
search={
"search_type": "all",
"included_sources": ["finance", "web"],
"start_date": "2024-01-01",
"end_date": "2024-03-31",
"excluded_sources": ["patent"]
},
model="standard"
)
```
**Streaming Updates:**
```python
def on_progress(current, total):
print(f"Progress: {current}/{total}")
def on_complete(result):
print("Complete! Cost:", result.cost)
valyu.deepresearch.stream(
task.deepresearch_id,
on_progress=on_progress,
on_complete=on_complete
)
```
**With File Analysis:**
```python
task = valyu.deepresearch.create(
input="Analyze these research papers and provide key insights",
files=[{
"data": "data:application/pdf;base64,...",
"filename": "paper.pdf",
"media_type": "application/pdf"
}],
urls=["https://arxiv.org/abs/2103.14030"]
)
```
### Search Method
The `search()` method is the core of the Valyu SDK. It accepts a query string as the first parameter, followed by optional configuration parameters.
```python
def search(
query: str, # Your search query
search_type: str = "all", # "all", "web", or "proprietary"
max_num_results: int = 10, # Maximum results to return (1-20)
is_tool_call: bool = True, # Whether this is an AI tool call
relevance_threshold: float = 0.5, # Minimum relevance score (0-1)
max_price: int = 30, # Maximum price per thousand queries (CPM)
included_sources: List[str] = None, # Specific sources to search
excluded_sources: List[str] = None, # Sources to exclude from search
country_code: str = None, # Country code filter (e.g., "US", "GB")
response_length: Union[str, int] = None, # Response length: "short"/"medium"/"large"/"max" or character count
category: str = None, # Category filter
start_date: str = None, # Start date (YYYY-MM-DD)
end_date: str = None, # End date (YYYY-MM-DD)
fast_mode: bool = False, # Enable fast mode for faster but shorter results
) -> SearchResponse
```
### Parameters
| Parameter | Type | Default | Description |
| --------------------- | ----------------- | ---------- | --------------------------------------------------------------------------------- |
| `query` | `str` | _required_ | The search query string |
| `search_type` | `str` | `"all"` | Search scope: `"all"`, `"web"`, or `"proprietary"` |
| `max_num_results` | `int` | `10` | Maximum number of results to return (1-20) |
| `is_tool_call` | `bool` | `True` | Whether this is an AI tool call (affects processing) |
| `relevance_threshold` | `float` | `0.5` | Minimum relevance score for results (0.0-1.0) |
| `max_price` | `int` | `30` | Maximum price per thousand queries in CPM |
| `included_sources` | `List[str]` | `None` | Specific data sources or URLs to search |
| `excluded_sources` | `List[str]` | `None` | Data sources or URLs to exclude from search |
| `country_code` | `str` | `None` | Country code filter (e.g., "US", "GB", "JP", "ALL") |
| `response_length` | `Union[str, int]` | `None` | Response length: "short"/"medium"/"large"/"max" or character count |
| `category` | `str` | `None` | Category filter for results |
| `start_date` | `str` | `None` | Start date filter in YYYY-MM-DD format |
| `end_date` | `str` | `None` | End date filter in YYYY-MM-DD format |
| `fast_mode` | `bool` | `False` | Enable fast mode for faster but shorter results. Good for general purpose queries |
### Response Format
The search method returns a `SearchResponse` object with the following structure:
```python
class SearchResponse:
success: bool # Whether the search was successful
error: Optional[str] # Error message if any
tx_id: str # Transaction ID for feedback
query: str # The original query
results: List[SearchResult] # List of search results
results_by_source: ResultsBySource # Count of results by source type
total_deduction_dollars: float # Cost in dollars
total_characters: int # Total characters returned
```
Each `SearchResult` contains:
```python
class SearchResult:
title: str # Result title
url: str # Source URL
content: Union[str, List[Dict]] # Full content (text or structured)
description: Optional[str] # Brief description
source: str # Source identifier
price: float # Cost for this result
length: int # Content length in characters
image_url: Optional[Dict[str, str]] # Associated images
relevance_score: float # Relevance score (0-1)
data_type: Optional[str] # "structured" or "unstructured"
```
### Contents Method
The `contents()` method extracts clean, structured content from web pages with optional AI-powered data extraction and summarization.
```python
def contents(
urls: List[str], # List of URLs to process (max 10)
summary: Union[bool, str, Dict] = None, # AI summary configuration
extract_effort: str = None, # "normal", "high", or "auto"
response_length: Union[str, int] = None, # Content length configuration
max_price_dollars: float = None, # Maximum cost limit in USD
screenshot: bool = False, # Request page screenshots
) -> ContentsResponse
```
### Parameters
| Parameter | Type | Default | Description |
| ------------------- | ------------------------ | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `urls` | `List[str]` | _required_ | List of URLs to process (maximum 10 URLs per request) |
| `summary` | `Union[bool, str, Dict]` | `None` | AI summary configuration:<br>- `False/None`: No AI processing (raw content)<br>- `True`: Basic automatic summarization<br>- `str`: Custom instructions (max 500 chars)<br>- `dict`: JSON schema for structured extraction |
| `extract_effort` | `str` | `None` | Extraction thoroughness: `"normal"` (fast), `"high"` (thorough but slower), or `"auto"` (automatically determine) |
| `response_length` | `Union[str, int]` | `None` | Content length per URL:<br>- `"short"`: 25,000 characters<br>- `"medium"`: 50,000 characters<br>- `"large"`: 100,000 characters<br>- `"max"`: No limit<br>- `int`: Custom character limit |
| `max_price_dollars` | `float` | `None` | Maximum cost limit in USD |
| `screenshot` | `bool` | `False` | Request page screenshots. When `True`, each result includes a `screenshot_url` field with a pre-signed URL to a screenshot image |
### Response Format
The contents method returns a `ContentsResponse` object:
```python
class ContentsResponse:
success: bool # Whether the request was successful
error: Optional[str] # Error message if any
tx_id: str # Transaction ID for tracking
urls_requested: int # Number of URLs submitted
urls_processed: int # Number of URLs successfully processed
urls_failed: int # Number of URLs that failed
results: List[ContentsResult] # List of extraction results
total_cost_dollars: float # Total cost in dollars
total_characters: int # Total characters extracted
```
Each `ContentsResult` contains:
```python
class ContentsResult:
url: str # Source URL
title: str # Page/document title
description: Optional[str] # Brief description of the content
content: Union[str, int, float] # Extracted content
length: int # Content length in characters
source: str # Data source identifier
price: float # Cost for processing this URL
summary: Optional[Union[str, Dict]] # AI-generated summary or structured data
summary_success: Optional[bool] # Whether summary generation succeeded
data_type: Optional[str] # Type of data extracted
image_url: Optional[Dict[str, str]] # Extracted images
screenshot_url: Optional[str] # Screenshot URL if requested
citation: Optional[str] # APA-style citation
```
## Examples
### Basic Search
```python
from valyu import Valyu
valyu = Valyu("your-api-key")
# Simple search across all sources
response = valyu.search("What is machine learning?")
print(f"Found {len(response.results)} results")
```
### Academic Research
```python
# Search academic papers on arXiv
response = valyu.search(
"transformer architecture improvements",
search_type="proprietary",
included_sources=["valyu/valyu-arxiv"],
relevance_threshold=0.7,
max_num_results=10
)
```
### Web Search with Date Filtering
```python
# Search recent web content
response = valyu.search(
"AI safety developments",
search_type="web",
start_date="2024-01-01",
end_date="2024-12-31",
max_num_results=5
)
```
### Hybrid Search
```python
# Search both web and proprietary sources
response = valyu.search(
"quantum computing breakthroughs",
search_type="all",
category="technology",
relevance_threshold=0.6,
max_price=50
)
```
### Processing Results
```python
response = valyu.search("climate change solutions")
if response.success:
print(f"Search cost: ${response.total_deduction_dollars:.4f}")
print(f"Sources: Web={response.results_by_source.web}, Proprietary={response.results_by_source.proprietary}")
for i, result in enumerate(response.results, 1):
print(f"\n{i}. {result.title}")
print(f" Source: {result.source}")
print(f" Relevance: {result.relevance_score:.2f}")
print(f" Content: {result.content[:200]}...")
else:
print(f"Search failed: {response.error}")
```
### Content Extraction Examples
#### Basic Content Extraction
```python
# Extract raw content from URLs
response = valyu.contents(
urls=["https://techcrunch.com/2025/08/28/anthropic-users-face-a-new-choice-opt-out-or-share-your-data-for-ai-training/"]
)
if response.success:
for result in response.results:
print(f"Title: {result.title}")
print(f"Content: {result.content[:500]}...")
```
#### Content with AI Summary
```python
# Extract content with automatic summarization
response = valyu.contents(
urls=["https://docs.python.org/3/tutorial/"],
summary=True,
response_length="max"
)
for result in response.results:
print(f"Summary: {result.summary}")
```
#### Structured Data Extraction
```python
# Extract structured data using JSON schema
company_schema = {
"type": "object",
"properties": {
"company_name": {"type": "string"},
"founded_year": {"type": "integer"},
"key_products": {
"type": "array",
"items": {"type": "string"},
"maxItems": 3
}
}
}
response = valyu.contents(
urls=["https://en.wikipedia.org/wiki/OpenAI"],
summary=company_schema,
response_length="max"
)
if response.success:
for result in response.results:
if result.summary:
print(f"Structured data: {json.dumps(result.summary, indent=2)}")
```
#### Multiple URLs
```python
# Process multiple URLs with a cost limit
response = valyu.contents(
urls=[
"https://www.valyu.ai/",
"https://docs.valyu.ai/overview",
"https://www.valyu.ai/blogs/why-ai-agents-and-llms-struggle-with-search-and-data-access"
],
summary="Provide key takeaways in bullet points, and write in very emphasised singaporean english"
)
print(f"Processed {response.urls_processed}/{response.urls_requested} URLs")
print(f"Cost: ${response.total_cost_dollars:.4f}")
```
#### Content Extraction with Screenshots
```python
# Extract content with page screenshots
response = valyu.contents(
urls=["https://www.valyu.ai/"],
screenshot=True, # Request page screenshots
response_length="short"
)
if response.success:
for result in response.results:
print(f"Title: {result.title}")
print(f"Price: ${result.price:.4f}")
if result.screenshot_url:
print(f"Screenshot: {result.screenshot_url}")
```
## Authentication
Set your API key in one of these ways:
1. **Environment variable** (recommended):
```bash
export VALYU_API_KEY="your-api-key-here"
```
2. **Direct initialization**:
```python
valyu = Valyu(api_key="your-api-key-here")
```
## Error Handling
The SDK handles errors gracefully and returns structured error responses:
```python
response = valyu.search("test query")
if not response.success:
print(f"Error: {response.error}")
print(f"Transaction ID: {response.tx_id}")
else:
# Process successful results
for result in response.results:
print(result.title)
```
## Getting Started
1. Sign up for a free account at [Valyu](https://platform.valyu.ai)
2. Get your API key from the dashboard
3. Install the SDK: `pip install valyu`
4. Start building with the examples above
## Support
- **Documentation**: [docs.valyu.ai](https://docs.valyu.ai)
- **API Reference**: Full parameter documentation above
- **Examples**: Check the `examples/` directory in this repository
- **Issues**: Report bugs on GitHub
## License
This project is licensed under the MIT License.
| text/markdown | Valyu | contact@valyu.ai | Harvey Yorke | harvey@valyu.ai | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://valyu.ai | null | >=3.6 | [] | [] | [] | [
"requests>=2.31.0",
"pydantic>=2.5.0",
"openai>=1.66.0",
"anthropic>=0.46.0",
"python-dotenv>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T15:38:47.107181 | valyu-2.6.0.tar.gz | 41,038 | 09/64/7ac04227353a65bf33ecdb4097c73665ab4c94e0184f24a6b40999023036/valyu-2.6.0.tar.gz | source | sdist | null | false | 70311a3aae5ecb261b6bc862aaa2ded5 | 75225fffd39658f76379d74b031ecad662b27ccf3ce9f2064dbf3b9092e2b045 | 09647ac04227353a65bf33ecdb4097c73665ab4c94e0184f24a6b40999023036 | null | [] | 1,694 |
2.4 | lean-lsp-mcp | 0.22.0 | Lean Theorem Prover MCP | <h1 align="center">
lean-lsp-mcp
</h1>
<h3 align="center">Lean Theorem Prover MCP</h3>
<p align="center">
<a href="https://pypi.org/project/lean-lsp-mcp/">
<img src="https://img.shields.io/pypi/v/lean-lsp-mcp.svg" alt="PyPI version" />
</a>
<a href="">
<img src="https://img.shields.io/github/last-commit/oOo0oOo/lean-lsp-mcp" alt="last update" />
</a>
<a href="https://github.com/oOo0oOo/lean-lsp-mcp/blob/master/LICENSE">
<img src="https://img.shields.io/github/license/oOo0oOo/lean-lsp-mcp.svg" alt="license" />
</a>
</p>
MCP server that allows agentic interaction with the [Lean theorem prover](https://lean-lang.org/) via the [Language Server Protocol](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/) using [leanclient](https://github.com/oOo0oOo/leanclient). This server provides a range of tools for LLM agents to understand, analyze and interact with Lean projects.
## Key Features
* **Rich Lean Interaction**: Access diagnostics, goal states, term information, hover documentation and more.
* **External Search Tools**: Use `LeanSearch`, `Loogle`, `Lean Finder`, `Lean Hammer` and `Lean State Search` to find relevant theorems and definitions.
* **Easy Setup**: Simple configuration for various clients, including VSCode, Cursor and Claude Code.
## Setup
### Overview
1. Install [uv](https://docs.astral.sh/uv/getting-started/installation/), a Python package manager.
2. Make sure your Lean project builds quickly by running `lake build` manually.
3. Configure your IDE/Setup
4. (Optional, highly recommended) Install [ripgrep](https://github.com/BurntSushi/ripgrep?tab=readme-ov-file#installation) (`rg`) for local search and source scanning (`lean_verify` warnings).
### 1. Install uv
[Install uv](https://docs.astral.sh/uv/getting-started/installation/) for your system. On Linux/MacOS: `curl -LsSf https://astral.sh/uv/install.sh | sh`
### 2. Run `lake build`
`lean-lsp-mcp` will run `lake serve` in the project root to use the language server (for most tools). Some clients (e.g. Cursor) might timeout during this process. Therefore, it is recommended to run `lake build` manually before starting the MCP. This ensures a faster build time and avoids timeouts.
### 3. Configure your IDE/Setup
<details>
<summary><b>VSCode (Click to expand)</b></summary>
One-click config setup:
[](https://insiders.vscode.dev/redirect/mcp/install?name=lean-lsp&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22lean-lsp-mcp%22%5D%7D)
[](https://insiders.vscode.dev/redirect/mcp/install?name=lean-lsp&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22lean-lsp-mcp%22%5D%7D&quality=insiders)
OR using the setup wizard:
Ctrl+Shift+P > "MCP: Add Server..." > "Command (stdio)" > "uvx lean-lsp-mcp" > "lean-lsp" (or any name you like) > Global or Workspace
OR manually adding config by opening `mcp.json` with:
Ctrl+Shift+P > "MCP: Open User Configuration"
and adding the following
```jsonc
{
"servers": {
"lean-lsp": {
"type": "stdio",
"command": "uvx",
"args": [
"lean-lsp-mcp"
]
}
}
}
```
If you installed VSCode on Windows and are using WSL2 as your development environment, you may need to use this config instead:
```jsonc
{
"servers": {
"lean-lsp": {
"type": "stdio",
"command": "wsl.exe",
"args": [
"uvx",
"lean-lsp-mcp"
]
}
}
}
```
If that doesn't work, you can try cloning this repository and replace `"lean-lsp-mcp"` with `"/path/to/cloned/lean-lsp-mcp"`.
</details>
<details>
<summary><b>Cursor (Click to expand)</b></summary>
1. Open MCP Settings (File > Preferences > Cursor Settings > MCP)
2. "+ Add a new global MCP Server" > ("Create File")
3. Paste the server config into `mcp.json` file:
```jsonc
{
"mcpServers": {
"lean-lsp": {
"command": "uvx",
"args": ["lean-lsp-mcp"]
}
}
}
```
</details>
<details>
<summary><b>Claude Code (Click to expand)</b></summary>
Run one of these commands in the root directory of your Lean project (where `lakefile.toml` is located):
```bash
# Local-scoped MCP server
claude mcp add lean-lsp uvx lean-lsp-mcp
# OR project-scoped MCP server
# (creates or updates a .mcp.json file in the current directory)
claude mcp add lean-lsp -s project uvx lean-lsp-mcp
```
You can find more details about MCP server configuration for Claude Code [here](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/tutorials#configure-mcp-servers).
</details>
#### Claude Skill: Lean4 Theorem Proving
If you are using [Claude Desktop](https://modelcontextprotocol.io/quickstart/user) or [Claude Code](https://claude.ai/code), you can also install the [Lean4 Theorem Proving Skill](https://github.com/cameronfreer/lean4-skills/tree/main/plugins/lean4-theorem-proving). This skill provides additional prompts and templates for interacting with Lean4 projects and includes a section on interacting with the `lean-lsp-mcp` server.
### 4. Install ripgrep (optional but recommended)
For the local search tool `lean_local_search`, install [ripgrep](https://github.com/BurntSushi/ripgrep?tab=readme-ov-file#installation) (`rg`) and make sure it is available in your PATH.
## MCP Tools
### File interactions (LSP)
#### lean_file_outline
Get a concise outline of a Lean file showing imports and declarations with type signatures (theorems, definitions, classes, structures).
#### lean_diagnostic_messages
Get all diagnostic messages for a Lean file. This includes infos, warnings and errors. `interactive=True` returns verbose nested `TaggedText` with embedded widgets. For "Try This" suggestions, prefer `lean_code_actions`.
<details>
<summary>Example output</summary>
```
l20c42-l20c46, severity: 1
simp made no progress
l21c11-l21c45, severity: 1
function expected at
h_empty
term has type
T ∩ compl T = ∅
...
```
</details>
#### lean_goal
Get the proof goal at a specific location (line or line & column) in a Lean file.
<details>
<summary>Example output (line)</summary>
```
Before:
S : Type u_1
inst✝¹ : Fintype S
inst✝ : Nonempty S
P : Finset (Set S)
hPP : ∀ T ∈ P, ∀ U ∈ P, T ∩ U ≠ ∅
hPS : ¬∃ T ∉ P, ∀ U ∈ P, T ∩ U ≠ ∅
compl : Set S → Set S := fun T ↦ univ \ T
hcompl : ∀ T ∈ P, compl T ∉ P
all_subsets : Finset (Set S) := Finset.univ
h_comp_in_P : ∀ T ∉ P, compl T ∈ P
h_partition : ∀ (T : Set S), T ∈ P ∨ compl T ∈ P
⊢ P.card = 2 ^ (Fintype.card S - 1)
After:
no goals
```
</details>
#### lean_term_goal
Get the term goal at a specific position (line & column) in a Lean file.
#### lean_hover_info
Retrieve hover information (documentation) for symbols, terms, and expressions in a Lean file (at a specific line & column).
<details>
<summary>Example output (hover info on a `sorry`)</summary>
```
The `sorry` tactic is a temporary placeholder for an incomplete tactic proof,
closing the main goal using `exact sorry`.
This is intended for stubbing-out incomplete parts of a proof while still having a syntactically correct proof skeleton.
Lean will give a warning whenever a proof uses `sorry`, so you aren't likely to miss it,
but you can double check if a theorem depends on `sorry` by looking for `sorryAx` in the output
of the `#print axioms my_thm` command, the axiom used by the implementation of `sorry`.
```
</details>
#### lean_declaration_file
Get the file contents where a symbol or term is declared.
#### lean_completions
Code auto-completion: Find available identifiers or import suggestions at a specific position (line & column) in a Lean file.
#### lean_run_code
Run/compile an independent Lean code snippet/file and return the result or error message.
<details>
<summary>Example output (code snippet: `#eval 5 * 7 + 3`)</summary>
```
l1c1-l1c6, severity: 3
38
```
</details>
#### lean_multi_attempt
Attempt multiple tactics on a line and return goal state and diagnostics for each.
Useful to screen different proof attempts before committing to one.
When `LEAN_REPL=true`, uses the REPL tactic mode for up to 5x faster execution (see [Environment Variables](#environment-variables)).
<details>
<summary>Example output (attempting `rw [Nat.pow_sub (Fintype.card_pos_of_nonempty S)]` and `by_contra h_neq`)</summary>
```
rw [Nat.pow_sub (Fintype.card_pos_of_nonempty S)]:
S : Type u_1
inst✝¹ : Fintype S
inst✝ : Nonempty S
P : Finset (Set S)
hPP : ∀ T ∈ P, ∀ U ∈ P, T ∩ U ≠ ∅
hPS : ¬∃ T ∉ P, ∀ U ∈ P, T ∩ U ≠ ∅
⊢ P.card = 2 ^ (Fintype.card S - 1)
l14c7-l14c51, severity: 1
unknown constant 'Nat.pow_sub'
by_contra h_neq:
S : Type u_1
inst✝¹ : Fintype S
inst✝ : Nonempty S
P : Finset (Set S)
hPP : ∀ T ∈ P, ∀ U ∈ P, T ∩ U ≠ ∅
hPS : ¬∃ T ∉ P, ∀ U ∈ P, T ∩ U ≠ ∅
h_neq : ¬P.card = 2 ^ (Fintype.card S - 1)
⊢ False
...
```
</details>
#### lean_code_actions
Get LSP code actions for a line. Returns resolved edits for "Try This" suggestions (`simp?`, `exact?`, `apply?`) and other quick fixes. The agent applies the edits using its own editing tools.
<details>
<summary>Example output (line with <code>simp?</code>)</summary>
```json
{
"actions": [
{
"title": "Try this: simp only [zero_add]",
"is_preferred": false,
"edits": [
{
"new_text": "simp only [zero_add]",
"start_line": 3,
"start_column": 37,
"end_line": 3,
"end_column": 42
}
]
}
]
}
```
</details>
#### lean_get_widgets
Get panel widgets at a position (proof visualizations, `#html`, custom widgets). Returns raw widget data - may be verbose.
<details>
<summary>Example output (<code>#html</code> widget)</summary>
```json
{
"widgets": [
{
"id": "ProofWidgets.HtmlDisplayPanel",
"javascriptHash": "15661785739548337049",
"props": {
"html": {
"element": ["b", [], [{"text": "Hello widget"}]]
}
},
"range": {
"start": {"line": 4, "character": 0},
"end": {"line": 4, "character": 50}
}
}
]
}
```
</details>
#### lean_get_widget_source
Get the JavaScript source code of a widget by its `javascriptHash` (from `lean_get_widgets` or `lean_diagnostic_messages` with `interactive=True`). Useful for understanding custom widget rendering logic. Returns full JS module - may be verbose.
#### lean_profile_proof
Profile a theorem to identify slow tactics. Runs `lean --profile` on an isolated copy of the theorem and returns per-line timing data.
<details>
<summary>Example output (profiling a theorem using simp)</summary>
```json
{
"ms": 42.5,
"lines": [
{"line": 7, "ms": 38.2, "text": "simp [add_comm, add_assoc]"}
],
"categories": {
"simp": 35.1,
"typeclass inference": 4.2
}
}
```
</details>
#### lean_verify
Check theorem soundness: returns axioms used + optional source pattern scan for `unsafe`, `set_option debug.*`, `@[implemented_by]`, etc. Standard axioms are `propext`, `Classical.choice`, `Quot.sound` — anything else (e.g. `sorryAx`) indicates an unsound proof. Source warnings require [ripgrep](https://github.com/BurntSushi/ripgrep) (`rg`).
<details>
<summary>Example output (theorem using sorry)</summary>
```json
{
"axioms": ["propext", "sorryAx"],
"warnings": [
{"line": 5, "pattern": "set_option debug.skipKernelTC"}
]
}
```
</details>
### Local Search Tools
#### lean_local_search
Search for Lean definitions and theorems in the local Lean project and stdlib.
This is useful to confirm declarations actually exist and prevent hallucinating APIs.
This tool requires [ripgrep](https://github.com/BurntSushi/ripgrep?tab=readme-ov-file#installation) (`rg`) to be installed and available in your PATH.
### External Search Tools
Currently most external tools are separately **rate limited to 3 requests per 30 seconds**. Please don't ruin the fun for everyone by overusing these amazing free services!
Please cite the original authors of these tools if you use them!
#### lean_leansearch
Search for theorems in Mathlib using [leansearch.net](https://leansearch.net) (natural language search).
[Github Repository](https://github.com/frenzymath/LeanSearch) | [Arxiv Paper](https://arxiv.org/abs/2403.13310)
- Supports natural language, mixed queries, concepts, identifiers, and Lean terms.
- Example: `bijective map from injective`, `n + 1 <= m if n < m`, `Cauchy Schwarz`, `List.sum`, `{f : A → B} (hf : Injective f) : ∃ h, Bijective h`
<details>
<summary>Example output (query by LLM: `bijective map from injective`)</summary>
```json
{
"module_name": "Mathlib.Logic.Function.Basic",
"kind": "theorem",
"name": "Function.Bijective.injective",
"signature": " {f : α → β} (hf : Bijective f) : Injective f",
"type": "∀ {α : Sort u_1} {β : Sort u_2} {f : α → β}, Function.Bijective f → Function.Injective f",
"value": ":= hf.1",
"informal_name": "Bijectivity Implies Injectivity",
"informal_description": "For any function $f \\colon \\alpha \\to \\beta$, if $f$ is bijective, then $f$ is injective."
},
...
```
</details>
#### lean_loogle
Search for Lean definitions and theorems using [loogle.lean-lang.org](https://loogle.lean-lang.org/).
[Github Repository](https://github.com/nomeata/loogle)
- Supports queries by constant, lemma name, subexpression, type, or conclusion.
- Example: `Real.sin`, `"differ"`, `_ * (_ ^ _)`, `(?a -> ?b) -> List ?a -> List ?b`, `|- tsum _ = _ * tsum _`
- **Local mode available**: Use `--loogle-local` to run loogle locally (avoids rate limits, see [Local Loogle](#local-loogle) section)
<details>
<summary>Example output (`Real.sin`)</summary>
```json
[
{
"type": " (x : ℝ) : ℝ",
"name": "Real.sin",
"module": "Mathlib.Data.Complex.Trigonometric"
},
...
]
```
</details>
#### lean_leanfinder
Semantic search for Mathlib theorems using [Lean Finder](https://huggingface.co/spaces/delta-lab-ai/Lean-Finder).
[Arxiv Paper](https://arxiv.org/abs/2510.15940)
- Supports informal descriptions, user questions, proof states, and statement fragments.
- Examples: `algebraic elements x,y over K with same minimal polynomial`, `Does y being a root of minpoly(x) imply minpoly(x)=minpoly(y)?`, `⊢ |re z| ≤ ‖z‖` + `transform to squared norm inequality`, `theorem restrict Ioi: restrict Ioi e = restrict Ici e`
<details>
<summary>Example output</summary>
Query: `Does y being a root of minpoly(x) imply minpoly(x)=minpoly(y)?`
```json
[
[
"/-- If `y : L` is a root of `minpoly K x`, then `minpoly K y = minpoly K x`. -/\ntheorem eq_of_root {x y : L} (hx : IsAlgebraic K x)\n (h_ev : Polynomial.aeval y (minpoly K x) = 0) : minpoly K y = minpoly K x :=\n ((eq_iff_aeval_minpoly_eq_zero hx.isIntegral).mpr h_ev).symm",
"Let $L/K$ be a field extension, and let $x, y \\in L$ be elements such that $y$ is a root of the minimal polynomial of $x$ over $K$. If $x$ is algebraic over $K$, then the minimal polynomial of $y$ over $K$ is equal to the minimal polynomial of $x$ over $K$, i.e., $\\text{minpoly}_K(y) = \\text{minpoly}_K(x)$. This means that if $y$ satisfies the polynomial equation defined by $x$, then $y$ shares the same minimal polynomial as $x$."
],
...
]
```
</details>
#### lean_state_search
Search for applicable theorems for the current proof goal using [premise-search.com](https://premise-search.com/).
[Github Repository](https://github.com/ruc-ai4math/Premise-Retrieval) | [Arxiv Paper](https://arxiv.org/abs/2501.13959)
A self-hosted version is [available](https://github.com/ruc-ai4math/LeanStateSearch) and encouraged. You can set an environment variable `LEAN_STATE_SEARCH_URL` to point to your self-hosted instance. It defaults to `https://premise-search.com`.
Uses the first goal at a given line and column.
Returns a list of relevant theorems.
<details> <summary>Example output (line 24, column 3)</summary>
```json
[
{
"name": "Nat.mul_zero",
"formal_type": "∀ (n : Nat), n * 0 = 0",
"module": "Init.Data.Nat.Basic"
},
...
]
```
</details>
#### lean_hammer_premise
Search for relevant premises based on the current proof state using the [Lean Hammer Premise Search](https://github.com/hanwenzhu/lean-premise-server).
[Github Repository](https://github.com/hanwenzhu/lean-premise-server) | [Arxiv Paper](https://arxiv.org/abs/2506.07477)
A self-hosted version is [available](https://github.com/hanwenzhu/lean-premise-server) and encouraged. You can set an environment variable `LEAN_HAMMER_URL` to point to your self-hosted instance. It defaults to `http://leanpremise.net`.
Uses the first goal at a given line and column.
Returns a list of relevant premises (theorems) that can be used to prove the goal.
Note: We use a simplified version, [LeanHammer](https://github.com/JOSHCLUNE/LeanHammer) might have better premise search results.
<details><summary>Example output (line 24, column 3)</summary>
```json
[
"MulOpposite.unop_injective",
"MulOpposite.op_injective",
"WellFoundedLT.induction",
...
]
```
</details>
### Project-level tools
#### lean_build
Rebuild the Lean project and restart the Lean LSP server.
### Disabling Tools
Many clients allow the user to disable specific tools manually (e.g. lean_build).
**VSCode**: Click on the Wrench/Screwdriver icon in the chat.
**Cursor**: In "Cursor Settings" > "MCP" click on the name of a tool to disable it (strikethrough).
## MCP Configuration
This MCP server works out-of-the-box without any configuration. However, a few optional settings are available.
### Environment Variables
- `LEAN_LOG_LEVEL`: Log level for the server. Options are "INFO", "WARNING", "ERROR", "NONE". Defaults to "INFO".
- `LEAN_LOG_FILE_CONFIG`: Config file path for logging, with priority over `LEAN_LOG_LEVEL`. If not set, logs are printed to stdout.
- `LEAN_PROJECT_PATH`: Path to your Lean project root. Set this if the server cannot automatically detect your project.
- `LEAN_REPL`: Set to `true`, `1`, or `yes` to enable fast REPL-based `lean_multi_attempt` (~5x faster, see [REPL Setup](#repl-setup)).
- `LEAN_REPL_PATH`: Path to the `repl` binary. Auto-detected from `.lake/packages/repl/` if not set.
- `LEAN_REPL_TIMEOUT`: Per-command timeout in seconds (default: 60).
- `LEAN_REPL_MEM_MB`: Max memory per REPL in MB (default: 8192). Only enforced on Linux/macOS.
- `LEAN_LSP_MCP_TOKEN`: Secret token for bearer authentication when using `streamable-http` or `sse` transport.
- `LEAN_STATE_SEARCH_URL`: URL for a self-hosted [premise-search.com](https://premise-search.com) instance.
- `LEAN_HAMMER_URL`: URL for a self-hosted [Lean Hammer Premise Search](https://github.com/hanwenzhu/lean-premise-server) instance.
- `LEAN_LOOGLE_LOCAL`: Set to `true`, `1`, or `yes` to enable local loogle (see [Local Loogle](#local-loogle) section).
- `LEAN_LOOGLE_CACHE_DIR`: Override the cache directory for local loogle (default: `~/.cache/lean-lsp-mcp/loogle`).
You can also often set these environment variables in your MCP client configuration:
<details>
<summary><b>VSCode mcp.json Example</b></summary>
```jsonc
{
"servers": {
"lean-lsp": {
"type": "stdio",
"command": "uvx",
"args": [
"lean-lsp-mcp"
],
"env": {
"LEAN_PROJECT_PATH": "/path/to/your/lean/project",
"LEAN_LOG_LEVEL": "NONE"
}
}
}
}
```
</details>
### Transport Methods
The Lean LSP MCP server supports the following transport methods:
- `stdio`: Standard input/output (default)
- `streamable-http`: HTTP streaming
- `sse`: Server-sent events (MCP legacy, use `streamable-http` if possible)
You can specify the transport method using the `--transport` argument when running the server. For `sse` and `streamable-http` you can also optionally specify the host and port:
```bash
uvx lean-lsp-mcp --transport stdio # Default transport
uvx lean-lsp-mcp --transport streamable-http # Available at http://127.0.0.1:8000/mcp
uvx lean-lsp-mcp --transport sse --host localhost --port 12345 # Available at http://localhost:12345/sse
```
### Bearer Token Authentication
Transport via `streamable-http` and `sse` supports bearer token authentication. This allows publicly accessible MCP servers to restrict access to authorized clients.
Set the `LEAN_LSP_MCP_TOKEN` environment variable (or see section 3 for setting env variables in MCP config) to a secret token before starting the server.
Example Linux/MacOS setup:
```bash
export LEAN_LSP_MCP_TOKEN="your_secret_token"
uvx lean-lsp-mcp --transport streamable-http
```
Clients should then include the token in the `Authorization` header.
### REPL Setup
Enable fast REPL-based `lean_multi_attempt` (~5x faster). Uses [leanprover-community/repl](https://github.com/leanprover-community/repl) tactic mode.
**1. Add REPL to your Lean project's `lakefile.toml`:**
```toml
[[require]]
name = "repl"
git = "https://github.com/leanprover-community/repl"
rev = "v4.25.0" # Match your Lean version
```
**2. Build it:**
```bash
lake build repl
```
**3. Enable via CLI or environment variable:**
```bash
uvx lean-lsp-mcp --repl
# Or via environment variable
export LEAN_REPL=true
```
The REPL binary is auto-detected from `.lake/packages/repl/`. Falls back to LSP if not found.
### Local Loogle
Run loogle locally to avoid the remote API's rate limit (3 req/30s). First run takes ~5-10 minutes to build; subsequent runs start in seconds.
```bash
# Enable via CLI
uvx lean-lsp-mcp --loogle-local
# Or via environment variable
export LEAN_LOOGLE_LOCAL=true
```
**Requirements:** `git`, `lake` ([elan](https://github.com/leanprover/elan)), ~2GB disk space.
**Note:** Local loogle is currently only supported on Unix systems (Linux/macOS). Windows users should use WSL or the remote API.
Falls back to remote API if local loogle fails.
## Notes on MCP Security
There are many valid security concerns with the Model Context Protocol (MCP) in general!
This MCP server is meant as a research tool and is currently in beta.
While it does not handle any sensitive data such as passwords or API keys, it still includes various security risks:
- Access to your local file system.
- No input or output validation.
Please be aware of these risks. Feel free to audit the code and report security issues!
For more information, you can use [Awesome MCP Security](https://github.com/Puliczek/awesome-mcp-security) as a starting point.
## Development
### MCP Inspector
```bash
npx @modelcontextprotocol/inspector uvx --with-editable path/to/lean-lsp-mcp python -m lean_lsp_mcp.server
```
### Run Tests
```bash
uv sync --all-extras
uv run pytest tests
```
## Publications and Formalization Projects using lean-lsp-mcp
- Ax-Prover: A Deep Reasoning Agentic Framework for Theorem Proving in Mathematics and Quantum Physics [arxiv](https://arxiv.org/abs/2510.12787)
- Numina-Lean-Agent: An Open and General Agentic Reasoning System for Formal Mathematics [arxiv](https://arxiv.org/abs/2601.14027) [github](https://github.com/project-numina/numina-lean-agent)
- A Group-Theoretic Approach to Shannon Capacity of Graphs and a Limit Theorem from Lattice Packings [github](https://github.com/jzuiddam/GroupTheoreticShannonCapacity/)
## Talks
lean-lsp-mcp: Tools for agentic interaction with Lean (Lean Together 2026) [youtube](https://www.youtube.com/watch?v=uttbYaTaF-E)
## Related Projects
- [LeanTool](https://github.com/GasStationManager/LeanTool)
- [LeanExplore MCP](https://www.leanexplore.com/docs/mcp)
## License & Citation
**MIT** licensed. See [LICENSE](LICENSE) for more information.
Citing this repository is highly appreciated but not required by the license.
```bibtex
@software{lean-lsp-mcp,
author = {Oliver Dressler},
title = {{Lean LSP MCP: Tools for agentic interaction with the Lean theorem prover}},
url = {https://github.com/oOo0oOo/lean-lsp-mcp},
month = {3},
year = {2025}
}
```
| text/markdown | null | Oliver Dressler <hey@oli.show> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"leanclient==0.9.2",
"mcp[cli]==1.26.0",
"orjson>=3.11.1",
"certifi>=2024.0.0",
"PyYAML>=6.0; extra == \"yaml\"",
"ruff>=0.2.0; extra == \"lint\"",
"ruff>=0.2.0; extra == \"dev\"",
"pytest>=8.3; extra == \"dev\"",
"anyio>=4.4; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-... | [] | [] | [] | [
"Repository, https://github.com/oOo0oOo/lean-lsp-mcp"
] | uv/0.7.15 | 2026-02-18T15:38:40.390760 | lean_lsp_mcp-0.22.0.tar.gz | 71,756 | 57/3a/60670eb3f8db9508d4dcdd385395d7c6dcf347f992bf437587464b7083e2/lean_lsp_mcp-0.22.0.tar.gz | source | sdist | null | false | 45e0f7bac7bd85c0c99e02f8f5cba017 | 55f4db5035dc61f81428899a55f4e81aa3f04a7c0d671d4a97dc773cb82d28ed | 573a60670eb3f8db9508d4dcdd385395d7c6dcf347f992bf437587464b7083e2 | MIT | [
"LICENSE"
] | 951 |
2.1 | krkn-lib | 6.0.4 | Foundation library for Kraken | 



# krkn-lib
## Krkn Chaos and resiliency testing tool Foundation Library
### Contents
The Library contains Classes, Models and helper functions used in [Kraken](https://github.com/krkn-chaos/krkn) to interact with
Kubernetes, Openshift and other external APIS.
The goal of this library is to give to developers the building blocks to realize new Chaos
Scenarios and to increase the testability and the modularity of the Krkn codebase.
### Packages
The library is subdivided in several Packages
- **ocp:** Openshift Integration
- **k8s:** Kubernetes Integration
- **telemetry:**
- - **k8s:** Kubernetes Telemetry collection and distribution
- - **ocp:** Openshift Telemetry collection and distribution
- **models:** Krkn shared data models
- **utils:** common functions
### Documentation
The Library documentation is available [here](https://krkn-chaos.github.io/krkn-lib-docs/).
The documentation is automatically generated by [Sphinx](https://www.sphinx-doc.org/en/master/) on top
of the [reStructuredText Docstring Format](https://peps.python.org/pep-0287/) comments present in the code.
| text/markdown | Red Hat Chaos Team | null | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/redhat-chaos/krkn | null | <4.0,>=3.11 | [] | [] | [] | [
"PyYAML==6.0.1",
"base64io<2.0.0,>=1.0.3",
"coverage<8.0.0,>=7.6.12",
"cython==3.0",
"deprecation==2.1.0",
"elasticsearch==7.17.13",
"elasticsearch-dsl==7.4.1",
"importlib-metadata<9.0.0,>=8.7.0",
"kubeconfig<2.0.0,>=1.1.1",
"kubernetes==34.1.0",
"numpy==1.26.4",
"opensearch-py<2.8.0,>=2.0.0",... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T15:38:25.269880 | krkn_lib-6.0.4.tar.gz | 113,941 | 36/24/3e1ddb9482da779e2a39c73eb81786d7181850a13c2e6162d5879754c3a8/krkn_lib-6.0.4.tar.gz | source | sdist | null | false | adcef63183fed853380f0b7a8915d560 | c802dc351dd2a91a8b43c9d51da2a381fff9d37fb24d67ae5135166c65535173 | 36243e1ddb9482da779e2a39c73eb81786d7181850a13c2e6162d5879754c3a8 | null | [] | 246 |
2.4 | shipshape | 0.10.0 | Open-source parametric vessel design and validation library | # shipshape
Open-source parametric vessel design and validation library.
Shipshape provides boat-design-independent tools for naval engineering analysis. It is used by [Solar Proa](https://github.com/shipshape-marine/solar-proa) but can be applied to any parametric vessel design.
## Modules
| Module | Description | Requires FreeCAD |
|--------|-------------|------------------|
| `parameter` | Merge base parameters and compute derived values via a project-supplied plugin | No |
| `mass` | Compute component masses from a FreeCAD design and material properties | Yes |
| `buoyancy` | Find equilibrium pose (sinkage, pitch, roll) using Newton-Raphson iteration | Yes |
| `gz` | Compute the GZ righting-arm curve over a range of heel angles | Yes |
| `physics` | Center-of-gravity and center-of-buoyancy calculations | Yes |
## Installation
```bash
pip install shipshape
```
For modules that require FreeCAD geometry (mass, buoyancy, gz, physics), install FreeCAD via conda-forge:
```bash
conda install -c conda-forge freecad
```
## CLI Usage
Each module can be run as a CLI tool via `python -m shipshape.<module>`.
### parameter
Merges boat and configuration JSON files, then calls a project-supplied `compute_derived()` function to calculate derived values.
```bash
PYTHONPATH=. python -m shipshape.parameter \
--compute myproject.parameter.compute \
--boat constants/boats/boat.json \
--configuration constants/configurations/config.json \
--output artifact/parameters.json
```
The `--compute` argument is a dotted module path. The module must export a `compute_derived(data: dict) -> dict` function.
### mass
```bash
python -m shipshape.mass \
--design artifact/boat.design.FCStd \
--materials constants/material/materials.json \
--output artifact/boat.mass.json
```
### buoyancy
```bash
python -m shipshape.buoyancy \
--design artifact/boat.design.FCStd \
--materials constants/material/materials.json \
--output artifact/boat.buoyancy.json
```
### gz
```bash
python -m shipshape.gz \
--design artifact/boat.design.FCStd \
--buoyancy artifact/boat.buoyancy.json \
--output artifact/boat.gz.json \
--output-png artifact/boat.gz.png
```
## Releasing a new version
1. Edit `pyproject.toml` — bump the `version` field
2. `git add pyproject.toml`
3. `git commit -m "Bump version to X.Y.Z"`
4. `git push origin main`
5. `git tag vX.Y.Z`
6. `git push origin vX.Y.Z`
Push the commit before the tag so CI has the code when the tag event triggers the PyPI release.
## License
Apache 2.0
| text/markdown | null | Solar Proa <solar.proa@gmail.com> | null | null | null | hydrostatics, naval-architecture, structural-validation, vessel-design | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"numpy; extra == \"geometry\""
] | [] | [] | [] | [
"Homepage, https://github.com/solar-proa/shipshape",
"Repository, https://github.com/solar-proa/shipshape"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:36:51.527422 | shipshape-0.10.0.tar.gz | 29,885 | d9/e1/bd85630d9871966abd7d39f94940d53c7861f8a88a11e432c44006034f5e/shipshape-0.10.0.tar.gz | source | sdist | null | false | 77cb2ba983fb9570be376ad7279f2f79 | bb479703a2167eb642dab880a078f72fc2077ee1cb0c7b7d61f93e3873be80ed | d9e1bd85630d9871966abd7d39f94940d53c7861f8a88a11e432c44006034f5e | MIT | [
"LICENSE"
] | 236 |
2.1 | ok-script | 1.0.53 | Automation with Computer Vision for Python | # ok-script
* ok-script 是基于图像识别技术, 纯Python实现的, 支持Windows窗口和模拟器的自动化测试框架。
* 框架包含UI, 截图, 输入, 设备控制, OCR, 模板匹配, 框框Debug浮层, 基于Github Action的测试, 打包, 升级/降级。
* 基于开发一个工业级的自动化软件仅需几百行代码。
## 优势
1. 纯Python实现, 免费开源, 依赖库均为开源方案
2. 支持pip install任何第三方库, 可以方便整合yolo等框架
3. 一套代码即可支持Windows安卓模拟器/ADB连接的虚拟机, Windows客户端游戏
4. 自适应分辨率
5. 使用coco管理图片匹配素材, 仅需一个分辨率下的截图就, 支持不同分辨率自适应
6. 可打包离线/在线安装setup.exe, 支持通过Pip/Git国内镜像在线增量更新. 在线安装包仅3M
7. 支持Github Action一键构建
8. 支持多语言国际化
### 使用 目前仅支持Python 3.12
* 在你的项目中通过pip依赖使用
```commandline
pip install ok-script
```
* 本地编译源码使用
```commandline
pip install -r requirements.txt # 安装编译ok-script所需的的依赖
mklink /d "C:\path\to\your-project\ok" "C:\path\to\ok-script\ok" #Windows CMD 创建软链接到你的项目中
in_place_build.bat #如修改__init__.pyx 需要编译Cython代码
```
* 编译国际化文件
```commandline
cd ok\gui\i18n
.\release.cmd
cd ok\gui
.\qrc.cmd
```
## 文档和示例代码
* [游戏自动化入门](docs/intro_to_automation/README.md)
- [1、基本原理:计算机如何“玩”游戏](docs/intro_to_automation/README.md#一基本原理计算机如何玩游戏)
- [核心循环:三步走](docs/intro_to_automation/README.md#核心循环三步走)
- [图像分析:从像素到决策](docs/intro_to_automation/README.md#图像分析从像素到决策)
- [传统图色算法 (OpenCV 库)](docs/intro_to_automation/README.md#1-传统图色算法-opencv-库)
- [神经网络推理 (Inference)](docs/intro_to_automation/README.md#2-神经网络推理-inference)
- [2、编程语言选择](docs/intro_to_automation/README.md#二编程语言选择)
- [常用库概览](docs/intro_to_automation/README.md#常用库概览)
- [3、开发工具](docs/intro_to_automation/README.md#三开发工具)
* [快速开始](docs/quick_start/README.md)
* [进阶使用](docs/after_quick_start/README.md)
- [1. 模板匹配 (Template Matching)](docs/after_quick_start/README.md#1-模板匹配-template-matching)
- [2. 多语言国际化 (i18n)](docs/after_quick_start/README.md#2-多语言国际化-i18n)
- [3. 自动化测试](docs/after_quick_start/README.md#3-自动化测试)
- [4. 使用 GitHub Action 自动化打包与发布](docs/after_quick_start/README.md#4-使用-github-action-自动化打包与发布)
* [API文档](docs/api_doc/README.md)
* 开发者群: 938132715
* pip [https://pypi.org/project/ok-script](https://pypi.org/project/ok-script)
## 使用ok-script的项目:
* 鸣潮 [https://github.com/ok-oldking/ok-wuthering-wave](https://github.com/ok-oldking/ok-wuthering-waves)
* 原神(不在维护,
但是后台过剧情可用) [https://github.com/ok-oldking/ok-genshin-impact](https://github.com/ok-oldking/ok-genshin-impact)
* 少前2 [https://github.com/ok-oldking/ok-gf2](https://github.com/ok-oldking/ok-gf2)
* 星铁 [https://github.com/Shasnow/ok-starrailassistant](https://github.com/Shasnow/ok-starrailassistant)
* 星痕共鸣 [https://github.com/Sanheiii/ok-star-resonance](https://github.com/Sanheiii/ok-star-resonance)
* 二重螺旋 [https://github.com/BnanZ0/ok-duet-night-abyss](https://github.com/BnanZ0/ok-duet-night-abyss)
* 白荆回廊(停止更新) [https://github.com/ok-oldking/ok-baijing](https://github.com/ok-oldking/ok-baijing)
| text/markdown | ok-oldking | firedcto@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows"
] | [] | https://github.com/ok-oldking/ok-script | null | ==3.12.* | [] | [] | [] | [
"pywin32>=306",
"pyappify>=1.0.2",
"PySide6-Fluent-Widgets==1.8.3",
"typing-extensions>=4.11.0",
"requests>=2.32.3",
"psutil>=6.0.0",
"pydirectinput==1.0.4",
"pycaw==20240210",
"mouse==0.7.1"
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.12.10 | 2026-02-18T15:36:05.041015 | ok_script-1.0.53.tar.gz | 247,236 | 4d/d1/cec662fb3b3f213d791c41ddd06f0aa101761ea49c22eafdaaab3b5f80e7/ok_script-1.0.53.tar.gz | source | sdist | null | false | 3cb6bb721157c9358e283154a82335a9 | 05e69df47203fe35828680555ae8a45fd135763fbeeb8e5c24767d942517014c | 4dd1cec662fb3b3f213d791c41ddd06f0aa101761ea49c22eafdaaab3b5f80e7 | null | [] | 269 |
2.4 | fluxprobe | 1.0.0 | Schema-driven protocol fuzzer | # FluxProbe
FluxProbe is a lightweight, schema-driven protocol fuzzer. Point it at a protocol description, and it will emit a mix of valid and intentionally corrupted frames, send them to your device-under-test (DUT), and log what happens. The goal is to reproduce the fast iteration of commercial fuzzers (e.g., Codenomicon) with an open, hackable core.
> **Companion Tool:** Check out [fluxgen](https://github.com/kanchankjha/fluxgen) - a multi-client traffic generator for network load testing and stress testing.
## Installation
### Quick Install (Debian/Ubuntu)
For Debian-based systems (Ubuntu, Debian, etc.), you can install FluxProbe via apt:
```bash
# One-line installation
curl -fsSL https://raw.githubusercontent.com/kanchankjha/fluxprobe/apt-repo/install.sh | sudo bash
# Or manually add the repository:
echo "deb [trusted=yes] https://kanchankjha.github.io/fluxprobe stable main" | sudo tee /etc/apt/sources.list.d/fluxprobe.list
sudo apt-get update
sudo apt-get install fluxprobe
```
### Quick Install (pip)
```bash
pip install fluxprobe
# or from GitHub:
pip install git+https://github.com/kanchankjha/fluxprobe.git
```
### Prerequisites
- **Python 3.9+** (tested with Python 3.12)
- **Git** for cloning the repository
- **pip** for installing dependencies
### Step 1: Clone the Repository
```bash
# Clone the repository
git clone https://github.com/kanchankjha/fluxprobe.git
# Navigate to the fluxprobe directory
cd fluxprobe
```
### Step 2: Install Dependencies
FluxProbe has minimal dependencies - only PyYAML for YAML schema support.
```bash
# Install required dependencies
pip install -r requirements.txt
# Or install PyYAML directly
pip install "PyYAML>=6.0"
```
### Step 3: Install FluxProbe (Optional)
You can either run FluxProbe as a module or install it as a package:
#### Option A: Run as Module (No Installation)
```bash
# Run directly from the repository
python3 -m fluxprobe --help
```
#### Option B: Install as Package
```bash
# Install in development mode (editable)
pip install -e .
# Now you can run from anywhere
fluxprobe --help
```
#### Option C: Install from Source
```bash
# Build and install
pip install .
# Run the installed command
fluxprobe --help
```
### Verify Installation
```bash
# Test with a built-in profile
python3 -m fluxprobe --protocol echo --target 127.0.0.1:9000 --iterations 5
# Or if installed as package:
fluxprobe --protocol echo --target 127.0.0.1:9000 --iterations 5
```
### For Developers
If you plan to modify or contribute to FluxProbe:
```bash
# Clone the repository
git clone https://github.com/kanchankjha/fluxprobe.git
cd fluxprobe
# Install with development dependencies
pip install -e .
# Install testing tools
pip install pytest pytest-cov pytest-mock
# Run tests to verify setup
pytest tests/ -v
# Run with coverage
pytest tests/ --cov=fluxprobe --cov-report=html
```
## Features
- Declarative protocol schemas (YAML/JSON) with primitive field types, enums, and length references.
- Valid frame generator plus structure-aware and byte-level mutators (off-by-one lengths, invalid enums, bit flips, trunc/extend, checksum tamper hooks).
- Pluggable transports (TCP/UDP) and a simple run loop with rate limiting and timeouts.
- Deterministic runs via `--seed`, with hexdump logging for replay.
## Quick Start Guide
### 1. Basic Usage with Built-in Profiles
FluxProbe comes with 11 built-in protocol profiles that work out of the box:
```bash
# Fuzz an HTTP server
python3 -m fluxprobe --protocol http --target 192.168.1.100:80 --iterations 200 --mutation-rate 0.4
# Fuzz a DNS server
python3 -m fluxprobe --protocol dns --target 8.8.8.8:53 --iterations 100
# Fuzz an MQTT broker
python3 -m fluxprobe --protocol mqtt --target localhost:1883 --iterations 500 --seed 42
# Fuzz Modbus/TCP device
python3 -m fluxprobe --protocol modbus --target 10.0.0.5:502 --iterations 300 --mutation-rate 0.5
# IPv6 target example
python3 -m fluxprobe --protocol http --target "[2001:db8::50]":80 --iterations 50
```
**Available built-in profiles:** `echo`, `http`, `dns`, `mqtt`, `modbus`, `coap`, `tcp`, `udp`, `ip`, `snmp`, `ssh`
### 2. Using Custom Schema Files
Create your own protocol definition or use provided examples:
```bash
# Use an example schema
python3 -m fluxprobe --schema examples/protocols/echo.yaml --host 127.0.0.1 --port 9000 --iterations 200
# Override schema settings
python3 -m fluxprobe --schema examples/protocols/http_request.yaml --target 192.168.1.10:8080 --mutation-rate 0.3
# Save logs for later analysis
python3 -m fluxprobe --schema my_protocol.yaml --target device.local:5000 --log-file output/fuzz.log --iterations 1000
```
### 3. Advanced Options
```bash
# Reproducible fuzzing with seed
python3 -m fluxprobe --protocol http --target localhost:80 --seed 12345 --iterations 100
# High mutation rate for aggressive testing
python3 -m fluxprobe --protocol mqtt --target broker:1883 --mutation-rate 0.9 --mutations-per-frame 3
# Slow down fuzzing with delays
python3 -m fluxprobe --protocol modbus --target plc:502 --delay-ms 100 --iterations 500
# Wait for and log responses
python3 -m fluxprobe --protocol echo --target echo-server:7 --recv-timeout 2.0 --log-file responses.log
# Build and log frames without sending (dry-run)
python3 -m fluxprobe --protocol http --target webapp:80 --iterations 5 --dry-run --log-level DEBUG
# Adjust logging verbosity
python3 -m fluxprobe --protocol http --target webapp:80 --log-level DEBUG --iterations 50
```
### 4. Common Use Cases
#### Test a Web Server
```bash
python3 -m fluxprobe --protocol http --target myapp.local:8080 \
--iterations 1000 \
--mutation-rate 0.4 \
--log-file logs/webapp-fuzz.log \
--seed 42
```
#### Test an IoT Device
```bash
python3 -m fluxprobe --protocol mqtt --target iot-device:1883 \
--iterations 500 \
--mutation-rate 0.3 \
--recv-timeout 1.0 \
--delay-ms 50
```
#### Test Industrial Control System
```bash
python3 -m fluxprobe --protocol modbus --target plc.factory:502 \
--iterations 200 \
--mutation-rate 0.2 \
--mutations-per-frame 1 \
--log-file logs/plc-test.log
```
## Quickstart
- Built-in profiles (no YAML needed): `python -m fluxprobe --protocol http --target 10.0.0.5:8080 --iterations 200 --mutation-rate 0.4`
- Using a schema file: `python -m fluxprobe --schema examples/protocols/echo.yaml --host 127.0.0.1 --port 9000 --iterations 200 --mutation-rate 0.4`
Available built-in `--protocol` profiles: `echo`, `http`, `dns`, `mqtt`, `modbus`, `coap`, `tcp`, `udp`, `ip`, `snmp`, `ssh`.
## Schema Format (MVP)
```yaml
name: Demo Echo
transport:
type: tcp # tcp | udp
host: 127.0.0.1
port: 9000
message:
fields:
- name: opcode
type: enum
choices: [0x01, 0x02, 0xFF]
default: 0x01
- name: payload_length
type: u16
length_of: payload # will be set automatically to len(payload)
- name: payload
type: bytes
min_length: 0
max_length: 32
fuzz_values: ["", "A", "BEEF"]
```
Supported field types:
- `u8`, `u16`, `u32` (big endian), `bytes`, `string` (ASCII/UTF-8).
- `enum` (numeric choices or strings).
- `length_of` lets one field mirror the length of another field.
- `min_value` / `max_value` for numeric bounds, `min_length` / `max_length` for blobs.
## CLI Reference
### Basic Options
- `--protocol <name>`: Use a built-in profile (`echo`, `http`, `dns`, `mqtt`, `modbus`, `coap`, `tcp`, `udp`, `ip`, `snmp`, `ssh`)
- `--schema <path>`: Path to custom YAML/JSON schema file (alternative to `--protocol`)
- `--target <host:port>`: Target address (shorthand for `--host` and `--port`)
- `--host <hostname>`: Target hostname or IP address
- `--port <number>`: Target port (1-65535)
### Fuzzing Behavior
- `--iterations <number>`: Number of frames to send (default: 100)
- `--mutation-rate <float>`: Probability to mutate each frame, range 0.0-1.0 (default: 0.3)
- `0.0` = only send valid frames
- `1.0` = always mutate frames
- `--mutations-per-frame <number>`: How many mutation operations per frame (default: 1)
### Timing & Network
- `--recv-timeout <seconds>`: Seconds to wait for responses (default: 0.0 = no wait)
- `--delay-ms <milliseconds>`: Delay between sends in milliseconds (default: 0)
### Reproducibility & Logging
- `--seed <number>`: RNG seed for reproducible fuzzing runs
- `--log-file <path>`: Save detailed logs with hexdumps and metadata
- `--log-level <level>`: Logging verbosity: `DEBUG`, `INFO` (default), `WARNING`, `ERROR`
### Examples
```bash
# Minimal usage - fuzz localhost echo server
python3 -m fluxprobe --protocol echo --target localhost:9000 --iterations 50
# Full options - production fuzzing with logging
python3 -m fluxprobe \
--protocol http \
--target webserver.example.com:80 \
--iterations 10000 \
--mutation-rate 0.5 \
--mutations-per-frame 2 \
--recv-timeout 5.0 \
--delay-ms 10 \
--seed 999 \
--log-file logs/production-fuzz.log \
--log-level INFO
# Custom schema with overrides
python3 -m fluxprobe \
--schema my_custom_protocol.yaml \
--host 10.0.0.50 \
--port 5555 \
--iterations 500 \
--mutation-rate 0.8
```
## CLI
- `--protocol`: use a built-in profile (see list above)
- `--schema`: path to YAML/JSON schema (if not using `--protocol`)
- `--target`: shorthand host:port override (e.g., `10.0.0.5:8080`)
- `--host` / `--port`: override transport endpoints
- `--iterations`: number of frames to send (default 100)
- `--mutation-rate`: fraction of frames to mutate (0.0 = only valid, 1.0 = always mutated)
- `--mutations-per-frame`: how many mutation operations to apply (default 1)
- `--recv-timeout`: seconds to wait for responses (0 to skip)
- `--seed`: RNG seed for reproducibility
- `--log-file`: optional log path (hexdumps + metadata)
## Structure
- `fluxprobe/` — Core library modules
- `cli.py` — Command-line interface and argument parsing
- `schema.py` — Schema loading and validation (YAML/JSON)
- `generator.py` — Valid message generation from schemas
- `mutator.py` — Mutation strategies (bit flips, length corruption, etc.)
- `transport.py` — Network transports (TCP/UDP)
- `runner.py` — Main fuzzing loop and logging
- `profiles.py` — Built-in protocol definitions
- `examples/protocols/` — Sample protocol schemas
- `echo.yaml` — Simple echo protocol
- `http_request.yaml` — HTTP GET/POST requests
- `dns_query.yaml` — DNS queries
- `mqtt_connect.yaml` — MQTT connection packets
- `modbus_tcp.yaml` — Modbus/TCP protocol
- `coap_get.yaml` — CoAP requests
- `snmp_get.yaml` — SNMP queries
- `ssh_kexinit.yaml` — SSH key exchange
- And more...
- `tests/` — Comprehensive test suite (48 tests, 96% coverage)
## Troubleshooting
### Common Issues
#### 1. "ModuleNotFoundError: No module named 'yaml'"
```bash
# Install PyYAML
pip install PyYAML
```
#### 2. "Connection refused" or timeout errors
- Verify target service is running: `telnet <host> <port>`
- Check firewall rules and network connectivity
- Try increasing `--recv-timeout` if expecting slow responses
#### 3. "Invalid port number" error
- Ensure port is between 1-65535
- Check schema file for valid port configuration
#### 4. "Circular dependency detected"
- Schema has invalid `length_of` chain (field A → B → A)
- Review schema to ensure length references don't form cycles
#### 5. Python version errors
- FluxProbe requires Python 3.9 or higher
- Check version: `python3 --version`
- Upgrade if needed: `sudo apt install python3.11` (or use pyenv)
### Getting Help
```bash
# Show all available options
python3 -m fluxprobe --help
# Test installation with verbose output
python3 -m fluxprobe --protocol echo --target localhost:9000 --iterations 5 --log-level DEBUG
# Run test suite to verify installation
pytest tests/ -v
```
### Example Debug Session
```bash
# Start with minimal test
python3 -m fluxprobe --protocol echo --target localhost:7 --iterations 1 --log-level DEBUG
# If successful, increase iterations
python3 -m fluxprobe --protocol echo --target localhost:7 --iterations 10
# Add mutations gradually
python3 -m fluxprobe --protocol echo --target localhost:7 --iterations 10 --mutation-rate 0.1
# Enable logging to review what was sent
python3 -m fluxprobe --protocol echo --target localhost:7 --iterations 10 --mutation-rate 0.3 --log-file debug.log
```
## Project Structure & Testing
### Running Tests
```bash
# Run all tests
pytest tests/ -v
# Run with coverage report
pytest tests/ --cov=fluxprobe --cov-report=html
# Run specific test file
pytest tests/test_generator_mutator.py -v
# Run tests for a specific feature
pytest tests/test_circular_dependency.py -v
```
### Test Coverage
- **48 tests** covering all major functionality
- **96% code coverage** across all modules
- Tests for edge cases, error handling, and validation
- Circular dependency detection tests
- Mutation strategy tests
- Schema validation tests
## Structure
- `fluxprobe/` — core library (schema loader, generator, mutators, transports, runner, CLI)
- `examples/protocols/` — sample schemas to adapt (echo, HTTP, DNS, MQTT, Modbus/TCP, CoAP, TCP raw, UDP payload, IPv4 packet, SNMP, SSH)
## Roadmap & Future Features
- **Coverage-guided fuzzing**: Integrate with instrumentation for smarter mutation
- **PCAP import**: Use real network captures as fuzzing seeds
- **Checksum calculation**: Automatic CRC/checksum field computation
- **State machines**: Multi-step protocol flows (handshake → request → response)
- **Web dashboard**: Real-time monitoring and result visualization
- **Corpus management**: Save interesting test cases for regression testing
- **Crash detection**: Automatic detection of target crashes/restarts
- **Response analysis**: Pattern matching on responses to detect anomalies
## Contributing
Contributions are welcome! Here's how to get started:
```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/fluxprobe.git
cd fluxprobe
# Create a branch
git checkout -b feature/my-new-feature
# Install in development mode
pip install -e .
pip install pytest pytest-cov pytest-mock
# Make changes and test
pytest tests/ -v
# Commit and push
git add .
git commit -m "Add new feature"
git push origin feature/my-new-feature
```
### Areas for Contribution
- Additional built-in protocol profiles
- New mutation strategies
- Protocol-specific validators
- Performance optimizations
- Documentation improvements
- Bug fixes and test coverage
## License
See the LICENSE file in the repository root.
## Acknowledgments
FluxProbe aims to provide fast fuzzing iteration similar to commercial tools like Codenomicon, but with an open, hackable architecture that's easy to extend and customize.
## Roadmap Ideas
- Coverage-guided mode, PCAP import for seeds, checksum helpers, richer state machines, web dashboard.
| text/markdown | null | Kanchan Kumar Jha <kanchankjha@gmail.com> | null | null | MIT | fuzzing, protocol, testing, security, network | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pr... | [] | null | null | >=3.9 | [] | [] | [] | [
"PyYAML>=6.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-mock>=3.10; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kanchankjha/fluxprobe",
"Repository, https://github.com/kanchankjha/fluxprobe.git",
"Issues, https://github.com/kanchankjha/fluxprobe/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T15:35:37.667982 | fluxprobe-1.0.0.tar.gz | 27,529 | fc/04/7d7cb3876e0fda5b581e18e86fd7576d0c092f32a956b9e7b7e187792409/fluxprobe-1.0.0.tar.gz | source | sdist | null | false | 691db8b3f444cbec201a51310ceac072 | 608a8d64a52699c8261b6d76647a10dae50ad0de5985f6b8ee4b105b4169d22e | fc047d7cb3876e0fda5b581e18e86fd7576d0c092f32a956b9e7b7e187792409 | null | [
"LICENSE"
] | 253 |
2.4 | pypn-habref-api | 0.4.4 | Python lib related to Habref referential (INPN) | # Habref-api-module
[](https://github.com/PnX-SI/Habref-api-module/actions/workflows/pytest.yml)
[](https://codecov.io/gh/PnX-SI/Habref-api-module)
API d'interrogation d'Habref : référentiel des typologies d’habitats et de végétation pour la France (https://inpn.mnhn.fr/telechargement/referentiels/habitats).
## Technologies
- Python 3
- Flask
- SQLAlchemy
## Installation
- Créer un virtualenv et l'activer :
```
virtualenv -p /usr/bin/python3 venv
source venv/bin/acticate
```
- Installer le module :
```
pip install https://github.com/PnX-SI/Habref-api-module/archive/<X.Y.Z>.zip
```
- Installer le schéma de base de données :
Le module est fourni avec une commande pour installer la base de données. Cette commande télécharge le référentiel Habref et créé un schéma de base de données nommé ``ref_habitats``.
```
# Depuis le virtualenv
install_habref_schema <database uri>
# Exemple :
# install_habref_schema "postgresql://geonatadmin:monpassachanger@localhost:5432/geonature2db"
```
| text/markdown | null | null | Parcs nationaux des Écrins et des Cévennes | geonature@ecrins-parcnational.fr | null | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: OS Independent"
] | [] | https://github.com/PnX-SI/Habref-api-module | null | null | [] | [] | [] | [
"python-dotenv",
"flask",
"flask-marshmallow",
"flask-sqlalchemy",
"flask-migrate",
"marshmallow-sqlalchemy",
"marshmallow",
"psycopg2",
"requests",
"utils-flask-sqlalchemy>=0.4.5",
"sqlalchemy<2",
"pytest; extra == \"tests\"",
"pytest-flask; extra == \"tests\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T15:35:03.558216 | pypn_habref_api-0.4.4.tar.gz | 25,596 | 00/10/16a816887f76470fad9686909e21c466962ac36aa8afe7fdf05845282a66/pypn_habref_api-0.4.4.tar.gz | source | sdist | null | false | 14e69a59bf168991abf48e8d8a8ec32c | d74892a91213d21c5c8781cede6652484b6e067d92d78e1d49605e6cf5222043 | 001016a816887f76470fad9686909e21c466962ac36aa8afe7fdf05845282a66 | null | [
"LICENSE"
] | 247 |
2.4 | mmv-im2im | 0.7.1 | A python package for deep learning based image to image transformation | # MMV Im2Im Transformation
[](https://github.com/MMV-Lab/mmv_im2im/actions)
A generic python package for deep learning based image-to-image transformation in biomedical applications
The main branch will be further developed in order to be able to use the latest state of the art techniques and methods in the future. To reproduce the results of our manuscript, we refer to the branch [paper_version](https://github.com/MMV-Lab/mmv_im2im/tree/paper_version).
(We are actively working on the documentation and tutorials. Submit a feature request if there is anything you need.)
---
## Overview
The overall package is designed with a generic image-to-image transformation framework, which could be directly used for semantic segmentation, instance segmentation, image restoration, image generation, labelfree prediction, staining transformation, etc.. The implementation takes advantage of the state-of-the-art ML engineering techniques for users to focus on researches without worrying about the engineering details. In our pre-print [arxiv link](https://arxiv.org/abs/2209.02498), we demonstrated the effectiveness of *MMV_Im2Im* in more than ten different biomedical problems/datasets.
* For computational biomedical researchers (e.g., AI algorithm development or bioimage analysis workflow development), we hope this package could serve as the starting point for their specific problems, since the image-to-image "boilerplates" can be easily extended further development or adapted for users' specific problems.
* For experimental biomedical researchers, we hope this work provides a comprehensive view of the image-to-image transformation concept through diversified examples and use cases, so that deep learning based image-to-image transformation could be integrated into the assay development process and permit new biomedical studies that can hardly be done only with traditional experimental methods
## Installation
Before starting, we recommend to [create a new conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands) or [a virtual environment](https://docs.python.org/3/library/venv.html) with Python 3.10+.
```bash
conda create -y -n im2im -c conda-forge python=3.11
conda activate im2im
```
Please note that the proper setup of hardware is beyond the scope of this pacakge. This package was tested with GPU/CPU on Linux/Windows and CPU on MacOS. [Special note for MacOS users: Directly pip install in MacOS may need [additional setup of xcode](https://developer.apple.com/forums/thread/673827).]
### Install MONAI
To reproduce our results, we need to install MONAI's code version of a specific commit. To do this:
```bash
git clone https://github.com/Project-MONAI/MONAI.git
cd ./MONAI
git checkout 37b58fcec48f3ec1f84d7cabe9c7ad08a93882c0
pip install .
```
We will remove this step for the main branch in the future to ensure a simplified installation of our tool.
### Install MMV_Im2Im for basic usage:
(For users only using this package, not planning to change any code or make any extension):
**Option 1: core functionality only** `pip install mmv_im2im`<br>
**Option 2: advanced functionality (core + logger)** `pip install mmv_im2im[advance]`<br>
**Option 3: to reproduce paper:** `pip install mmv_im2im[paper]`<br>
**Option 4: install everything:** `pip install mmv_im2im[all]`<br>
For MacOS users, additional ' ' marks are need when using installation tags in zsh. For example, `pip install mmv_im2im[paper]` should be `pip install mmv_im2im'[paper]'` in MacOS.
### Install MMV_Im2Im for customization or extension:
```bash
git clone https://github.com/MMV-Lab/mmv_im2im.git
cd mmv_im2im
pip install -e .[all]
```
Note: The `-e` option is the so-called "editable" mode. This will allow code changes taking effect immediately. The installation tags, `advance`, `paper`, `all`, are be selected based on your needs.
### (Optional) Install using Docker
It is also possible to use our package through [docker](https://www.docker.com/). The installation tutorial is [here](docker/tutorial.md). Specifically, for MacOS users, please refer to [this tutorial](tutorials/docker/mmv_im2im_docker_tutorial.md).
### (Optional) Use MMV_Im2Im with Google Colab
We provide a web-based demo, if cloud computing is preferred. you can [](https://colab.research.google.com/github/MMV-Lab/mmv_im2im/blob/main/tutorials/colab/labelfree_2d.ipynb). The same demo can de adapted for different applications.
## Quick start
You can try out on a simple example following [the quick start guide](tutorials/quick_start.md)
Basically, you can specify your training configuration in a yaml file and run training with `run_im2im --config /path/to/train_config.yaml`. Then, you can specify the inference configuration in another yaml file and run inference with `run_im2im --config /path/to/inference_config.yaml`. You can also run the inference as a function with the provided API. This will be useful if you want to run the inference within another python script or workflow. Here is an example:
```python
from pathlib import Path
from bioio import BioImage
from bioio.writers import OmeTiffWriter
from mmv_im2im.configs.config_base import ProgramConfig, parse_adaptor, configuration_validation
from mmv_im2im import ProjectTester
# load the inference configuration
cfg = parse_adaptor(config_class=ProgramConfig, config="./paper_configs/semantic_seg_2d_inference.yaml")
cfg = configuration_validation(cfg)
# define the executor for inference
executor = ProjectTester(cfg)
executor.setup_model()
executor.setup_data_processing()
# get the data, run inference, and save the result
fn = Path("./data/img_00_IM.tiff")
img = BioImage(fn).get_image_data("YX", Z=0, C=0, T=0)
# or using delayed loading if the data is large
# img = BioImage(fn).get_image_dask_data("YX", Z=0, C=0, T=0)
seg = executor.process_one_image(img)
OmeTiffWriter.save(seg, "output.tiff", dim_orders="YX")
```
## Tutorials, examples, demonstrations and documentations
The overall package aims to achieve both simplicty and flexibilty with the modularized image-to-image boilerplates. To help different users to best use this package, we provide documentations from four different aspects:
* [Examples (i.e., scripts and config files)](tutorials/example_by_use_case.md) for reproducing all the experiments in our [pre-print](https://arxiv.org/abs/2209.02498)
* A bottom-up tutorials on [how to understand the modularized image-to-image boilerplates](tutorials/how_to_understand_boilerplates.md) (for extending or adapting the package) and [how to understand the configuration system in details](tutorials/how_to_understand_config.md) (for advance usage to make specific customization).
* A top-down tutorials as [FAQ](tutorials/FAQ.md), which will continuously grow as we receive more questions.
* All the models used in the manuscript and sample data can be found here: [](https://doi.org/10.5281/zenodo.10034416)
### Contribute models to [BioImage Model Zoo](https://bioimage.io/#/)
We highly appreciate the BioImage Model Zoo's initiative to provide a comprehensive collection of pre-trained models for a wide range of applications. To make MMV_Im2Im trained models available as well, the first step involves extracting the state_dict from the PyTorch Lightning checkpoint.
This can be done via:
```python
import torch
ckpt_path = "./lightning_logs/version_0/checkpoints/last.ckpt"
checkpoint = torch.load(ckpt_path, map_location=torch.device('cpu'))
state_dict = checkpoint['state_dict']
torch.save(state_dict, "./state_dict.pt")
```
All further steps to provide models can be found in the [official documentation](https://bioimage.io/docs/#/contribute_models/README).
## Development
See [CONTRIBUTING.md](CONTRIBUTING.md) for information related to developing the code.
**MIT license**
| text/markdown | null | Jianxu Chen <jianxuchen.ai@gmail.com> | null | null | null | deep learning, microscopy image analysis, biomedical image analysis | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"lightning>=2.5.2",
"torch>=2.6.0",
"monai>=1.5.0",
"bioio",
"pandas",
"scikit-image",
"protobuf",
"pyrallis",
"scikit-learn",
"tensorboard",
"numba",
"numpy",
"pydantic",
"fastapi",
"uvicorn",
"botocore",
"bioio-ome-tiff",
"bioio-ome-zarr",
"pydantic-zarr",
"bioio-tifffile",
... | [] | [] | [] | [
"Homepage, https://github.com/MMV-Lab/mmv_im2im"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T15:33:59.231618 | mmv_im2im-0.7.1.tar.gz | 108,196 | ee/6c/a670a0639b16cd626bf19ab0cbce29c1b21285d199f4f2110e2ce2e6a3b0/mmv_im2im-0.7.1.tar.gz | source | sdist | null | false | 2a3b76e708a19fb5f9048c8843743cd5 | d304ae77925d9a636c44ea183f522f5bc1783261cc3e74bd672d6d8ba976daa4 | ee6ca670a0639b16cd626bf19ab0cbce29c1b21285d199f4f2110e2ce2e6a3b0 | MIT | [
"LICENSE"
] | 232 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.