metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | ipfabric_netbox | 5.1.0 | NetBox plugin to sync IP Fabric data into NetBox | # IP Fabric Netbox Plugin
## IP Fabric
IP Fabric is a vendor-neutral network assurance platform that automates the
holistic discovery, verification, visualization, and documentation of
large-scale enterprise networks, reducing the associated costs and required
resources whilst improving security and efficiency.
It supports your engineering and operations teams, underpinning migration and
transformation projects. IP Fabric will revolutionize how you approach network
visibility and assurance, security assurance, automation, multi-cloud
networking, and trouble resolution.
**Integrations or scripts should not be installed directly on the IP Fabric VM unless directly communicated from the
IP Fabric Support or Solution Architect teams. Any action on the Command-Line Interface (CLI) using the root, osadmin,
or autoboss account may cause irreversible, detrimental changes to the product and can render the system unusable.**
## Overview
This plugin enables integration and data synchronization between IP Fabric and NetBox.
The plugin uses the [IP Fabric Python SDK](https://gitlab.com/ip-fabric/integrations/python-ipfabric) to collect network data. It relies on helpful NetBox features such as [Branches](https://docs.netboxlabs.com/netbox-extensions/branching/) and [Background Tasks](https://netboxlabs.com/docs/netbox/en/stable/plugins/development/background-tasks/) to make bringing data into NetBox easier. Key features:
- Multiple IP Fabric Sources
- Transform Maps
- Scheduled Synchronization
- Diff Visualization
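Like other NetBox plugins, it is typically enabled via the `PLUGINS` list in NetBox's `configuration.py`. A minimal sketch following the standard NetBox convention (the plugin documentation linked below covers the actual installation steps and any plugin-specific settings):

```python
# configuration.py (NetBox) -- standard plugin enablement.
# Plugin-specific settings, if any, are documented in the IP Fabric docs.
PLUGINS = ["ipfabric_netbox"]
```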
## NetBox Compatibility
These are the required NetBox versions for the corresponding plugin versions. Other combinations will not work due to breaking changes in the NetBox codebase.
| NetBox Version | Plugin Version |
|----------------|----------------|
| 4.5.0 and up | 5.1.0 and up |
| 4.4.0 - 4.4.10 | 4.3.0 - 5.0.2 |
| 4.3.0 - 4.3.7 | 4.2.2 |
| 4.3.0 - 4.3.6 | 4.0.0 - 4.2.1 |
| 4.2.4 - 4.2.9 | 3.2.2 - 3.2.4 |
| 4.2.0 - 4.2.3 | 3.2.0 |
| 4.1.5 - 4.1.11 | 3.1.1 - 3.1.3 |
| 4.1.0 - 4.1.4 | 3.1.0 |
| 4.0.1 | 3.0.1 - 3.0.3 |
| 4.0.0 | 3.0.0 |
| 3.7.0 - 3.7.8 | 2.0.0 - 2.0.6 |
| 3.4.0 - 3.6.9 | 1.0.0 - 1.0.11 |
## Screenshots





## Documentation
Full documentation for this plugin can be found at [IP Fabric Docs](https://docs.ipfabric.io/main/integrations/netbox/).
- User Guide
- Administrator Guide
## Contributing
If you would like to contribute to this plugin, please see the [CONTRIBUTING.md](CONTRIBUTING.md) file.
| text/markdown | Solution Architecture | solution.architecture@ipfabric.io | null | null | MIT | netbox, ipfabric, plugin, sync | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"httpx<0.29,>0.26",
"ipfabric>=7.0.0; extra != \"ipfabric-7-0\" and extra != \"ipfabric-7-2\" and extra != \"ipfabric-7-3\" and extra != \"ipfabric-7-5\" and extra != \"ipfabric-7-8\" and extra != \"ipfabric-7-9\"",
"ipfabric<7.1.0,>=7.0.0; extra == \"ipfabric-7-0\" and extra != \"ipfabric-7-2\" and extra != \"ipfabric-7-3\" and extra != \"ipfabric-7-5\" and extra != \"ipfabric-7-8\" and extra != \"ipfabric-7-9\"",
"ipfabric<7.3.0,>=7.2.0; extra != \"ipfabric-7-0\" and extra == \"ipfabric-7-2\" and extra != \"ipfabric-7-3\" and extra != \"ipfabric-7-5\" and extra != \"ipfabric-7-8\" and extra != \"ipfabric-7-9\"",
"ipfabric<7.4.0,>=7.3.0; extra != \"ipfabric-7-0\" and extra != \"ipfabric-7-2\" and extra == \"ipfabric-7-3\" and extra != \"ipfabric-7-5\" and extra != \"ipfabric-7-8\" and extra != \"ipfabric-7-9\"",
"ipfabric<7.6.0,>=7.5.0; extra != \"ipfabric-7-0\" and extra != \"ipfabric-7-2\" and extra != \"ipfabric-7-3\" and extra == \"ipfabric-7-5\" and extra != \"ipfabric-7-8\" and extra != \"ipfabric-7-9\"",
"ipfabric<7.9.0,>=7.8.0; extra != \"ipfabric-7-0\" and extra != \"ipfabric-7-2\" and extra != \"ipfabric-7-3\" and extra != \"ipfabric-7-5\" and extra == \"ipfabric-7-8\" and extra != \"ipfabric-7-9\"",
"ipfabric<7.10.0,>=7.9.0; extra != \"ipfabric-7-0\" and extra != \"ipfabric-7-2\" and extra != \"ipfabric-7-3\" and extra != \"ipfabric-7-5\" and extra != \"ipfabric-7-8\" and extra == \"ipfabric-7-9\"",
"netboxlabs-netbox-branching>=0.7.0",
"netutils"
] | [] | [] | [] | [
"Bug Tracker, https://gitlab.com/ip-fabric/integrations/ipfabric-netbox-sync/-/issues",
"Homepage, https://gitlab.com/ip-fabric/integrations/ipfabric-netbox-sync",
"Repository, https://gitlab.com/ip-fabric/integrations/ipfabric-netbox-sync"
] | poetry/2.3.2 CPython/3.12.12 Linux/6.12.73+deb13-amd64 | 2026-02-20T21:31:50.390823 | ipfabric_netbox-5.1.0-py3-none-any.whl | 215,301 | bf/a2/53fde98efbed6ee3420a8083f94b4bcab00f132d1bb4a4b0393db8c48904/ipfabric_netbox-5.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b917e416ebb4f6e49b0f1d96d522f5e1 | dcf82953ee93c0c5c267e91af1ba9ad306a1c072fc365b8b156673af276e0689 | bfa253fde98efbed6ee3420a8083f94b4bcab00f132d1bb4a4b0393db8c48904 | null | [] | 0 |
2.4 | ruminant | 0.0.31 | Recursive metadata extraction tool | Ruminant is a recursive metadata extraction tool.
# What does it do?
Ruminant takes a file as an input and spits out a huge json object that contains all the metadata it extracted from the file. This is done recursively, e.g. by running ruminant again on each file inside a zip file.
# Why the name?
To quote Wikipedia: Ruminants are herbivorous grazing or browsing artiodactyls [...]. The process of rechewing the cud to further break down plant matter and stimulate digestion is called rumination. The word "ruminant" comes from the Latin ruminare, which means "to chew over again".
This tool behaves similarly: extracted blobs can themselves be "chewed over again" (the main entrypoint is literally called `chew()`) in order to recursively extract metadata.
# What can it process?
Ruminant is still in early alpha but it can already process the following file types:
* ZIP files
* APK signatures
* Java jmod modules
* encrypted files
* PDF files
* JPEG files
* EXIF metadata
* XMP metadata
* ICC profiles
* IPTC metadata (I hate you for that one Adobe)
* Adobe-specific metadata in APP14
* MPF APP2 segments
* PNG files
* EXIF metadata
* TIFF files
* EXIF metadata (EXIF metadata is literally stored in a TIFF file)
* DNG files
* ISO files
* MP4 files
* AVIF files
* HEIF/HEIC stuff
* XMP metadata
* AVC1 x264 banners
* all of the DRM stuff that Netflix puts in their streams
* CENC
* PlayReady
* Widevine
* SEFT metadata
* ICC profiles
* EP0763801A2 extension
* TrueType fonts
* RIFF files
* WebP
* WAV
* GIF files
* EBML files
* Matroska
* WebM
* Ogg files
* Opus metadata
* Theora metadata
* Vorbis metadata
* FLAC files
* DER data
* X509 certificates
* PEM files
* GZIP streams
* BZIP2 streams
* TAR files
* USTAR to be precise
* PGP stuff
* ID3v2 tags
* MPEG-TS
* MakerNotes
* Fuji
* Sony
* Google HDR+
* PSD files
* KDBX files
* JPEG2000 files
* C2PA CAI JUMBF metadata
* WASM files
* Torrent files
* Sqlite3 database files
* DICOM files
* ASF files
* WMA files
* WMV files
* age encrypted files
* tlock extensions
* LUKS headers
* Java class files
* ELF files
* .comment sections
* .interp sections
* .note sections
* PE files
* Authenticode signatures
* GRUB modules in EFI files
* Minecraft NBT files
* region files
* SPIR-V binaries
* Ar archives
* Cpio archives
* Zstd files
* SSH signatures
* Git object files
* Intel microcode files
* including public key detection and signature extraction
* EXR/OpenEXR files
* Android vbmeta partitions
* PDP-11 a.out files
* OpenTimestamps proof files
* xz files
* UF2 files
* Android adb backup files
* Java object serialization data
* Safetensors files
# How do I install it?
Run `pip3 install ruminant`.
Alternatively, you can also run `python3 -m build` in the source tree, followed by `pip3 install dist/*.whl`.
# How do I use it?
The most basic usage would be `ruminant <file>` in order to process the file and output all metadata.
Each time a blob is passed to chew(), it gets assigned a new unique ID that is stored in the "blob-id" field in its JSON object.
These blobs can be extracted with `ruminant <file> --extract <ID> <file name>`. The `--extract` option can also be shortened to `-e` and can be repeated multiple times.
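Conceptually, the recursive ID assignment works like the following toy sketch (illustrative only; this is not ruminant's actual implementation, and the dict shape is invented):

```python
import itertools

_blob_ids = itertools.count()

def chew(blob, depth=0):
    """Toy sketch: assign each blob a unique "blob-id" and recurse into
    any nested blobs, mirroring ruminant's recursive model.
    (Illustrative only -- not ruminant's real chew().)"""
    result = {"blob-id": next(_blob_ids), "size": len(blob["data"])}
    children = [chew(child, depth + 1) for child in blob.get("children", [])]
    if children:
        result["children"] = children
    return result

tree = chew({"data": b"outer", "children": [{"data": b"inner"}]})
print(tree["blob-id"], tree["children"][0]["blob-id"])  # 0 1
```

The IDs printed by the real tool are what you pass to `--extract`.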
Not specifying a file means that it reads from `-`, which is the standard input. You can also explicitly pass `-` as the file.
The `--walk` or `-w` option enables a binwalk-like mode in which ruminant tries to parse the file, incrementing the start offset by one byte until something parses correctly, and repeats this until the end of the file is reached.
This is a valid complex command: `ruminant -e 2 foo.jpeg - --extract 5 bar.bin -e 0 all.zip`
(Yes, you could abuse ruminant to copy files by running `function cp() { ruminant --extract 0 "$2" "$1"; }` in bash and then using the function as `cp`.)
You can also specify `--extract-all` in order to extract all blobs to the "blobs" directory.
Specifying a directory as the file makes ruminant walk that directory recursively. Adding `--progress` shows a progress bar (this requires tqdm). Adding `--progress-names` adds file names to the progress bar.
Specifying `--url` makes ruminant treat the file name as a URL and makes it try to fetch the file from it. It uses the user agent of a recent Chrome to not be blocked.
Adding `--strip-url` makes ruminant rewrite some parts of known URLs to preserve metadata. For example, it can detect that a file is hosted by WordPress from the "/wp-content/" prefix of the path and then remove the "-<width>x<height>" part of the file name, so the original file is fetched instead of a resized, re-encoded thumbnail.
The user agent can be overridden by setting the `RUMINANT_USER_AGENT` environment variable with the desired agent.
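The WordPress thumbnail stripping described above boils down to a small regex transform; a sketch (the exact pattern ruminant uses may differ):

```python
import re

def strip_wp_thumbnail(url):
    """Sketch of the WordPress case described above: if the path contains
    /wp-content/, drop a trailing -<width>x<height> size suffix from the
    file name. (Illustrative; ruminant's real rules may differ.)"""
    if "/wp-content/" not in url:
        return url
    return re.sub(r"-\d+x\d+(\.\w+)$", r"\1", url)

print(strip_wp_thumbnail("https://example.com/wp-content/uploads/photo-300x200.jpg"))
# https://example.com/wp-content/uploads/photo.jpg
```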
# Ruminant can't parse xyz
Feel free to send me a sample so I can add a parser for it :)
| text/markdown | Laura Kirsch | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tqdm; extra == \"recommended\"",
"pyzstd; extra == \"recommended\"",
"backports.zstd; extra == \"recommended\"",
"flake8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jakiki6/ruminant",
"Issues, https://github.com/jakiki6/ruminant/issues"
] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T21:30:26.435357 | ruminant-0.0.31.tar.gz | 226,269 | 21/dc/6d6b38b92979a5c1da7b123a524edec12ef0c5a6b8276053b4bd6c13ecda/ruminant-0.0.31.tar.gz | source | sdist | null | false | efc457c5a37f5bb51526fd3b3a6ed5e1 | d0f5eefb359e5bfb5b6b441b5116bd52f7793e4384e7e390c833889f9e30f35b | 21dc6d6b38b92979a5c1da7b123a524edec12ef0c5a6b8276053b4bd6c13ecda | LGPL-3.0 | [
"LICENSE"
] | 213 |
2.4 | localrouter | 0.2.20 | Multi-provider LLM client with unified message format and tool support | # LocalRouter
A unified multi-provider LLM client with consistent message formats and tool support across OpenAI, Anthropic, and Google GenAI.
## Quick Start
Install the package:
```bash
pip install localrouter
```
Set your API keys as environment variables:
```bash
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GEMINI_API_KEY="your-gemini-key" # or GOOGLE_API_KEY
```
Basic usage:
```python
import asyncio
from localrouter import get_response, ChatMessage, MessageRole, TextBlock
async def main():
    messages = [
        ChatMessage(
            role=MessageRole.user,
            content=[TextBlock(text="Hello, how are you?")]
        )
    ]
    response = await get_response(
        model="gpt-4.1",  # or "o3", "claude-sonnet-4-20250514", "gemini-2.5-pro", etc.
        messages=messages
    )
    print(response.content[0].text)

asyncio.run(main())
```
## Alternative Response Functions
LocalRouter provides several variants of `get_response` for different use cases:
### Caching
To use disk caching, `import get_response_cached as get_response`:
```python
# Import as get_response for consistent usage
from localrouter import get_response_cached as get_response
response = await get_response(
    model="gpt-4o-mini",
    messages=messages,
    cache_seed=12345  # Required for caching
)
```
This will return cached results whenever get_response is called with identical inputs and `cache_seed` is provided. If no `cache_seed` is provided, it will behave exactly like `localrouter.get_response`.
### Retry with Backoff
Automatically retry failed requests with exponential backoff:
```python
from localrouter import get_response_with_backoff as get_response
response = await get_response(
    model="gpt-4o-mini",
    messages=messages
)
```
### Caching + Backoff
Combine caching with retry logic:
```python
from localrouter import get_response_cached_with_backoff as get_response
response = await get_response(
    model="gpt-4o-mini",
    messages=messages,
    cache_seed=12345  # Required for caching
)
```
**Note**: When using cached functions without `cache_seed`, they behave like non-cached versions (no caching occurs).
## Images
```python
from localrouter import ChatMessage, MessageRole, TextBlock, ImageBlock
# Text message
text_msg = ChatMessage(
    role=MessageRole.user,
    content=[TextBlock(text="Hello world")]
)

# Image message
image_msg = ChatMessage(
    role=MessageRole.user,
    content=[
        ImageBlock.from_base64(base64_data, media_type="image/png"),  # or: ImageBlock.from_file("image.png")
        TextBlock(text="What's in this image?")
    ]
)
```
## Tool Calling
Define tools and get structured function calls:
```python
from localrouter import ToolDefinition, get_response
# Define a tool
weather_tool = ToolDefinition(
    name="get_weather",
    description="Get current weather for a location",
    input_schema={
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
    }
)
# Use the tool
# Use the tool
response = await get_response(
    model="gpt-4.1-nano",
    messages=[ChatMessage(
        role=MessageRole.user,
        content=[TextBlock(text="What's the weather in Paris?")]
    )],
    tools=[weather_tool]
)

# Check for tool calls
for block in response.content:
    if isinstance(block, ToolUseBlock):
        print(f"Tool: {block.name}, Args: {block.input}")
```
## Structured Output
Get validated Pydantic models as responses:
```python
from pydantic import BaseModel
from typing import List
class Event(BaseModel):
    name: str
    date: str
    participants: List[str]

response = await get_response(
    model="gpt-4.1-mini",
    messages=[ChatMessage(
        role=MessageRole.user,
        content=[TextBlock(text="Alice and Bob meet for lunch Friday")]
    )],
    response_format=Event
)

event = response.parsed  # Validated Event instance
print(f"Event: {event.name} on {event.date}")
```
### Conversation Flow
Handle multi-turn conversations with tool results:
```python
from localrouter import ToolResultBlock
# Initial request
messages = [ChatMessage(
    role=MessageRole.user,
    content=[TextBlock(text="Get weather for Tokyo")]
)]

# Get response with tool call
response = await get_response(model="gpt-4o-mini", messages=messages, tools=[weather_tool])
messages.append(response)

# Execute tool and add result
tool_call = response.content[0]  # ToolUseBlock
tool_result = ToolResultBlock(
    tool_use_id=tool_call.id,
    content=[TextBlock(text="Tokyo: 22°C, sunny")]  # Tool result may also contain ImageBlock parts
)
messages.append(ChatMessage(role=MessageRole.user, content=[tool_result]))
# Continue conversation
final_response = await get_response(model="gpt-4o-mini", messages=messages, tools=[weather_tool])
```
### Tool Definition
- `ToolDefinition(name, description, input_schema)` - Define available tools
- `SubagentToolDefinition()` - Predefined tool for sub-agents
## Reasoning/Thinking Support
Configure reasoning budgets for models that support explicit thinking (GPT-5, Claude Sonnet 4+, Gemini 2.5):
```python
from localrouter import ReasoningConfig
# Using effort levels (OpenAI-style)
response = await get_response(
    model="gpt-5",  # When available
    messages=messages,
    reasoning=ReasoningConfig(effort="high")  # "minimal", "low", "medium", "high"
)

# Using explicit token budget (Anthropic/Gemini-style)
response = await get_response(
    model="gemini-2.5-pro",
    messages=messages,
    reasoning=ReasoningConfig(budget_tokens=8000)
)

# Let model decide (Gemini dynamic thinking)
response = await get_response(
    model="gemini-2.5-flash",
    messages=messages,
    reasoning=ReasoningConfig(dynamic=True)
)

# Backward compatible dict config
response = await get_response(
    model="claude-sonnet-4-20250514",  # When available
    messages=messages,
    reasoning={"effort": "medium"}
)
```
The reasoning configuration automatically converts between provider formats:
- **OpenAI (GPT-5)**: Uses `effort` levels
- **Anthropic (Claude 4+)**: Uses `budget_tokens`
- **Google (Gemini 2.5)**: Uses `thinking_budget` with dynamic option
Models that don't support reasoning will ignore the configuration.
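As a rough sketch of that conversion (the effort-to-token numbers below are invented placeholders for illustration, not localrouter's actual mapping):

```python
def to_provider_reasoning(config, provider):
    """Sketch of converting one reasoning config between the provider
    styles listed above. The effort->token numbers are made-up
    placeholders, not localrouter's actual mapping."""
    effort_to_tokens = {"minimal": 1024, "low": 2048, "medium": 8192, "high": 32768}
    budget = config.get("budget_tokens") or effort_to_tokens.get(config.get("effort"), 8192)
    if provider == "openai":
        return {"effort": config.get("effort", "medium")}
    if provider == "anthropic":
        return {"budget_tokens": budget}
    if provider == "google":
        # -1 conventionally means "dynamic" for Gemini thinking budgets
        return {"thinking_budget": -1 if config.get("dynamic") else budget}
    return {}

print(to_provider_reasoning({"effort": "high"}, "anthropic"))  # {'budget_tokens': 32768}
```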
## Custom Providers and Model Routing
LocalRouter supports regex patterns for model matching and prioritized provider selection. OpenRouter serves as a fallback for any model containing "/" (e.g., "meta-llama/llama-3.3-70b") with lowest priority.
```python
from localrouter import add_provider, re
# Add a custom provider with regex pattern support
async def custom_get_response(model, messages, **kwargs):
    # Your custom implementation
    pass

add_provider(
    custom_get_response,
    models=["custom-model-1", re.compile(r"custom-.*")],  # Exact match or regex
    priority=50  # Lower = higher priority (default: 100, OpenRouter: 1000)
)
```
## Request-Level Routing
LocalRouter allows you to register router functions that can dynamically modify model selection based on request parameters. This is useful for:
- Creating model aliases
- Routing requests with images to vision models
- Selecting models based on temperature, tools, or other parameters
- Implementing fallback strategies
```python
from localrouter import register_router
# Example 1: Simple alias
def alias_router(req):
    if req['model'] == 'default':
        return 'gpt-5'
    return None  # Keep original model
register_router(alias_router)
# Now you can use the alias
response = await get_response(
    model="default",  # Will be routed to gpt-5
    messages=messages
)
```
```python
# Example 2: Route based on message content
def vision_router(req):
    """Route requests with images to vision-capable models"""
    messages = req.get('messages', [])
    for msg in messages:
        for block in msg.content:
            if block.__class__.__name__ == 'ImageBlock':
                return 'qwen/qwen3-vl-30b-a3b-instruct'
    return None  # Use original model for text-only requests
register_router(vision_router)
```
```python
# Example 3: Route based on parameters
def temperature_router(req):
    """Use different models based on temperature"""
    temperature = req.get('temperature', 0)
    if temperature > 0.8:
        return 'gpt-5'  # Creative tasks
    return 'gpt-4.1-mini'  # Deterministic tasks
register_router(temperature_router)
```
**Router Function Interface:**
- **Input**: Dictionary with keys: `model`, `messages`, `tools`, `response_format`, `reasoning`, and any other kwargs
- **Output**: String (new model name) or None (keep original model)
- **Execution**: Routers are applied in registration order, and each router sees the model name from the previous router
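The chaining described above can be reproduced in a few self-contained lines. This is a toy sketch of the interface contract, not localrouter's internal implementation:

```python
routers = []

def register_router(fn):
    routers.append(fn)

def resolve_model(req):
    """Apply routers in registration order; each router sees the model
    chosen by the previous one, as described above.
    (Standalone sketch, not localrouter's internal code.)"""
    for router in routers:
        new_model = router(req)
        if new_model is not None:
            req = {**req, "model": new_model}
    return req["model"]

register_router(lambda req: "gpt-5" if req["model"] == "default" else None)
register_router(lambda req: "gpt-5-mini" if req["model"] == "gpt-5" and not req.get("tools") else None)

print(resolve_model({"model": "default", "messages": []}))  # gpt-5-mini
```

Note the second router only fires because the first already rewrote `default` to `gpt-5`.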
## Logging
LocalRouter provides a flexible logging system to capture LLM requests and responses for debugging, monitoring, and analysis.
### Basic Logging
Register custom logger functions to receive request/response data:
```python
from localrouter import register_logger
def my_logger(request, response, error):
    """
    request: Dict with model, messages, tools, etc.
    response: ChatMessage object (None if error occurred)
    error: Exception object (None if successful)
    """
    if error:
        print(f"Error calling {request['model']}: {error}")
    else:
        print(f"Success: {request['model']} returned {len(response.content)} blocks")
register_logger(my_logger)
```
### File-Based Logging
Use the built-in `log_to_dir()` helper to automatically save requests and responses as JSON files:
```python
from localrouter import register_logger, log_to_dir
# Log all requests to .llm/logs directory
register_logger(log_to_dir('.llm/logs'))
# Now all LLM calls will be logged
response = await get_response(
    model="gpt-4.1",
    messages=messages
)
```
Each log file contains:
- Complete request parameters (model, messages, tools, etc.)
- Full response with all content blocks
- Error information if the request failed
- Timestamp
Log files are named: `{model-slug}_{timestamp}.json`
### Multiple Loggers
You can register multiple loggers that will all be called:
```python
# Log to disk
register_logger(log_to_dir('.llm/logs'))
# Also send to monitoring service
def monitoring_logger(request, response, error):
    send_to_datadog(request, response, error)
register_logger(monitoring_logger)
```
**Note**: Logger errors are silently caught to prevent them from breaking your LLM calls.
| text/markdown | Center on Long-term Risk | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anthropic>=0.50.0",
"backoff>=2.2.1",
"openai>=1.98.0",
"pydantic>=2.7.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0.2",
"cache_on_disk>=0.5.0",
"google-genai>=1.26.0",
"Pillow>=10.0.0; extra == \"images\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pre-commit>=4.0.0; extra == \"dev\"",
"black>=25.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:30:21.133557 | localrouter-0.2.20.tar.gz | 59,293 | c6/8a/57e65a12876f2d978bcd83b9f46b25fe3a4efe5a63e281e981f018d15a4c/localrouter-0.2.20.tar.gz | source | sdist | null | false | c49ade95d5ddde4e167a7cab3c0bd3ca | acc4c03ccbb1c6029ac15580828fda6e73cf1d40ac17e7e393c2b0affa1217e1 | c68a57e65a12876f2d978bcd83b9f46b25fe3a4efe5a63e281e981f018d15a4c | null | [] | 215 |
2.4 | agent-notify | 0.1.1 | Cross-platform notifications for long-running agentic CLI tools | # agent-notify
`agent-notify` sends notifications when long-running CLI/agent tasks finish.
It supports two usage styles:
1. Wrap a command (`agent-notify run -- ...`)
2. Receive task-level events from interactive agents (Codex, Claude Code, Gemini, Ollama pipelines)
## Why People Use This
- You can keep coding in one window and get notified when a background task completes.
- Notifications include success/failure, duration, and exit code.
- Works on macOS and Windows (with console fallback when desktop notifications are unavailable).
## Install
Recommended:
```bash
pipx install agent-notify
```
Alternative:
```bash
pip install agent-notify
```
From source:
```bash
python -m pip install -e .
```
Confirm install:
```bash
agent-notify --help
agent-notify test-notify --channel console
```
## 2-Minute Quickstart
Run any long command through the wrapper:
```bash
agent-notify run -- python3 -c "import time; time.sleep(8)"
```
If the command fails, the notification title changes to `Failed`.
## Choose Your Mode
### Mode A: Task-Level Notifications (Recommended for interactive CLIs)
This notifies when an agent turn/task completes inside Codex/Claude/Gemini flows.
You do not need to exit the CLI session.
### Mode B: Shell Exit Notifications (`shell-init`)
This notifies when shell commands end. For interactive agents, that usually means on CLI exit.
Use this only if you want process-exit behavior.
## Interactive Tool Setup
### Codex CLI
Codex provides a `notify` hook. Use the included bridge:
```bash
chmod +x examples/codex_notify_bridge.sh
BRIDGE_PATH="$(realpath examples/codex_notify_bridge.sh)"
echo "$BRIDGE_PATH"
```
Add to `~/.codex/config.toml`:
```toml
notify = [
  "/absolute/path/to/examples/codex_notify_bridge.sh"
]
```
Optional debug logs:
```bash
export AGENT_NOTIFY_DEBUG=1
```
Log location:
`~/.agentnotify/logs/codex_notify.log`
### Claude Code
Configure hooks in `.claude/settings.local.json` (or user settings):
```json
{
  "hooks": {
    "Stop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "agent-notify claude-hook --event Stop --name claude-code --channel both --quiet-when-focused --chime ping"
          }
        ]
      }
    ],
    "SubagentStop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "agent-notify claude-hook --event SubagentStop --name claude-code --channel both --quiet-when-focused --chime ping"
          }
        ]
      }
    ]
  }
}
```
### Gemini CLI
Configure `AfterAgent` hook:
```json
{
  "hooksConfig": {
    "enabled": true
  },
  "hooks": {
    "AfterAgent": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "agent-notify gemini-hook --name gemini --channel both --quiet-when-focused --chime ping",
            "timeout": 10000
          }
        ]
      }
    ]
  }
}
```
### Ollama
- Pure interactive `ollama run` currently has no native per-turn completion hook.
- If you use `ollama launch codex` or `ollama launch claude`, configure Codex/Claude hooks above.
- For non-interactive JSON output (`--format json`), pipe into `agent-notify ollama-hook`.
Example:
```bash
ollama run llama3 --format json | agent-notify ollama-hook --name ollama --channel both
```
## Core Commands
`agent-notify run -- <cmd...>`
- Run and notify on completion.
- Wrapper exits with the same exit code as the wrapped command.
`agent-notify watch --pid <pid>`
- Watch an existing process ID until exit.
`agent-notify test-notify`
- Send a sample notification.
`agent-notify tail --file <path> --pattern <text>`
- Notify when a log pattern appears.
`agent-notify shell-init`
- Generate shell hook script for process-exit notifications.
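The `run` wrapper's contract (notify on completion with duration and exit code, then propagate that exit code) can be sketched in a few lines of Python. This is illustrative only, not the package's code:

```python
import subprocess
import sys
import time

def run_and_notify(cmd):
    """Sketch of the behavior described for `agent-notify run -- <cmd...>`:
    time the command, report success/failure with duration and exit code,
    and hand back the same exit code. (Illustrative only.)"""
    start = time.monotonic()
    proc = subprocess.run(cmd)
    duration = time.monotonic() - start
    title = "Succeeded" if proc.returncode == 0 else "Failed"
    print(f"{title} ({duration:.1f}s, exit {proc.returncode})")
    return proc.returncode

code = run_and_notify([sys.executable, "-c", "import sys; sys.exit(3)"])
print(code)  # 3
```

A real wrapper would call `sys.exit(code)` so shells and CI see the wrapped command's status.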
Hook bridge commands used by integrations:
- `agent-notify gemini-hook`
- `agent-notify claude-hook`
- `agent-notify codex-hook`
- `agent-notify ollama-hook`
## Common Customizations
Suppress notifications while terminal is focused (macOS):
```bash
agent-notify gemini-hook --quiet-when-focused
```
Add sound:
```bash
agent-notify claude-hook --chime ping
```
Force console output:
```bash
agent-notify run --channel console -- your-command
```
## Configuration
Environment variables:
- `AGENT_NOTIFY_TITLE_PREFIX="Agent"`
- `AGENT_NOTIFY_CHANNELS="desktop,console"`
- `AGENT_NOTIFY_TAIL_LINES=20`
- `AGENT_NOTIFY_POLL_INTERVAL=1.0`
Optional TOML config at `~/.agentnotify/config.toml`:
```toml
title_prefix = "Agent"
channels = ["desktop"]
tail_lines = 20
poll_interval = 1.0
```
Environment variables override file values.
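That precedence can be sketched as follows (the env-variable prefix comes from the list above; the merge logic itself is an assumption, not agent-notify's actual loader):

```python
import os

def load_setting(name, file_config, default):
    """Env var overrides file value, which overrides the default,
    matching the precedence stated above.
    (Sketch; not agent-notify's actual config loader.)"""
    env_value = os.environ.get(f"AGENT_NOTIFY_{name.upper()}")
    if env_value is not None:
        return env_value
    return file_config.get(name, default)

file_config = {"title_prefix": "FromFile"}
os.environ["AGENT_NOTIFY_TITLE_PREFIX"] = "FromEnv"
print(load_setting("title_prefix", file_config, "Agent"))  # FromEnv
```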
## Troubleshooting
### I only get notifications when I exit the CLI
You are likely using `shell-init` mode. That is process-exit based.
Use task-level hooks (`codex-hook`, `claude-hook`, `gemini-hook`) instead and remove `shell-init` lines from your shell startup file.
### Desktop notifications do not appear
1. Test fallback path:
- `agent-notify test-notify --channel console`
2. Verify platform backend:
- macOS uses `osascript`
- Windows uses PowerShell/BurntToast (with `win10toast` fallback)
### Codex notifications not firing
1. Confirm `notify` is configured in `~/.codex/config.toml`.
2. Confirm bridge script is executable: `chmod +x examples/codex_notify_bridge.sh`.
3. Enable bridge debug logs with `AGENT_NOTIFY_DEBUG=1` and inspect `~/.agentnotify/logs/codex_notify.log`.
## Platform Notes
macOS:
- Desktop notifications via Notification Center (`osascript`).
Windows:
- Primary backend: PowerShell + BurntToast.
- Optional fallback dependency: `pip install "agent-notify[windows]"`.
## For Maintainers
Development checks:
```bash
python -m pip install -e ".[dev]"
ruff check .
pytest -q
python -m build
twine check dist/*
```
Release checklist:
- `docs/release_checklist.md`
Project/process docs:
- `docs/project_charter.md`
- `docs/scrum_working_agreement.md`
- `docs/assumptions.md`
- `docs/commit_plan.md`
- `SECURITY.md`
## License
MIT (`LICENSE`)
| text/markdown | agent-notify contributors | null | null | null | MIT | cli, notification, agents, codex, developer-tools | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click<9.0,>=8.1",
"tomli>=2.0; python_version < \"3.11\"",
"build>=1.2; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.5; extra == \"dev\"",
"twine>=5.1; extra == \"dev\"",
"win10toast>=0.9; platform_system == \"Windows\" and extra == \"windows\""
] | [] | [] | [] | [
"Homepage, https://github.com/Kipung/AgentNotify",
"Repository, https://github.com/Kipung/AgentNotify",
"Issues, https://github.com/Kipung/AgentNotify/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T21:30:02.807997 | agent_notify-0.1.1.tar.gz | 26,731 | 25/4d/728809d693f9bc72b553bcda8e40b9ab0726974dc164f909d3af9faccdf4/agent_notify-0.1.1.tar.gz | source | sdist | null | false | df33252df43a6d5b9819c9f281f45b88 | 39d0ff74c185f39f7f74355c60c4f835e05c7aa9090332f7420355599166190f | 254d728809d693f9bc72b553bcda8e40b9ab0726974dc164f909d3af9faccdf4 | null | [
"LICENSE"
] | 221 |
2.4 | semantic-id | 0.2.5 | A library for generating semantic IDs from embeddings using RQ-KMeans and other algorithms. | # Semantic ID 🌟
**Turn your vectors into meaningful strings.**
Semantic ID is a friendly Python library that helps you transform continuous vector embeddings (like those from OpenAI, BERT, or ResNet) into discrete, human-readable semantic strings. It uses algorithms like **RQ-KMeans** (Residual Quantization K-Means) and **RQ-VAE** (Residual Quantization Variational Autoencoder) to hierarchically cluster your data, giving you IDs that actually mean something!
Imagine turning `[0.12, -0.88, 0.04, ...]` into `"cars-suv-landrover"`. Okay, maybe more like `"12-4-9-1"`, but you get the idea—it preserves semantic similarity!
## 💡 Inspiration
This project is heavily inspired by the incredible work found in:
* **[Recommender Systems with Generative Retrieval](https://arxiv.org/pdf/2305.05065)** (Rajput et al., 2023): The paper that lays the groundwork for using semantic IDs in next-gen recommendation systems.
* **[MiniOneRec](https://github.com/AkaliKong/MiniOneRec)**: A fantastic repository that demonstrates these concepts in action.
We aim to make these powerful techniques accessible and easy to use for everyone.
## 🗺️ Explore Your Embeddings
Before you start clustering, it's super helpful to "see" your data. We love **[Apple's Embedding Atlas](https://github.com/apple/embedding-atlas)** and suggest everyone try it out! It's a great way to visualize your high-dimensional vectors and understand the landscape of your data. It's also a great way to evaluate your results after training your RQ-model.
---
## ✨ Features
* **RQ-KMeans**: Hierarchical residual quantization with K-Means on CPU & GPU.
* **RQ-VAE**: Neural network-based quantization with learnable codebooks.
* **Balanced Clustering**: Constrained K-Means for evenly distributed codes.
* **Uniqueness**: Automatic collision resolution (suffix-based and Sinkhorn re-encoding).
* **Custom Formats**: User-defined formatter callbacks for any ID format, plus custom item IDs for collision resolution.
* **Evaluation**: Built-in metrics — collision rate, recall@K, NDCG@K, distance correlation, code utilization, entropy, quantization MSE.
* **LLM-Friendly Tokens**: Output IDs in `<a_3><b_9><c_1>` format for language models.
* **Persistence**: Save/Load models and full engine pipelines.
## 📦 Installation
```bash
pip install semantic-id
```
To enable **GPU acceleration** (recommended!):
```bash
pip install torch
```
To use **balanced clustering**:
```bash
pip install k-means-constrained
```
## 🚀 Quick Start
### 1. The Basics (RQ-KMeans)
Let's generate some simple IDs. We'll use a small number of clusters (10 per level) so the IDs are short and sweet.
```python
import numpy as np
from semantic_id import RQKMeans
# 1. Generate some dummy data (100 vectors, 16 dimensions)
X = np.random.randn(100, 16)
# 2. Initialize the model
# We'll use 3 levels with 10 clusters each.
# This means our IDs will look like "X-Y-Z" where numbers are 0-9.
model = RQKMeans(n_levels=3, n_clusters=10, random_state=42)
# 3. Train the model
model.fit(X)
# 4. Generate Semantic IDs
# This converts vectors -> codes -> strings
codes = model.encode(X) # shape (100, 3)
sids = model.semantic_id(codes)
print(f"Vector: {X[0][:3]}...")
print(f"Semantic ID: {sids[0]}") # Output: e.g., "3-9-1"
```
### 2. Go Fast with GPU 🏎️
Got a GPU? Let's use it! The PyTorch backend is compatible with `cuda` and `mps`.
```python
device = "cuda" # or "mps" for Mac, or "cpu"
model = RQKMeans(n_levels=3, n_clusters=10)
model.fit(X, device=device)
codes = model.encode(X, device=device)
```
### 3. Ensure Uniqueness (The Engine)
In the real world, two different items might end up in the same cluster. The `SemanticIdEngine` handles this gracefully by appending a counter to duplicates.
```python
from semantic_id import SemanticIdEngine, RQKMeans, UniqueIdResolver, SQLiteCollisionStore
# Setup the algorithm
encoder = RQKMeans(n_levels=3, n_clusters=10)
# Setup the persistence (saves collision counts to a file)
store = SQLiteCollisionStore("collisions.db")
resolver = UniqueIdResolver(store=store)
# Create the engine
engine = SemanticIdEngine(encoder=encoder, unique_resolver=resolver)
# Train and Get Unique IDs
engine.fit(X)
unique_ids = engine.unique_ids(X)
print(unique_ids[0]) # e.g., "3-9-1"
# If another item has code (3, 9, 1), it becomes "3-9-1-1" automatically!
```
> **Tip:** For quick experiments, skip the store setup entirely — `SemanticIdEngine` uses an `InMemoryCollisionStore` by default:
> ```python
> engine = SemanticIdEngine(encoder=encoder) # zero-config uniqueness
> ```
### 4. Neural Networks (RQ-VAE) 🧠
For complex data, a simple K-Means might not be enough. **RQ-VAE** uses a neural network to learn the optimal codebooks.
```python
from semantic_id import RQVAE
model = RQVAE(
in_dim=16, # Input dimension of your vectors
num_emb_list=[32, 32, 32], # 32 clusters per level
e_dim=16, # Codebook dimension
layers=[32, 16], # Hidden layers
device="cpu"
)
model.fit(X)
ids = model.semantic_id(model.encode(X))
```
### 5. Evaluate Your IDs 📊
Use the built-in `evaluate()` function to measure how well your IDs preserve the structure of the original embeddings.
```python
from semantic_id import evaluate
metrics = evaluate(X, codes, encoder=model)
print(metrics)
# {
# 'n_samples': 100,
# 'n_unique_codes': 87,
# 'collision_rate': 0.13,
# 'collision_rate_per_level': [0.9, 0.45, 0.13],
# 'recall_at_10': 0.42,
# 'ndcg_at_10': 0.38,
# 'distance_correlation': 0.65,
# 'code_utilization_per_level': [1.0, 0.95, 0.87],
# 'code_entropy_per_level': [2.30, 2.25, 2.10],
# 'quantization_mse': 0.003
# }
```
| Metric | What it measures |
|---|---|
| `collision_rate` | Fraction of items sharing an ID with another item (lower is better) |
| `collision_rate_per_level` | Collision rate at each prefix depth — shows where uniqueness breaks down |
| `recall_at_10` | How well code-space neighbors match embedding-space neighbors (higher is better) |
| `ndcg_at_10` | Ranking quality of code-space neighbors vs embedding-space (higher is better) |
| `distance_correlation` | Spearman correlation between embedding distances and code distances (higher is better) |
| `code_utilization_per_level` | Fraction of codebook entries used at each level (higher is better) |
| `code_entropy_per_level` | Shannon entropy of code distribution per level (higher = more uniform) |
| `quantization_mse` | Reconstruction error from `decode()` (lower is better; requires an encoder with `decode()`) |
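As one worked example, `collision_rate` can be computed from the code matrix alone: an item "collides" when its full code tuple is shared with at least one other item. A pure-Python sketch — not the library's implementation:

```python
# Fraction of items whose full code tuple appears more than once.
from collections import Counter

def collision_rate(codes):
    """codes: iterable of code tuples, one per item."""
    counts = Counter(map(tuple, codes))
    colliding = sum(c for c in counts.values() if c > 1)
    return colliding / len(codes)

toy_codes = [(3, 9, 1), (0, 5, 7), (3, 9, 1), (2, 2, 2)]
print(collision_rate(toy_codes))  # 0.5 -- two of four items share (3, 9, 1)
```

`collision_rate_per_level` applies the same idea to each code prefix (level 1 only, levels 1–2, and so on), which is why it typically decreases as more levels are added.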
### 6. LLM-Friendly Token Format 🤖
When feeding semantic IDs into a language model, the token format wraps each level in angle brackets with a level letter:
```python
codes = model.encode(X)
# Standard format (default)
plain_ids = model.semantic_id(codes) # ["3-9-1", "0-5-7", ...]
# Token format for LLMs
token_ids = model.semantic_id(codes, fmt="token") # ["<a_3><b_9><c_1>", ...]
```
### 7. Custom ID Formats 🎨
Define your own format function for full control over how codes become strings:
```python
# Custom format for your LLM
def my_llm_format(codes):
return "".join(f"[item_L{i}_{c}]" for i, c in enumerate(codes))
ids = model.semantic_id(codes, formatter=my_llm_format)
# ["[item_L0_3][item_L1_9][item_L2_1]", ...]
# Works through the engine too
engine = SemanticIdEngine(encoder=model)
engine.fit(X)
uids = engine.unique_ids(X, formatter=my_llm_format)
```
### 8. Use Your Own Item IDs 🏷️
Instead of auto-incremented suffixes (`-1`, `-2`), attach your own identifiers:
```python
db_keys = ["SKU001", "SKU002", "SKU003", ...]
uids = engine.unique_ids(X, item_ids=db_keys)
# Collisions become "3-9-1-SKU042" instead of "3-9-1-1"
# Custom separator for the suffix too
uids = engine.unique_ids(X, item_ids=db_keys, sep="/")
# "3/9/1/SKU042"
```
### 9. Balanced Clustering ⚖️
Use `implementation="constrained"` to enforce roughly equal cluster sizes. This reduces collision rates but requires the `k-means-constrained` package.
```python
model = RQKMeans(
n_levels=3,
n_clusters=10,
implementation="constrained", # balanced clusters
random_state=42
)
model.fit(X)
```
## 🔄 Reproducibility & Persistence
We know how annoying it is when IDs change between machines. To ensure **identical Semantic IDs** across different environments (e.g., Training on GPU -> Inference on CPU):
1. **Train (`fit`) once** on your training machine.
2. **Save** the model.
3. **Load** on your production machine.
Do not re-train on the second machine, as random initialization will differ!
```python
# Save a single encoder
model.save("my_model")
loaded = RQKMeans.load("my_model")
# Save the full engine (encoder + collision store)
engine.save("my_engine")
loaded_engine = SemanticIdEngine.load("my_engine")
```
Both `RQKMeans` and `RQVAE` support `save()`/`load()`. The engine also persists the collision store so suffix counters are preserved.
## 🗺️ Project Status
We are actively building! Here is what's ready for you today:
- ✅ **RQ-KMeans**: Core algorithm working on CPU & GPU.
- ✅ **RQ-VAE**: Neural network based quantization with training history tracking.
- ✅ **Balanced Clustering**: Constrained K-Means for even code distribution.
- ✅ **Uniqueness**: Suffix-based and Sinkhorn-based collision resolution.
- ✅ **Custom Formats**: User-defined formatter callbacks and item IDs for collision resolution.
- ✅ **Evaluation**: Comprehensive metrics including NDCG, code utilization, entropy, and hierarchical distance.
- ✅ **Token Format**: LLM-friendly ID output.
- ✅ **Persistence**: Save/Load models and engines.
| text/markdown | null | Mikhail <mikhail@example.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.0.0",
"scikit-learn",
"k-means-constrained",
"torch>=2.0.0",
"tqdm",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"flake8; extra == \"dev\"",
"mypy; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.5 | 2026-02-20T21:30:00.209222 | semantic_id-0.2.5.tar.gz | 53,765 | 7d/4d/a3048b0bc4ab7fc1781468a0f37fa4d1417420c61013bf90b07c56b407c9/semantic_id-0.2.5.tar.gz | source | sdist | null | false | 9877924e555bc28b4cfe2c433a80d209 | 616cc913c927c1f9b1c716cd68a8c8bf8ec53cc7574bccbf7aa0d9c75bf57234 | 7d4da3048b0bc4ab7fc1781468a0f37fa4d1417420c61013bf90b07c56b407c9 | null | [
"LICENSE"
] | 217 |
2.4 | tinymetabobjloader | 2.0.0rc14.dev3 | Tiny but powerful Wavefront OBJ loader | # tinyobjloader
[](https://badge.fury.io/py/tinyobjloader)
[](https://dev.azure.com/tinyobjloader/tinyobjloader/_build/latest?definitionId=1&branchName=master)
[](https://ci.appveyor.com/project/syoyo/tinyobjloader-6e4qf/branch/master)
[](https://coveralls.io/github/syoyo/tinyobjloader?branch=master)
[](https://aur.archlinux.org/packages/tinyobjloader)
Tiny but powerful single-file Wavefront .obj loader written in C++03. No dependencies other than the C++ STL. It can parse over 10M polygons with moderate memory use and time.
`tinyobjloader` is good for embedding a .obj loader in your (global illumination) renderer ;-)
If you are looking for a C99 version, please see https://github.com/syoyo/tinyobjloader-c .
Version notice
--------------
We recommend using the `master` (`main`) branch, which is the v2.0 release candidate. Most features are now robust and stable (the remaining tasks for the v2.0 release are polishing the C++ and Python APIs and fixing the built-in triangulation code).
Version v1.0.0 was released on 20 Aug, 2016.
The old version is available on the `v0.9.x` branch: https://github.com/syoyo/tinyobjloader/tree/v0.9.x
## What's new
* 29 Jul, 2021 : Added Mapbox's earcut for robust triangulation. This also fixes a triangulation bug (some issues remain in the built-in triangulation algorithm: https://github.com/tinyobjloader/tinyobjloader/issues/319).
* 19 Feb, 2020 : The repository has been moved to https://github.com/tinyobjloader/tinyobjloader !
* 18 May, 2019 : Python binding! (See the `python` folder. Also see https://pypi.org/project/tinyobjloader/)
* 14 Apr, 2019 : Bump version to v2.0.0 rc0. New C++ API and Python bindings! (The 1.x API still exists for backward compatibility.)
* 20 Aug, 2016 : Bump version to v1.0.0. New data structure and API!
## Requirements
* C++03 compiler
### Old version
The previous version is available on the `v0.9.x` branch.
## Example

tinyobjloader can successfully load the 6M-triangle Rungholt scene.
http://casual-effects.com/data/index.html

* [examples/viewer/](examples/viewer) OpenGL .obj viewer
* [examples/callback_api/](examples/callback_api/) Callback API example
* [examples/voxelize/](examples/voxelize/) Voxelizer example
## Use case
TinyObjLoader is successfully used in ...
### New version(v1.0.x)
* Double precision support through `TINYOBJLOADER_USE_DOUBLE` thanks to noma
* Loading models in Vulkan Tutorial https://vulkan-tutorial.com/Loading_models
* .obj viewer with Metal https://github.com/middlefeng/NuoModelViewer/tree/master
* Vulkan Cookbook https://github.com/PacktPublishing/Vulkan-Cookbook
* cudabox: CUDA Solid Voxelizer Engine https://github.com/gaspardzoss/cudavox
* Drake: A planning, control, and analysis toolbox for nonlinear dynamical systems https://github.com/RobotLocomotion/drake
* VFPR - a Vulkan Forward Plus Renderer : https://github.com/WindyDarian/Vulkan-Forward-Plus-Renderer
* glslViewer: https://github.com/patriciogonzalezvivo/glslViewer
* Lighthouse2: https://github.com/jbikker/lighthouse2
* rayrender (an open source R package for raytracing scenes created in R): https://github.com/tylermorganwall/rayrender
* liblava - A modern C++ and easy-to-use framework for the Vulkan API. [MIT]: https://github.com/liblava/liblava
* rtxON - Simple Vulkan raytracing tutorials https://github.com/iOrange/rtxON
* metal-ray-tracer - Writing ray-tracer using Metal Performance Shaders https://github.com/sergeyreznik/metal-ray-tracer https://sergeyreznik.github.io/metal-ray-tracer/index.html
* Supernova Engine - 2D and 3D projects with Lua or C++ in data oriented design: https://github.com/supernovaengine/supernova
* AGE (Arc Game Engine) - An open-source engine for building 2D & 3D real-time rendering and interactive contents: https://github.com/MohitSethi99/ArcGameEngine
* [Wicked Engine<img src="https://github.com/turanszkij/WickedEngine/blob/master/Content/logo_small.png" width="28px" align="center"/>](https://github.com/turanszkij/WickedEngine) - 3D engine with modern graphics
* [Lumina Game Engine](https://github.com/MrDrElliot/LuminaEngine) - A modern, high-performance game engine built with Vulkan
* Your project here! (Please send a PR)
### Old version(v0.9.x)
* bullet3 https://github.com/erwincoumans/bullet3
* pbrt-v2 https://github.com/mmp/pbrt-v2
* OpenGL game engine development http://swarminglogic.com/jotting/2013_10_gamedev01
* mallie https://lighttransport.github.io/mallie
* IBLBaker (Image Based Lighting Baker). http://www.derkreature.com/iblbaker/
* Stanford CS148 http://web.stanford.edu/class/cs148/assignments/assignment3.pdf
* Awesome Bump http://awesomebump.besaba.com/about/
* sdlgl3-wavefront OpenGL .obj viewer https://github.com/chrisliebert/sdlgl3-wavefront
* pbrt-v3 https://github.com/mmp/pbrt-v3
* cocos2d-x https://github.com/cocos2d/cocos2d-x/
* Android Vulkan demo https://github.com/SaschaWillems/Vulkan
* voxelizer https://github.com/karimnaaji/voxelizer
* Probulator https://github.com/kayru/Probulator
* OptiX Prime baking https://github.com/nvpro-samples/optix_prime_baking
* FireRays SDK https://github.com/GPUOpen-LibrariesAndSDKs/FireRays_SDK
* parg, tiny C library of various graphics utilities and GL demos https://github.com/prideout/parg
* Opengl unit of ChronoEngine https://github.com/projectchrono/chrono-opengl
* Point Based Global Illumination on modern GPU https://pbgi.wordpress.com/code-source/
* Fast OBJ file importing and parsing in CUDA http://researchonline.jcu.edu.au/42515/1/2015.CVM.OBJCUDA.pdf
* Sorted Shading for Uni-Directional Pathtracing by Joshua Bainbridge https://nccastaff.bournemouth.ac.uk/jmacey/MastersProjects/MSc15/02Josh/joshua_bainbridge_thesis.pdf
* GeeXLab http://www.geeks3d.com/hacklab/20160531/geexlab-0-12-0-0-released-for-windows/
## Features
* Groups (multiple group names are parsed)
* Vertex
* Vertex color(as an extension: https://blender.stackexchange.com/questions/31997/how-can-i-get-vertex-painted-obj-files-to-import-into-blender)
* Texcoord
* Normal
* Crease tag (`t`). This is OpenSubdiv specific (not in the Wavefront .obj specification)
* Callback API for custom loading.
* Double precision support (for HPC applications).
* Smoothing group
* Python binding : See `python` folder.
* A precompiled binary (manylinux1-x86_64 only) is hosted on PyPI: https://pypi.org/project/tinyobjloader/
### Primitives
* [x] face(`f`)
* [x] lines(`l`)
* [ ] points(`p`)
* [ ] curve
* [ ] 2D curve
* [ ] surface
* [ ] Free form curve/surfaces
### Material
* PBR material extension for .MTL. Please see [pbr-mtl.md](pbr-mtl.md) for details.
* Texture options
* Unknown material attributes are returned as a key-value map (values are strings).
## TODO
* [ ] Fix obj_sticker example.
* [ ] More unit test codes.
## License
TinyObjLoader is licensed under MIT license.
### Third party licenses.
* pybind11 : BSD-style license.
* mapbox earcut.hpp: ISC License.
## Usage
### Installation
One option is to simply copy the header file into your project and to make sure that `TINYOBJLOADER_IMPLEMENTATION` is defined exactly once.
### Building tinyobjloader - Using vcpkg (not recommended)
Although it is not the recommended way, you can download and install tinyobjloader using the [vcpkg](https://github.com/Microsoft/vcpkg) dependency manager:
```
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install tinyobjloader
```
The tinyobjloader port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository.
### Data format
`attrib_t` contains single, linear arrays of vertex data (positions, normals, and texcoords).
```
attrib_t::vertices => 3 floats per vertex
v[0] v[1] v[2] v[3] v[n-1]
+-----------+-----------+-----------+-----------+ +-----------+
| x | y | z | x | y | z | x | y | z | x | y | z | .... | x | y | z |
+-----------+-----------+-----------+-----------+ +-----------+
attrib_t::normals => 3 floats per vertex
n[0] n[1] n[2] n[3] n[n-1]
+-----------+-----------+-----------+-----------+ +-----------+
| x | y | z | x | y | z | x | y | z | x | y | z | .... | x | y | z |
+-----------+-----------+-----------+-----------+ +-----------+
attrib_t::texcoords => 2 floats per vertex
t[0] t[1] t[2] t[3] t[n-1]
+-----------+-----------+-----------+-----------+ +-----------+
| u | v | u | v | u | v | u | v | .... | u | v |
+-----------+-----------+-----------+-----------+ +-----------+
attrib_t::colors => 3 floats per vertex(vertex color. optional)
c[0] c[1] c[2] c[3] c[n-1]
+-----------+-----------+-----------+-----------+ +-----------+
| x | y | z | x | y | z | x | y | z | x | y | z | .... | x | y | z |
+-----------+-----------+-----------+-----------+ +-----------+
```
Each `shape_t::mesh_t` does not contain vertex data itself; instead, it contains array indices into `attrib_t`.
See `loader_example.cc` for more details.
```
mesh_t::indices => array of vertex indices.
+----+----+----+----+----+----+----+----+----+----+ +--------+
| i0 | i1 | i2 | i3 | i4 | i5 | i6 | i7 | i8 | i9 | ... | i(n-1) |
+----+----+----+----+----+----+----+----+----+----+ +--------+
Each index has an array index to attrib_t::vertices, attrib_t::normals and attrib_t::texcoords.
mesh_t::num_face_vertices => array of the number of vertices per face(e.g. 3 = triangle, 4 = quad , 5 or more = N-gons).
+---+---+---+ +---+
| 3 | 4 | 3 | ...... | 3 |
+---+---+---+ +---+
| | | |
| | | +-----------------------------------------+
| | | |
| | +------------------------------+ |
| | | |
| +------------------+ | |
| | | |
|/ |/ |/ |/
mesh_t::indices
| face[0] | face[1] | face[2] | | face[n-1] |
+----+----+----+----+----+----+----+----+----+----+ +--------+--------+--------+
| i0 | i1 | i2 | i3 | i4 | i5 | i6 | i7 | i8 | i9 | ... | i(n-3) | i(n-2) | i(n-1) |
+----+----+----+----+----+----+----+----+----+----+ +--------+--------+--------+
```
Note that when the `triangulate` flag is true in the `tinyobj::LoadObj()` arguments, `num_face_vertices` is filled entirely with 3 (triangles).
### float data type
TinyObjLoader now uses `real_t` as its floating-point data type.
The default is `float` (32-bit).
You can enable `double` (64-bit) precision with the `TINYOBJLOADER_USE_DOUBLE` define.
### Robust triangulation
When triangulation is enabled (the default),
TinyObjLoader triangulates polygons (faces with 4 or more vertices).
The built-in triangulation code may not work well for some polygon shapes.
You can define `TINYOBJLOADER_USE_MAPBOX_EARCUT` for robust triangulation using `mapbox/earcut.hpp`.
This requires a C++11 compiler, and you need to copy `mapbox/earcut.hpp` into your project.
If your project already includes its own `mapbox/earcut.hpp`, you can define `TINYOBJLOADER_DONOT_INCLUDE_MAPBOX_EARCUT` so that `mapbox/earcut.hpp` is not included from `tiny_obj_loader.h`.
#### Example code (Deprecated API)
```c++
#define TINYOBJLOADER_IMPLEMENTATION // define this in only *one* .cc
// Optional. define TINYOBJLOADER_USE_MAPBOX_EARCUT gives robust triangulation. Requires C++11
//#define TINYOBJLOADER_USE_MAPBOX_EARCUT
#include "tiny_obj_loader.h"
std::string inputfile = "cornell_box.obj";
tinyobj::attrib_t attrib;
std::vector<tinyobj::shape_t> shapes;
std::vector<tinyobj::material_t> materials;
std::string warn;
std::string err;
bool ret = tinyobj::LoadObj(&attrib, &shapes, &materials, &warn, &err, inputfile.c_str());
if (!warn.empty()) {
std::cout << warn << std::endl;
}
if (!err.empty()) {
std::cerr << err << std::endl;
}
if (!ret) {
exit(1);
}
// Loop over shapes
for (size_t s = 0; s < shapes.size(); s++) {
// Loop over faces(polygon)
size_t index_offset = 0;
for (size_t f = 0; f < shapes[s].mesh.num_face_vertices.size(); f++) {
size_t fv = size_t(shapes[s].mesh.num_face_vertices[f]);
// Loop over vertices in the face.
for (size_t v = 0; v < fv; v++) {
// access to vertex
tinyobj::index_t idx = shapes[s].mesh.indices[index_offset + v];
tinyobj::real_t vx = attrib.vertices[3*size_t(idx.vertex_index)+0];
tinyobj::real_t vy = attrib.vertices[3*size_t(idx.vertex_index)+1];
tinyobj::real_t vz = attrib.vertices[3*size_t(idx.vertex_index)+2];
// Check if `normal_index` is zero or positive. negative = no normal data
if (idx.normal_index >= 0) {
tinyobj::real_t nx = attrib.normals[3*size_t(idx.normal_index)+0];
tinyobj::real_t ny = attrib.normals[3*size_t(idx.normal_index)+1];
tinyobj::real_t nz = attrib.normals[3*size_t(idx.normal_index)+2];
}
// Check if `texcoord_index` is zero or positive. negative = no texcoord data
if (idx.texcoord_index >= 0) {
tinyobj::real_t tx = attrib.texcoords[2*size_t(idx.texcoord_index)+0];
tinyobj::real_t ty = attrib.texcoords[2*size_t(idx.texcoord_index)+1];
}
// Optional: vertex colors
// tinyobj::real_t red = attrib.colors[3*size_t(idx.vertex_index)+0];
// tinyobj::real_t green = attrib.colors[3*size_t(idx.vertex_index)+1];
// tinyobj::real_t blue = attrib.colors[3*size_t(idx.vertex_index)+2];
}
index_offset += fv;
// per-face material
shapes[s].mesh.material_ids[f];
}
}
```
#### Example code (New Object Oriented API)
```c++
#define TINYOBJLOADER_IMPLEMENTATION // define this in only *one* .cc
// Optional. define TINYOBJLOADER_USE_MAPBOX_EARCUT gives robust triangulation. Requires C++11
//#define TINYOBJLOADER_USE_MAPBOX_EARCUT
#include "tiny_obj_loader.h"
std::string inputfile = "cornell_box.obj";
tinyobj::ObjReaderConfig reader_config;
reader_config.mtl_search_path = "./"; // Path to material files
tinyobj::ObjReader reader;
if (!reader.ParseFromFile(inputfile, reader_config)) {
if (!reader.Error().empty()) {
std::cerr << "TinyObjReader: " << reader.Error();
}
exit(1);
}
if (!reader.Warning().empty()) {
std::cout << "TinyObjReader: " << reader.Warning();
}
auto& attrib = reader.GetAttrib();
auto& shapes = reader.GetShapes();
auto& materials = reader.GetMaterials();
// Loop over shapes
for (size_t s = 0; s < shapes.size(); s++) {
// Loop over faces(polygon)
size_t index_offset = 0;
for (size_t f = 0; f < shapes[s].mesh.num_face_vertices.size(); f++) {
size_t fv = size_t(shapes[s].mesh.num_face_vertices[f]);
// Loop over vertices in the face.
for (size_t v = 0; v < fv; v++) {
// access to vertex
tinyobj::index_t idx = shapes[s].mesh.indices[index_offset + v];
tinyobj::real_t vx = attrib.vertices[3*size_t(idx.vertex_index)+0];
tinyobj::real_t vy = attrib.vertices[3*size_t(idx.vertex_index)+1];
tinyobj::real_t vz = attrib.vertices[3*size_t(idx.vertex_index)+2];
// Check if `normal_index` is zero or positive. negative = no normal data
if (idx.normal_index >= 0) {
tinyobj::real_t nx = attrib.normals[3*size_t(idx.normal_index)+0];
tinyobj::real_t ny = attrib.normals[3*size_t(idx.normal_index)+1];
tinyobj::real_t nz = attrib.normals[3*size_t(idx.normal_index)+2];
}
// Check if `texcoord_index` is zero or positive. negative = no texcoord data
if (idx.texcoord_index >= 0) {
tinyobj::real_t tx = attrib.texcoords[2*size_t(idx.texcoord_index)+0];
tinyobj::real_t ty = attrib.texcoords[2*size_t(idx.texcoord_index)+1];
}
// Optional: vertex colors
// tinyobj::real_t red = attrib.colors[3*size_t(idx.vertex_index)+0];
// tinyobj::real_t green = attrib.colors[3*size_t(idx.vertex_index)+1];
// tinyobj::real_t blue = attrib.colors[3*size_t(idx.vertex_index)+2];
}
index_offset += fv;
// per-face material
shapes[s].mesh.material_ids[f];
}
}
```
## Optimized loader
An optimized multi-threaded .obj loader is available in the `experimental/` directory.
If you need maximum performance when loading .obj data, this optimized loader will fit your purpose.
Note that the optimized loader uses C++11 threads and does fewer error checks, but it should work with most .obj data.
Here are some benchmark results. Times were measured on a MacBook 12 (Early 2016, Core m5 1.2GHz).
* Rungholt scene (6M triangles)
* old version (v0.9.x): 15500 msecs.
* baseline (v1.0.x): 6800 msecs (2.3x faster than the old version)
* optimised: 1500 msecs (10x faster than the old version, 4.5x faster than baseline)
## Python binding
```
$ python -m pip install tinyobjloader
```
See [python/sample.py](python/sample.py) for example use of Python binding of tinyobjloader.
### CI + PyPI upload
cibuildwheel + twine upload for each git tagging event is handled in GitHub Actions and Cirrus CI (arm builds).
#### How to bump version(For developer)
* Apply `black` to Python files (`python/sample.py`)
* Bump the version in CMakeLists.txt
* Commit and push to `release`. Confirm the C.I. build is OK.
* Create a tag starting with `v` (e.g. `v2.1.0`)
* `git push --tags`
* Versioning is handled automatically in the Python binding through setuptools_scm.
* cibuildwheel + PyPI upload (through twine) is triggered automatically in GitHub Actions + Cirrus CI.
## Tests
Unit tests are provided in `tests` directory. See `tests/README.md` for details.
| text/markdown | Syoyo Fujita, Paul Melnikow | syoyo@lighttransport.com, github@paulmelnikow.com | null | null | MIT AND ISC | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Manufacturing",
"Topic :: Artistic Software",
"Topic :: Multimedia :: Graphics :: 3D Modeling",
"Topic :: Scientific/Engineering :: Visualization",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | https://github.com/curvewise-forks/tinyobjloader | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:29:35.323951 | tinymetabobjloader-2.0.0rc14.dev3.tar.gz | 1,013,390 | 36/39/9fbb0e3a28d798aa2d5c36b7ab16e1bb83cda53cfc53cb87255fa3de1e5b/tinymetabobjloader-2.0.0rc14.dev3.tar.gz | source | sdist | null | false | 0508077b7bd4dc5ab81bb3235f281776 | 82703b3846d97958297b4e252e4481dcd89731ab5de3057715cc290a6ce430fb | 36399fbb0e3a28d798aa2d5c36b7ab16e1bb83cda53cfc53cb87255fa3de1e5b | null | [
"LICENSE"
] | 1,808 |
2.4 | tox | 4.44.0 | tox is a generic virtualenv management and test command line tool | # tox
[](https://pypi.org/project/tox/)
[](https://pypi.org/project/tox/)
[](https://pepy.tech/project/tox)
[](https://tox.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/tox-dev/tox/actions/workflows/check.yaml)
`tox` aims to automate and standardize testing in Python. It is part of a larger vision of easing the packaging, testing
and release process of Python software (alongside [pytest](https://docs.pytest.org/en/latest/) and
[devpi](https://www.devpi.net)).
tox is a generic virtual environment management and test command line tool you can use for:
- checking your package builds and installs correctly under different environments (such as different Python
implementations, versions or installation dependencies),
- running your tests in each of the environments with the test tool of choice,
- acting as a frontend to continuous integration servers, greatly reducing boilerplate and merging CI and shell-based
testing.
Please read our [user guide](https://tox.wiki/en/latest/user_guide.html#basic-example) for an example and more detailed
introduction, or watch [this YouTube video](https://www.youtube.com/watch?v=SFqna5ilqig) that presents the problem space
and how tox solves it.
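To make this concrete, here is a minimal `tox.ini` sketch (the environment names, dependencies, and commands are illustrative):

```ini
[tox]
env_list = py311, py312

[testenv]
description = run the test suite
deps = pytest
commands = pytest {posargs}
```

Running `tox` builds and installs the package into each listed environment and runs `pytest` inside it; `tox -e py312` runs just one environment.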
| text/markdown | null | Bernát Gábor <gaborjbernat@gmail.com> | null | Anthony Sottile <asottile@umich.edu>, Bernát Gábor <gaborjbernat@gmail.com>, Jürgen Gmach <juergen.gmach@googlemail.com>, Oliver Bestwalter <oliver@bestwalter.de> | null | environments, isolated, testing, virtual | [
"Development Status :: 5 - Production/Stable",
"Framework :: tox",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Testing",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cachetools>=7.0.1",
"chardet>=5.2",
"colorama>=0.4.6",
"filelock>=3.24",
"packaging>=26",
"platformdirs>=4.9.1",
"pluggy>=1.6",
"pyproject-api>=1.10",
"tomli>=2.4; python_version < \"3.11\"",
"typing-extensions>=4.15; python_version < \"3.11\"",
"virtualenv>=20.36.1",
"argcomplete>=3.6.3; extra == \"completion\""
] | [] | [] | [] | [
"Documentation, https://tox.wiki",
"Homepage, http://tox.readthedocs.org",
"Release Notes, https://tox.wiki/en/latest/changelog.html",
"Source, https://github.com/tox-dev/tox",
"Tracker, https://github.com/tox-dev/tox/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:29:15.879484 | tox-4.44.0.tar.gz | 243,605 | 0b/73/dadb7954bdb3f67662322faaef9fe5ede418527d8cff0c57fa368c558f37/tox-4.44.0.tar.gz | source | sdist | null | false | fe839eed5d370100c8846df47538fd16 | 0c911cbc448a2ac5dd7cbb6be2f9ffa26d0a10405982f9efea654803b23cec77 | 0b73dadb7954bdb3f67662322faaef9fe5ede418527d8cff0c57fa368c558f37 | MIT | [
"LICENSE"
] | 207,238 |
2.4 | redsun | 0.6.2 | Event-driven data acquisition software for scientific applications. | [](https://pypi.org/project/redsun)
[](https://pypi.org/project/redsun)
[](https://codecov.io/gh/redsun-acquisition/redsun)
[](https://github.com/astral-sh/ruff)
[](https://mypy-lang.org/)
[](https://opensource.org/licenses/Apache-2.0)
# `redsun`
A component-based, customizable application builder for scientific hardware orchestration, based on the [Bluesky] framework.
> [!WARNING]
> This project is still in alpha stage and very unstable. Use at your own risk.
See the [documentation] for more information.
[bluesky]: https://blueskyproject.io/bluesky/main/index.html
[documentation]: https://redsun-acquisition.github.io/redsun/main/
| text/markdown | null | Jacopo Abramo <jacopo.abramo@gmail.com> | null | Jacopo Abramo <jacopo.abramo@gmail.com> | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Scientific/Engineering",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"platformdirs>=4.3.8",
"sunflare>=0.10.1",
"sunflare[pyqt]; extra == \"pyqt\"",
"sunflare[pyside]; extra == \"pyside\""
] | [] | [] | [] | [
"bugs, https://github.com/redsun-acquisition/redsun/issues",
"changelog, https://github.com/redsun-acquisition/redsun/blob/master/changelog.md",
"homepage, https://github.com/redsun-acquisition/redsun"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:29:06.283451 | redsun-0.6.2.tar.gz | 150,215 | 63/8d/4234e8e3cef40314c6f314434d9e1dcb16bfa5f7b6ecd8b363ee412209ea/redsun-0.6.2.tar.gz | source | sdist | null | false | f47c69e83ef7a3cca5d942e6946566ac | c654264094c07bc28dd4cf007ab56edfca56de351d59c7d332ee4ca36f405e34 | 638d4234e8e3cef40314c6f314434d9e1dcb16bfa5f7b6ecd8b363ee412209ea | null | [
"LICENSE"
] | 553 |
2.4 | atk-cli | 0.0.4 | AI Toolkit - Manage AI development tools through a git-backed, declarative manifest | <p align="center">
<img src="assets/logo.png" alt="ATK Logo" width="280px">
</p>
# ATK — AI Tool Kit for Developers
ATK is a developer-side **toolchain and service manager for AI system development**.
It helps you install, run, update, version, and reproduce the growing set of local tools modern AI-assisted development depends on — without Docker sprawl, shell scripts, or "how did I install this again?" moments.
## The problem ATK solves
ATK is built for people who **develop with AI**, not only people who develop AI.
If you build AI systems locally, your setup probably looks like this:
* an MCP server installed from a Git repo
* a tracing or observability tool running in Docker
* a vector database started with a long-forgotten `docker run`
* a TTS or inference service installed via a binary
* CLI tools installed via `pip`, `npm`, or Homebrew
* secrets scattered across `.env` files
It works.
Until you:
* switch machines
* break something
* want to roll back
* onboard someone else
* come back after two months and don’t remember what’s running
ATK exists because this setup is **real**, fragile, constantly changing — and painful.
## Who ATK is for
ATK is for developers who:
* rely on coding agents (Claude Code, Codex, Augment Code, etc.)
* use MCP servers (local and remote)
* run local services like memory, observability, or vector stores
* care about owning their data and controlling their setup
* want identical setups across machines and tools
ATK is not limited to people building AI models. It is for people **building software with AI systems in the loop**.
## What ATK is (and is not)
### ATK **is**
* a **toolchain and service manager** for developers
* focused on **local, long-lived AI tooling**
* **git-backed and reproducible**
* **CLI-first and automation-friendly**
* designed to be driven by humans *and* coding agents
### ATK is **not**
* an environment manager (Nix, Conda, Devbox)
* infrastructure-as-code (Terraform, Ansible)
* project-scoped
* a production deployment system
If you’re configuring servers, ATK is the wrong tool.
If you’re keeping your **AI dev setup sane**, it’s the right one.
## Mental model
> Think of ATK as **a control plane for your local AI toolchain**.
>
> It sits above package managers, Docker, and agent configs, and keeps everything in sync.
> Think of ATK as **Homebrew + docker-compose + git history** —
> but for AI developer tooling.
* Tools are **plugins**
* Each plugin has a **lifecycle** (install, start, stop, logs, status)
* Everything lives under `~/.atk`
* Every change is **versioned**
* Your setup can be cloned, audited, and rolled back
## A tiny example
<details>
<summary><strong>Install uv</strong></summary>
**Install uv** (the modern Python package manager):
- **macOS:** `brew install uv`
- **Windows/Linux/Other:** [Official Install Guide](https://docs.astral.sh/uv/getting-started/installation/)
</details>
```bash
# install ATK
uv tool install atk-cli # recommended
# or: pip install atk-cli
# initialize ATK Home (defaults to ~/.atk)
atk init
# add a plugin (from the registry)
atk add openmemory
# check status
atk status
# generate MCP config JSON for your agent
atk mcp openmemory
```
Your entire setup now lives in `~/.atk/` — a git repository.
Push it. Clone it on another machine. Run `atk install --all`.
## Reproducibility, updates, and drift
AI tooling is not static. MCPs, agents, and local services evolve constantly.
ATK treats **updates and drift as first-class concerns**, not afterthoughts.
ATK environments are fully reproducible:
* plugins are validated against a **versioned schema**
* additive schema changes are backward-compatible
* plugin versions are **pinned** in a manifest
* secrets live in isolated, gitignored `.env` files
* the entire toolkit directory is **git-backed**
Clone the repo. Run `atk sync`. You get the same toolchain — including tool versions, MCPs, and agent-facing configuration.
## ATK plugins and registry
ATK is built around **plugins**.
A plugin describes how to install, configure, run, update, and integrate a tool or service — including MCPs, local services, CLIs, or agent-facing components.
ATK supports **three ways** to work with plugins:
### 1. Official ATK Registry (vetted plugins)
ATK maintains a growing **registry of vetted plugins** for common and useful tools in AI-assisted development.
Install by name:
```bash
atk add openmemory
atk add langfuse
```
Examples include:
* popular MCPs (e.g. GitHub, Playwright, design tools)
* local AI infrastructure (memory systems, observability, vector stores)
* tools like OpenMemory, Langfuse, and similar services
Registry plugins are:
* reviewed and schema-validated
* versioned and pinned
* safe to install and update
Think of this as the "known good" layer.
### 2. Git repository plugins (distribution channel)
Any Git repository can become an ATK plugin.
If you are building a tool, MCP, or service that others may want to use, you can add a `.atk` definition to your repository.
Users can then add it directly from the repository URL (ATK looks for a `.atk/` directory at the repo root):
```bash
atk add github.com/your-org/your-tool
```
ATK will:
* validate the plugin against the schema
* pin it to a specific commit hash in the manifest
* manage its lifecycle like any other plugin
(Under the hood, ATK uses sparse checkout to fetch only the `.atk/` directory.)
This turns ATK into a **distribution channel** for AI tooling — without a centralized gatekeeper.
### 3. Local plugins (personal or internal tooling)
You can also define plugins locally for your own use.
These plugins:
* live in your `~/.atk` directory
* are fully versioned and git-backed
* use the same schema and validation
This is ideal for:
* personal scripts and services
* internal tools
* experiments you don’t want to publish
---
ATK lives **above** package managers, Docker, and agent configs.
It doesn’t replace them — it orchestrates them.
## Unified lifecycle
ATK gives every tool the same lifecycle, regardless of how it is installed.
```bash
atk start openmemory
atk stop openmemory
atk restart openmemory
atk status
atk logs openmemory
```
This works whether the tool is:
* a Docker service
* a Python CLI
* a Node binary
* a custom shell-based MCP server
## Design principles
| Principle | Meaning |
| ----------- | ----------------------------------------------------- |
| Declarative | The manifest describes desired state; ATK enforces it |
| Idempotent | Running the same command twice yields the same result |
| Git-native | Every mutation is a commit; rollback = `git revert` |
| Transparent | Human-readable YAML; no hidden state |
| AI-first | CLI-driven, scriptable, agent-friendly |
| Focused | Manages tools, doesn’t build them |
## Why ATK exists
ATK was built to solve a real, personal problem: keeping a complex AI developer setup understandable, reproducible, and reversible.
If you’re working with:
* agent frameworks
* MCP servers
* observability and tracing
* local inference
* vector databases
* hybrid stacks of Python, Node, Docker, and binaries
ATK is for you.
## What ATK is evolving into
ATK is evolving into a **control plane for AI-assisted development**.
Planned directions include:
* a full **ATK plugin registry** as a discovery and distribution layer
* proactive configuration of coding agents (e.g. `atk mcp setup claude-code openmemory`)
* automatic binding between MCPs, local services, and agents
* managing `AGENT.md`, `CLAUDE.md`, and similar files centrally
* keeping multiple coding agents in sync to reduce vendor lock-in
The long-term goal:
> **ATK becomes the last MCP you ever need to install.**
>
> Through the ATK MCP, coding agents can install, update, configure, start, stop, and compose all other tools.
Switching agents should feel trivial, because your setup — tools, memory, rules, MCPs — stays identical.
## Installation
```bash
# Recommended
uv tool install atk-cli
# Alternative
pip install atk-cli
```
ATK is distributed via **PyPI** and installs as a single self-contained CLI.
## Roadmap (high level)
* Core CLI and lifecycle management — **done**
* Plugin schema and validation — **done**
* Registry and git-based distribution — **in progress**
* Agent configuration and MCP binding — **planned**
* ATK MCP (self-hosting control plane) — **planned**
## Status
ATK is under active development.
Expect rough edges, fast iteration, and opinionated choices.
If this problem resonates with you, try it — and break it.
| text/markdown | null | "Oleksandr (Sasha) Antoshchenko" <sasha@svtoo.com> | null | null | Apache-2.0 | agent, ai, cli, devtools, mcp, tools | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.0",
"python-dotenv>=1.2.1",
"pyyaml>=6.0",
"rich>=13.0",
"typer>=0.12"
] | [] | [] | [] | [
"Homepage, https://github.com/Svtoo/atk",
"Repository, https://github.com/Svtoo/atk",
"Issues, https://github.com/Svtoo/atk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:29:01.495728 | atk_cli-0.0.4.tar.gz | 256,602 | ad/d2/4d1668994af8cf1ddb23a6b8668f9dd7c2b4ad30678ef34dde5198f0584b/atk_cli-0.0.4.tar.gz | source | sdist | null | false | bc97bd3928a8feb806dbe4c2ec2a8bc8 | 7c3f53d3b439e8792929dccc60fa01531c7ef1df5a35f3d3f239a580fbe21d4f | add24d1668994af8cf1ddb23a6b8668f9dd7c2b4ad30678ef34dde5198f0584b | null | [
"LICENSE"
] | 205 |
2.4 | lumefuse-sdk | 1.0.0 | LumeFuse SDK - Atomic Veracity Protocol for Born-Signed Data | # LumeFuse Python SDK
The official Python SDK for [LumeFuse](https://lumefuse.io) - Atomic Veracity Protocol for Born-Signed Data.
## Installation
```bash
pip install lumefuse-sdk
```
## Quick Start
```python
from lumefuse import LumeFuse
# Initialize the client
lf = LumeFuse(api_key="lf_your_api_key")
# Open a data stream (The "Latch")
stream = lf.open_stream("medical_lab_results")
# Every data point is automatically:
# 1. Hashed (SHA-256)
# 2. Linked to previous packet (Recursive DNA)
# 3. Anchored to BSV ledger
stream.write({"patient_id": "P-001", "glucose": 95, "timestamp": "2026-02-20T14:30:00Z"})
stream.write({"patient_id": "P-001", "glucose": 102, "timestamp": "2026-02-20T15:30:00Z"})
# Close stream and get the Merkle root
result = stream.close()
print(f"Chain Root: {result.merkle_root}")
print(f"Total Packets: {result.total_packets}")
print(f"BSV TX: {result.anchor_txid}")
```
## Features
### Recursive DNA Binding (The Cryptographic Heartbeat)
Every Bit-Packet contains the DNA of the previous packet:
```
H_N = SHA256(Data_N + H_{N-1})
```
This creates an unbreakable chain where altering any packet breaks all subsequent packets.
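The binding can be sketched with the standard library alone (illustrative only — the serialization and genesis value here are assumptions, not the SDK's wire format):

```python
import hashlib
import json

def chain_hashes(packets, genesis="00" * 32):
    """H_N = SHA256(Data_N + H_{N-1}): each digest folds in its predecessor."""
    h_prev = genesis
    digests = []
    for data in packets:
        payload = json.dumps(data, sort_keys=True).encode() + h_prev.encode()
        h_prev = hashlib.sha256(payload).hexdigest()
        digests.append(h_prev)
    return digests

a = chain_hashes([{"glucose": 95}, {"glucose": 102}])
b = chain_hashes([{"glucose": 96}, {"glucose": 102}])  # tamper with packet 1
print(a[1] != b[1])  # True — altering packet 1 changes every later digest
```

Even though packet 2 is identical in both runs, its digest differs, because the tampered packet 1 digest is part of its input.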
### Quantum Resistance
The recursive hashing model provides practical resistance against quantum computing attacks. Even if a single hash is compromised, the entire chain cannot be altered without breaking the recursive DNA sequence.
### Self-Healing Data
The Sentinel system automatically detects and heals data corruption:
```python
# Verify chain integrity
status = lf.verify_chain("medical_lab_results")
if status.chain_intact:
print("Data integrity verified")
else:
print(f"Break detected at sequence {status.break_sequence}")
if status.healed:
print("Data automatically healed!")
```
## API Reference
### LumeFuse Client
```python
from lumefuse import LumeFuse
# Initialize with API key
lf = LumeFuse(
api_key="lf_your_api_key",
base_url="https://api.lumefuse.io/v1", # Optional
timeout=30.0, # Optional
auto_heal=True # Optional - auto-heal on chain breaks
)
```
### Data Streams
```python
# Open a stream
stream = lf.open_stream("source_id")
# Write data (any JSON-serializable object)
stream.write({"key": "value"})
stream.write(["array", "of", "items"])
stream.write("plain string")
# Close and get result
result = stream.close()
```
### Verification
```python
# Verify single data item
result = lf.verify({"key": "value"})
print(f"Verified: {result.verified}")
# Verify entire chain
status = lf.verify_chain("source_id")
print(f"Chain intact: {status.chain_intact}")
print(f"Quantum resistant: {status.quantum_resistant}")
```
### Sentinel (Self-Healing)
```python
# Get sentinel status
status = lf.get_sentinel_status()
# Manually trigger audit
audit = lf.trigger_audit()
# Manually heal a break
result = lf.heal("source_id", break_sequence=5)
# Get healing history
history = lf.get_healing_history("source_id")
```
### Credits
```python
# Get credit balance
balance = lf.get_credits()
print(f"Packets available: {balance.packets_available}")
print(f"Satoshi balance: {balance.satoshi_balance}")
```
## Context Manager
```python
with LumeFuse(api_key="lf_your_key") as lf:
stream = lf.open_stream("data")
stream.write({"event": "action"})
result = stream.close()
```
## Error Handling
```python
from lumefuse import LumeFuse
from lumefuse.exceptions import (
AuthenticationError,
RateLimitError,
ChainIntegrityError,
InsufficientCreditsError
)
try:
lf = LumeFuse(api_key="lf_invalid")
except AuthenticationError:
print("Invalid API key")
try:
status = lf.verify_chain("source", auto_heal=False)
except ChainIntegrityError as e:
print(f"Chain broken at sequence {e.break_sequence}")
```
## Environment Variables
- `LUMEFUSE_API_KEY` - Your API key
- `LUMEFUSE_BASE_URL` - Custom API base URL (optional)
## Publishing to PyPI
```bash
# Install build tools
pip install build twine
# Build the package
python -m build
# Upload to PyPI
twine upload dist/*
```
## License
MIT License - see LICENSE file for details.
## Support
- Documentation: https://docs.lumefuse.io
- Email: support@lumefuse.io
- Enterprise: enterprise@lumefuse.io
| text/markdown | null | LumeFuse <sdk@lumefuse.io> | null | null | null | lumefuse, blockchain, bsv, data-integrity, verification, bit-packet, recursive-dna, quantum-resistant | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Security :: Cryptography",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.24.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://lumefuse.io",
"Documentation, https://docs.lumefuse.io",
"Repository, https://github.com/lumefuse/lumefuse-python",
"Issues, https://github.com/lumefuse/lumefuse-python/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T21:28:50.911726 | lumefuse_sdk-1.0.0.tar.gz | 13,669 | 46/fe/e5b9d627a96bc80359d615aa04f64ebc491ab43c0e4437c7688f16ab7b5b/lumefuse_sdk-1.0.0.tar.gz | source | sdist | null | false | c53d1c88e50fbfd6028441d4a6b6052a | 0d83fdc75b67e721166f973b9b6de6bf7d281ea1876d900fd9bb30cbeac22a74 | 46fee5b9d627a96bc80359d615aa04f64ebc491ab43c0e4437c7688f16ab7b5b | MIT | [] | 219 |
2.4 | pytest-codeblock | 0.5.5 | Pytest plugin to collect and test code blocks in reStructuredText and Markdown files. | ================
pytest-codeblock
================
.. External references
.. _reStructuredText: https://docutils.sourceforge.io/rst.html
.. _Markdown: https://daringfireball.net/projects/markdown/
.. _pytest: https://docs.pytest.org
.. _Django: https://www.djangoproject.com
.. _pip: https://pypi.org/project/pip/
.. _uv: https://pypi.org/project/uv/
.. _fake.py: https://github.com/barseghyanartur/fake.py
.. _boto3: https://github.com/boto/boto3
.. _moto: https://github.com/getmoto/moto
.. _openai: https://github.com/openai/openai-python
.. _Ollama: https://github.com/ollama/ollama
.. _tomli: https://pypi.org/project/tomli/
.. Internal references
.. _pytest-codeblock: https://github.com/barseghyanartur/pytest-codeblock/
.. _Read the Docs: http://pytest-codeblock.readthedocs.io/
.. _Examples: https://github.com/barseghyanartur/pytest-codeblock/tree/main/examples
.. _Customisation docs: https://pytest-codeblock.readthedocs.io/en/latest/customisation.html
.. _Contributor guidelines: https://pytest-codeblock.readthedocs.io/en/latest/contributor_guidelines.html
.. _reStructuredText docs: https://pytest-codeblock.readthedocs.io/en/latest/restructured_text.html
.. _Markdown docs: https://pytest-codeblock.readthedocs.io/en/latest/markdown.html
.. _llms.txt: https://barseghyanartur.github.io/pytest-codeblock/llms.txt
Test your documentation code blocks.
.. image:: https://img.shields.io/pypi/v/pytest-codeblock.svg
:target: https://pypi.python.org/pypi/pytest-codeblock
:alt: PyPI Version
.. image:: https://img.shields.io/pypi/pyversions/pytest-codeblock.svg
:target: https://pypi.python.org/pypi/pytest-codeblock/
:alt: Supported Python versions
.. image:: https://github.com/barseghyanartur/pytest-codeblock/actions/workflows/test.yml/badge.svg?branch=main
:target: https://github.com/barseghyanartur/pytest-codeblock/actions
:alt: Build Status
.. image:: https://readthedocs.org/projects/pytest-codeblock/badge/?version=latest
:target: http://pytest-codeblock.readthedocs.io
:alt: Documentation Status
.. image:: https://img.shields.io/badge/docs-llms.txt-blue
:target: http://pytest-codeblock.readthedocs.io/en/latest/llms.txt
:alt: llms.txt - documentation for LLMs
.. image:: https://img.shields.io/badge/license-MIT-blue.svg
:target: https://github.com/barseghyanartur/pytest-codeblock/#License
:alt: MIT
.. image:: https://coveralls.io/repos/github/barseghyanartur/pytest-codeblock/badge.svg?branch=main&service=github
:target: https://coveralls.io/github/barseghyanartur/pytest-codeblock?branch=main
:alt: Coverage
`pytest-codeblock`_ is a `Pytest`_ plugin that discovers Python code examples
in your `reStructuredText`_ and `Markdown`_ documentation files and runs them
as part of your test suite. This ensures your docs stay correct and up-to-date.
Features
========
- **reStructuredText and Markdown support**: Automatically find and test code
blocks in `reStructuredText`_ (``.rst``) and `Markdown`_ (``.md``) files.
Async code snippets are supported as well.
- **Grouping**: Split a single example across multiple code blocks;
the plugin concatenates them into one test.
- **Pytest markers support**: Add existing or custom `pytest`_ markers
to the code blocks and hook into the tests life-cycle using ``conftest.py``.
- **Pytest fixtures support**: Request existing or custom `pytest`_ fixtures
for the code blocks.
Prerequisites
=============
- Python 3.10+
- `pytest`_ is the only required dependency (on Python 3.11+; for Python 3.10
`tomli`_ is also required).
Documentation
=============
- Documentation is available on `Read the Docs`_.
- For `reStructuredText`_, see a dedicated `reStructuredText docs`_.
- For `Markdown`_, see a dedicated `Markdown docs`_.
- Both `reStructuredText docs`_ and `Markdown docs`_ have extensive
documentation on `pytest`_ markers and corresponding ``conftest.py`` hooks.
- For guidelines on contributing check the `Contributor guidelines`_.
Installation
============
Install with `pip`_:
.. code-block:: sh
pip install pytest-codeblock
Or install with `uv`_:
.. code-block:: sh
uv pip install pytest-codeblock
.. _configuration:
Configuration
=============
For most use cases, no configuration is needed.
By default, all code blocks with a name starting with ``test_`` will be
collected and executed as tests. This allows you to have both test and non-test
code blocks in your documentation, giving you flexibility in how you structure
your examples.
However, if you want to test all code blocks, you can
set ``test_nameless_codeblocks`` to ``true`` in your `pyproject.toml`:
*Filename: pyproject.toml*
.. code-block:: toml
[tool.pytest-codeblock]
test_nameless_codeblocks = true
If you still want to skip some code blocks, you can use built-in or custom
pytest markers.
See the dedicated `reStructuredText docs`_ and `Markdown docs`_ to learn more
about `pytestmark` directive.
Note that nameless code blocks have limitations when it comes to grouping.
----
By default, all `.rst` and `.md` files are picked up automatically.
However, if you need to add another file extension, or use another language
identifier for Python code blocks, you can configure that.
See the following example of `pyproject.toml` configuration:
*Filename: pyproject.toml*
.. code-block:: toml
[tool.pytest-codeblock]
rst_user_codeblocks = ["c_py"]
rst_user_extensions = [".rst.txt"]
md_user_codeblocks = ["c_py"]
md_user_extensions = [".md.txt"]
See `customisation docs`_ for more.
Usage
=====
reStructuredText usage
-----------------------
Any code directive, such as ``.. code-block:: python``, ``.. code:: python``,
or literal blocks with a preceding ``.. codeblock-name: <name>``, will be
collected and executed automatically by `pytest`_.
``code-block`` directive example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note:: Note that ``:name:`` value has a ``test_`` prefix.
*Filename: README.rst*
.. code-block:: rst
.. code-block:: python
:name: test_basic_example
import math
result = math.pow(3, 2)
assert result == 9
``literalinclude`` directive example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note:: Note that ``:name:`` value has a ``test_`` prefix.
*Filename: README.rst*
.. code-block:: rst
.. literalinclude:: examples/python/basic_example.py
:name: test_li_basic_example
See a dedicated `reStructuredText docs`_ for more.
Markdown usage
--------------
Any fenced code block with a recognized Python language tag (e.g., ``python``,
``py``) will be collected and executed automatically by `pytest`_.
.. note:: Note that ``name`` value has a ``test_`` prefix.
*Filename: README.md*
.. code-block:: markdown
```python name=test_basic_example
import math
result = math.pow(3, 2)
assert result == 9
```
See a dedicated `Markdown docs`_ for more.
Tests
=====
Run the tests with `pytest`_:
.. code-block:: sh
pytest
Troubleshooting
===============
If something doesn't work, try adding this to your pyproject.toml:
*Filename: pyproject.toml*
.. code-block:: text
[tool.pytest.ini_options]
testpaths = [
"**/*.rst",
"**/*.md",
]
Writing documentation
=====================
Keep the following hierarchy.
.. code-block:: text
=====
title
=====
header
======
sub-header
----------
sub-sub-header
~~~~~~~~~~~~~~
sub-sub-sub-header
^^^^^^^^^^^^^^^^^^
sub-sub-sub-sub-header
++++++++++++++++++++++
sub-sub-sub-sub-sub-header
**************************
License
=======
MIT
Support
=======
For security issues contact me at the e-mail given in the `Author`_ section.
For all other issues, go
to `GitHub <https://github.com/barseghyanartur/pytest-codeblock/issues>`_.
Author
======
Artur Barseghyan <artur.barseghyan@gmail.com>
| text/x-rst | null | Artur Barseghyan <artur.barseghyan@gmail.com> | null | Artur Barseghyan <artur.barseghyan@gmail.com> | null | pytest, plugin, documentation, code blocks, markdown, rst | [
"Framework :: Pytest",
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python",
"Topic :: Software Development :: Testing",
"Topic :: Software Development"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest",
"tomli; python_version < \"3.11\"",
"pytest-codeblock[build,dev,docs,test]; extra == \"all\"",
"detect-secrets; extra == \"dev\"",
"doc8; extra == \"dev\"",
"ipython; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pydoclint; extra == \"dev\"",
"ruff; extra == \"dev\"",
"twine; extra == \"dev\"",
"uv; extra == \"dev\"",
"django; extra == \"test\"",
"fake.py; extra == \"test\"",
"moto[s3]; extra == \"test\"",
"openai; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-django; extra == \"test\"",
"respx; extra == \"test\"",
"sphinx; extra == \"docs\"",
"sphinx-autobuild; extra == \"docs\"",
"sphinx-rtd-theme>=1.3.0; extra == \"docs\"",
"sphinx-no-pragma; extra == \"docs\"",
"sphinx-llms-txt-link; extra == \"docs\"",
"sphinx-source-tree; extra == \"docs\"",
"build; extra == \"build\"",
"twine; extra == \"build\"",
"wheel; extra == \"build\""
] | [] | [] | [] | [
"Homepage, https://github.com/barseghyanartur/pytest-codeblock/",
"Repository, https://github.com/barseghyanartur/pytest-codeblock/",
"Issues, https://github.com/barseghyanartur/pytest-codeblock/issues",
"Documentation, https://pytest-codeblock.readthedocs.io/",
"Changelog, https://pytest-codeblock.readthedocs.io/en/latest/changelog.html"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-20T21:28:14.461662 | pytest_codeblock-0.5.5.tar.gz | 30,719 | f4/b5/11fdf7a72e6eafa32538bd2e32c977f97c941945ee2e930534664be71b1f/pytest_codeblock-0.5.5.tar.gz | source | sdist | null | false | 5174ad8c9e21a77f8960a6fec7573d97 | e866f9449fdf204e66eae8816e7dbf19b8e4c97f3f5de30d9cef4322d3c79cbb | f4b511fdf7a72e6eafa32538bd2e32c977f97c941945ee2e930534664be71b1f | MIT | [
"LICENSE"
] | 289 |
2.1 | mps-sim-gpu | 1.0.1 | GPU-accelerated MPS quantum circuit simulator (CUDA / Apple MPS / CPU) | # mps_sim_gpu
GPU-accelerated Matrix Product State (MPS) quantum circuit simulator.
A drop-in extension of `mps_sim` that transparently accelerates the two
hottest operations — **SVD truncation** and **tensor contraction** (einsum)
— using whichever GPU backend is available on your machine.
---
## Supported backends
| Backend | Hardware | Library |
|---------|----------|---------|
| `cuda` | NVIDIA GPU | CuPy *(preferred)* or PyTorch |
| `mps` | Apple Silicon (M1/M2/M3…) | PyTorch |
| `cpu` | Any CPU | NumPy (original behaviour) |
Backend is **auto-detected** at runtime — no code changes needed when
moving between machines.
---
## Installation
```bash
# Base install (CPU fallback always available)
pip install mps_sim_gpu
# NVIDIA GPU support via CuPy (fastest on CUDA)
pip install "mps_sim_gpu[cuda]"
# Apple MPS or CUDA via PyTorch
pip install "mps_sim_gpu[torch]"
```
---
## Quick start
```python
from mps_gpu import GPUSimulator
from mps_sim.circuits import Circuit
# Build a 20-qubit GHZ circuit
circ = Circuit(20)
circ.h(0)
for i in range(19):
circ.cx(i, i + 1)
# Run — backend auto-selected (CUDA > MPS > CPU)
sim = GPUSimulator(chi=64)
state = sim.run(circ)
print(state)
# GPUMPS(n=20, chi=64, backend=cuda, ...)
print(state.expectation_pauli_z(0)) # → ~0.0 (GHZ is Z-symmetric)
```
### Force a specific backend
```python
sim = GPUSimulator(chi=128, backend="cuda") # NVIDIA GPU
sim = GPUSimulator(chi=128, backend="mps") # Apple Silicon
sim = GPUSimulator(chi=128, backend="cpu") # CPU / debug mode
```
---
## Benchmarking
```python
from mps_gpu import benchmark
results = benchmark(n_qubits=20, chi=64, depth=40)
# ============================================================
# MPS GPU Benchmark
# n=20 qubits | chi=64 | depth≈40 | 3 runs
# ============================================================
# cpu 4.821s ± 0.031s
# cuda 0.182s ± 0.008s
#
# Speedups vs CPU:
# cuda: 26.5×
# ============================================================
```
Typical speedups (random brickwork circuit, chi=64):
| Hardware | Speedup vs CPU |
|----------|---------------|
| NVIDIA A100 | 30–60× |
| NVIDIA RTX 4090 | 20–40× |
| Apple M2 Pro | 4–10× |
| Apple M1 | 2–6× |
Speedup scales with bond dimension χ — larger χ means more benefit.
---
## Architecture
```
mps_gpu/
├── __init__.py — public API
├── backend.py — detects CuPy / PyTorch-MPS / PyTorch-CUDA / NumPy
├── mps.py — GPUMPS: backend-dispatched tensor ops
├── simulator.py — GPUSimulator: circuit runner
└── benchmark.py — CPU vs GPU timing comparison
```
### How it works
All MPS tensor operations flow through a **backend object** that exposes
a NumPy-compatible API (`einsum`, `svd`, `qr`, …). The three hot-paths
that benefit most from GPU acceleration are:
1. **SVD** (`apply_svd_truncation`) — called once per two-qubit gate.
Dominates runtime for large χ. CuPy `cupy.linalg.svd` uses cuSolver
under the hood and is typically 20–50× faster than NumPy for χ ≥ 64.
2. **Two-site einsum** (`'lir,rjs->lijs'`, `'abij,lijs->labs'`) — GPU
einsum via CuPy or `torch.einsum`.
3. **QR** (`canonicalize`) — called during expectation-value evaluation.
GPU QR gives 5–15× speedup for large χ.
Tensors live in host (CPU) memory between gate applications and are
**transferred to device only for the duration of each heavy op**. This
keeps the memory model simple and avoids issues with Python's GC on device
arrays. For workloads with very large χ (≥ 512), enabling `pin_tensors=True`
keeps tensors device-resident and reduces transfer overhead further
(requires CuPy).
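The SVD hot path above can be sketched in plain NumPy (shapes and names here are illustrative, not the library's internals): split a two-site tensor back into two MPS tensors, keeping at most χ singular values.

```python
import numpy as np

def truncated_svd(theta, chi):
    """Split a two-site tensor theta[l, i, j, r] into two MPS tensors,
    keeping at most chi singular values (the bond-dimension cap)."""
    l, di, dj, r = theta.shape
    m = theta.reshape(l * di, dj * r)
    u, s, vh = np.linalg.svd(m, full_matrices=False)
    keep = min(chi, len(s))
    u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
    s /= np.linalg.norm(s)  # renormalize the state after truncation
    left = u.reshape(l, di, keep)
    right = (np.diag(s) @ vh).reshape(keep, dj, r)
    return left, right

theta = np.random.rand(4, 2, 2, 4)
left, right = truncated_svd(theta, chi=8)
print(left.shape, right.shape)  # (4, 2, 8) (8, 2, 4)
```

On GPU backends, the `np.linalg.svd` call is replaced by the backend's equivalent (e.g. `cupy.linalg.svd` or `torch.linalg.svd`), which is where the speedup comes from.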
---
## Compatibility
`GPUMPS` and `GPUSimulator` expose the same public API as the original
`MPS` and `MPSSimulator` from `mps_sim`, so they can be used as drop-in
replacements. The `mps_sim.extrapolation` module works unchanged with
`GPUSimulator`.
---
## Requirements
- Python ≥ 3.9
- `mps_sim >= 1.0.0`
- `numpy >= 1.22`
- For CUDA: `cupy-cuda11x` or `cupy-cuda12x` (match your CUDA version)
- For Apple MPS / CUDA via PyTorch: `torch >= 2.0`
| text/markdown | mps_sim contributors | null | null | null | MIT | quantum computing MPS tensor network GPU CUDA simulation | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Physics",
"Intended Audience :: Science/Research"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Source, https://github.com/your-org/mps_sim_gpu",
"Bug Tracker, https://github.com/your-org/mps_sim_gpu/issues"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-20T21:27:52.441649 | mps_sim_gpu-1.0.1.tar.gz | 14,731 | 6f/72/82c835cd345d4c373f7ec522aeed6ed38c37f74dd0ccbec91b3032375dfe/mps_sim_gpu-1.0.1.tar.gz | source | sdist | null | false | 135e7b0614c7a09a1dd73d1c6e5be8ec | 8424e6ad62d77ec27768e3de7ad66f1fbff111cc4f64fe329a0cf15488c6e7d5 | 6f7282c835cd345d4c373f7ec522aeed6ed38c37f74dd0ccbec91b3032375dfe | null | [] | 164 |
2.4 | chart-xkcd | 0.4.2 | Python API for generating XKCD-style charts | ## chart.xkcd
A Python + JavaScript library for creating xkcd-style charts.
See [this repository](https://github.com/timqian/chart.xkcd) for the original code.
### Setup
Install Python dependencies (requires Python 3.13+):
```
uv pip install -e ".[dev]"
```
Install JavaScript dependencies:
```
cd js && npm install && cd -
```
### Generating example data
The examples use data from [snailz](https://pypi.org/project/snailz/),
a synthetic data generator. The configuration is in `examples/snailz.json`.
To regenerate the SQLite database and CSV files:
```
task data
```
This runs `snailz` to create `data/snailz.db`, then runs each SQL file
in `examples/*.sql` against it to produce CSV files in `tmp/`.
### Building
Build the font data, JavaScript bundle, and Python package:
```
task build
```
This:
1. Encodes `assets/xkcd-script.ttf` as a base64 data URL in `js/src/utils/fontData.js`.
2. Bundles the JavaScript source with esbuild into `src/chart_xkcd/static/chart.xkcd.js`.
3. Builds the Python wheel and sdist with `python -m build`.
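Step 1 amounts to base64-encoding the font file into a data URL; a minimal sketch (the actual script is `bin/font_encode.py`, and the exact MIME type used there is an assumption):

```python
import base64

def font_to_data_url(ttf_bytes: bytes) -> str:
    # Encode raw TTF bytes as a base64 data URL, suitable for embedding
    # in a CSS @font-face src or a JavaScript string constant.
    encoded = base64.b64encode(ttf_bytes).decode("ascii")
    return f"data:font/ttf;base64,{encoded}"
```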
### Examples
#### Python command-line examples (`examples/*.py`)
Each chart type has a standalone script that reads a CSV file and
writes a static HTML page:
| Script | Chart type | Input CSV | Description |
|---|---|---|---|
| `bar.py` | Bar | `tmp/bar.csv` | Samples per person |
| `stacked_bar.py` | StackedBar | `tmp/stacked_bar.csv` | Samples by variety and grid |
| `line.py` | Line | `tmp/line.csv` | Samples collected per week |
| `scatter.py` | Scatter | `tmp/scatter.csv` | Snail mass vs diameter |
| `pie.py` | Pie | `tmp/pie.csv` | Samples by variety |
| `radar.py` | Radar | `tmp/radar.csv` | Samples by variety and grid |
Run them all at once with:
```
task ex_py
```
Or run one individually:
```
python examples/bar.py tmp/bar.csv tmp/bar.html
```
#### Marimo notebook (`examples/notebook.py`)
A marimo notebook that displays all six chart types as interactive
widgets. Each cell reads a CSV file from `tmp/` and calls `to_widget()`
to render the chart.
```
marimo run examples/notebook.py
```
#### Selection test notebook (`examples/test_selection.py`)
A marimo notebook demonstrating click, shift-click, and box-select
interactions. Each chart is wrapped with `mo.ui.anywidget()` so that
selection changes trigger reactive cell updates.
```
marimo run examples/test_selection.py
```
#### JavaScript examples (`js/examples/`)
A standalone HTML page (`example.html`) that renders all six chart
types using the JavaScript library directly. The data is loaded
dynamically from CSV files via `fetch()`. A symlink
`js/examples/tmp` points to the project-level `tmp/` directory.
To view the JavaScript examples with a dev server:
```
task ex_js
```
Then open the URL printed by the dev server in a browser.
### Project structure
```
assets/ xkcd-script.ttf font file
bin/ build scripts (font_encode.py)
examples/ Python examples, SQL queries, marimo notebooks
js/src/ JavaScript chart source
Bar.js, Line.js, ... chart classes
config.js shared constants
widget.js anywidget entry point
index.js standalone library entry point
components/Tooltip.js tooltip component
utils/ shared helpers (axes, labels, legend, font, filter)
src/chart_xkcd/ Python package
bar.py, line.py, ... chart classes
charts.py base classes and validation
widget.py anywidget adapter (ChartWidget, to_widget)
renderer.py HTML rendering (render, to_html)
config.py positionType constants
main.py CLI entry point
static/ bundled JS (built artifact)
```
| text/markdown | null | null | null | null | null | charts | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"anywidget>=0.9.0",
"marimo>=0.19.11"
] | [] | [] | [] | [
"Repository, https://github.com/gvwilson/chart.xkcd",
"Documentation, https://chartxkcd.readthedocs.io"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T21:27:47.323756 | chart_xkcd-0.4.2.tar.gz | 903,544 | 45/6a/52319cb2271611a8479be957bff7284223ddfcf8b9da0f4fbc9685fd0a69/chart_xkcd-0.4.2.tar.gz | source | sdist | null | false | 7147ac97ef2425dfe6f5866099545be1 | 14f4d6f3cc7efcc0177c3cd29edb312da37cfeb3db871aff06d70a143a324b1a | 456a52319cb2271611a8479be957bff7284223ddfcf8b9da0f4fbc9685fd0a69 | MIT | [
"LICENSE.md"
] | 209 |
2.4 | mallama | 0.2.1 | Browser UI for Ollama • Local LLM Interface • Web Chat Client for Local AI Models | # Ollama Web UI
A beautiful web interface for Ollama with conversation management and markdown support.
## Features
- 💬 Chat with Ollama models
- 📝 Markdown support with syntax highlighting
- 💾 Save and manage conversations
- ⚙️ Adjustable parameters (temperature, top-p, max tokens)
- 📎 File upload support
- 🎨 Beautiful glass-morphism UI
- ⌨️ Keyboard shortcuts (Ctrl+C to stop generation)
## Installation
### Via pip
```bash
pip install mallama
mallama --host 0.0.0.0 --port 5000
```
### Via AUR (Arch Linux)
```bash
yay -S mallama
# or
paru -S mallama
```
### Run as a service
```bash
systemctl --user enable mallama
systemctl --user start mallama
```
### From source
```bash
git clone https://github.com/mesut2ooo/mallama
cd mallama
pip install -e .
mallama
```
## Requirements
- Python 3.8+
- Ollama installed and running locally (http://localhost:11434)
## Usage
1. Make sure Ollama is running with at least one model pulled
2. Start the web UI: `mallama`
3. Open http://localhost:5000 in your browser
4. Select a model and start chatting!
## Configuration
The application stores conversations and uploads in `~/.mallama/`
## License
MIT
| text/markdown | Masoud Gholypour | Masoud Gholypour <masoudgholypour2000@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://github.com/mesut2ooo/mallama | null | >=3.8 | [] | [] | [] | [
"flask>=2.0.0",
"requests>=2.28.0",
"werkzeug>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/mesut2ooo/mallama",
"Repository, https://github.com/mesut2ooo/mallama.git",
"Issues, https://github.com/mesut2ooo/mallama/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T21:27:37.425192 | mallama-0.2.1.tar.gz | 92,686 | 13/df/d28a5c7fb018434b8937e10c8721a31b981d815760747f94e54d16e39d00/mallama-0.2.1.tar.gz | source | sdist | null | false | 97fd249c63e469c37b9636562b185905 | ba434d69d8e7f6fdf589dc8bf330df2c0bd43ad02eacc770877f3ffba8686593 | 13dfd28a5c7fb018434b8937e10c8721a31b981d815760747f94e54d16e39d00 | null | [
"LICENSE"
] | 207 |
2.4 | quillsql | 3.0.0 | Quill SDK for Python. | # Quill Python SDK
## Quickstart
First, install the quillsql package by running:
```bash
$ pip install quillsql
```
Then, add a `/quill` endpoint to your existing Python server. For example, if
you were running a FastAPI app, you would just add the endpoint like this:
```python
import os

from fastapi import Depends, FastAPI, Request
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from quillsql import Quill

app = FastAPI()
quill = Quill(
    private_key=os.getenv("QUILL_PRIVATE_KEY"),
    database_connection_string=os.getenv("POSTGRES_READ"),
    database_type="postgresql"
)
security = HTTPBearer()
async def authenticate_jwt(token: HTTPAuthorizationCredentials = Depends(security)):
    # Your JWT validation logic here
    # Return user object or raise HTTPException
    user = validate_jwt_token(token.credentials)
    return user
@app.post("/quill")
async def quill_post(data: Request, user: dict = Depends(authenticate_jwt)):
    # assuming the user fetched via auth middleware has a user_id
user_id = user["user_id"]
body = await data.json()
metadata = body.get("metadata")
result = await quill.query(
tenants=[{"tenantField": "user_id", "tenantIds": [user_id]}],
metadata=metadata
)
return result
```
Then you can run your app as usual. Pass this route to our React library
on the frontend and you're all set!
## Streaming
```python
import asyncio
import os

from fastapi.responses import StreamingResponse
from quillsql import Quill

quill = Quill(
    private_key=os.getenv("QUILL_PRIVATE_KEY"),
    database_connection_string=os.getenv("POSTGRES_READ"),
    database_type="postgresql"
)
@app.post("/quill-stream")
async def quill_post(data: Request, user: dict = Depends(authenticate_jwt)):
    # assuming the user fetched via auth middleware has a user_id
user_id = user["user_id"]
body = await data.json()
metadata = body.get("metadata")
quill_stream = quill.stream(
tenants=[{"tenantField": "user_id", "tenantIds": [user_id]}],
metadata=metadata,
)
async def event_generator():
# Full event types list: https://ai-sdk.dev/docs/ai-sdk-ui/stream-protocol#data-stream-protocol
async for event in quill_stream:
if event["type"] == "start":
pass
elif event["type"] == "text-delta":
yield event['delta']
elif event["type"] == "finish":
return
elif event["type"] == "error":
yield event['errorText']
await asyncio.sleep(0)
return StreamingResponse(event_generator(), media_type="text/event-stream")
```
| text/markdown | Quill | shawn@quill.co | null | null | null | null | [] | [] | https://github.com/quill-sql/quill-python | null | null | [] | [] | [] | [
"psycopg[binary]",
"psycopg-pool",
"requests",
"redis",
"python-dotenv",
"pytest",
"google-cloud-bigquery",
"google-auth"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.6 | 2026-02-20T21:27:21.754771 | quillsql-3.0.0.tar.gz | 29,242 | 24/80/8007439889511497a0ff306c6745526dfddc532b01418d00e0e843dd34cc/quillsql-3.0.0.tar.gz | source | sdist | null | false | 2ba3d0b69df9b739538ec4a5754e3efa | c74d68cf2cd8c8f434a74a26e6effff26dd53bc7a4ec6c491779ce6c492f89a3 | 24808007439889511497a0ff306c6745526dfddc532b01418d00e0e843dd34cc | null | [] | 214 |
2.4 | folderbot | 0.1.82 | Telegram bot for chatting with your folder using LLMs | # Folderbot
[](https://badge.fury.io/py/folderbot)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://gitlab.com/jorgeecardona/folderbot/-/pipelines)
[](https://gitlab.com/jorgeecardona/folderbot/-/pipelines)
[](https://folderbot.readthedocs.io/en/latest/?badge=latest)
A Telegram bot that lets you chat with your folder using LLMs (Claude, GPT-4, Gemini, and more).
**Repository:** [https://gitlab.com/jorgeecardona/folderbot](https://gitlab.com/jorgeecardona/folderbot) | **Docs:** [https://folderbot.readthedocs.io](https://folderbot.readthedocs.io)
## Features
- **Tool Use**: The AI actively interacts with your files using built-in tools
- **Built-in Tools**: List, read, search, and write files
- **Web Tools**: Search the web and fetch content from URLs
- **Task Scheduler**: Schedule long-running, repeating, cron, or time-limited tasks
- **File Notifications**: Get alerted when files in your folder change
- **Activity Logging**: View detailed logs of all tool calls and bot activity
- **Custom Tools**: Extend with your own tools via `.folderbot/tools.py`
- **Persistent Sessions**: Conversation history stored in SQLite
- **Auto-logging**: All conversations logged to markdown files
- **Access Control**: Whitelist specific Telegram user IDs
- **Append Protection**: Configure which files can be appended to
- **Smart Message Handling**: Send multiple messages quickly - they're combined into one request
- **Version Notifications**: Get notified when the bot updates
## Installation
```bash
pip install folderbot
```
Or install from source:
```bash
git clone https://gitlab.com/jorgeecardona/folderbot
cd folderbot
pip install -e .
```
## Quick Start
```bash
cd /path/to/your/folder
folderbot init # Creates .folderbot/config.toml here
folderbot run # Runs from current directory
```
## Configuration
Configuration lives inside your folder at `.folderbot/config.toml`:
```toml
telegram_token = "YOUR_TELEGRAM_BOT_TOKEN"
anthropic_api_key = "YOUR_API_KEY" # Or set FOLDERBOT_API_KEY env var
allowed_user_ids = [123456789] # Your Telegram user ID
# root_folder is implicit — it's the parent of .folderbot/
# You only need to set it if you want to override the default.
model = "anthropic/claude-sonnet-4-20250514" # or openai/gpt-4o, google/gemini-2.0-flash, etc.
[read_rules]
include = ["**/*.md", "**/*.txt"]
exclude = ["**/docs/**", ".git/**"]
append_allowed = ["**/todo.md"]
[tools.web_search]
google_api_key = "YOUR_GOOGLE_API_KEY"
google_cx = "YOUR_SEARCH_ENGINE_ID"
```
Get your Telegram user ID from [@userinfobot](https://t.me/userinfobot).
## Built-in Tools
The AI has access to these tools for interacting with your folder:
### File Tools
| Tool | Description |
|------|-------------|
| `list_files` | List files in a folder or subfolder |
| `read_file` | Read the contents of a specific file |
| `read_files` | Read multiple files at once (concatenated view) |
| `search_files` | Search for text across all files |
| `write_file` | Create or update files (supports append mode) |
All file tools respect the `include`, `exclude`, and `append_allowed` patterns in your config.
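The pattern check behaves roughly like glob matching; a simplified sketch using `fnmatch` (folderbot actually depends on `pathspec`, whose gitignore-style `**` semantics differ slightly):

```python
from fnmatch import fnmatch

def is_readable(path: str, include: list[str], exclude: list[str]) -> bool:
    # A path is visible to the file tools when it matches at least one
    # include pattern and no exclude pattern.
    if not any(fnmatch(path, pat) for pat in include):
        return False
    return not any(fnmatch(path, pat) for pat in exclude)
```

With the example config above, `notes/todo.md` would be readable while anything under `.git/` would not.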
### Utility Tools
| Tool | Description |
|------|-------------|
| `get_time` | Get the current date and time (with timezone support) |
| `send_message` | Send a message to the user (useful in scheduled tasks) |
| `compare_numbers` | Compare two numbers |
| `shuffle_list` | Randomly shuffle a list of items |
| `sort_list` | Sort a list alphabetically or numerically |
| `random_choice` | Pick random item(s) from a list |
| `random_number` | Generate a random number within a range |
### Activity & Monitoring Tools
| Tool | Description |
|------|-------------|
| `read_activity_log` | View the bot's activity log (tool calls, messages, task events) |
| `enable_file_notifications` | Enable notifications for all file changes |
| `disable_file_notifications` | Disable file change notifications |
| `get_file_notification_status` | Check if file notifications are enabled |
## File Notifications
Get notified via Telegram when any files in your folder are created, modified, or deleted.
### Example Conversations
> "Enable file notifications"
The AI will turn on notifications and you'll be alerted when any files change.
> "Turn off file notifications"
The AI will disable file change notifications.
### How It Works
- Uses the `watchdog` library to monitor file system events
- Notifications include the file path and type of change (modified, created, deleted)
- Notification preference is stored per user
## Activity Logging
All tool calls, messages, and task events are logged to structured JSON files in `.folderbot/logs/`. This enables:
- **Debugging**: See exactly what tools were called and with what parameters
- **Auditing**: Review what the bot has done in your folder
- **Analysis**: Search through past activity
### Example Conversations
> "Show me the activity log for today"
> "What tools did you use in the last hour?"
> "Search the activity log for 'write_file'"
Logs are automatically rotated (kept for 30 days).
## Web Tools
The AI can search the web and fetch content from URLs to help with research.
### Installation
```bash
pip install folderbot[web]
```
### Available Tools
| Tool | Description |
|------|-------------|
| `web_search` | Search the web using DuckDuckGo |
| `web_fetch` | Fetch and extract text content from a URL |
### Example Conversations
**Research a topic:**
> "Search for the latest Python 3.12 features and summarize them"
The AI will search the web, find relevant articles, and provide a summary.
**Save web content to notes:**
> "Fetch this article and save the key points to my notes: https://example.com/article"
The AI will fetch the URL content, extract the text, and write a summary to your folder.
## Task Scheduler
The task scheduler lets the AI plan and execute long-running or repeating tasks autonomously. Tasks run in the background, report progress via Telegram, and optionally generate a summary when complete.
### Installation
The scheduler requires an optional dependency for cron scheduling:
```bash
pip install folderbot[scheduler]
```
### Schedule Types
| Type | Description | Example Use Case |
|------|-------------|------------------|
| `once` | Run once, optionally after a delay | "Check this file in 30 minutes" |
| `repeating` | Run at fixed intervals | "Check for updates every hour" |
| `cron` | Run on a cron schedule | "Generate a report daily at 9am" |
| `time_limited` | Run repeatedly until time expires | "Search for available domains for 5 minutes" |
### Scheduler Tools
| Tool | Description |
|------|-------------|
| `schedule_task` | Create a new scheduled task |
| `list_tasks` | List your scheduled tasks (filter by status) |
| `cancel_task` | Cancel a running or pending task |
| `get_task_results` | Get the results of a task |
### Example Conversations
**Time-limited search:**
> "Search for available .ai domains for 5 minutes and tell me what you find"
The AI will schedule a time-limited task that repeatedly calls your domain search tool, sends progress updates, and summarizes results when done.
**Repeating check:**
> "Check my inbox folder every hour and let me know if there are new files"
The AI schedules a repeating task that runs indefinitely (or until you cancel it).
**Cron schedule:**
> "Every day at 9am, read my todo.md and send me a summary"
The AI schedules a cron task using the expression `0 9 * * *`.
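For reference, `0 9 * * *` reads minute=0, hour=9, any day-of-month, any month, any day-of-week. A toy matcher for the first two fields (folderbot delegates real cron parsing to `croniter`; this sketch handles only `*` and plain numbers):

```python
def cron_matches(expr: str, minute: int, hour: int) -> bool:
    # Check only the minute and hour fields of a 5-field cron
    # expression; each field is "*" or a plain number in this sketch.
    minute_field, hour_field, *_ = expr.split()
    def matches(field: str, value: int) -> bool:
        return field == "*" or int(field) == value
    return matches(minute_field, minute) and matches(hour_field, hour)
```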
**Delayed execution:**
> "In 30 minutes, remind me to review the meeting notes"
The AI schedules a one-time task with a 30-minute delay.
### Task Management
Use the `/tasks` Telegram command for a quick overview of your scheduled tasks, or ask the bot:
- "What tasks do I have running?"
- "Cancel task abc123"
- "Show me the results of my domain search task"
### Features
- **Progress updates**: Configure how often to receive updates (every N iterations)
- **Auto-summarization**: The AI summarizes results when tasks complete
- **Error handling**: Tasks auto-stop after too many consecutive errors
- **Persistence**: Tasks survive bot restarts (restored from SQLite)
- **Result capping**: Only the most recent results are kept to manage memory
## Custom Tools
You can extend folderbot with custom tools by creating `.folderbot/tools.py` in your root folder.
### Example: Daily Journal Tool
This example adds a tool that appends timestamped entries to a daily journal file:
```python
# .folderbot/tools.py
from datetime import datetime
from pathlib import Path
from typing import Any
from pydantic import BaseModel, Field
from folderbot.tools import ToolDefinition, ToolResult
class JournalEntryInput(BaseModel):
"""Input for adding a journal entry."""
content: str = Field(description="The journal entry content")
mood: str = Field(default="neutral", description="Current mood (happy, sad, neutral, excited)")
class CustomTools:
"""Custom tools for my folder."""
def __init__(self, root_folder: Path, tools_config: dict[str, dict[str, Any]] | None = None):
self.root_folder = root_folder
self.config = (tools_config or {}).get("add_journal_entry", {})
def get_tool_definitions(self) -> list[dict[str, Any]]:
tools = [
ToolDefinition(
name="add_journal_entry",
description="Add a timestamped entry to today's journal with optional mood tracking",
input_model=JournalEntryInput,
),
]
return [t.to_api_format() for t in tools]
def execute(self, tool_name: str, tool_input: dict[str, Any]) -> ToolResult:
if tool_name == "add_journal_entry":
return self._add_journal_entry(tool_input)
return ToolResult(content=f"Unknown tool: {tool_name}", is_error=True)
def _add_journal_entry(self, tool_input: dict[str, Any]) -> ToolResult:
params = JournalEntryInput(**tool_input)
# Create journal folder if needed
journal_dir = self.root_folder / "journal"
journal_dir.mkdir(exist_ok=True)
# Today's journal file
today = datetime.now().strftime("%Y-%m-%d")
journal_file = journal_dir / f"{today}.md"
# Create header if new file
if not journal_file.exists():
header = f"# Journal - {today}\n\n"
journal_file.write_text(header)
# Append entry with timestamp
timestamp = datetime.now().strftime("%H:%M")
mood_emoji = {"happy": "😊", "sad": "😢", "neutral": "😐", "excited": "🎉"}.get(params.mood, "📝")
entry = f"### {timestamp} {mood_emoji}\n\n{params.content}\n\n---\n\n"
with open(journal_file, "a") as f:
f.write(entry)
return ToolResult(content=f"Added journal entry to {today}.md")
```
Now you can tell the bot: *"Add to my journal: Had a great meeting with the team today"* and it will create a timestamped entry.
### Alternative: Factory Function
Instead of a `CustomTools` class, you can export a `create_tools(root_folder)` function:
```python
def create_tools(root_folder: Path):
return CustomTools(root_folder)
```
### Package Structure
For more complex tools, use `.folderbot/tools/__init__.py`:
```
.folderbot/
└── tools/
├── __init__.py # Exports CustomTools or create_tools
├── journal.py # Journal tool implementation
└── reminders.py # Reminder tool implementation
```
## CLI Commands
### Bot Management
```bash
folderbot run # Run the bot (finds .folderbot/ in PWD)
folderbot run --bot mybot # Run a specific bot (multi-bot config)
folderbot status # Show configuration status
```
### Configuration
```bash
folderbot init # Create .folderbot/config.toml in current directory
folderbot config show # Show current config
folderbot config set telegram_token XXX # Set a config value
folderbot config folder /path/to/folder # Set root folder
```
### Folder Management
```bash
folderbot move /new/path # Move folder and update systemd service
```
### Systemd Service
```bash
folderbot service install # Install as user service (uses PWD as WorkingDirectory)
folderbot service enable # Enable auto-start
folderbot service start # Start the service
folderbot service stop # Stop the service
folderbot service status # Check service status
folderbot service logs # View logs
folderbot service logs -f # Follow logs
folderbot service uninstall # Remove service
```
## Telegram Commands
| Command | Description |
|---------|-------------|
| `/start` | Initialize bot and show help |
| `/clear` | Clear conversation history |
| `/new` | Start a new topic (clears history) |
| `/status` | Show session info |
| `/files` | List files available in context |
| `/tasks` | List your scheduled tasks |
## Multi-Bot Configuration
Run multiple bots with different folders from a single config:
```toml
api_key = "SHARED_KEY"
[bots.work]
telegram_token = "WORK_BOT_TOKEN"
root_folder = "~/work/notes"
allowed_user_ids = [123456789]
[bots.personal]
telegram_token = "PERSONAL_BOT_TOKEN"
root_folder = "~/personal/notes"
allowed_user_ids = [123456789]
```
```bash
folderbot run --bot work
folderbot run --bot personal
```
## Development
```bash
# Setup
./setup/setup.sh
source .venv/bin/activate
# Run tests
pytest
# Run with coverage
pytest --cov=folderbot
```
## Roadmap
Planned features, roughly in priority order:
- [x] **YAML to TOML auto-migration** — Detect old `.folderbot/config.yaml` and auto-convert to `.toml` on startup
- [x] **Todo management tool** — Structured todo tracking with status (plan/in-progress/done), natural speech interface, "what can I do in 30 minutes?" queries — todo management for procrastinators
- [x] **Image/OCR tool** — Process photos: OCR for printed text and handwritten notes, image description, save to uploads folder
- [x] **Statistics/DataFrame tool** — Group-by, aggregate, and pivot tabular data (e.g. token usage by day/week/month) with flexible date ranges
- [x] **Plotting tool** — Generate charts and plots (e.g. token usage over time, cost trends) using Plotly, send as Telegram images
- [x] **Cost estimation** — Track and estimate LLM costs alongside token counts, with per-model pricing
- [x] **Granular token tracking** — Per-interaction timestamp tracking for hourly/daily/weekly breakdowns with natural date range queries
- [x] **Calendar tool** — Built-in calendar for events and reminders, with potential Google Calendar / Outlook integration
## License
MIT
| text/markdown | Jorge Cardona | null | null | null | null | ai, bot, claude, llm, openai, personal-assistant, telegram | [
"Development Status :: 3 - Alpha",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anthropic>=0.70.0",
"beautifulsoup4>=4.12",
"croniter>=6.0",
"faster-whisper>=1.1",
"httpx>=0.27",
"instructor>=1.14.0",
"matplotlib>=3.8",
"pathspec>=1.0",
"pydantic>=2.0",
"python-dotenv>=1.0",
"python-telegram-bot>=22.0",
"structlog>=25.5",
"tomlkit>=0.13",
"watchdog>=6.0",
"pytest-asyncio>=1.3; extra == \"dev\"",
"pytest-cov>=7.0; extra == \"dev\"",
"pytest>=9.0; extra == \"dev\"",
"myst-parser>=4.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=2.0; extra == \"docs\"",
"sphinx-rtd-theme>=3.0; extra == \"docs\"",
"sphinx>=8.0; extra == \"docs\"",
"sphinxcontrib-mermaid>=1.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/jorgeecardona/folderbot",
"Repository, https://gitlab.com/jorgeecardona/folderbot"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T21:26:49.040241 | folderbot-0.1.82.tar.gz | 151,449 | d1/e4/b6bd0dbbd3295303c5966394028c53b975bfc6f6df81a37c14e006291a5e/folderbot-0.1.82.tar.gz | source | sdist | null | false | 43ff073bb559327b40372898d317c635 | eb6911f2625d422163749f1a44b01cf5125cba22e9306a71ea1097e7f4ef0ac3 | d1e4b6bd0dbbd3295303c5966394028c53b975bfc6f6df81a37c14e006291a5e | MIT | [] | 208 |
2.3 | mcpdx | 0.2.0 | Scaffold, test, and develop MCP servers — built for developer experience | # mcpdx
> The developer experience layer the MCP ecosystem is missing.
mcpdx is a CLI tool that makes it easy to scaffold, test, validate, and deploy [MCP](https://modelcontextprotocol.io/) (Model Context Protocol) servers. Go from zero to a working, testable MCP server in under 2 minutes.
```
mcpdx init → mcpdx test → mcpdx dev → mcpdx validate → mcpdx eval → mcpdx sandbox
scaffold test locally dev + REPL protocol checks quality evals containerized run
```
## Why?
Building MCP servers today means reading scattered docs, copying boilerplate from examples, no standardized project structure, and no way to test tools locally without a full LLM client. mcpdx fixes all of that.
- **No boilerplate** — `mcpdx init` generates a complete project with best practices baked in
- **Test without Claude** — `mcpdx test` runs your tools locally using YAML fixtures
- **Fast feedback loop** — `mcpdx dev` gives you hot reload and an interactive REPL
- **Protocol compliance** — `mcpdx validate` catches MCP violations before shipping
- **Quality checks** — `mcpdx eval` tests tool effectiveness with assertions and golden files
- **Sandboxed execution** — `mcpdx sandbox` runs tools in containers with security policies
## Install
```bash
# Option 1: Install as a global CLI tool (recommended)
uv tool install mcpdx
# Option 2: Run without installing
uvx mcpdx --help
# Option 3: pip
pip install mcpdx
```
Requires Python 3.13+ and [uv](https://docs.astral.sh/uv/) (recommended) or pip.
## Quick Start
```bash
# 1. Create a new MCP server
mcpdx init --name weather-mcp --language python --transport stdio --template minimal
# 2. Install and test it
cd weather-mcp
pip install -e .
mcpdx test
# 3. Start the dev server with interactive REPL
mcpdx dev
```
Or run `mcpdx init` without flags for an interactive experience.
## Commands
### `mcpdx init`
Create a new MCP server project with interactive prompts or CLI flags.
```bash
# Interactive mode
mcpdx init
# Non-interactive mode
mcpdx init \
--name weather-mcp \
--language python \
--transport stdio \
--template api-wrapper \
--description "Weather data via OpenWeather API"
```
**Templates:**
| Template | Description |
|----------|-------------|
| `minimal` | Bare-bones starter with one example tool |
| `api-wrapper` | REST API integration with httpx, auth patterns |
**Languages:** Python, TypeScript
**Transports:** stdio, HTTP (streamable)
### `mcpdx test`
Run YAML test fixtures against your MCP server. No LLM client needed.
```bash
mcpdx test # Run all fixtures
mcpdx test -v # Verbose (show request/response)
mcpdx test -f "hello*" # Filter by test name
mcpdx test -t 10000 # Custom timeout (ms)
```
Test fixtures are defined in YAML:
```yaml
tests:
- name: "get_weather returns data"
tool: "weather_get_current"
input:
city: "London"
expect:
status: "success"
content_contains: "London"
- name: "handles missing city"
tool: "weather_get_current"
input:
city: ""
expect:
status: "error"
```
**Assertions available:**
- `status` — `"success"` or `"error"`
- `content_contains` / `content_not_contains` — Check response text
- `schema` — Validate response against JSON Schema
- `max_latency_ms` — Latency threshold
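The content assertions reduce to simple string and status checks; a sketch of how a fixture's `expect` block might be evaluated (not mcpdx's actual implementation):

```python
def check_expect(status: str, text: str, expect: dict) -> bool:
    # Evaluate the status / content_contains / content_not_contains
    # assertions from a YAML fixture against a tool response.
    if "status" in expect and expect["status"] != status:
        return False
    if "content_contains" in expect and expect["content_contains"] not in text:
        return False
    if "content_not_contains" in expect and expect["content_not_contains"] in text:
        return False
    return True
```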
### `mcpdx dev`
Start your server with hot reload and an interactive REPL for calling tools.
```bash
mcpdx dev # Hot reload + REPL
mcpdx dev --no-repl # Watch and restart only
```
REPL usage:
```
weather-mcp> weather_hello name="Alice"
Hello, Alice! Welcome to weather_mcp.
weather-mcp> weather_hello {"name": "Bob"}
Hello, Bob! Welcome to weather_mcp.
weather-mcp> tools # List available tools
weather-mcp> help # Show commands
weather-mcp> quit # Exit
```
### `mcpdx validate`
Run protocol compliance checks against your MCP server. Catches naming issues, missing schemas, annotation gaps, and error handling problems.
```bash
mcpdx validate # Run all checks
mcpdx validate -v # Show passing rules too
mcpdx validate --category naming # Only naming rules
mcpdx validate --category schema # Only schema rules
mcpdx validate --severity error # Only errors (hide warnings/info)
mcpdx validate --json-output # Machine-readable JSON (for CI)
mcpdx validate -t 10000 # Custom timeout (ms)
```
**What it checks:**
| Category | Examples | Severity |
| ----------- | ----------------------------------------------------- | -------- |
| Naming | Tool names use `snake_case` with prefix | Warning |
| Schema | All tools have `inputSchema`, valid JSON Schema types | Error |
| Annotations | Tools have `readOnlyHint`, destructive tools flagged | Warning |
| Errors | Actionable error messages, no internal leaks | Error |
| Response | Dual format support (JSON + Markdown) | Info |
Exit code `0` if no errors. Warnings and info don't fail.
### `mcpdx eval`
Test tool _effectiveness_ with realistic scenarios — not just protocol compliance, but whether your tools return correct, useful data.
```bash
mcpdx eval # Run all suites
mcpdx eval --suite "core tool quality" # Filter by suite name
mcpdx eval -t 10000 # Custom timeout per tool call (ms)
mcpdx eval --evals-dir custom/evals/ # Custom evals directory
mcpdx eval --update-golden # Create/update golden snapshots
mcpdx eval --compare # Detect regressions vs previous run
mcpdx eval --html-report report.html # Generate HTML report
mcpdx eval --skip-validation # Skip protocol validation pre-check
```
Eval suites live in `evals/` as YAML files:
```yaml
suite: "core tool quality"
description: "Validates core tools return correct data"
evals:
- name: "returns valid weather object"
tool: "weather_get_current"
input:
city: "London"
assertions:
- type: schema
schema:
type: object
required: [temperature, conditions]
- name: "temperature in reasonable range"
tool: "weather_get_current"
input:
city: "London"
assertions:
- type: range
path: "$.temperature"
min: -50
max: 60
```
**Assertion types:**
| Type | Description |
| ------------- | ------------------------------------------------------ |
| `schema` | Validate response against a JSON Schema |
| `range` | Check a numeric value falls within `min`/`max` bounds |
| `length` | Check an array's length matches `expected` |
| `contains` | Check a string contains the `expected` substring |
| `golden_file` | Compare full response against a saved snapshot |
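The `range` assertion combines a path lookup with bounds checks; a sketch that supports only dot-separated `$.key` paths (mcpdx itself may accept richer JSONPath expressions):

```python
def resolve_path(data: dict, path: str):
    # Resolve a simple "$.a.b" style path: strip the "$" root,
    # then walk dot-separated keys into nested dicts.
    node = data
    for key in path.lstrip("$").strip(".").split("."):
        node = node[key]
    return node

def assert_range(data: dict, path: str, lo=None, hi=None) -> bool:
    # True when the value at `path` falls within the optional bounds.
    value = resolve_path(data, path)
    return (lo is None or value >= lo) and (hi is None or value <= hi)
```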
By default, `mcpdx eval` runs `mcpdx validate` first as a pre-check. Validation errors block evals; warnings don't.
### `mcpdx sandbox`
Run MCP tools inside containers with configurable security policies.
**Prerequisites:** An OCI-compliant container runtime — [Docker](https://www.docker.com/products/docker-desktop), [Podman](https://podman.io/getting-started/installation), or [nerdctl](https://github.com/containerd/nerdctl/releases). No Python SDK needed — mcpdx calls the CLI directly. All non-sandbox features work without a container runtime.
```bash
# Run a single tool in the sandbox
mcpdx sandbox run get_weather '{"city": "London"}'
mcpdx sandbox run get_weather '{"city": "London"}' --policy strict
mcpdx sandbox run get_weather '{"city": "London"}' --policy sandbox-policy.yaml
mcpdx sandbox run get_weather '{"city": "London"}' --build --keep --timeout 60
# Interactive shell inside the container
mcpdx sandbox shell
mcpdx sandbox shell --policy permissive --keep
# Build the container image without running
mcpdx sandbox build
mcpdx sandbox build --policy strict
# Use a specific container runtime
mcpdx sandbox run get_weather '{}' --runtime podman
mcpdx sandbox shell -r nerdctl
```
**Security policy presets:**
| Preset | Network | Filesystem | CPU | Memory | Timeout | Use Case |
| ------------ | ------- | ------------- | --- | ------ | ------- | ---------------------- |
| `strict` | None | Read-only | 0.5 | 256m | 15s | Pure computation tools |
| `standard` | Bridge | Limited write | 1.0 | 512m | 30s | API wrapper tools |
| `permissive` | Bridge | Full write | 2.0 | 1g | 120s | Development/debugging |
**Custom policy file** (`sandbox-policy.yaml`):
```yaml
policy: standard  # Inherit from a preset, then override
network:
  enabled: true
  allowed_domains:
    - "api.example.com"
    - "*.github.com"
filesystem:
  read_only_mounts:
    - "/app/config"
  writable_dirs:
    - "/tmp"
resources:
  cpu_limit: "0.5"
  memory_limit: "256m"
  timeout_seconds: 30
```
When `allowed_domains` is configured, mcpdx enforces the allowlist using iptables rules applied at container start. Only traffic to allowed domains is permitted.
**Runtime resolution order:**
1. `--runtime` / `-r` CLI flag
2. `container_runtime` in project config
3. Auto-detect from PATH: docker > podman > nerdctl
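The resolution order above can be sketched in a few lines — a hypothetical illustration of the documented behavior, not mcpdx's source:

```python
import shutil

def resolve_runtime(cli_flag=None, config_value=None):
    """Sketch of the documented runtime resolution order (illustrative only)."""
    # 1. --runtime / -r CLI flag wins
    if cli_flag:
        return cli_flag
    # 2. container_runtime from project config
    if config_value:
        return config_value
    # 3. Auto-detect from PATH, preferring docker, then podman, then nerdctl
    for candidate in ("docker", "podman", "nerdctl"):
        if shutil.which(candidate):
            return candidate
    raise RuntimeError("No container runtime found")

print(resolve_runtime(cli_flag="podman"))  # podman
```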
**Note:** Domain allowlist filtering requires a rootful container runtime (Docker, rootful Podman, nerdctl). Rootless Podman cannot apply iptables rules — the container will start but without network filtering.
Evals can also run inside the sandbox:
```bash
mcpdx eval --sandbox
mcpdx eval --policy strict
mcpdx eval --policy sandbox-policy.yaml
```
## Configuration
mcpdx reads config from `pyproject.toml` or `mcpdx.toml` (generated automatically by `mcpdx init`).
**pyproject.toml** (Python projects):
```toml
[tool.mcpdx]
server_command = "python -m weather_mcp.server"
fixtures_dir = "tests/fixtures"
container_runtime = "podman" # Optional: docker (default), podman, or nerdctl
```
**mcpdx.toml** (TypeScript/other projects):
```toml
[server]
name = "weather-mcp"
command = "node dist/server.js"
[testing]
fixtures_dir = "tests/fixtures"
[sandbox]
runtime = "podman" # Optional: docker (default), podman, or nerdctl
```
## Generated Project Structure
```
weather-mcp/
├── pyproject.toml            # Project config with mcpdx settings
├── README.md                 # Getting started guide
├── src/
│   └── weather_mcp/
│       ├── __init__.py
│       └── server.py         # MCP server with example tool
└── tests/
    └── fixtures/
        └── test_tools.yaml   # Test fixtures
```
## Writing Tools
Tools live in `src/<package>/server.py`. Here's the pattern mcpdx generates:
```python
from fastmcp import FastMCP

mcp = FastMCP("weather_mcp")

@mcp.tool(
    annotations={
        "title": "Get Current Weather",
        "readOnlyHint": True,
        "idempotentHint": True,
        "openWorldHint": True,
    }
)
def weather_get_current(city: str) -> str:
    """Get current weather for a city.

    Args:
        city: City name (e.g., "London", "New York")
    """
    # Your logic here
    return f"Weather in {city}: 72°F, sunny"
```
Key conventions:
- **Tool naming**: `{prefix}_{action}` (e.g., `weather_get_current`)
- **Annotations**: Always include `readOnlyHint`, `idempotentHint`, `openWorldHint`
- **Docstrings**: First line is the tool description, `Args:` section documents parameters
- **Type hints**: All parameters must have type annotations
## Porting an Existing MCP Server
If you already have an MCP server and want to bring it into the mcpdx workflow:
1. **Scaffold a new project** with the closest template:
```bash
mcpdx init --name my-server-mcp --template minimal
```
2. **Copy your tool logic** into the generated `server.py`, following the conventions above
3. **Add your dependencies** to the generated `pyproject.toml`
4. **Write test fixtures** for each tool (aim for at least 2 per tool: happy path + error case)
5. **Validate**:
```bash
cd my-server-mcp
uv sync
mcpdx test
mcpdx dev
```
## Troubleshooting
### "No mcpdx configuration found"
You're not in a project directory, or the config is missing. Make sure `pyproject.toml` has a `[tool.mcpdx]` section or `mcpdx.toml` exists.
### "Server failed to start"
Check that `server_command` in your config is correct and the server can run independently:
```bash
python -m your_package.server
```
### Tests timing out
Increase the timeout: `mcpdx test -t 15000`. If a specific tool is slow, use `max_latency_ms` in your fixture to set per-test expectations.
### REPL says "Server is offline"
The server crashed during a file change. Check the error output above the REPL prompt, fix the issue, and the server will restart automatically.
### "No container runtime found"
Install Docker, Podman, or nerdctl and make sure the daemon is running. You can verify with `docker info` or `podman info`.
### Sandbox domain filtering not working
Domain allowlist enforcement requires a rootful container runtime. If you're using rootless Podman, iptables rules cannot be applied. Switch to Docker or rootful Podman for full domain filtering support.
## Development
```bash
# Clone and install
git clone https://github.com/ceasarb/mcpdx.git
cd mcpdx
uv sync
# Run tests
pytest tests/ -v
# Lint
ruff check src/
ruff format src/
```
## License
MIT
| text/markdown | ceasarb | ceasarb <ceazb21@gmail.com> | null | null | MIT | mcp, model-context-protocol, cli, scaffold, testing, developer-tools | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Testing",
"Topic :: Software Development :: Code Generators"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"click>=8.3.1",
"jinja2>=3.1.6",
"jsonschema>=4.23.0",
"pyyaml>=6.0.3",
"rich>=14.3.2",
"watchdog>=6.0.0",
"jsonpath-ng>=1.6.0"
] | [] | [] | [] | [
"Issues, https://github.com/ceasarb/mcpdx/issues",
"Repository, https://github.com/ceasarb/mcpdx"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T21:25:29.857684 | mcpdx-0.2.0.tar.gz | 71,451 | 12/bf/c30867c0d3c9082fdace0f661028ba4ddd2db27b19b8dcf7a38631132342/mcpdx-0.2.0.tar.gz | source | sdist | null | false | 748642712c3c6547ae466f722f813f3b | 5ae731530f9d1ff3b92013036582431bce2945aea65e45979b1ceebcadd5b0a2 | 12bfc30867c0d3c9082fdace0f661028ba4ddd2db27b19b8dcf7a38631132342 | null | [] | 202 |
2.4 | orchestra-mcp | 0.1.0 | MCP server testing and orchestration tool | # 🎵 Orchestra
**Production-Ready MCP Server Testing and Orchestration Tool**
Orchestra is a powerful CLI tool for testing MCP (Model Context Protocol) servers with declarative YAML test collections. Discover server capabilities, write comprehensive tests, and validate your MCP implementations with ease.
## ✨ Key Features
- **🎯 Interactive Builder** — `new` command walks you through creating collections (no YAML knowledge required!)
- **🔍 Server Discovery** — `inspect` command reveals all available tools and their schemas
- **📝 Declarative YAML** — Define test collections in easy-to-read YAML files
- **🌐 Multiple Transports** — STDIO (local), HTTP (remote), and SSE support
- **🔒 Authentication** — Built-in support for Bearer, API Key, and Basic auth
- **✅ Powerful Assertions** — JSONPath queries, error detection, and content validation
- **⚡ Rate Limit Handling** — Configurable delays between steps
- **📊 Detailed Reports** — JSON reports with run IDs, timestamps, and step-by-step results
- **🚀 CI/CD Ready** — Exit codes for pass/fail, quiet mode for automation
- **🔐 Secure** — Environment variable support for secrets and API keys
## 📦 Installation
```bash
# From source
pip install -e .
# Or with uv
uv pip install -e .
```
## 🚀 Quick Start
### 0. Create Your First Collection (Interactive!)
**New to Orchestra?** Use the interactive builder:
```bash
orchestra new schemas/my_test.yaml
```
The wizard will guide you through:
1. Choosing your transport type (local/remote)
2. Configuring your server
3. Setting up authentication (if needed)
4. Adding example test steps
**Example session:**
```
🎵 Orchestra Collection Builder
What would you like to name this collection? My MCP Test
✓ Collection name: My MCP Test
How does your MCP server run?
1. Local (STDIO) - Runs as a subprocess
2. Remote (HTTP) - Cloud-hosted server
3. SSE - Server-Sent Events
Choose transport type [2]: 2
Enter server URL: https://mcp.deepwiki.com/mcp
✓ URL: https://mcp.deepwiki.com/mcp
✅ Collection saved to schemas/my_test.yaml
Next steps:
1. Run orchestra inspect schemas/my_test.yaml to discover tools
2. Edit schemas/my_test.yaml to add your test steps
3. Run orchestra run schemas/my_test.yaml to execute tests
```
### 1. Discover Server Capabilities
Use the `inspect` command to discover what tools a server offers:
```bash
orchestra inspect schemas/my_server.yaml
```
**Example output:**
```
📡 Connecting to MCP server...
✅ Connected to DeepWiki v2.14.3
🔍 Discovering tools...
Found 3 tool(s):
1. read_wiki_structure
Get a list of documentation topics for a GitHub repository.
Parameters:
* repoName (string)
GitHub repository in owner/repo format (e.g. "facebook/react")
Example YAML:
- id: call_read_wiki_structure
type: tool_call
tool: "read_wiki_structure"
input:
repoName: "facebook/react"
save: "$"
```
**💡 Pro tip:** The generated YAML from `orchestra new` works perfectly with `inspect`!
**For inspection, you only need server config:**
```yaml
# schemas/my_server.yaml
version: 1
name: "My MCP Server"

server:
  transport: "http"
  url: "https://api.example.com/mcp"
```
### 2. Create a Test Collection (Manual Method)
If you prefer writing YAML directly:
```yaml
# schemas/my_test.yaml
version: 1
name: "My MCP Test"

server:
  transport: "stdio"
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-memory"]

steps:
  - id: create_entity
    type: tool_call
    tool: "create_entities"
    input:
      entities:
        - name: "TestUser"
          entityType: "person"
          observations: ["Loves testing"]
    save: "$"
    delay_ms: 1000  # Wait 1 second before next step

  - id: verify_no_error
    type: assert
    from: "create_entity"
    check:
      op: "no_error"

  - id: verify_created
    type: assert
    from: "create_entity"
    check:
      op: "jsonpath_exists"
      path: "$.content[0].text"
```
### 3. Run the Collection
```bash
# Standard run
orchestra run schemas/my_test.yaml
# Show full JSON responses (great for debugging)
orchestra run schemas/my_test.yaml --show-responses
# Quiet mode (errors only)
orchestra run schemas/my_test.yaml --quiet
```
### 4. View the Results
```
📄 Loading collection: schemas/my_test.yaml
✅ Valid collection: My MCP Test
============================================================
Running: My MCP Test
Server: stdio
Steps: 3
============================================================
📡 Connecting to MCP server...
✅ Connected to memory-server v0.6.3
▶ Step: create_entity (tool_call)
Tool: create_entities
Input: {"entities": [{"name": "TestUser", ...}]}...
✅ Success
⏱️ Waiting 1000ms...
▶ Step: verify_no_error (assert)
Asserting on: create_entity
Check: no_error at $
✅ Passed
▶ Step: verify_created (assert)
Asserting on: create_entity
Check: jsonpath_exists at $.content[0].text
✅ Passed
👋 Disconnecting...
═══════════════════════════════════════════════════════════
Run Report: My MCP Test
═══════════════════════════════════════════════════════════
Run ID: abc123-def456
Status: ✅ PASSED
Duration: 1234ms
Started: 2026-02-14T12:00:00Z
───────────────────────────────────────────────────────────
Steps: 3 passed, 0 failed, 0 errors, 0 skipped
───────────────────────────────────────────────────────────
✅ tool_call - 150ms
✅ assert - 2ms
✅ assert - 5ms
═══════════════════════════════════════════════════════════
📁 Report saved: reports/abc123-def456.json
```
## 📚 CLI Reference
### `orchestra new`
Create a new test collection with an interactive wizard (perfect for beginners!).
```bash
orchestra new [output_file]
# Examples:
orchestra new schemas/my_test.yaml
orchestra new # Defaults to schemas/my_collection.yaml
```
**What it does:**
- Guides you through choosing transport type
- Helps configure server connection
- Sets up authentication if needed
- Generates valid YAML automatically
- Provides next steps after creation
**Time to first test:** ~3 minutes (75% faster than manual YAML)
See [Interactive Builder Guide](docs/INTERACTIVE_BUILDER.md) for detailed walkthrough.
### `orchestra inspect`
Discover available tools and their schemas from any MCP server.
```bash
orchestra inspect <server.yaml> [OPTIONS]
Options:
-v, --verbose Show detailed connection info and raw schemas
```
**Use cases:**
- Explore new MCP servers before writing tests
- Verify correct parameter names
- Generate example YAML snippets for tool calls
### `orchestra run`
Run a test collection.
```bash
orchestra run <collection.yaml> [OPTIONS]
Options:
-V, --verbose / --no-verbose Show detailed step output (default: on)
-q, --quiet Only show errors and final status
-r, --show-responses Show full JSON responses from tool calls
-o, --output [text|json] Output format (default: text)
-R, --report-dir PATH Directory for JSON reports (default: reports/)
--no-report Don't save a JSON report
```
### `orchestra validate`
Validate a collection schema without running it.
```bash
orchestra validate <collection.yaml>
```
### `orchestra info`
Show Orchestra information and version.
```bash
orchestra info
```
## 📖 Collection Schema Reference
### Server Configuration
#### STDIO Transport (Local Servers)
For local MCP servers that run as subprocesses:
```yaml
server:
  transport: "stdio"
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-memory"]
  # Optional: Pass environment variables to subprocess
  env:
    API_KEY: "{{env.MY_API_KEY}}"
    DEBUG: "true"
```
**Environment Variables for STDIO:**
- Orchestra can pass environment variables from the collection schema to the subprocess
- Useful for servers that need API keys (e.g., Brave Search, Kaggle)
```yaml
# Collection-level env vars (accessible via {{env.VAR}})
env:
  BRAVE_API_KEY: "your-key-here"

server:
  transport: "stdio"
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-brave-search"]
  # Pass to subprocess
  env:
    BRAVE_API_KEY: "{{env.BRAVE_API_KEY}}"
```
#### HTTP Transport (Remote Servers)
For remote MCP servers over HTTP (Streamable HTTP):
```yaml
server:
  transport: "http"
  url: "https://mcp.deepwiki.com/mcp"
```
#### SSE Transport
For servers using Server-Sent Events:
```yaml
server:
  transport: "sse"
  url: "http://localhost:3001"
```
#### With Authentication
Orchestra supports three authentication types:
**Bearer Token:**
```yaml
server:
  transport: "http"
  url: "https://api.example.com/mcp"
  auth:
    type: "bearer"
    token: "{{env.API_TOKEN}}"
```
**API Key:**
```yaml
server:
  transport: "http"
  url: "https://api.example.com/mcp"
  auth:
    type: "api_key"
    key: "{{env.API_KEY}}"
```
**Basic Auth:**
```yaml
server:
  transport: "http"
  url: "https://api.example.com/mcp"
  auth:
    type: "basic"
    username: "{{env.USERNAME}}"
    password: "{{env.PASSWORD}}"
```
### Steps
#### Tool Call Step
Invoke an MCP tool and optionally save the result for assertions:
```yaml
- id: my_step
  type: tool_call
  tool: "tool_name"
  input:
    param1: "value"
    param2: 123
    nested:
      key: "value"
  save: "$"        # Save full response
  delay_ms: 2000   # Wait 2 seconds after this step (for rate limiting)
```
**Rate Limiting:**
Use `delay_ms` to prevent hitting rate limits on public APIs:
```yaml
steps:
  - id: search_1
    type: tool_call
    tool: "brave_web_search"
    input:
      query: "Python"
    save: "$"
    delay_ms: 2000  # Wait 2 seconds

  - id: search_2
    type: tool_call
    tool: "brave_web_search"
    input:
      query: "JavaScript"
    save: "$"
    delay_ms: 2000  # Wait 2 seconds
```
#### Assertion Step
Validate tool call results with powerful assertions:
```yaml
- id: check_result
  type: assert
  from: "my_step"   # Reference a previous step
  check:
    op: "jsonpath_eq"
    path: "$.field"
    value: "expected"
  delay_ms: 0       # Optional delay after assertion
```
### Assertion Operators
| Operator | Description | Requires Path | Requires Value | Example |
|----------|-------------|---------------|----------------|---------|
| `jsonpath_exists` | Check if path exists in response | ✅ | ❌ | Path `$.content[0].text` exists |
| `jsonpath_eq` | Check if value equals expected | ✅ | ✅ | `$.status` equals `"success"` |
| `jsonpath_contains` | Check if string/array contains value | ✅ | ✅ | `$.text` contains `"hello"` |
| `jsonpath_len_eq` | Check array length equals N | ✅ | ✅ | `$.items` has exactly 5 items |
| `jsonpath_len_gte` | Check array length >= N | ✅ | ✅ | `$.items` has at least 3 items |
| `jsonpath_len_lte` | Check array length <= N | ✅ | ✅ | `$.items` has at most 10 items |
| `is_error` | Check if MCP response has isError=true | ❌ | ❌ | Response contains an error |
| `no_error` | Check if MCP response has no error | ❌ | ❌ | Response is successful |
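To make the operator semantics concrete, here is a minimal sketch of how a few of them could be evaluated against a saved response. This is illustrative only (the tiny path walker handles just `$.key[index].key` shapes), not Orchestra's actual implementation:

```python
import re

def walk(data, path):
    """Resolve a simple JSONPath like '$.content[0].text' (illustrative only)."""
    for key, index in re.findall(r"\.(\w+)(?:\[(\d+)\])?", path):
        data = data[key]
        if index:
            data = data[int(index)]
    return data

def evaluate(op, response, path=None, value=None):
    """Sketch of a few Orchestra-style checks (not the real implementation)."""
    if op == "no_error":
        return not response.get("isError", False)
    if op == "is_error":
        return response.get("isError", False) is True
    if op == "jsonpath_exists":
        try:
            walk(response, path)
            return True
        except (KeyError, IndexError, TypeError):
            return False
    if op == "jsonpath_contains":
        return value in walk(response, path)
    raise ValueError(f"unsupported op: {op}")

resp = {"content": [{"text": "React hooks let you reuse state logic."}], "isError": False}
print(evaluate("no_error", resp))                                        # True
print(evaluate("jsonpath_contains", resp, "$.content[0].text", "hook"))  # True
```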
#### Error Detection
MCP servers can return successful JSON-RPC responses that contain tool execution errors (indicated by `isError: true`). Orchestra automatically detects these and provides specialized assertions:
```yaml
steps:
  # Call a tool that might fail
  - id: risky_call
    type: tool_call
    tool: "some_tool"
    input:
      param: "value"
    save: "$"

  # Verify it succeeded
  - id: check_success
    type: assert
    from: "risky_call"
    check:
      op: "no_error"  # Fails if isError=true

  # Or test error handling
  - id: bad_call
    type: tool_call
    tool: "nonexistent_tool"
    input: {}
    save: "$"

  - id: expect_error
    type: assert
    from: "bad_call"
    check:
      op: "is_error"  # Passes if isError=true
```
**Console output with error detection:**
```
▶ Step: bad_call (tool_call)
Tool: nonexistent_tool
⚠️ Tool returned error: Tool not found
Response:
{
"content": [...],
"isError": true
}
▶ Step: expect_error (assert)
Check: is_error at $
✅ Passed
```
### Defaults
Set default values for all steps:
```yaml
defaults:
  timeout_ms: 30000  # 30 seconds (default)
  retries: 0         # No retries (default)
```
### Environment Variables
Orchestra supports environment variable interpolation using `{{env.VAR_NAME}}`:
```yaml
# Define at collection level
env:
  API_KEY: "my-secret-key"
  BASE_URL: "https://api.example.com"

server:
  transport: "http"
  url: "{{env.BASE_URL}}/mcp"
  auth:
    type: "api_key"
    key: "{{env.API_KEY}}"

steps:
  - id: call_with_env
    type: tool_call
    tool: "search"
    input:
      api_key: "{{env.API_KEY}}"
```
**Environment variables are pulled from:**
1. Collection-level `env` block
2. Shell environment (via `os.environ`)
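That two-level lookup can be sketched as follows — a hypothetical illustration of the documented precedence, not Orchestra's source:

```python
import os
import re

def interpolate(text, collection_env):
    """Sketch of {{env.VAR}} interpolation: collection env first, then os.environ."""
    def lookup(match):
        name = match.group(1)
        if name in collection_env:           # 1. collection-level env block
            return collection_env[name]
        return os.environ.get(name, "")      # 2. shell environment fallback
    return re.sub(r"\{\{env\.(\w+)\}\}", lookup, text)

print(interpolate("{{env.BASE_URL}}/mcp", {"BASE_URL": "https://api.example.com"}))
# https://api.example.com/mcp
```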
## 🧪 Real-World Examples
### Example 1: DeepWiki (Remote HTTP, No Auth)
Test an AI-powered documentation server:
```yaml
version: 1
name: "DeepWiki Test"

server:
  transport: "http"
  url: "https://mcp.deepwiki.com/mcp"

steps:
  - id: ask_about_react
    type: tool_call
    tool: "ask_question"
    input:
      repoName: "facebook/react"
      question: "What are React hooks?"
    save: "$"
    delay_ms: 2000

  - id: check_answer
    type: assert
    from: "ask_about_react"
    check:
      op: "jsonpath_contains"
      path: "$.content[0].text"
      value: "hook"
```
### Example 2: Brave Search (STDIO, API Key)
Test a search API with authentication:
```yaml
version: 1
name: "Brave Search Test"

env:
  BRAVE_API_KEY: "your-api-key"

server:
  transport: "stdio"
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-brave-search"]
  env:
    BRAVE_API_KEY: "{{env.BRAVE_API_KEY}}"

steps:
  - id: search_python
    type: tool_call
    tool: "brave_web_search"
    input:
      query: "Python programming"
      count: 5
    save: "$"
    delay_ms: 2000  # Rate limiting

  - id: check_results
    type: assert
    from: "search_python"
    check:
      op: "jsonpath_len_gte"
      path: "$.content"
      value: 1
```
### Example 3: MCPCalc (Remote HTTP, Calculator Service)
Test a calculator service:
```yaml
version: 1
name: "MCPCalc Test"

server:
  transport: "http"
  url: "https://mcpcalc.com/api/v1/mcp"

steps:
  - id: list_calculators
    type: tool_call
    tool: "list_calculators"
    input:
      category: "math"
    save: "$"

  - id: calculate_percentage
    type: tool_call
    tool: "calculate"
    input:
      calculator: "percentage"
      inputs:
        value: 50
        percentage: 20
    save: "$"

  - id: verify_result
    type: assert
    from: "calculate_percentage"
    check:
      op: "no_error"
```
## 📊 Reports
Orchestra generates detailed JSON reports for every test run:
```json
{
  "run_id": "abc123-def456",
  "collection_name": "My Test",
  "status": "passed",
  "started_at": "2026-02-14T12:00:00Z",
  "completed_at": "2026-02-14T12:00:01Z",
  "duration_ms": 1234,
  "server": {
    "name": "memory-server",
    "version": "0.6.3"
  },
  "steps": [
    {
      "id": "create_entity",
      "type": "tool_call",
      "status": "success",
      "duration_ms": 150,
      "tool": "create_entities",
      "result": { ... }
    },
    {
      "id": "verify_created",
      "type": "assert",
      "status": "passed",
      "duration_ms": 5,
      "assertion": {
        "op": "jsonpath_exists",
        "path": "$.content[0].text"
      }
    }
  ]
}
```
## ✅ Tested MCP Servers
Orchestra has been validated against multiple production MCP servers:
- ✅ **DeepWiki** - AI-powered codebase documentation (HTTP, remote)
- ✅ **MCPCalc** - Calculator service (HTTP, remote)
- ✅ **Puppeteer** - Browser automation (STDIO, local)
- ✅ **Brave Search** - Web search API (STDIO, API key auth)
- ✅ **Kaggle** - Dataset management (STDIO, local)
- ✅ **Memory Server** - Knowledge graph storage (STDIO, local)
- ✅ **Everything Server** - Demo server with all MCP features (STDIO/HTTP/SSE)
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📝 License
MIT
## 🙏 Acknowledgments
Built on the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) by Anthropic.
---
**Made with ❤️ for the MCP community**
| text/markdown | Ahaan Chaudhuri | null | null | null | MIT | automation, mcp, model-context-protocol, testing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.9.0",
"jsonpath-ng>=1.6.0",
"pyyaml>=6.0",
"rich>=13.0.0",
"typer>=0.9.0",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ahaanc/orchestra",
"Documentation, https://github.com/ahaanc/orchestra#readme",
"Repository, https://github.com/ahaanc/orchestra"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-20T21:24:40.442045 | orchestra_mcp-0.1.0.tar.gz | 123,496 | 55/ea/bf4abaaf096dbb07fe38810c01a85f6ff695584574ed5d645a392529616b/orchestra_mcp-0.1.0.tar.gz | source | sdist | null | false | 7919374828363586e8b13831640d8592 | c9daf5c0d0ef8064c6d8a8f9c5dbe38a448bd8ca21846a8133274c9e3cd918e9 | 55eabf4abaaf096dbb07fe38810c01a85f6ff695584574ed5d645a392529616b | null | [
"LICENSE"
] | 223 |
2.4 | xcoll | 0.9.7 | Xsuite collimation package | # xcoll
<!--- -->







Collimation in xtrack simulations
## Description
## Getting Started
### Dependencies
* python >= 3.10
* numpy
* pandas
* xsuite (in particular xobjects, xdeps, xtrack, xpart)
* Geant4 and BDSIM: `conda install bdsim-g4 -c conda-forge`
### Installing
`xcoll` is packaged using `poetry`, and can be easily installed with `pip`:
```bash
pip install xcoll
```
For a local installation, clone the repository and install it in editable mode (requires `pip` >= 22):
```bash
git clone git@github.com:xsuite/xcoll.git
pip install -e xcoll
```
### Example
## Features
## Authors
* [Frederik Van der Veken](https://github.com/freddieknets) (frederik@cern.ch)
* [Despina Demetriadou](https://github.com/ddemetriadou)
* [Andrey Abramov](https://github.com/anabramo)
* [Giovanni Iadarola](https://github.com/giadarol)
## Version History
* 0.1
* Initial Release
## License
This project is [Apache 2.0 licensed](./LICENSE).
| text/markdown | Frederik F. Van der Veken | frederik@cern.ch | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://github.com/xsuite/xcoll | null | >=3.10 | [] | [] | [] | [
"ruamel-yaml>=0.17.31",
"numpy>=1.0",
"pandas>=1.4",
"xobjects>=0.5.12",
"xdeps>=0.10.11",
"xpart>=0.23.8",
"xtrack>=0.99.4",
"meson",
"pytest; extra == \"tests\"",
"pytest-html; extra == \"tests\"",
"pytest-xdist; extra == \"tests\""
] | [] | [] | [] | [
"Homepage, https://github.com/xsuite/xcoll",
"Repository, https://github.com/xsuite/xcoll",
"Documentation, https://xsuite.readthedocs.io/",
"Bug Tracker, https://github.com/xsuite/xsuite/issues",
"download, https://pypi.python.org/pypi/xcoll"
] | poetry/2.2.1 CPython/3.13.7 Darwin/25.2.0 | 2026-02-20T21:24:31.658581 | xcoll-0.9.7.tar.gz | 291,072 | 14/e4/6784acf45a641f1914a1d8a85b4adb5d3585f5a3f66a9dac91344453d039/xcoll-0.9.7.tar.gz | source | sdist | null | false | 2647ef5e53cd37c60e5b76185f18f53b | a1da130073b6e8daac5be538cb380d53009273ccbaa3f5d84304074d02f59d72 | 14e46784acf45a641f1914a1d8a85b4adb5d3585f5a3f66a9dac91344453d039 | null | [] | 207 |
2.4 | pico-auth | 0.1.4 | Minimal JWT auth server for the pico ecosystem | # Pico-Auth
[](https://pypi.org/project/pico-auth/)
[](https://deepwiki.com/dperezcabrera/pico-auth)
[](https://opensource.org/licenses/MIT)

[](https://codecov.io/gh/dperezcabrera/pico-auth)
[](https://dperezcabrera.github.io/pico-auth/)
**Minimal JWT auth server for the Pico ecosystem.**
Pico-Auth is a ready-to-run authentication server built on top of the [pico-framework](https://github.com/dperezcabrera/pico-ioc) stack. It provides:
- **RS256 JWT tokens** with auto-generated RSA key pairs
- **Refresh token rotation** with SHA-256 hashed storage
- **RBAC** with four built-in roles: `superadmin`, `org_admin`, `operator`, `viewer`
- **OIDC discovery** endpoints (`.well-known/openid-configuration`, JWKS)
- **Bcrypt password hashing** (72-byte input limit enforced)
- **Zero-config startup** with auto-created admin user
> Requires Python 3.11+
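The refresh-token scheme above stores only a SHA-256 hash of each token, so a database leak does not expose usable tokens. A minimal sketch of that idea (illustrative only, not Pico-Auth's actual code):

```python
import hashlib
import secrets

def issue_refresh_token():
    """Generate a refresh token; only its SHA-256 hash would be persisted."""
    token = secrets.token_hex(32)                        # sent to the client
    stored = hashlib.sha256(token.encode()).hexdigest()  # kept in the database
    return token, stored

def matches(presented, stored):
    """On refresh, hash the presented token and compare against storage."""
    return hashlib.sha256(presented.encode()).hexdigest() == stored

token, stored = issue_refresh_token()
print(matches(token, stored))  # True
```

On rotation, the old hash is invalidated and a fresh token/hash pair is issued, so a stolen refresh token can be used at most once.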
---
## Architecture
Pico-Auth uses the full Pico stack with dependency injection:
| Layer | Component | Decorator |
|-------|-----------|-----------|
| Config | `AuthSettings` | `@configured(prefix="auth")` |
| Models | `User`, `RefreshToken` | SQLAlchemy `AppBase` |
| Repository | `UserRepository`, `RefreshTokenRepository` | `@component` |
| Service | `AuthService` | `@component` |
| Security | `JWTProvider`, `PasswordService`, `LocalJWKSProvider` | `@component` |
| Routes | `AuthController`, `OIDCController` | `@controller` |
---
## Installation
```bash
pip install -e ".[dev]"
```
---
## Quick Start
### 1. Run the Server
```bash
python -m pico_auth.main
```
The server starts on `http://localhost:8100` with:
- An auto-created admin user (`admin@pico.local` / `admin`)
- SQLite database at `auth.db`
- RSA keys at `~/.pico-auth/`
### 2. Register a User
```bash
curl -X POST http://localhost:8100/api/v1/auth/register \
-H "Content-Type: application/json" \
-d '{"email": "alice@example.com", "password": "secret123", "display_name": "Alice"}'
```
### 3. Login
```bash
curl -X POST http://localhost:8100/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"email": "alice@example.com", "password": "secret123"}'
```
Returns:
```json
{
"access_token": "eyJhbGciOiJSUzI1NiIs...",
"refresh_token": "a1b2c3d4...",
"token_type": "Bearer",
"expires_in": 900
}
```
### 4. Access Protected Endpoint
```bash
curl http://localhost:8100/api/v1/auth/me \
-H "Authorization: Bearer eyJhbGciOiJSUzI1NiIs..."
```
---
## API Endpoints
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| POST | `/api/v1/auth/register` | No | Register a new user |
| POST | `/api/v1/auth/login` | No | Login and get tokens |
| POST | `/api/v1/auth/refresh` | No | Refresh access token |
| GET | `/api/v1/auth/me` | Bearer | Get current user profile |
| POST | `/api/v1/auth/me/password` | Bearer | Change password |
| GET | `/api/v1/auth/users` | Admin | List all users |
| PUT | `/api/v1/auth/users/{id}/role` | Admin | Update user role |
| GET | `/api/v1/auth/jwks` | No | JSON Web Key Set |
| GET | `/.well-known/openid-configuration` | No | OIDC discovery |
---
## Configuration
All settings are loaded from `application.yaml` and can be overridden with environment variables:
```yaml
auth:
  data_dir: "~/.pico-auth"              # RSA key storage
  access_token_expire_minutes: 15       # JWT lifetime
  refresh_token_expire_days: 7          # Refresh token lifetime
  issuer: "http://localhost:8100"       # JWT issuer claim
  audience: "pico-bot"                  # JWT audience claim
  auto_create_admin: true               # Create admin on startup
  admin_email: "admin@pico.local"       # Default admin email
  admin_password: "admin"               # Default admin password

database:
  url: "sqlite+aiosqlite:///auth.db"    # Database URL
  echo: false                           # SQL logging

auth_client:
  enabled: true                         # Enable auth middleware
  issuer: "http://localhost:8100"       # Must match auth.issuer
  audience: "pico-bot"                  # Must match auth.audience

fastapi:
  title: "Pico Auth API"
  version: "0.1.0"
```
Environment variable override example:
```bash
AUTH_ISSUER=https://auth.myapp.com AUTH_ADMIN_PASSWORD=strong-password python -m pico_auth.main
```
---
## JWT Token Claims
Access tokens include:
| Claim | Description |
|-------|-------------|
| `sub` | User ID |
| `email` | User email |
| `role` | User role (`superadmin`, `org_admin`, `operator`, `viewer`) |
| `org_id` | Organization ID |
| `iss` | Issuer URL |
| `aud` | Audience |
| `iat` | Issued at (Unix timestamp) |
| `exp` | Expiration (Unix timestamp) |
| `jti` | Unique token ID |
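A payload matching the table above can be built like this — an illustrative sketch of the claim set, not Pico-Auth's internal `JWTProvider` (the 900-second lifetime follows the default `access_token_expire_minutes: 15`):

```python
import time
import uuid

def build_claims(user_id, email, role, org_id):
    """Illustrative access-token claim set matching the table above."""
    now = int(time.time())
    return {
        "sub": user_id,
        "email": email,
        "role": role,
        "org_id": org_id,
        "iss": "http://localhost:8100",  # default auth.issuer
        "aud": "pico-bot",               # default auth.audience
        "iat": now,
        "exp": now + 15 * 60,            # access_token_expire_minutes: 15
        "jti": str(uuid.uuid4()),        # unique per token
    }

claims = build_claims("42", "alice@example.com", "viewer", "org-1")
print(claims["exp"] - claims["iat"])  # 900
```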
---
## Ecosystem
Pico-Auth is built on:
| Package | Role |
|---------|------|
| [pico-ioc](https://github.com/dperezcabrera/pico-ioc) | Dependency injection container |
| [pico-boot](https://github.com/dperezcabrera/pico-boot) | Bootstrap and plugin discovery |
| [pico-fastapi](https://github.com/dperezcabrera/pico-fastapi) | FastAPI integration with `@controller` |
| [pico-sqlalchemy](https://github.com/dperezcabrera/pico-sqlalchemy) | Async SQLAlchemy with `SessionManager` |
| [pico-client-auth](https://github.com/dperezcabrera/pico-client-auth) | JWT auth middleware with `SecurityContext` |
---
## Development
```bash
# Install in dev mode
pip install -e ".[dev]"
# Run tests
pytest tests/ -v
# Run with coverage
pytest --cov=pico_auth --cov-report=term-missing tests/
# Full test matrix
tox
# Lint
ruff check pico_auth/ tests/
```
---
## License
MIT - [LICENSE](./LICENSE)
| text/markdown | David Perez | null | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Security",
"Topic :: Internet :: WWW/HTTP"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pico-ioc[yaml]>=2.2.4",
"pico-boot>=0.1.1",
"pico-fastapi>=0.3.0",
"pico-sqlalchemy>=0.3.0",
"pico-client-auth>=0.2.1",
"python-jose[cryptography]>=3.5",
"bcrypt>=5.0",
"aiosqlite>=0.22",
"uvicorn[standard]>=0.41",
"pytest>=9.0; extra == \"dev\"",
"pytest-asyncio>=1.3; extra == \"dev\"",
"httpx>=0.28; extra == \"dev\"",
"ruff; extra == \"dev\"",
"tox; extra == \"dev\"",
"mkdocs-material>=9.0; extra == \"docs\"",
"mkdocstrings[python]>=0.24; extra == \"docs\"",
"mkdocs-minify-plugin; extra == \"docs\"",
"mkdocs-git-revision-date-localized-plugin; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/dperezcabrera/pico-auth",
"Documentation, https://dperezcabrera.github.io/pico-auth/",
"Repository, https://github.com/dperezcabrera/pico-auth",
"Issues, https://github.com/dperezcabrera/pico-auth/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:23:16.671195 | pico_auth-0.1.4.tar.gz | 34,840 | ce/97/a2081ee31a13fe3e6213e9dd12cbdcd3e98645d72bfcaf81f8b79ca7c293/pico_auth-0.1.4.tar.gz | source | sdist | null | false | e4784c7ad256ed0cef3bd06248d9f969 | 4868b0b4e2a57968048546ae5427131d2750dc610d7b528ffcfada8e4ff81f94 | ce97a2081ee31a13fe3e6213e9dd12cbdcd3e98645d72bfcaf81f8b79ca7c293 | null | [
"LICENSE"
] | 201 |
2.4 | handwrytten | 1.2.0 | Official Python SDK for the Handwrytten API — send real handwritten notes at scale | # Handwrytten Python SDK
The official Python SDK for the [Handwrytten API](https://www.handwrytten.com/api/) — send real handwritten notes at scale using robots with real pens.
## Installation
```bash
pip install handwrytten
```
## Quick Start
```python
from handwrytten import Handwrytten
client = Handwrytten("your_api_key")
# Browse available cards and fonts
cards = client.cards.list()
fonts = client.fonts.list()
# Send a handwritten note in one call
result = client.orders.send(
card_id=cards[0].id,
font=fonts[0].id,
message="Thanks for being an amazing customer!",
wishes="Best,\nThe Handwrytten Team",
sender={
"firstName": "David",
"lastName": "Wachs",
"street1": "100 S Mill Ave",
"city": "Tempe",
"state": "AZ",
"zip": "85281",
},
recipient={
"firstName": "Jane",
"lastName": "Doe",
"street1": "123 Main Street",
"city": "Phoenix",
"state": "AZ",
"zip": "85001",
},
)
```
## Usage
### Send a Single Note
```python
result = client.orders.send(
card_id="12345",
font="hwDavid",
message="Thank you for your business!",
wishes="Best,\nThe Team",
sender={
"firstName": "David",
"lastName": "Wachs",
"street1": "100 S Mill Ave",
"city": "Tempe",
"state": "AZ",
"zip": "85281",
},
recipient={
"firstName": "Jane",
"lastName": "Doe",
"street1": "123 Main St",
"city": "Phoenix",
"state": "AZ",
"zip": "85001",
},
)
```
### Send Bulk — Multiple Recipients with Per-Recipient Overrides
Each recipient can have its own `message`, `wishes`, and `sender`. Top-level values serve as defaults for any recipient that doesn't specify its own.
```python
result = client.orders.send(
card_id="12345",
font="hwDavid",
sender={"firstName": "David", "lastName": "Wachs",
"street1": "100 S Mill Ave", "city": "Tempe",
"state": "AZ", "zip": "85281"},
recipient=[
{
"firstName": "Jane",
"lastName": "Doe",
"street1": "123 Main St",
"city": "Phoenix",
"state": "AZ",
"zip": "85001",
"message": "Thanks for your loyalty, Jane!",
"wishes": "Warmly,\nThe Team",
},
{
"firstName": "John",
"lastName": "Smith",
"street1": "456 Oak Ave",
"city": "Tempe",
"state": "AZ",
"zip": "85281",
"message": "Great working with you, John!",
"sender": {"firstName": "Other", "lastName": "Person",
"street1": "789 Elm St", "city": "Mesa",
"state": "AZ", "zip": "85201"},
},
],
)
```
### Use Saved Address IDs
If you have addresses saved in your Handwrytten account, pass their IDs directly:
```python
result = client.orders.send(
card_id="12345",
font="hwDavid",
message="Thank you!",
sender=98765, # saved return-address ID
recipient=67890, # saved recipient address ID
)
# Mix saved IDs and inline addresses in a bulk send
result = client.orders.send(
card_id="12345",
font="hwDavid",
message="Hello!",
sender=98765,
recipient=[
67890, # saved address ID
{"firstName": "Jane", "lastName": "Doe",
"street1": "123 Main St", "city": "Phoenix",
"state": "AZ", "zip": "85001"},
],
)
```
### Use Typed Models
```python
from handwrytten import Recipient, Sender
sender = Sender(
first_name="David",
last_name="Wachs",
street1="100 S Mill Ave",
city="Tempe",
state="AZ",
zip="85281",
)
recipient = Recipient(
first_name="Jane",
last_name="Doe",
street1="123 Main Street",
city="Phoenix",
state="AZ",
zip="85001",
)
result = client.orders.send(
card_id="12345",
font="hwDavid",
message="Welcome aboard!",
sender=sender,
recipient=recipient,
)
```
### Custom Cards
Create custom cards with your own cover images and logos.
```python
# 1. Get available card dimensions
dims = client.custom_cards.dimensions()
for d in dims:
print(d.id, d) # e.g. "1 7.000x5.000 flat (landscape)"
# Filter by format and/or orientation
flat_dims = client.custom_cards.dimensions(format="flat")
landscape = client.custom_cards.dimensions(format="flat", orientation="landscape")
# 2. Upload a full-bleed cover image (front of card)
cover = client.custom_cards.upload_image(
url="https://example.com/cover.jpg",
image_type="cover",
)
# 3. Upload a logo (appears on the writing side)
logo = client.custom_cards.upload_image(
url="https://example.com/logo.png",
image_type="logo",
)
# Or upload from a local file
logo = client.custom_cards.upload_image(
file_path="/path/to/logo.png",
image_type="logo",
)
# 4. Check image quality (optional)
check = client.custom_cards.check_image(image_id=logo.id)
# 5. Create the custom card
card = client.custom_cards.create(
name="My Custom Card",
dimension_id=dims[0].id, # card dimension
cover_id=cover.id, # front cover image
header_logo_id=logo.id, # logo on writing side
header_logo_size_percent=80,
)
# 6. Use the new card to send orders
client.orders.send(
card_id=str(card.card_id),
font="hwDavid",
message="Hello from our custom card!",
recipient={...},
)
```
Custom cards support text and logos in multiple zones:
| Zone | Logo field | Text field | Font field |
|---|---|---|---|
| Header (top of writing side) | `header_logo_id` | `header_text` | `header_font_id` |
| Main (center, folded cards) | `main_logo_id` | `main_text` | `main_font_id` |
| Footer (bottom of writing side) | `footer_logo_id` | `footer_text` | `footer_font_id` |
| Back | `back_logo_id` | `back_text` | `back_font_id` |
| Front cover | `cover_id` | — | — |
| Back cover | `back_cover_id` | — | — |
Font IDs for text zones come from `client.fonts.list_for_customizer()` (printed/typeset fonts); these are distinct from the handwriting fonts returned by `client.fonts.list()` and used for order messages.
### Manage Custom Images
```python
# List all uploaded images
images = client.custom_cards.list_images()
for img in images:
print(img.id, img.image_type, img.image_url)
# Filter by type
covers = client.custom_cards.list_images(image_type="cover")
logos = client.custom_cards.list_images(image_type="logo")
# Get details of a custom card
card = client.custom_cards.get(card_id=456)
# Delete an image
client.custom_cards.delete_image(image_id=123)
# Delete a custom card
client.custom_cards.delete(card_id=456)
```
### Browse Cards and Fonts
```python
# Card templates
cards = client.cards.list()
card = client.cards.get("12345")
categories = client.cards.categories()
# Handwriting fonts (for orders)
fonts = client.fonts.list()
for font in fonts:
print(f"{font.id}: {font.label}")
# Customizer fonts (for custom card text zones)
customizer_fonts = client.fonts.list_for_customizer()
```
### Gift Cards and Inserts
```python
# List gift cards with their denominations (price points)
gift_cards = client.gift_cards.list()
for gc in gift_cards:
print(f"{gc.title}: {len(gc.denominations)} denominations")
for d in gc.denominations:
print(f" ${d.nominal} (price: ${d.price})")
# Include a gift card denomination in an order
client.orders.send(
card_id="12345",
font="hwDavid",
message="Enjoy!",
denomination_id=gc.denominations[0].id,
recipient={...},
)
# List inserts (optionally include historical/discontinued)
inserts = client.inserts.list()
all_inserts = client.inserts.list(include_historical=True)
# Include an insert in an order
client.orders.send(
card_id="12345",
font="hwDavid",
message="Hello!",
insert_id=inserts[0].id,
recipient={...},
)
```
### QR Codes
Create QR codes and attach them to custom cards.
```python
from handwrytten import QRCodeLocation
# Create a QR code
qr = client.qr_codes.create(name="Website Link", url="https://example.com")
# List existing QR codes
qr_codes = client.qr_codes.list()
# Browse available frames (decorative borders around the QR code)
frames = client.qr_codes.frames()
# Attach a QR code to a custom card
card = client.custom_cards.create(
name="Card with QR",
dimension_id=dims[0].id,
cover_id=cover.id,
qr_code_id=int(qr.id),
qr_code_location=QRCodeLocation.FOOTER, # HEADER, FOOTER, or MAIN
qr_code_size_percent=30,
qr_code_align="right",
)
# Delete a QR code
client.qr_codes.delete(qr_code_id=int(qr.id))
```
### Address Book
Save and manage recipient and sender addresses, then use their IDs when sending orders.
```python
# Save a sender (return address)
sender_id = client.address_book.add_sender(
first_name="David",
last_name="Wachs",
street1="100 S Mill Ave",
city="Tempe",
state="AZ",
zip="85281",
)
# Save a recipient
recipient_id = client.address_book.add_recipient(
first_name="Jane",
last_name="Doe",
street1="123 Main St",
city="Phoenix",
state="AZ",
zip="85001",
)
# Send using saved IDs
client.orders.send(
card_id="12345",
font="hwDavid",
message="Hello!",
sender=sender_id,
recipient=recipient_id,
)
# Update a recipient
client.address_book.update_recipient(
address_id=recipient_id,
street1="456 New St",
city="Scottsdale",
)
# List saved addresses
senders = client.address_book.list_senders()
recipients = client.address_book.list_recipients()
for r in recipients:
print(r.id, r) # e.g. "123 Jane Doe, 456 New St, Scottsdale, AZ 85001"
# Delete addresses
client.address_book.delete_recipient(address_id=recipient_id)
client.address_book.delete_sender(address_id=sender_id)
# Batch delete
client.address_book.delete_recipient(address_ids=[1, 2, 3])
# Countries and states
countries = client.address_book.countries()
states = client.address_book.states("US")
```
### Signatures
List the user's saved handwriting signatures for use in orders.
```python
signatures = client.auth.list_signatures()
for sig in signatures:
print(f" [{sig.id}] preview={sig.preview}")
```
### Two-Step Basket Workflow
For finer control, use `client.basket` directly instead of `client.orders.send()`:
```python
# Step 1: Add order(s) to the basket
client.basket.add_order(
card_id="12345",
font="hwDavid",
addresses=[{
"firstName": "Jane",
"lastName": "Doe",
"street1": "123 Main St",
"city": "Phoenix",
"state": "AZ",
"zip": "85001",
"message": "Hello!",
}],
)
# Step 2: Submit the basket
result = client.basket.send()
# Inspect the basket before sending
basket = client.basket.list() # all items with totals
item = client.basket.get_item(9517) # single item by basket_id
n = client.basket.count() # number of items
# Remove a specific item or clear everything
client.basket.remove(basket_id=9517)
client.basket.clear()
# List previously submitted baskets
past = client.orders.list_past_baskets(page=1)
```
### Error Handling
```python
from handwrytten import (
HandwryttenError,
AuthenticationError,
BadRequestError,
RateLimitError,
)
try:
result = client.orders.send(...)
except AuthenticationError:
print("Check your API key")
except BadRequestError as e:
print(f"Invalid request: {e.message}")
print(f"Details: {e.response_body}")
except RateLimitError as e:
print(f"Rate limited — retry after {e.retry_after}s")
except HandwryttenError as e:
print(f"API error: {e}")
```
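Even with automatic retries configured, a long-running job may want to honor `retry_after` itself once the client gives up. A generic sketch of that pattern, using a stand-in exception class so the snippet is self-contained (the real `RateLimitError` comes from `handwrytten`):

```python
import time

class FakeRateLimitError(Exception):
    """Stand-in for handwrytten.RateLimitError in this sketch."""
    def __init__(self, retry_after):
        super().__init__("rate limited")
        self.retry_after = retry_after

def send_with_backoff(send, attempts=3, rate_limit_error=FakeRateLimitError):
    """Call send(); on a rate limit, sleep for the server-suggested delay and retry."""
    for attempt in range(attempts):
        try:
            return send()
        except rate_limit_error as e:
            if attempt == attempts - 1:
                raise  # out of attempts; let the caller handle it
            time.sleep(e.retry_after)
```

In real code you would pass the SDK's own class, e.g. `send_with_backoff(lambda: client.orders.send(...), rate_limit_error=RateLimitError)`.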
## API Resources
| Resource | Methods |
|---|---|
| `client.auth` | `get_user()`, `login()`, `list_signatures()` |
| `client.cards` | `list()`, `get(id)`, `categories()` |
| `client.custom_cards` | `dimensions()`, `upload_image()`, `check_image()`, `list_images()`, `delete_image()`, `create()`, `get()`, `delete()` |
| `client.fonts` | `list()`, `list_for_customizer()` |
| `client.gift_cards` | `list()` |
| `client.inserts` | `list(include_historical)` |
| `client.qr_codes` | `list()`, `create()`, `delete()`, `frames()` |
| `client.address_book` | `list_recipients()`, `add_recipient()`, `update_recipient()`, `delete_recipient()`, `list_senders()`, `add_sender()`, `delete_sender()`, `countries()`, `states(country)` |
| `client.orders` | `send()`, `get(id)`, `list()`, `list_past_baskets()` |
| `client.basket` | `add_order()`, `send()`, `remove(basket_id)`, `clear()`, `list()`, `get_item(basket_id)`, `count()` |
| `client.prospecting` | `calculate_targets(zip, radius)` |
## Configuration
```python
client = Handwrytten(
api_key="your_key",
timeout=60, # seconds
max_retries=5, # automatic retries with exponential backoff
)
```
## Full Example
See [`examples/example.py`](examples/example.py) for a complete working demo that exercises every resource: listing cards/fonts, sending single and bulk orders, uploading custom images, creating custom cards, and cleanup.
## Requirements
- Python 3.8+
- `requests`
## License
MIT
| text/markdown | null | Handwrytten <contact@handwrytten.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0",
"click>=8.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"responses>=0.23; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://www.handwrytten.com",
"Documentation, https://www.handwrytten.com/api/",
"Repository, https://github.com/handwrytten/handwrytten-python-sdk",
"Issues, https://github.com/handwrytten/handwrytten-python-sdk/issues"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T21:22:55.948840 | handwrytten-1.2.0.tar.gz | 50,373 | f3/01/6d89c54f402c1db8c67efc157ab85c90d947c0878f9d35c5e53cc5ea3a2d/handwrytten-1.2.0.tar.gz | source | sdist | null | false | 86b6282d28b31115e227fd8dc6b45bf3 | a0e9d8b11ed5de264c34d09631b9101e7e6e5cae5f8ecf30ce5bbfd5795e5d40 | f3016d89c54f402c1db8c67efc157ab85c90d947c0878f9d35c5e53cc5ea3a2d | MIT | [
"LICENSE"
] | 210 |
2.4 | ryuuseigun | 0.1.1 | A lightweight, type-safe, async web framework inspired by Flask | # ☄️ Ryūseigun
A lightweight, type-safe, async web framework inspired by [Flask.](https://flask.palletsprojects.com)
---
_Ryūseigun_ is deliberately minimal, embracing a “bring-your-own-X” philosophy. It provides just enough to give you
something that works and leaves plenty of room to do things how you want to.
## Installation
```shell
pip install ryuuseigun
```
You will also need an ASGI server such as [Uvicorn](https://uvicorn.dev/) or
[Granian](https://github.com/emmett-framework/granian). These two servers have been tested with
Ryūseigun; others have not been.
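For example, assuming your application module is saved as `app.py` and exposes a `Ryuuseigun` instance named `app`, a typical Uvicorn invocation looks like:

```shell
uvicorn app:app --host 127.0.0.1 --port 8000 --reload
```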
## Minimal Example
```py
from ryuuseigun import Ryuuseigun
app = Ryuuseigun(__name__)
```
## Kitchen Sink Example
```py
from json import loads, dumps
from ryuuseigun import Context, Response, Ryuuseigun
app = Ryuuseigun(
__name__,
# Optional. If omitted, `ctx.url_for(..., full_url=True)` derives origin from request headers.
base_url='http://localhost:8000',
# A route ending with a slash is treated the same as a route without a slash
strict_slashes=False,
# Serve files from this path (e.g. public/favicon.ico -> website.com/favicon.ico)
public_dir='./public',
# Reject request bodies over this limit with HTTP 413 (set None to disable)
max_request_body_size=16 * 1024 * 1024,
# JSON parser/serializer hooks (swap with orjson.loads/orjson.dumps if desired)
loads=loads,
dumps=dumps,
)
# -------------- #
# Error handlers #
# -------------- #
@app.error_handler(Exception) # More specific exception classes will be prioritized
async def handle_exceptions(ctx: Context, e: Exception) -> Response:
return Response(str(e), status=500)
# ---------- #
# Blueprints #
# ---------- #
from ryuuseigun import Blueprint
api_bp = Blueprint('api', url_prefix='/api')
@api_bp.get('/')
async def api_index() -> str:
return 'API docs'
@api_bp.error_handler(Exception) # Blueprint-specific error handlers
async def handle_api_errors(ctx: Context, e: Exception) -> dict:
return {
'success': True,
'message': str(e),
}
users_bp = Blueprint('users', url_prefix='/users')
@users_bp.get('/<username>')
async def get_user(ctx: Context) -> str:
return ctx.request.route_params['username']
api_bp.register_blueprint(users_bp)
app.register_blueprint(api_bp) # GET /api/users/caim -> 'caim'
# ------------------------------------ #
# Request globals & lifecycle handlers #
# ------------------------------------ #
from typing import cast
@app.before_request
async def before_all_requests(ctx: Context):
ctx.g['value'] = 123
@ctx.after_this_request # Called after *this* request
async def after_all_requests(response: Response):
response.set_header('X-My-Value', str(ctx.g['value']))
return response
return None
@app.route('/', methods=['GET'])
async def index(ctx: Context) -> str: # Route handlers can return `str`, `dict`, or `Response` (`Response.stream(...)` for streaming)
my_value = cast(int, ctx.g['value'])
return str(my_value)
# -------------------- #
# ASGI lifespan events #
# -------------------- #
@app.on_startup
async def startup() -> None:
await connect_db()
@app.on_shutdown
async def shutdown() -> None:
await disconnect_db()
# ---------------- #
# Route parameters #
# ---------------- #
@app.post('/multiply/<int:num>/<int:factor>') # HTTP method shortcuts for convenience
async def multiply(ctx: Context) -> dict:  # renamed to avoid clashing with `index` above
num = ctx.request.route_param('num', as_type=int)
factor = ctx.request.route_param('factor', as_type=int)
return {
'num': num,
'factor': factor,
'product': num * factor,
}
# ------------------ #
# Route converters #
# ------------------ #
# Built-in converters: `int`, `float`, and `path`.
# Route matching is specificity-aware: static segments and typed converters win over broader `path` captures.
# Register custom route converters with parse/format behavior:
app.register_converter(
'hex',
regex=r'[0-9a-fA-F]+',
parse=lambda raw: int(raw, 16),
format=lambda value: format(int(value), 'x'),
)
@app.get('/colors/<hex:color>')
async def show_color(ctx: Context) -> dict:
color = ctx.request.route_param('color') # int (parsed from hex)
return {'decimal': color}
@app.get('/links/color')
async def color_link(ctx: Context) -> dict:
# Uses converter `format`: -> /colors/ff
return {'url': ctx.url_for('show_color', color=255)}
# Converters can also be scoped to blueprints:
assets_bp = Blueprint('assets', url_prefix='/assets')
assets_bp.register_converter(
'slugpath',
regex=r'.+',
parse=str,
format=str,
allows_slash=True, # allow values like "images/icons/logo.svg"
)
@assets_bp.get('/<slugpath:key>')
async def get_asset(ctx: Context) -> dict:
return {'key': ctx.request.route_param('key')}
app.register_blueprint(assets_bp)
# --------------------- #
# Request body (async) #
# --------------------- #
@app.post('/upload')
async def upload(ctx: Context) -> dict:
total = 0
async for chunk in ctx.request.iter_body():
total += len(chunk)
return {'bytes': total}
@app.post('/json')
async def json_endpoint(ctx: Context) -> dict:
payload = await ctx.request.json_async()
return {'ok': True, 'payload': payload}
# In ASGI request flow, body parsing is stream-first.
# Use `await request.read()/json_async()/form_async()/payload_async()` for body access.
# ------------------------------ #
# Parsing/coercion customization #
# ------------------------------ #
# Register custom request payload parsers by content-type match:
app.add_request_payload_parser(
'text/csv',
lambda request: [row.split(',') for row in request.body.decode('utf-8').splitlines() if row],
first=True, # check before built-in parsers
)
# Register custom response coercers for arbitrary return types:
class Box:
def __init__(self, value: str):
self.value = value
app.add_response_coercer(
lambda result: Response(f'box:{result.value}', status=201) if isinstance(result, Box) else None,
first=True,
)
# ------------------------------ #
# Conditional caching (optional) #
# ------------------------------ #
from datetime import datetime, timezone
from ryuuseigun.utils import apply_conditional_response, make_etag
@app.get('/assets/app.js')
async def app_js(ctx: Context) -> Response:
body = b'console.log("hello")\n'
response = Response(
body=body,
headers={'content-type': 'application/javascript'},
)
return apply_conditional_response(
ctx,
response,
etag=make_etag(body),
last_modified=datetime(2026, 2, 20, tzinfo=timezone.utc),
cache_control='public, max-age=300',
)
# If the client sends matching `If-None-Match` or `If-Modified-Since`,
# `apply_conditional_response` returns HTTP 304 automatically.
# -------- #
# Sessions #
# -------- #
# Sessions default to an in-memory engine (per process, cookie-identified, not shared across workers).
# You can tune it with `session_ttl`, `session_purge_interval`, and `session_max_entries`, or fully
# replace storage by passing a custom `session_engine` object that implements: `load`, `create`,
# `save`, and `destroy` as async methods.
# Session access is async-first: use `await ctx.session_async()` when you need to create/load a
# session.
@app.get('/login')
async def login(ctx: Context) -> str:
session = await ctx.session_async()
session['user_id'] = 123
return 'ok'
@app.get('/me')
async def me(ctx: Context) -> str:
session = await ctx.session_async()
user_id = session.get('user_id')
return str(user_id or 'anonymous')
@app.get('/logout')
async def logout(ctx: Context) -> str:
session = await ctx.session_async()
session.destroy()
return 'bye'
```
## Testing
```shell
pip install -e ".[test]"
python -m pytest
```
| text/markdown | depthbomb | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pytest>=9.0; extra == \"test\""
] | [] | [] | [] | [
"source, https://github.com/depthbomb/ryuuseigun",
"documentation, https://github.com/depthbomb/ryuuseigun",
"changelog, https://github.com/depthbomb/ryuuseigun/blob/master/CHANGELOG.md"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T21:22:53.938181 | ryuuseigun-0.1.1.tar.gz | 34,171 | cb/1e/b5a72439bbd42c4235ccf774cdc786c4c6aeb781aa05bfface211a905343/ryuuseigun-0.1.1.tar.gz | source | sdist | null | false | 36fdf2a8a498d1f2e440a54786b5747f | 14a68deb2ad729839c9bf2e2412624ae17dfc0fb1d4dd96915ec2cbd29e70a14 | cb1eb5a72439bbd42c4235ccf774cdc786c4c6aeb781aa05bfface211a905343 | Apache-2.0 | [
"LICENSE"
] | 204 |
2.3 | arthur-common | 2.4.46 | Utility code common to Arthur platform components. | # Arthur Common
Arthur Common is a library that contains common operations between Arthur platform services.
## Installation
To install the package, use [Poetry](https://python-poetry.org/):
```bash
poetry add arthur-common
```
or with pip:
```bash
pip install arthur-common
```
## Requirements
- Python 3.13
## Development
To set up the development environment, ensure you have [Poetry](https://python-poetry.org/) installed, then run:
```bash
poetry env use 3.13
poetry install
```
### Running Tests
This project uses [pytest](https://pytest.org/) for testing. To run the tests, execute:
```bash
poetry run pytest
```
## Release process
1. Merge changes into `main` branch
2. Go to **Actions** -> **Arthur Common Version Bump**
3. Click **Run workflow**. The workflow will create a new commit with the version bump, push it back to the same branch it is triggered on (default `main`), and start the release process
4. Watch in [GitHub Actions](https://github.com/arthur-ai/arthur-common/actions) for Arthur Common Release to run
5. Update package version in your project (arthur-engine)
## License
This project is licensed under the MIT License.
## Authors
- Arthur <engineering@arthur.ai>
| text/markdown | Arthur | engineering@arthur.ai | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"datasketches>=5.1.0",
"duckdb>=1.1.3",
"fastapi>=0.115.8",
"fsspec>=2024.10.0",
"litellm<2.0.0,>=1.77.7",
"openinference-semantic-conventions<0.2.0,>=0.1.12",
"pandas<3.0.0,>=2.2.2",
"pydantic>=2",
"simple-settings>=1.2.0",
"types-python-dateutil>=2.9.0",
"types-requests>=2.32.0.20241016",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:22:21.165659 | arthur_common-2.4.46.tar.gz | 58,457 | bd/c5/b2ac200f1e81bf7f8b51dff574e4f004698cd93ccdb8cd95fc2fbb77e9ed/arthur_common-2.4.46.tar.gz | source | sdist | null | false | 6731b9f6abd125759f6862939fccc3bf | c66fc3f99babc415998a7c89bbe7f8b4e29f7dd2cb61952fa6158ba69a3188ae | bdc5b2ac200f1e81bf7f8b51dff574e4f004698cd93ccdb8cd95fc2fbb77e9ed | null | [] | 222 |
2.4 | graphlit-client | 1.0.20260220001 | Graphlit API Python Client | # Python Client for Graphlit Platform
## Overview
The Graphlit Client for Python enables easy interaction with the Graphlit API, allowing developers to execute queries and mutations against the Graphlit service. This document outlines the setup process and provides a basic example of using the client.
## Prerequisites
Before you begin, ensure you have the following:
- Python 3.x installed on your system.
- An active account on the [Graphlit Platform](https://portal.graphlit.dev) with access to the API settings dashboard.
## Installation
To install the Graphlit Client, use pip:
```bash
pip install graphlit-client
```
## Configuration
The Graphlit Client reads the following environment variables for authentication and configuration:
- `GRAPHLIT_ENVIRONMENT_ID`: Your environment ID.
- `GRAPHLIT_ORGANIZATION_ID`: Your organization ID.
- `GRAPHLIT_JWT_SECRET`: Your JWT secret for signing the JWT token.
Alternatively, you can pass these values to the constructor of the Graphlit client.
You can find these values in the API settings dashboard on the [Graphlit Platform](https://portal.graphlit.dev).
For example, to use Graphlit in a Google Colab notebook, assign these values as Colab secrets: `GRAPHLIT_ORGANIZATION_ID`, `GRAPHLIT_ENVIRONMENT_ID`, and `GRAPHLIT_JWT_SECRET`.
```python
import os
from google.colab import userdata
from graphlit import Graphlit
os.environ['GRAPHLIT_ORGANIZATION_ID'] = userdata.get('GRAPHLIT_ORGANIZATION_ID')
os.environ['GRAPHLIT_ENVIRONMENT_ID'] = userdata.get('GRAPHLIT_ENVIRONMENT_ID')
os.environ['GRAPHLIT_JWT_SECRET'] = userdata.get('GRAPHLIT_JWT_SECRET')
graphlit = Graphlit()
```
### Setting Environment Variables
To set these environment variables on your system, use the following commands, replacing `your_value` with the actual values from your account.
For Unix/Linux/macOS:
```bash
export GRAPHLIT_ENVIRONMENT_ID=your_environment_id_value
export GRAPHLIT_ORGANIZATION_ID=your_organization_id_value
export GRAPHLIT_JWT_SECRET=your_secret_key_value
```
For Windows Command Prompt (CMD):
```cmd
set GRAPHLIT_ENVIRONMENT_ID=your_environment_id_value
set GRAPHLIT_ORGANIZATION_ID=your_organization_id_value
set GRAPHLIT_JWT_SECRET=your_secret_key_value
```
For Windows PowerShell:
```powershell
$env:GRAPHLIT_ENVIRONMENT_ID="your_environment_id_value"
$env:GRAPHLIT_ORGANIZATION_ID="your_organization_id_value"
$env:GRAPHLIT_JWT_SECRET="your_secret_key_value"
```
## Support
Please refer to the [Graphlit API Documentation](https://docs.graphlit.dev/).
For support with the Graphlit Client, please submit a [GitHub Issue](https://github.com/graphlit/graphlit-client-python/issues).
For further support with the Graphlit Platform, please join our [Discord](https://discord.gg/ygFmfjy3Qx) community.
| text/markdown | Unstruk Data Inc. | questions@graphlit.com | null | null | null | null | [] | [] | https://github.com/graphlit/graphlit-client-python | null | >=3.6 | [] | [] | [] | [
"httpx",
"pydantic<3.0.0,>=2.0.0",
"PyJWT",
"websockets"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T21:22:09.005755 | graphlit_client-1.0.20260220001.tar.gz | 242,429 | d8/b4/1ef24cb04c783dcdfefe70988b443e532ecb4abaaf843810bb0a397a5519/graphlit_client-1.0.20260220001.tar.gz | source | sdist | null | false | e0aa959d577f6d0b135b7d39dc8b1fdf | 97694f5a1f680ead4779395fc82f92475d2b1d75457f9174e1c0d31c70b9c480 | d8b41ef24cb04c783dcdfefe70988b443e532ecb4abaaf843810bb0a397a5519 | null | [
"LICENSE"
] | 515 |
2.4 | pvw-cli | 1.8.5 | Microsoft Purview CLI with comprehensive automation capabilities | # PURVIEW CLI v1.8.5 - Microsoft Purview Automation & Data Governance
[](https://github.com/Keayoub/pvw-cli/releases/tag/v1.8.5)
[](https://github.com/Keayoub/pvw-cli)
[](https://github.com/Keayoub/pvw-cli)
[](https://github.com/Keayoub/pvw-cli)
> **LATEST UPDATE v1.8.5 (February 6, 2026):**
>
> **CSV Term Import - Multi-Value Separator Standardization & Enhanced Dry-Run**
>
> - **[FIXED]** CSV separator conflicts - all multi-value fields now use semicolon (`;`) exclusively
> - **[IMPROVED]** Dry-run output with visual indicators showing post-processing operations
> - **[ADDED]** Comprehensive CSV column reference documentation
> - **[ADDED]** Enhanced validation for separator usage
> - **[UPDATED]** Sample CSVs with proper semicolon separators
> - **[BREAKING]** Multi-value fields must use semicolon, not comma
>
> **[Full Release Notes v1.8.5](releases/v1.8.5.md)** | **[CSV Column Reference](samples/csv/UC_TERMS_CSV_COLUMNS_REFERENCE.md)** | **[CSV Examples](samples/csv/)**
>
> **Previous Update v1.8.1 (January 28, 2026):**
>
> **Unified Catalog APIs - Analytics & Visualization**
>
> - List Hierarchy Terms with interactive tree visualization
> - Get Term Facets for statistics and filters
> - Get CDE, Data Product, and Objective Facets
> - List Related Entities for relationship exploration
> - UC API Coverage increased to **96%** (+15%)
>
> **[Archive](releases/)**
---
## What is PVW CLI?
**PVW CLI v1.8.5** is a modern, full-featured command-line interface and Python library for Microsoft Purview. It enables automation and management of *all major Purview APIs* with **96% Unified Catalog API coverage** (46 of 48 operations).
### Key Capabilities
**Unified Catalog (UC) Management - 96% Complete** ⭐ *NEW*
- **[NEW]** Glossary hierarchy visualization with interactive tree views
- **[NEW]** Facets & analytics for terms, CDEs, data products, and objectives
- **[NEW]** Complete relationship exploration for terms
- Complete governance domains, glossary terms, data products, OKRs, CDEs
- Relationships API - Link data products/CDEs/terms to entities and columns
- Query APIs - Advanced OData filtering with multi-criteria search
- Policy Management - Complete CRUD for governance and RBAC policies
- Custom Metadata & Attributes - Extensible business metadata and attributes
**Data Operations**
- Entity management (create, update, bulk, import/export)
- Lineage operations with interactive creation and CSV import
- Advanced search and discovery with fixed suggest/autocomplete
- Business metadata with proper scope configuration
**Collections Management - 100% Spec Compliant**
- Full collection CRUD operations with proper API conformance
- Hierarchy and tree operations for collection navigation
- Permission management for collection access control
- Analytics for collection usage and asset tracking
**Automation & Scripting**
- Bulk Operations - Import/export from CSV/JSON with dry-run support
- Scriptable Output - Multiple formats (table, json, jsonc) for PowerShell/bash
- 80+ usage examples and 15+ comprehensive guides
- PowerShell integration with ConvertFrom-Json support
**Legacy API Support**
- Account management with full API compatibility
- Data product management (legacy operations)
- Classification, label, and status management
The CLI is designed for data engineers, stewards, architects, and platform teams to automate, scale, and enhance their Microsoft Purview experience.
### NEW: MCP Server for AI Assistants
**[NEW]** Model Context Protocol (MCP) server enables LLM-powered data governance workflows!
- Natural language interface to Purview catalog
- 20+ tools for AI assistants (Claude, Cline, etc.)
- Automate complex multi-step operations
- See `mcp/README.md` for setup instructions
---
## Release Information
For detailed information about previous releases, see the **[Full Release Archive](releases/)**.
**Latest Release:** [v1.8.1](releases/v1.8.1.md) (January 28, 2026)
**Previous Release:** [v1.6.2](releases/v1.6.2.md) (January 27, 2026)
---
## 📊 API Coverage Summary
### Unified Catalog (UC) - 96% Complete ⭐
| Category | Coverage | Count | Status |
|----------|----------|-------|--------|
| **Glossary Terms** | 100% | 9/9 | ✅ Complete |
| **Domains** | 100% | 5/5 | ✅ Complete |
| **Data Products** | 100% | 8/8 | ✅ Complete |
| **Critical Data Elements (CDE)** | 100% | 8/8 | ✅ Complete |
| **Objectives (OKRs)** | 100% | 6/6 | ✅ Complete |
| **Key Results** | 100% | 5/5 | ✅ Complete |
| **Policies** | 100% | 4/4 | ✅ Complete |
| **Facets & Analytics** | 100% | 4/4 | ✅ Complete |
| **Relationships** | 100% | 3/3 | ✅ Complete |
| **Hierarchy** | 100% | 1/1 | ✅ Complete |
| **TOTAL UC** | **96%** | **46/48** | 🎯 **Production Ready** |
### 🎯 New in v1.8.1 - Six Advanced UC APIs
1. **List Hierarchy Terms (NEW)**
```bash
# Interactive tree view of glossary hierarchy
pvw uc term hierarchy --output tree
# Filter by domain with max depth control
pvw uc term hierarchy --domain-id <domain-guid> --max-depth 3 --output table
```
2. **Get Term Facets (NEW)**
```bash
# Statistics and filters for glossary terms
pvw uc term facets --output table
# JSON export for automation
pvw uc term facets --output json
```
3. **Get CDE Facets (NEW)**
```bash
# Compliance dashboards (GDPR, HIPAA, SOC2)
pvw uc cde facets --domain-id <domain-guid> --output table
# See color-coded compliance summary
pvw uc cde facets --facet-fields "criticality,compliance_status"
```
4. **Get Data Product Facets (NEW)**
```bash
# Analytics for data product portfolios
pvw uc dataproduct facets --output table
# Filter by domain
pvw uc dataproduct facets --domain-id <domain-guid> --output json
```
5. **Get Objective Facets (NEW)**
```bash
# OKR dashboards with health metrics
pvw uc objective facets --output table
# JSON export for dashboards
pvw uc objective facets --output json
```
6. **List Related Entities (NEW)**
```bash
# Complete relationship exploration for terms
pvw uc term relationships --term-id <term-guid> --output table
# Filter by relationship type (Synonym, Related, Parent)
pvw uc term relationships --term-id <term-guid> --relationship-type "Synonym"
```
---
## Getting Started
Follow this short flow to get PVW CLI installed and running quickly.
1. Install (from PyPI):
```bash
pip install pvw-cli
```
For the bleeding edge or development:
```bash
pip install git+https://github.com/Keayoub/Purview_cli.git
# or for editable development
git clone https://github.com/Keayoub/Purview_cli.git
cd Purview_cli
pip install -r requirements.txt
pip install -e .
```
2. Set required environment variables (examples for cmd, PowerShell, and pwsh)
Windows cmd (example):
```cmd
set PURVIEW_ACCOUNT_NAME=your-purview-account
set PURVIEW_ACCOUNT_ID=your-purview-account-id-guid
set PURVIEW_RESOURCE_GROUP=your-resource-group-name
rem optional; leave empty for public cloud
set AZURE_REGION=
```
PowerShell (Windows PowerShell):
```powershell
$env:PURVIEW_ACCOUNT_NAME = "your-purview-account"
$env:PURVIEW_ACCOUNT_ID = "your-purview-account-id-guid"
$env:PURVIEW_RESOURCE_GROUP = "your-resource-group-name"
$env:AZURE_REGION = "" # optional
```
pwsh (PowerShell Core - cross-platform, recommended):
```pwsh
$env:PURVIEW_ACCOUNT_NAME = 'your-purview-account'
$env:PURVIEW_ACCOUNT_ID = 'your-purview-account-id-guid'
$env:PURVIEW_RESOURCE_GROUP = 'your-resource-group-name'
$env:AZURE_REGION = '' # optional
```
3. Authenticate
- Run `az login` (recommended), or
- Provide Service Principal credentials via environment variables.
**Important for Legacy Tenants:**
Some Azure environments use the legacy Purview service principal (`https://purview.azure.net`) instead of the current one (`https://purview.azure.com`). If you encounter authentication errors like:
```
AADSTS500011: The resource principal named https://purview.azure.com was not found in the tenant
```
You need to detect and set the correct authentication scope:
**Step 1: Detect your tenant's Purview service principal**
```powershell
# Check which service principal your tenant uses
az ad sp show --id "73c2949e-da2d-457a-9607-fcc665198967" --query "servicePrincipalNames" -o json
```
Look for one of these values:
- `https://purview.azure.com` or `https://purview.azure.com/` → Use `.com` (default)
- `https://purview.azure.net` or `https://purview.azure.net/` → Use `.net` (legacy)
**Step 2: Set the authentication scope (if using legacy .net)**
If your tenant uses the legacy service principal, set this environment variable:
```powershell
# PowerShell
$env:PURVIEW_AUTH_SCOPE = "https://purview.azure.net/.default"
# Or add to your profile for persistence
Add-Content $PROFILE "`n`$env:PURVIEW_AUTH_SCOPE = 'https://purview.azure.net/.default'"
```
```bash
# Bash/Linux
export PURVIEW_AUTH_SCOPE="https://purview.azure.net/.default"
# Or add to ~/.bashrc for persistence
echo 'export PURVIEW_AUTH_SCOPE="https://purview.azure.net/.default"' >> ~/.bashrc
```
```cmd
# Windows CMD
set PURVIEW_AUTH_SCOPE=https://purview.azure.net/.default
```
**Note:** Most modern Azure tenants use `https://purview.azure.com` (default), but some legacy or special environments (test, government clouds) may still use `https://purview.azure.net`. Always verify using the command above if you encounter authentication issues.
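In client code, the same scope selection can be sketched as a plain environment lookup. This is a minimal illustration (not the CLI's actual resolution logic); the default mirrors the note above:

```python
import os

# Default to the modern resource; the legacy .net scope can be set via PURVIEW_AUTH_SCOPE.
DEFAULT_SCOPE = "https://purview.azure.com/.default"

def resolve_purview_scope():
    """Return the OAuth scope to request for Purview tokens."""
    return os.getenv("PURVIEW_AUTH_SCOPE", DEFAULT_SCOPE)

if __name__ == "__main__":
    print(resolve_purview_scope())
```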
4. Try a few commands:
```bash
# List governance domains
pvw uc domain list
# Search
pvw search query --keywords="customer" --limit=5
# Get help
pvw --help
pvw uc --help
```
For more advanced usage, see the documentation in `doc/` or the project docs: <https://pvw-cli.readthedocs.io/>
---
## Quick Start Examples
### Collections Management (v1.6.2+)
```bash
# Create a new collection
pvw collections create \
--name "Data Engineering" \
--friendly-name "Data Engineering Team" \
--description "Collection for DE team assets"
# List collection hierarchy
pvw collections read-hierarchy --collection-name "Data Engineering"
# Update collection
pvw collections update \
--name "Data Engineering" \
--friendly-name "Data Engineering (Updated)"
# Manage collection permissions
pvw collections read-permissions --collection-name "Data Engineering"
```
### Lineage Management
```bash
# Create column-level lineage
pvw lineage create-column \
--process-name "ETL_Sales_Transform" \
--source-table-guid "9ebbd583-4987-4d1b-b4f5-d8f6f6f60000" \
--target-table-guids "c88126ba-5fb5-4d33-bbe2-5ff6f6f60000" \
--column-mapping "ProductID:ProductID,Name:Name"
# Import lineage from CSV
pvw lineage import samples/csv/lineage_with_columns.csv
# List column lineages
pvw lineage list-column --format table
```
### Governance & Relationships
```bash
# Link data product to entity
pvw uc dataproduct link-entity \
--id "dp-sales-2024" \
--entity-id "4fae348b-e960-42f7-834c-38f6f6f60000" \
--type-name "azure_sql_table"
# Link CDE to specific column
pvw uc cde link-entity \
--id "cde-customer-email" \
--entity-id "ea3412c3-7387-4bc1-9923-11f6f6f60000" \
--column-qualified-name "mssql://server/db/schema/table#EmailAddress"
# Query terms by domain
pvw uc term query --domain-ids "finance,sales" --status Approved --top 50
```
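The `--column-qualified-name` value embeds the column name after a `#` separator. A quick stdlib sketch of splitting it (illustrative only; the exact qualified-name grammar varies by source system):

```python
def split_column_qualified_name(qn):
    """Split 'mssql://server/db/schema/table#Column' into (table_qn, column)."""
    table_qn, _, column = qn.partition("#")
    return table_qn, column

table, column = split_column_qualified_name(
    "mssql://server/db/schema/table#EmailAddress"
)
print(table, column)
```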
### Policy & Metadata
```bash
# List all policies
pvw uc policy list
# Create policy
pvw uc policy create --payload-file policy-rbac.json
# Import business metadata
pvw uc custom-metadata import --file business_concept.csv
# Add metadata to entity
pvw uc custom-metadata add \
--guid "4fae348b-e960-42f7-834c-38f6f6f60000" \
--name "BusinessConcept" \
--attributes '{"Department":"Sales"}'
```
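The `--attributes` value is raw JSON, which is easy to mis-quote by hand in a shell. Generating it with `json.dumps` sidesteps escaping mistakes (a small sketch; the attribute values are placeholders):

```python
import json

# Placeholder business-metadata values for illustration.
attributes = {"Department": "Sales"}

# Compact JSON string suitable for passing to --attributes.
payload = json.dumps(attributes, separators=(",", ":"))
print(payload)
```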
---
## Installation
You can install PVW CLI in two ways:
1. **From PyPI (recommended for most users):**
```bash
pip install pvw-cli
```
2. **Directly from the GitHub repository (for latest/dev version):**
```bash
pip install git+https://github.com/Keayoub/Purview_cli.git
```
Or for development (editable install):
```bash
git clone https://github.com/Keayoub/Purview_cli.git
cd Purview_cli
pip install -r requirements.txt
pip install -e .
```
---
## Requirements
- Python 3.8+
- Azure CLI (`az login`) or Service Principal credentials
- Microsoft Purview account
---
## Getting Started
1. **Install**
```bash
pip install pvw-cli
```
2. **Set Required Environment Variables**
```cmd
rem Required for Purview API access
set PURVIEW_ACCOUNT_NAME=your-purview-account
set PURVIEW_ACCOUNT_ID=your-purview-account-id-guid
set PURVIEW_RESOURCE_GROUP=your-resource-group-name
rem Optional (e.g. 'china', 'usgov'); leave empty for public cloud
set AZURE_REGION=
```
3. **Authenticate**
- Azure CLI: `az login`
- Or set Service Principal credentials as environment variables
4. **Run a Command**
```bash
pvw search query --keywords="customer" --limit=5
```
5. **See All Commands**
```bash
pvw --help
```
---
## Authentication
PVW CLI supports multiple authentication methods for connecting to Microsoft Purview, powered by Azure Identity's `DefaultAzureCredential`. This allows you to use the CLI securely in local development, CI/CD, and production environments.
### 1. Azure CLI Authentication (Recommended for Interactive Use)
- Run `az login` to authenticate interactively with your Azure account.
- The CLI will automatically use your Azure CLI credentials.
### 2. Service Principal Authentication (Recommended for Automation/CI/CD)
Set the following environment variables before running any PVW CLI command:
- `AZURE_CLIENT_ID` (your Azure AD app registration/client ID)
- `AZURE_TENANT_ID` (your Azure AD tenant ID)
- `AZURE_CLIENT_SECRET` (your client secret)
**Example (Windows):**
```cmd
set AZURE_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
set AZURE_TENANT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
set AZURE_CLIENT_SECRET=your-client-secret
```
**Example (Linux/macOS):**
```bash
export AZURE_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export AZURE_TENANT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export AZURE_CLIENT_SECRET=your-client-secret
```
### 3. Managed Identity (for Azure VMs, App Services, etc.)
If running in Azure with a managed identity, no extra configuration is needed. The CLI will use the managed identity automatically.
### 4. Visual Studio/VS Code Authentication
If you are signed in to Azure in Visual Studio or VS Code, `DefaultAzureCredential` can use those credentials as a fallback.
---
**Note:**
- The CLI will try all supported authentication methods in order. The first one that works will be used.
- For most automation and CI/CD scenarios, service principal authentication is recommended.
- For local development, Azure CLI authentication is easiest.
For more details, see the [Azure Identity documentation](https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme?view=azure-python).
---
## Output Formats & Scripting Integration
PVW CLI supports multiple output formats to fit different use cases - from human-readable tables to machine-parseable JSON.
### Output Format Options
All `list` commands now support the `--output` parameter with three formats:
1. **`table`** (default) - Rich formatted table with colors for human viewing
2. **`json`** - Plain JSON for scripting with PowerShell, bash, jq, etc.
3. **`jsonc`** - Colored JSON with syntax highlighting for viewing
### PowerShell Integration
The `--output json` format produces plain JSON that works perfectly with PowerShell's `ConvertFrom-Json`:
```powershell
# Get all terms as PowerShell objects
$domainId = "59ae27b5-40bc-4c90-abfe-fe1a0638fe3a"
$terms = py -m purviewcli uc term list --domain-id $domainId --output json | ConvertFrom-Json
# Access properties
Write-Host "Found $($terms.Count) terms"
foreach ($term in $terms) {
Write-Host " • $($term.name) - $($term.status)"
}
# Filter and export
$draftTerms = $terms | Where-Object { $_.status -eq "Draft" }
$draftTerms | Export-Csv -Path "draft_terms.csv" -NoTypeInformation
# Group by status
$terms | Group-Object status | Format-Table Count, Name
```
### Bash/Linux Integration
Use `jq` for JSON processing in bash:
```bash
# Get domain ID
DOMAIN_ID="59ae27b5-40bc-4c90-abfe-fe1a0638fe3a"
# Get term names only
pvw uc term list --domain-id $DOMAIN_ID --output json | jq -r '.[] | .name'
# Count terms
pvw uc term list --domain-id $DOMAIN_ID --output json | jq 'length'
# Filter by status
pvw uc term list --domain-id $DOMAIN_ID --output json | jq '.[] | select(.status == "Draft")'
# Group by status
pvw uc term list --domain-id $DOMAIN_ID --output json | jq 'group_by(.status) | map({status: .[0].status, count: length})'
# Save to file
pvw uc term list --domain-id $DOMAIN_ID --output json > terms.json
```
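If `jq` is not available, the same grouping can be done by piping `--output json` into a short Python script. This is a sketch; it assumes each term object carries `name` and `status` keys, as in the examples above:

```python
import json
import sys
from collections import Counter

def summarize_terms(terms_json):
    """Count terms per status from a JSON array of term objects."""
    terms = json.loads(terms_json)
    return dict(Counter(t.get("status", "Unknown") for t in terms))

if __name__ == "__main__":
    # e.g. pvw uc term list --domain-id "$DOMAIN_ID" --output json | python summarize.py
    print(json.dumps(summarize_terms(sys.stdin.read()), indent=2))
```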
### Examples by Command
```bash
# Domains
pvw uc domain list --output json | jq '.[] | .name'
# Terms
pvw uc term list --domain-id "abc-123" --output json
pvw uc term list --domain-id "abc-123" --output table # Default
pvw uc term list --domain-id "abc-123" --output jsonc # Colored for viewing
# Data Products
pvw uc dataproduct list --domain-id "abc-123" --output json
```
### Migration from Old --json Flag
**Old (deprecated):**
```bash
pvw uc term list --domain-id "abc-123" --json
```
**New (recommended):**
```bash
pvw uc term list --domain-id "abc-123" --output json # Plain JSON for scripting
pvw uc term list --domain-id "abc-123" --output jsonc # Colored JSON (old behavior)
```
---
## Required Purview Configuration
Before using PVW CLI, you need to set three essential environment variables. Here's how to find them:
### 🔍 **How to Find Your Purview Values**
#### **1. PURVIEW_ACCOUNT_NAME**
- This is your Purview account name as it appears in Azure Portal
- Example: `kaydemopurview`
#### **2. PURVIEW_ACCOUNT_ID**
- This is the GUID that identifies your Purview account for Unified Catalog APIs
- **Important: For most Purview deployments, this is your Azure Tenant ID**
- **Method 1 - Get your Tenant ID (recommended):**
**Bash/Command Prompt:**
```bash
az account show --query tenantId -o tsv
```
**PowerShell:**
```powershell
az account show --query tenantId -o tsv
# Or store directly in environment variable:
$env:PURVIEW_ACCOUNT_ID = az account show --query tenantId -o tsv
```
- **Method 2 - Azure CLI (extract from Atlas endpoint):**
```bash
az purview account show --name YOUR_ACCOUNT_NAME --resource-group YOUR_RG --query endpoints.catalog -o tsv
```
Extract the GUID from the URL (before `-api.purview-service.microsoft.com`)
- **Method 3 - Azure Portal:**
1. Go to your Purview account in Azure Portal
2. Navigate to Properties → Atlas endpoint URL
3. Extract GUID from: `https://GUID-api.purview-service.microsoft.com/catalog`
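Extracting the GUID from the Atlas endpoint can also be scripted. A sketch based on the URL shape shown above:

```python
import re

def account_id_from_atlas_endpoint(url):
    """Pull the account GUID out of 'https://GUID-api.purview-service.microsoft.com/catalog'."""
    m = re.search(
        r"https://([0-9a-fA-F-]{36})-api\.purview-service\.microsoft\.com",
        url,
    )
    return m.group(1) if m else None

print(account_id_from_atlas_endpoint(
    "https://12345678-1234-1234-1234-123456789abc-api.purview-service.microsoft.com/catalog"
))
```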
#### **3. PURVIEW_RESOURCE_GROUP**
- The Azure resource group containing your Purview account
- Example: `fabric-artifacts`
### 📋 **Setting the Variables**
**Windows Command Prompt:**
```cmd
set PURVIEW_ACCOUNT_NAME=your-purview-account
set PURVIEW_ACCOUNT_ID=your-purview-account-id
set PURVIEW_RESOURCE_GROUP=your-resource-group
```
**Windows PowerShell:**
```powershell
$env:PURVIEW_ACCOUNT_NAME="your-purview-account"
$env:PURVIEW_ACCOUNT_ID="your-purview-account-id"
$env:PURVIEW_RESOURCE_GROUP="your-resource-group"
```
**Linux/macOS:**
```bash
export PURVIEW_ACCOUNT_NAME=your-purview-account
export PURVIEW_ACCOUNT_ID=your-purview-account-id
export PURVIEW_RESOURCE_GROUP=your-resource-group
```
**Permanent (Windows Command Prompt):**
```cmd
setx PURVIEW_ACCOUNT_NAME "your-purview-account"
setx PURVIEW_ACCOUNT_ID "your-purview-account-id"
setx PURVIEW_RESOURCE_GROUP "your-resource-group"
```
**Permanent (Windows PowerShell):**
```powershell
[Environment]::SetEnvironmentVariable("PURVIEW_ACCOUNT_NAME", "your-purview-account", "User")
[Environment]::SetEnvironmentVariable("PURVIEW_ACCOUNT_ID", "your-purview-account-id", "User")
[Environment]::SetEnvironmentVariable("PURVIEW_RESOURCE_GROUP", "your-resource-group", "User")
```
### **Debug Environment Issues**
If you experience issues with environment variables between different terminals, use these debug commands:
**Bash** (the multi-line `python -c` below does not work in Command Prompt; use `echo %PURVIEW_ACCOUNT_NAME%` there):
```bash
# Run this to check your current environment
python -c "
import os
print('PURVIEW_ACCOUNT_NAME:', os.getenv('PURVIEW_ACCOUNT_NAME'))
print('PURVIEW_ACCOUNT_ID:', os.getenv('PURVIEW_ACCOUNT_ID'))
print('PURVIEW_RESOURCE_GROUP:', os.getenv('PURVIEW_RESOURCE_GROUP'))
"
```
**PowerShell:**
```powershell
# Check environment variables in PowerShell
python -c "
import os
print('PURVIEW_ACCOUNT_NAME:', os.getenv('PURVIEW_ACCOUNT_NAME'))
print('PURVIEW_ACCOUNT_ID:', os.getenv('PURVIEW_ACCOUNT_ID'))
print('PURVIEW_RESOURCE_GROUP:', os.getenv('PURVIEW_RESOURCE_GROUP'))
"
# Or use PowerShell native commands
Write-Host "PURVIEW_ACCOUNT_NAME: $env:PURVIEW_ACCOUNT_NAME"
Write-Host "PURVIEW_ACCOUNT_ID: $env:PURVIEW_ACCOUNT_ID"
Write-Host "PURVIEW_RESOURCE_GROUP: $env:PURVIEW_RESOURCE_GROUP"
```
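The checks above can be wrapped into one helper that reports exactly what is missing (a sketch using only the standard library):

```python
import os

REQUIRED_VARS = (
    "PURVIEW_ACCOUNT_NAME",
    "PURVIEW_ACCOUNT_ID",
    "PURVIEW_RESOURCE_GROUP",
)

def missing_purview_vars(env=None):
    """Return the names of required Purview variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_purview_vars()
    print("OK" if not missing else "Missing: " + ", ".join(missing))
```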
---
## Search Command (Discovery Query API)
The PVW CLI provides advanced search using the latest Microsoft Purview Discovery Query API:
- Search for assets, tables, files, and more with flexible filters
- Use autocomplete and suggestion endpoints
- Perform faceted, time-based, and entity-type-specific queries
**v1.6.2 Enhancements:**
- Collections API now 100% conformant with Microsoft Purview specification
- Improved search result caching and performance
- Enhanced error handling and diagnostics
- All search commands validated and working correctly (query, browse, suggest, find-table)
### CLI Usage Examples
#### **Multiple Output Formats**
```bash
# 1. Table Format (Default) - Quick overview
pvw search query --keywords="customer" --limit=5
# → Clean table with Name, Type, Collection, Classifications, Qualified Name
# 2. Detailed Format - Human-readable with all metadata
pvw search query --keywords="customer" --limit=5 --detailed
# → Rich panels showing full details, timestamps, search scores
# 3. JSON Format - Complete technical details with syntax highlighting (WELL-FORMATTED)
pvw search query --keywords="customer" --limit=5 --json
# → Full JSON response with indentation, line numbers and color coding
# 4. Table with IDs - For entity operations
pvw search query --keywords="customer" --limit=5 --show-ids
# → Table format + entity GUIDs for copy/paste into update commands
```
#### **Search Operations**
```bash
# Basic search for assets with keyword 'customer'
pvw search query --keywords="customer" --limit=5
# Advanced search with classification filter
pvw search query --keywords="sales" --classification="PII" --objectType="Tables" --limit=10
# Pagination through large result sets
pvw search query --keywords="SQL" --offset=10 --limit=5
# Autocomplete suggestions for partial keyword
pvw search autocomplete --keywords="ord" --limit=3
# Get search suggestions (fuzzy matching)
pvw search suggest --keywords="prod" --limit=2
```
**IMPORTANT - Command Line Quoting:**
```bash
# [OK] CORRECT - Use quotes around keywords
pvw search query --keywords="customer" --limit=5
# [OK] CORRECT - For wildcard searches, use quotes
pvw search query --keywords="*" --limit=5
# ❌ WRONG - Don't use unquoted * (shell expands to file names)
pvw search query --keywords=* --limit=5
# This causes: "Error: Got unexpected extra arguments (dist doc ...)"
```
```bash
# Faceted search with aggregation
pvw search query --keywords="finance" --facetFields="objectType,classification" --limit=5
# Browse entities by type and path
pvw search browse --entityType="Tables" --path="/root/finance" --limit=2
# Time-based search for assets created after a date
pvw search query --keywords="audit" --createdAfter="2024-01-01" --limit=1
# Entity type specific search
pvw search query --keywords="finance" --entityTypes="Files,Tables" --limit=2
```
#### **Usage Scenarios**
- **Daily browsing**: Use default table format for quick scans
- **Understanding assets**: Use `--detailed` for rich information panels
- **Technical work**: Use `--json` for complete API data access
- **Entity operations**: Use `--show-ids` to get GUIDs for updates
### Python Usage Example
```python
from purviewcli.client._search import Search
search = Search()
args = {"--keywords": "customer", "--limit": 5}
search.searchQuery(args)
print(search.payload) # Shows the constructed search payload
```
### Test Examples
See `tests/test_search_examples.py` for ready-to-run pytest examples covering all search scenarios:
- Basic query
- Advanced filter
- Autocomplete
- Suggest
- Faceted search
- Browse
- Time-based search
- Entity type search
---
## Unified Catalog Management (NEW)
PVW CLI now includes comprehensive **Microsoft Purview Unified Catalog (UC)** support with the new `uc` command group. This provides complete management of modern data governance features including governance domains, glossary terms, data products, objectives (OKRs), and critical data elements.
**🎯 Feature Parity**: Full compatibility with [UnifiedCatalogPy](https://github.com/olafwrieden/unifiedcatalogpy) functionality.
See [`doc/commands/unified-catalog.md`](doc/commands/unified-catalog.md) for complete documentation and examples.
### Quick UC Examples
#### **Governance Domains Management**
```bash
# List all governance domains
pvw uc domain list
# Create a new governance domain
pvw uc domain create --name "Finance" --description "Financial data governance domain"
# Get domain details
pvw uc domain get --domain-id "abc-123-def-456"
# Update domain information
pvw uc domain update --domain-id "abc-123" --description "Updated financial governance"
```
#### **Glossary Terms in UC**
```bash
# List all terms in a domain
pvw uc term list --domain-id "abc-123"
pvw uc term list --domain-id "abc-123" --output json # Plain JSON for scripting
pvw uc term list --domain-id "abc-123" --output jsonc # Colored JSON for viewing
# Create a single glossary term
pvw uc term create --name "Customer" --domain-id "abc-123" --description "A person or entity that purchases products"
# Get term details
pvw uc term show --term-id "term-456"
# Update term
pvw uc term update --term-id "term-456" --description "Updated description"
# Delete term
pvw uc term delete --term-id "term-456" --confirm
```
**📦 Bulk Import (NEW)**
Import multiple terms from CSV or JSON files with validation and progress tracking:
```bash
# CSV Import - Preview with dry-run
pvw uc term import-csv --csv-file "samples/csv/uc_terms_bulk_example.csv" --domain-id "abc-123" --dry-run
# CSV Import - Actual import
pvw uc term import-csv --csv-file "samples/csv/uc_terms_bulk_example.csv" --domain-id "abc-123"
# JSON Import - Preview with dry-run
pvw uc term import-json --json-file "samples/json/term/uc_terms_bulk_example.json" --dry-run
# JSON Import - Actual import (domain_id from JSON or override with flag)
pvw uc term import-json --json-file "samples/json/term/uc_terms_bulk_example.json"
pvw uc term import-json --json-file "samples/json/term/uc_terms_bulk_example.json" --domain-id "abc-123"
```
**Bulk Import Features:**
- [OK] Import from CSV or JSON files
- [OK] Dry-run mode to preview before importing
- [OK] Support for multiple owners (Entra ID Object IDs), acronyms, and resources
- [OK] Progress tracking with Rich console output
- [OK] Detailed error messages and summary reports
- [OK] Sequential POST requests (no native bulk endpoint available)
**CSV Format Example:**
```csv
name,description,status,acronym,owner_id,resource_name,resource_url
Customer Acquisition Cost,Cost to acquire new customer,Draft,CAC,<guid>,Metrics Guide,https://docs.example.com
Monthly Recurring Revenue,Predictable monthly revenue,Draft,MRR,<guid>,Finance Dashboard,https://finance.example.com
```
**JSON Format Example:**
```json
{
"terms": [
{
"name": "Data Lake",
"description": "Centralized repository for structured/unstructured data",
"domain_id": "your-domain-id-here",
"status": "Draft",
"acronyms": ["DL"],
"owner_ids": ["<entra-id-object-id-guid>"],
"resources": [{"name": "Architecture Guide", "url": "https://example.com"}]
}
]
}
```
**Important Notes:**
- ⚠️ **Owner IDs must be Entra ID Object IDs (GUIDs)**, not email addresses
- ⚠️ **Terms cannot be "Published" in unpublished domains** - use "Draft" status
- [OK] Sample files available: `samples/csv/uc_terms_bulk_example.csv`, `samples/json/term/uc_terms_bulk_example.json`
- 📖 Complete documentation: [`doc/commands/unified-catalog/term-bulk-import.md`](doc/commands/unified-catalog/term-bulk-import.md)
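A row in the CSV format maps directly onto a term object in the JSON format. A conversion sketch (it assumes the column names from the CSV example above; `domain_id` is supplied by the caller):

```python
import csv
import io

def csv_rows_to_terms(csv_text, domain_id):
    """Convert bulk-import CSV rows into the JSON 'terms' structure."""
    terms = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        terms.append({
            "name": row["name"],
            "description": row["description"],
            "domain_id": domain_id,
            "status": row.get("status") or "Draft",
            "acronyms": [row["acronym"]] if row.get("acronym") else [],
            "owner_ids": [row["owner_id"]] if row.get("owner_id") else [],
            "resources": (
                [{"name": row["resource_name"], "url": row["resource_url"]}]
                if row.get("resource_name") else []
            ),
        })
    return {"terms": terms}
```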
**🗑️ Bulk Delete (NEW)**
Delete all terms in a domain using PowerShell or Python scripts:
```powershell
# PowerShell - Delete all terms with confirmation
.\scripts\delete-all-uc-terms.ps1 -DomainId "abc-123"
# PowerShell - Delete without confirmation
.\scripts\delete-all-uc-terms.ps1 -DomainId "abc-123" -Force
```
```bash
# Python - Delete all terms with confirmation
python scripts/delete_all_uc_terms_v2.py --domain-id "abc-123"
# Python - Delete without confirmation
python scripts/delete_all_uc_terms_v2.py --domain-id "abc-123" --force
```
**Bulk Delete Features:**
- [OK] Interactive confirmation prompts (type "DELETE" to confirm)
- [OK] Beautiful progress display with colors
- [OK] Success/failure tracking per term
- [OK] Detailed summary reports
- [OK] Rate limiting (200ms delay between deletes)
- [OK] Graceful error handling and Ctrl+C support
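The per-term delete loop with rate limiting can be sketched as follows. `delete_term` is a placeholder for whatever client call your script uses; the 200 ms pause mirrors the feature list above:

```python
import time

def delete_terms(term_ids, delete_term, delay_s=0.2):
    """Delete terms sequentially, pausing between calls and tracking outcomes."""
    results = {"succeeded": [], "failed": []}
    for term_id in term_ids:
        try:
            delete_term(term_id)
            results["succeeded"].append(term_id)
        except Exception as exc:  # record the failure and continue instead of aborting
            results["failed"].append((term_id, str(exc)))
        time.sleep(delay_s)  # simple rate limiting between API calls
    return results
```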
#### **Data Products Management**
```bash
# List all data products in a domain
pvw uc dataproduct list --domain-id "abc-123"
# Create a comprehensive data product
pvw uc dataproduct create \
--name "Customer Analytics Dashboard" \
--domain-id "abc-123" \
--description "360-degree customer analytics with behavioral insights" \
--type Analytical \
--status Draft
# Get detailed data product information
pvw uc dataproduct show --product-id "prod-789"
# Update data product (partial updates supported - only specify fields to change)
pvw uc dataproduct update \
--product-id "prod-789" \
--status Published \
--description "Updated comprehensive customer analytics" \
--endorsed
# Update multiple fields at once
pvw uc dataproduct update \
--product-id "prod-789" \
--status Published \
--update-frequency Monthly \
--endorsed
# Delete a data product (with confirmation)
pvw uc dataproduct delete --product-id "prod-789"
# Delete without confirmation prompt
pvw uc dataproduct delete --product-id "prod-789" --yes
```
#### **Objectives & Key Results (OKRs)**
```bash
# List objectives for a domain
pvw uc objective list --domain-id "abc-123"
# Create measurable objectives
pvw uc objective create \
--definition "Improve data quality score by 25% within Q4" \
--domain-id "abc-123" \
--target-value "95" \
--measurement-unit "percentage"
# Track objective progress
pvw uc objective update \
--objective-id "obj-456" \
--domain-id "abc-123" \
--current-value "87" \
--status "in-progress"
```
#### **Critical Data Elements (CDEs)**
```bash
# List critical data elements
pvw uc cde list --domain-id "abc-123"
# Define critical data elements with governance rules
pvw uc cde create \
--name "Social Security Number" \
--data-type "String" \
--domain-id "abc-123" \
--classification "PII" \
--retention-period "7-years"
# Associate CDEs with data assets
pvw uc cde link \
--cde-id "cde-789" \
--domain-id "abc-123" \
--asset-id "ea3412c3-7387-4bc1-9923-11f6f6f60000"
```
#### **Health Monitoring (NEW)**
Monitor governance health and get automated recommendations to improve your data governance posture.
```bash
# List all health findings and recommendations
pvw uc health query
# Filter by severity
pvw uc health query --severity High
pvw uc health query --severity Medium
# Filter by status
pvw uc health query --status NotStarted
pvw uc health query --status InProgress
# Get detailed information about a specific health action
pvw uc health show --action-id "5ea3fc78-6a77-4098-8779-ed81de6f87c9"
# Update health action status
pvw uc health update \
--action-id "5ea3fc78-6a77-4098-8779-ed81de6f87c9" \
--status InProgress \
--reason "Working on assigning glossary terms to data products"
# Get health summary statistics
pvw uc health summary
# Output health findings in JSON format
pvw uc health query --json
```
**Health Finding Types:**
- Missing glossary terms on data products (High)
- Data products without OKRs (Medium)
- Missing data quality scores (Medium)
- Classification gaps on data assets (Medium)
- Description quality issues (Medium)
- Business domains without critical data entities (Medium)
#### **Workflow Management (NEW)**
Manage approval workflows and business process automation in Purview.
```bash
# List all workflows
pvw workflow list
# Get workflow details
pvw workflow get --workflow-id "workflow-123"
# Create a new workflow (requires JSON definition)
pvw workflow create --workflow-id "approval-flow-1" --payload-file workflow-definition.json
# Execute a workflow
pvw workflow execute --workflow-id "workflow-123"
# List workflow executions
pvw workflow executions --workflow-id "workflow-123"
# View specific execution details
pvw workflow execution-details --workflow-id "workflow-123" --execution-id "exec-456"
# Update workflow configuration
pvw workflow update --workflow-id "workflow-123" --payload-file updated-workflow.json
# Delete a workflow
pvw workflow delete --workflow-id "workflow-123"
# Output workflows in JSON format
pvw workflow list --json
```
**Workflow Use Cases:**
- Data access request approvals
- Glossary term certification workflows
- Data product publishing approvals
- Classification review processes
#### **Integrated Workflow Example**
```bash
# 1. Discover assets to govern
pvw search query --keywords="customer" --detailed
# 2. Create governance domain for discovered assets
pvw uc domain create --name "Customer Data" --description "Customer information governance"
# 3. Define governance terms
pvw uc term create --name "Customer PII" --domain-id "new-domain-id" --description "Personal customer information"
# 4. Create data product from discovered assets
pvw uc dataproduct create --name "Customer Master Data" --domain-id "new-domain-id"
# 5. Set governance objectives
pvw uc objective create --definition "Ensure 100% PII classification compliance" --domain-id "new-domain-id"
```
---
## Entity Management & Updates
PVW CLI provides comprehensive entity management capabilities for updating Purview assets like descriptions, classifications, and custom attributes.
### **Entity Update Examples**
#### **Update Asset Descriptions**
```bash
# Update table description using GUID
pvw entity update-attribute \
--guid "ece43ce5-ac45-4e50-a4d0-365a64299efc" \
--attribute "description" \
--value "Updated customer data warehouse table with enhanced analytics"
# Update dataset description using qualified name
pvw entity update-attribute \
--qualifiedName "https://app.powerbi.com/groups/abc-123/datasets/def-456" \
--attribute "description" \
--value "Power BI dataset for customer analytics dashboard"
```
#### **Bulk Entity Operations**
```bash
# Read entity details before updating
pvw entity read-by-attribute \
--guid "ea3412c3-7387-4bc1-9923-11f6f6f60000" \
--attribute "description,classifications,customAttributes"
# Update multiple attributes at once
pvw entity update-bulk \
--input-file entities_to_update.json \
--output-file update_results.json
```
#### **Column-Level Updates**
```bash
# Update specific column descriptions in a table
pvw entity update-attribute \
--guid "column-guid-123" \
--attribute "description" \
--value "Customer unique identifier - Primary Key"
# Add classifications to sensitive columns
pvw entity add-classification \
--guid "column-guid-456" \
--classification "MICROSOFT.PERSONAL.EMAIL"
```
### **Discovery to Update Workflow**
```bash
# 1. Find assets that need updates
pvw search query --keywords="customer table" --show-ids --limit=10
# 2. Get detailed information about a specific asset
pvw entity read-by-attribute --guid "FOUND_GUID" --attribute "description,classifications"
# 3. Update the asset description
pvw entity update-attribute \
--guid "FOUND_GUID" \
--attribute "description" \
--value "Updated description based on business requirements"
# 4. Verify the update
pvw search query --keywords="FOUND_GUID" --detailed
```
---
## Lineage CSV Import & Management
PVW CLI provides powerful lineage management capabilities including CSV-based bulk import for automating data lineage creation.
### **Lineage CSV Import**
Import lineage relationships from CSV files to automate the creation of data flow documentation in Microsoft Purview.
#### **CSV Format**
The CSV file must contain the following columns:
**Required columns:**
- `source_entity_guid` - GUID of the source entity
- `target_entity_guid` - GUID of the target entity
**Optional columns:**
- `relationship_type` - Type of relationship (default: "Process")
- `process_name` - Name of the transformation process
- `description` - Description of the transformation
- `confidence_score` - Confidence score (0-1)
- `owner` - Process owner
- `metadata` - Additional JSON metadata
**Example CSV:**
```csv
source_entity_guid,target_entity_guid,relationship_type,process_name,description,confidence_score,owner,metadata
dcfc99ed-c74d-49aa-bd0b-72f6f6f60000,1db9c650-acfb-4914-8bc5-1cf6f6f60000,Process,Transform_Product_Data,Transform product data for analytics,0.95,data-engineering,"{""tool"": ""Azure Data Factory""}"
```
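A file in this format can also be generated programmatically. Below is a minimal sketch using Python's `csv` and `json` modules; the GUIDs are the placeholder values from the example above, and the row content is illustrative:

```python
import csv
import io
import json

# Column order matches the documented CSV format
FIELDS = [
    "source_entity_guid", "target_entity_guid", "relationship_type",
    "process_name", "description", "confidence_score", "owner", "metadata",
]

rows = [
    {
        "source_entity_guid": "dcfc99ed-c74d-49aa-bd0b-72f6f6f60000",
        "target_entity_guid": "1db9c650-acfb-4914-8bc5-1cf6f6f60000",
        "relationship_type": "Process",
        "process_name": "Transform_Product_Data",
        "description": "Transform product data for analytics",
        "confidence_score": 0.95,
        "owner": "data-engineering",
        # json.dumps produces valid JSON; csv doubles the inner quotes
        "metadata": json.dumps({"tool": "Azure Data Factory"}),
    }
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Writing the same content to a real file (e.g. `lineage_data.csv`) produces input suitable for `pvw lineage validate`.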
#### **Lineage Commands**
```bash
# Validate CSV format before import (no API calls)
pvw lineage validate lineage_data.csv
# Import lineage relationships from CSV
pvw lineage import lineage_data.csv
# Generate sample CSV file with examples
pvw lineage sample output.csv --num-samples 10 --template detailed
# View available CSV templates
pvw lineage templates
```
#### **Available Templates**
- **`basic`** - Minimal columns (source, target, process name)
- **`detailed`** - All columns including metadata and confidence scores
- **`qualified_names`** - Use qualified names instead of GUIDs
#### **Workflow Example**
```bash
# 1. Find entity GUIDs using search
pvw search find-table --name "Product" --schema "dbo" --id-only
# 2. Create CSV file with lineage relationships
# (use the GUIDs from step 1)
# 3. Validate CSV format
pvw lineage validate my_lineage.csv
# Output: SUCCESS: Lineage validation passed (5 rows, 8 columns)
# 4. Import to Purview
pvw lineage import my_lineage.csv
# Output: SUCCESS: Lineage import completed successfully
```
#### **Advanced Features**
- **GUID Validation**: Automatic validation of GUID format with helpful error messages
- **Process Entity Creation**: Creates intermediate "Process" entities to link source→target relationships
- **Metadata Support**: Add custom JSON metadata to each lineage relationship
- **Dry-Run Validation**: Validate CSV format locally before making API calls
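The GUID-format check can be reproduced locally before building a CSV. The sketch below is illustrative (not the CLI's actual implementation) and matches the canonical 8-4-4-4-12 hexadecimal layout:

```python
import re

# Canonical 8-4-4-4-12 hex GUID, case-insensitive
GUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
    re.IGNORECASE,
)

def is_valid_guid(value: str) -> bool:
    """Return True if value looks like a well-formed GUID."""
    return bool(GUID_RE.match(value))

print(is_valid_guid("dcfc99ed-c74d-49aa-bd0b-72f6f6f60000"))  # True
print(is_valid_guid("not-a-guid"))                            # False
```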
**For detailed documentation, see:** [`doc/guides/lineage-csv-import.md`](doc/guides/lineage-csv-import.md)
---
## Data Product Management (Legacy)
PVW CLI also includes the original `data-product` command group for backward compatibility with traditional data product lifecycle management.
See [`doc/commands/data-product.md`](doc/commands/data-product.md) for full documentation and examples.
### Example | text/markdown | null | AYOUB KEBAILI <keayoub@msn.com> | null | AYOUB KEBAILI <keayoub@msn.com> | null | azure, purview, cli, data, catalog, governance, automation, pvw | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Systems Administration",
"Topic :: Database",
"Topic :: Internet :: WWW/HTTP"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"azure-identity>=1.12.0",
"azure-core>=1.24.0",
"click>=8.0.0",
"rich>=12.0.0",
"requests>=2.28.0",
"pandas>=1.5.0",
"aiohttp>=3.8.0",
"pydantic<2.12,>=1.10.0",
"PyYAML>=6.0",
"cryptography<46.0.0,>=41.0.5",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.20.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"isort>=5.10.0; extra == \"dev\"",
"flake8>=5.0.0; extra == \"dev\"",
"mypy>=0.991; extra == \"dev\"",
"pre-commit>=2.20.0; extra == \"dev\"",
"sphinx>=5.0.0; extra == \"docs\"",
"sphinx-rtd-theme>=1.0.0; extra == \"docs\"",
"myst-parser>=0.18.0; extra == \"docs\"",
"pytest>=7.0.0; extra == \"test\"",
"pytest-asyncio>=0.20.0; extra == \"test\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"requests-mock>=1.9.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/Keayoub/pvw-cli",
"Documentation, https://github.com/Keayoub/pvw-cli/wiki",
"Repository, https://github.com/Keayoub/pvw-cli.git",
"Bug Tracker, https://github.com/Keayoub/pvw-cli/issues",
"Source, https://github.com/Keayoub/pvw-cli"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:21:28.832257 | pvw_cli-1.8.5.tar.gz | 287,773 | b9/4d/fd1a5ecfad4e23f1b53e6566690122af1e2767e05119380ca1a82030eb65/pvw_cli-1.8.5.tar.gz | source | sdist | null | false | 20ba535ba1815cc2641f9c8731688697 | 98077ddfa74b49c786858727e62c09f77806b1ad76bffb8d52c7e2ab2669e434 | b94dfd1a5ecfad4e23f1b53e6566690122af1e2767e05119380ca1a82030eb65 | MIT | [] | 226 |
2.4 | sunflare | 0.10.1 | Redsun plugin development toolkit | [](https://pypi.org/project/sunflare)
[](https://pypi.org/project/sunflare)
[](https://codecov.io/gh/redsun-acquisition/sunflare)
[](https://github.com/astral-sh/ruff)
[](https://mypy-lang.org/)
[](https://opensource.org/licenses/Apache-2.0)
# `sunflare`
> [!WARNING]
> This project is still in alpha stage and very unstable. Use at your own risk.
`sunflare` is a software development kit (SDK) which provides common, reusable components for building plugins which can interact with [`redsun`].
The aim is to provide reusable patterns for developing scientific device-orchestration software that leverages the [Bluesky] hardware interface and data model.
For more information, see the [documentation].
[`redsun`]: https://redsun-acquisition.github.io/redsun/
[documentation]: https://redsun-acquisition.github.io/sunflare/
[bluesky]: https://blueskyproject.io/bluesky/main/index.html
| text/markdown | null | Jacopo Abramo <jacopo.abramo@gmail.com> | null | Jacopo Abramo <jacopo.abramo@gmail.com> | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"bluesky>=1.14.6",
"dependency-injector>=4.48.3",
"psygnal>=0.15.1",
"pyyaml>=6.0.3",
"typing-extensions>=4.15.0",
"magicgui>=0.10.1; extra == \"pyqt\"",
"pyqt6>=6.10.2; extra == \"pyqt\"",
"qtpy>=2.4.3; extra == \"pyqt\"",
"magicgui>=0.10.1; extra == \"pyside\"",
"pyside6>=6.9.1; extra == \"pyside\"",
"qtpy>=2.4.3; extra == \"pyside\""
] | [] | [] | [] | [
"bugs, https://github.com/redsun-acquisition/sunflare/issues",
"changelog, https://redsun-acquisition.github.io/sunflare/changelog/",
"homepage, https://github.com/redsun-acquisition/sunflare"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:20:57.290965 | sunflare-0.10.1.tar.gz | 21,087 | 44/2a/126fd16b2c2175e64e325be35f183aca5aedcc5f8dc70630a8f0227f14db/sunflare-0.10.1.tar.gz | source | sdist | null | false | 51bcfabfa007a72cf69cfb16efc9c17c | 5b7448d5f7c2796a7eb7381977e869f5f4fa211e98c46a24bddeed501e28f185 | 442a126fd16b2c2175e64e325be35f183aca5aedcc5f8dc70630a8f0227f14db | null | [
"LICENSE"
] | 605 |
2.4 | kader | 2.5.0 | kader coding agent | # Kader
Kader is an intelligent coding agent designed to assist with software development tasks. It provides a comprehensive framework for building AI-powered agents with advanced reasoning capabilities and tool integration.
## Features
- 🤖 **AI-powered Code Assistance** - Support for multiple LLM providers:
- **Ollama**: Local LLM execution for privacy and speed.
- **Google Gemini**: Cloud-based powerful models via the Google GenAI SDK.
- 🖥️ **Interactive CLI** - Modern TUI interface built with Textual:
- **Lazy Loading**: Efficient directory tree loading for large projects.
- **TODO Management**: Integrated TODO list widget with automatic updates.
- 🛠️ **Tool Integration** - File system, command execution, web search, and more.
- 🧠 **Memory Management** - State persistence, conversation history, and isolated sub-agent memory.
- 🔁 **Session Management** - Save and load conversation sessions.
- ⌨️ **Keyboard Shortcuts** - Efficient navigation and operations.
- 📝 **YAML Configuration** - Agent configuration via YAML files.
- 🔄 **Planner-Executor Framework** - Sophisticated reasoning and acting architecture using task planning and delegation.
- 🗂️ **File System Tools** - Read, write, search, and edit files.
- 🤝 **Agent-As-Tool** - Spawn sub-agents for specific tasks with isolated memory and automated context aggregation.
- 🎯 **Agent Skills** - Modular skill system for specialized domain knowledge and task-specific instructions.
## Installation
### Prerequisites
- Python 3.11 or higher
- [Ollama](https://ollama.ai/) (optional, for local LLMs)
- [uv](https://docs.astral.sh/uv/) package manager (recommended) or [pip](https://pypi.org/project/pip/)
### Using uv (recommended)
```bash
# Clone the repository
git clone https://github.com/your-repo/kader.git
cd kader
# Install dependencies with uv
uv sync
# Run the CLI
uv run python -m cli
```
### Using pip
```bash
# Clone the repository
git clone https://github.com/your-repo/kader.git
cd kader
# Install in development mode
pip install -e .
# Run the CLI
python -m cli
```
## Quick Start
### Running the CLI
```bash
# Run the Kader CLI using uv
uv run python -m cli
# Or using pip
python -m cli
```
### First Steps in CLI
Once the CLI is running:
1. Type any question to start chatting with the agent.
2. Use `/help` to see available commands.
3. Use `/models` to check available models from all providers.
4. The directory tree on the left features **lazy loading**, expanding only when needed.
5. The **TODO list** on the right tracks tasks identified by the planner.
## Configuration
When the kader module is imported for the first time, it automatically creates a `.kader` directory in your home directory with a `.env` file inside it.
### Environment Variables
The application automatically loads environment variables from `~/.kader/.env`:
- `OLLAMA_API_KEY`: API key for Ollama service (if applicable).
- `GOOGLE_API_KEY`: API key for Google Gemini (required for Google Provider).
- Additional variables can be added to the `.env` file and will be automatically loaded.
### Memory and Sessions
Kader stores data in `~/.kader/`:
- Sessions: `~/.kader/memory/sessions/`
- Configuration: `~/.kader/`
- Memory files: `~/.kader/memory/`
- Checkpoints: `~/.kader/memory/sessions/<session-id>/executors/` (Aggregated context from sub-agents)
## CLI Commands
| Command | Description |
|---------|-------------|
| `/help` | Show command reference |
| `/models` | Show available models (Ollama & Google) |
| `/clear` | Clear conversation |
| `/save` | Save current session |
| `/load <id>` | Load a saved session |
| `/sessions` | List saved sessions |
| `/refresh` | Refresh file tree |
| `/exit` | Exit the CLI |
### Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| `Ctrl+Q` | Quit |
| `Ctrl+L` | Clear conversation |
| `Ctrl+S` | Save session |
| `Ctrl+R` | Refresh file tree |
| `Tab` | Navigate panels |
## Project Structure
```
kader/
├── cli/ # Interactive command-line interface
│ ├── app.py # Main application entry point
│ ├── app.tcss # Textual CSS for styling
│ ├── llm_factory.py # Provider selection logic
│ ├── widgets/ # Custom Textual widgets
│ │ ├── conversation.py # Chat display widget
│ │ ├── loading.py # Loading spinner widget
│ │ ├── confirmation.py # Tool/model selection widgets
│ │ └── todo_list.py # TODO tracking widget
│ └── README.md # CLI documentation
├── examples/ # Example implementations
│ ├── memory_example.py # Memory management examples
│ ├── google_example.py # Google Gemini provider examples
│ ├── planner_executor_example.py # Advanced workflow examples
│ ├── skills/ # Agent skills examples
│ │ ├── hello/ # Greeting skill with instructions
│ │ ├── calculator/ # Math calculation skill
│ │ └── react_agent.py # Skills demo with ReAct agent
│ └── README.md # Examples documentation
├── kader/ # Core framework
│ ├── agent/ # Agent implementations (Planning, ReAct)
│ ├── memory/ # Memory management & persistence
│ ├── providers/ # LLM providers (Ollama, Google)
│ ├── tools/ # Tools (File System, Web, Command, AgentTool)
│ ├── prompts/ # Prompt templates (Jinja2)
│ └── utils/ # Utilities (Checkpointer, ContextAggregator)
├── pyproject.toml # Project dependencies
├── README.md # This file
└── uv.lock # Dependency lock file
```
## Core Components
### Agents
Kader provides a robust agent architecture:
- **ReActAgent**: Reasoning and Acting agent that combines thoughts with actions.
- **PlanningAgent**: High-level agent that breaks complex tasks into manageable plans.
- **BaseAgent**: Abstract base class for creating custom agent behaviors.
### LLM Providers
Kader supports multiple backends:
- **OllamaProvider**: Connects to locally running Ollama instances.
- **GoogleProvider**: High-performance access to Gemini models.
### Agent-As-Tool (AgentTool)
The `AgentTool` allows a `PlanningAgent` (Architect) to delegate work to a `ReActAgent` (Worker). It features:
- **Persistent Memory**: Sub-agent conversations are saved to JSON.
- **Context Aggregation**: Sub-agent research and actions are automatically merged into the main session's `checkpoint.md` via `ContextAggregator`.
### Agent Skills
Kader supports a modular skill system for domain-specific knowledge and specialized instructions:
- **Skill Structure**: Skills are defined as directories containing `SKILL.md` files with YAML frontmatter
- **Skill Loading**: Skills are loaded from `~/.kader/skills` (high priority) and `./.kader/` directories
- **Skill Injection**: Available skills are automatically injected into the system prompt
- **Skills Tool**: Agents can load skills dynamically using the `skills_tool`
Example skill structure:
```
~/.kader/skills/hello/
├── SKILL.md
└── scripts/
└── hello.py
```
Example skill (`SKILL.md`):
```yaml
---
name: hello
description: Skill for ALL greeting requests
---
# Hello Skill
This skill provides the greeting format you must follow.
## How to greet
Always greet the user with:
- A warm welcome
- Their name if mentioned
- A friendly emoji
```
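Frontmatter of this shape can be split from the markdown body in a few lines. The sketch below is dependency-free and illustrative; the real loader may use a YAML/frontmatter library instead:

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split a SKILL.md file into (frontmatter dict, markdown body)."""
    if not text.startswith("---"):
        return {}, text
    # "---" delimits the frontmatter block at the top of the file
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

skill = """---
name: hello
description: Skill for ALL greeting requests
---
# Hello Skill
"""
meta, body = parse_skill(skill)
print(meta["name"])          # hello
print(body.splitlines()[0])  # # Hello Skill
```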
### Memory Management
- **SlidingWindowConversationManager**: Maintains context within token limits.
- **PersistentSlidingWindowConversationManager**: Auto-saves sub-agent history.
- **Checkpointer**: Generates markdown summaries of agent actions.
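The sliding-window behaviour can be illustrated with a toy model that evicts the oldest messages once a budget is exceeded. This sketch uses whitespace word count as a stand-in for tokens and is not Kader's actual implementation:

```python
from collections import deque

class SlidingWindow:
    """Keep the newest messages within a fixed budget (toy token model)."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.messages = deque()

    def add(self, message: str) -> None:
        self.messages.append(message)
        # Evict oldest messages until the conversation fits the budget again
        while sum(len(m.split()) for m in self.messages) > self.max_tokens:
            self.messages.popleft()

window = SlidingWindow(max_tokens=6)
for msg in ["hello there", "how are you", "fine thanks and you"]:
    window.add(msg)
print(list(window.messages))  # only the newest message still fits
```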
## Development
### Setting up for Development
```bash
# Clone the repository
git clone https://github.com/your-repo/kader.git
cd kader
# Install in development mode with uv
uv sync
# Run the CLI with hot reload for development
uv run textual run --dev cli.app:KaderApp
```
### Running Tests
```bash
# Run tests with uv
uv run pytest
# Run tests with specific options
uv run pytest --verbose
```
### Code Quality
Kader uses various tools for maintaining code quality:
```bash
# Run linter
uv run ruff check .
# Format code
uv run ruff format .
```
## Troubleshooting
### Common Issues
- **No models found**: Ensure your providers are correctly configured. For Ollama, run `ollama serve`. For Google, ensure `GOOGLE_API_KEY` is set.
- **Connection errors**: Verify internet access for cloud providers and local service availability for Ollama.
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines on:
- Setting up your development environment
- Code style guidelines
- Running tests
- Submitting pull requests
### Quick Start for Contributors
```bash
# Fork and clone
git clone https://github.com/your-username/kader.git
cd kader
# Install dependencies
uv sync
# Run tests
uv run pytest
# Run linter
uv run ruff check .
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Built with [Textual](https://textual.textualize.io/) for the beautiful CLI interface.
- Uses [Ollama](https://ollama.ai/) for local LLM execution.
- Powered by [Google Gemini](https://ai.google.dev/) for advanced cloud-based reasoning.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiofiles>=25.1.0",
"faiss-cpu>=1.9.0",
"google-api-core>=2.24.0",
"google-genai>=1.61.0",
"jinja2>=3.1.6",
"loguru>=0.7.3",
"mistralai>=1.12.0",
"ollama>=0.6.1",
"openai>=2.20.0",
"outdated>=0.2.2",
"python-frontmatter>=1.1.0",
"pyyaml>=6.0.3",
"tenacity>=9.1.2",
"textual[syntax]>=6.8.0",
"typing-extensions>=4.15.0",
"wcmatch>=10.1"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T21:20:40.161981 | kader-2.5.0.tar.gz | 1,011,538 | 7d/52/1602fc6a72f195cc7724cf2e8226f4ae1d0ac61cb7aad42163d446c85c71/kader-2.5.0.tar.gz | source | sdist | null | false | f2277b65872c6f7d0a26c4ddbf894cbc | 9e91e03e9455c66394b47157305a5eaba48715948489c2edda74330db68029d0 | 7d521602fc6a72f195cc7724cf2e8226f4ae1d0ac61cb7aad42163d446c85c71 | null | [
"LICENSE"
] | 209 |
2.4 | ontario-data-mcp | 0.1.7 | MCP server for searching, downloading, and analyzing datasets from Ontario, Toronto, and Ottawa open data portals | <!-- mcp-name: ontario-data-mcp -->
# ontario-data-mcp
> [!IMPORTANT]
> **Beta:** This project is under active development. The data structure, tool interfaces, and data sources may change until v0.1.
> LLM-generated analysis may contain errors. Always verify critical findings against the returned source data.
This is an [MCP server](https://gist.github.com/sprine/3a6f2c30c73cc0fe8a7a472a4af771d3) for discovering, downloading, querying, and analyzing datasets from Ontario's Open Data portals. It allows asking questions of the data in English (or Spanish, Chinese, French, etc).
It currently supports the Ontario, Toronto, and Ottawa portals, and utilizes a shared [DuckDB](https://duckdb.org/) cache for fast SQL queries, statistical analysis, and geospatial operations.
## Contributing
Contributions welcome! To get started, see **Installation** below.
Found a bug? Have an idea? Discovered something interesting?
Open an issue here: https://github.com/sprine/ontario-data-mcp/issues
## Features
* `find` - search across supported Ontario open data portals
* `download` - retrieve and cache datasets
* `query` - run SQL, statistical, and geospatial analysis via DuckDB
* **WIP** A `validate` step to verify query outputs against original source files and metadata.
* A shared DuckDB cache for high-performance analytics
```
Portal APIs find → Dataset download → DuckDB cache → MCP tools (find, download, query)
```
## Installation
### With Claude Code
```bash
claude mcp add ontario-data -- uvx ontario-data-mcp
```
To auto-approve all tool calls (no confirmation prompts), add to your Claude Code settings:
```json
{
"permissions": {
"allow": ["mcp:ontario-data:*"]
}
}
```
All read-only tools are annotated as such. The only destructive tool is `cache_manage`, which removes local cached data (no remote mutations).
<details>
<summary>With VS Code</summary>
Add to `.vscode/mcp.json`:
```json
{
"mcpServers": {
"ontario-data": {
"command": "uvx",
"args": ["ontario-data-mcp"]
}
}
}
```
</details>
<details>
<summary>From Source</summary>
```bash
git clone https://github.com/sprine/ontario-data-mcp
cd ontario-data-mcp
uv sync
uv run ontario-data-mcp
```
</details>
## Supported Portals
All searches fan out to every portal by default — no need to select a portal. Dataset and resource IDs are prefixed with their portal (e.g. `toronto:abc123`).
| Portal | Platform | Datasets |
|--------|----------|----------|
| `ontario` | CKAN | ~5,700 |
| `toronto` | CKAN | ~533 |
| `ottawa` | ArcGIS Hub | ~665 |
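Prefixed IDs of the form `toronto:abc123` can be split into a portal name and a local ID. A minimal illustrative sketch (not the server's actual code):

```python
def split_prefixed_id(resource_id: str) -> tuple[str, str]:
    """Split 'toronto:abc123' into ('toronto', 'abc123')."""
    portal, sep, local_id = resource_id.partition(":")
    if not sep or not portal or not local_id:
        raise ValueError(f"expected '<portal>:<id>', got {resource_id!r}")
    return portal, local_id

print(split_prefixed_id("toronto:abc123"))  # ('toronto', 'abc123')
```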
## List of tools available to the AI agent
<details>
<summary><b>Discovery</b> (5 tools)</summary>
| Tool | Description |
|------|-------------|
| `search_datasets` | Search for datasets across all portals (or narrow with `portal=`) |
| `list_portals` | List all available portals with platform type |
| `list_organizations` | List government ministries with dataset counts |
| `list_topics` | List all tags/topics in the catalogue |
| `find_related_datasets` | Find datasets related by tags and organization |
</details>
<details>
<summary><b>Metadata</b> (4 tools)</summary>
| Tool | Description |
|------|-------------|
| `get_dataset_info` | Get full metadata for a dataset (use prefixed ID like `toronto:abc123`) |
| `list_resources` | List all files in a dataset with formats and sizes |
| `get_resource_schema` | Get column schema and sample values for a datastore resource |
| `compare_datasets` | Compare metadata side-by-side for multiple datasets (cross-portal) |
</details>
<details>
<summary><b>Retrieval & Caching</b> (4 tools)</summary>
| Tool | Description |
|------|-------------|
| `download_resource` | Download a resource and cache it in DuckDB (use prefixed ID like `toronto:abc123`) |
| `cache_info` | Cache statistics + list all cached datasets with staleness |
| `cache_manage` | Remove single resource, clear all, or refresh (action enum) |
| `refresh_cache` | Re-download cached resources with latest data |
</details>
<details>
<summary><b>Querying</b> (4 tools)</summary>
| Tool | Description |
|------|-------------|
| `query_resource` | Query a resource via CKAN Datastore API (remote) |
| `sql_query` | Run SQL against the CKAN Datastore (remote) |
| `query_cached` | Run SQL against locally cached data in DuckDB |
| `preview_data` | Quick preview of first N rows of a resource |
</details>
<details>
<summary><b>Data Quality</b> (3 tools)</summary>
| Tool | Description |
|------|-------------|
| `check_data_quality` | Analyze nulls, type consistency, duplicates, outliers |
| `check_freshness` | Check if a dataset is current vs. its update schedule |
| `profile_data` | Statistical profile using DuckDB SUMMARIZE |
</details>
<details>
<summary><b>Geospatial</b> (3 tools)</summary>
| Tool | Description |
|------|-------------|
| `load_geodata` | Cache a geospatial resource (SHP, KML, GeoJSON) into DuckDB |
| `spatial_query` | Run spatial queries against cached geospatial data |
| `list_geo_datasets` | Find datasets containing geospatial resources |
</details>
## Prompts
Context-aware guided workflow prompts:
- **`explore_topic`** — Guided exploration of a topic (fetches live catalogue context)
- **`data_investigation`** — Deep dive into a specific dataset: schema, quality, statistics
- **`compare_data`** — Side-by-side analysis of multiple datasets
## Environment Variables
| Variable | Default | Purpose |
|----------|---------|---------|
| `ONTARIO_DATA_CACHE_DIR` | `~/.cache/ontario-data` | DuckDB storage + log file location |
| `ONTARIO_DATA_TIMEOUT` | `30` | HTTP timeout in seconds |
| `ONTARIO_DATA_RATE_LIMIT` | `10` | Max CKAN requests per second |
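These variables can be resolved with the defaults from the table above. An illustrative sketch (the variable names come from the table; the helper function itself is hypothetical):

```python
import os
from pathlib import Path

def get_settings(env=None) -> dict:
    """Resolve configuration from environment variables with defaults."""
    env = os.environ if env is None else env
    return {
        "cache_dir": Path(env.get(
            "ONTARIO_DATA_CACHE_DIR",
            Path.home() / ".cache" / "ontario-data",
        )),
        "timeout": int(env.get("ONTARIO_DATA_TIMEOUT", "30")),
        "rate_limit": int(env.get("ONTARIO_DATA_RATE_LIMIT", "10")),
    }

settings = get_settings({"ONTARIO_DATA_TIMEOUT": "60"})
print(settings["timeout"])  # 60, overridden; other keys fall back to defaults
```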
## Development
```bash
uv sync
uv run python -m pytest tests/ -v
```
## License
MIT — see [LICENSE](LICENSE) for the software.
Data accessed through this tool is provided under the following open government licences:
- Contains information licensed under the [Open Government Licence – Ontario](https://www.ontario.ca/page/open-government-licence-ontario).
- Contains information licensed under the [Open Government Licence – Toronto](https://open.toronto.ca/open-data-licence/).
- Contains information licensed under the [Open Government Licence – City of Ottawa](https://open.ottawa.ca/pages/open-data-licence).
| text/markdown | A. Mathur | null | null | null | null | ckan, duckdb, mcp, ontario, open-data, ottawa, toronto | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"duckdb>=1.1.0",
"fastmcp>=3.0.0",
"geopandas>=1.0.0",
"httpx>=0.27.0",
"openpyxl>=3.1.0",
"pandas>=2.2.0",
"shapely>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://sprine.github.io/ontario-data-mcp/",
"Repository, https://github.com/sprine/ontario-data-mcp",
"Documentation, https://github.com/sprine/ontario-data-mcp#readme",
"Changelog, https://github.com/sprine/ontario-data-mcp/blob/main/CHANGELOG.md",
"Issues, https://github.com/sprine/ontario-data-mcp/issues"
] | uv/0.5.9 | 2026-02-20T21:20:39.291955 | ontario_data_mcp-0.1.7.tar.gz | 198,688 | 0e/6e/fecf91c2f0c87f916e4dc419eab1345b05565523713daa09ad45a6d21a14/ontario_data_mcp-0.1.7.tar.gz | source | sdist | null | false | 6c2af075e586f86306a2ee954b2eefae | df25963ca60f0323e11314f46db180601a199f4f2a8a1707350af788c7fa5905 | 0e6efecf91c2f0c87f916e4dc419eab1345b05565523713daa09ad45a6d21a14 | MIT | [
"LICENSE"
] | 215 |
2.4 | zhijiang | 0.9.94 | setup linux env, such as bashrc etc. | # A project used to setup development environment, especially the rcfiles(aka dotfiles)
After pip install, the shell command "zhijiang" could be used, "zhijiang info" could be used to get help.
| text/markdown | Xu Zhijiang | 15852939122@163.com | null | null | null | env setup, development, zhijiang | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License"
] | [] | https://github.com/xu-zhijiang/env-setup.git | null | >=2 | [] | [] | [] | [
"termcolor",
"fire",
"check-manifest; extra == \"dev\"",
"coverage; extra == \"test\""
] | [] | [] | [] | [
"Funding, https://github.com/xu-zhijiang",
"Source, https://github.com/xu-zhijiang/env-setup.git"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T21:20:04.077640 | zhijiang-0.9.94-py3-none-any.whl | 2,310,653 | b0/3d/040f20057e7bc0c254bbf9153c4c19efbc9859aac6f418ec4bd9c3cf49cc/zhijiang-0.9.94-py3-none-any.whl | py3 | bdist_wheel | null | false | d3e0dbdd69560b6fc7d7223963dd6afe | 76a312f196de601056608ffd19c1087c429893b9384e84661b83ae7f644d61fe | b03d040f20057e7bc0c254bbf9153c4c19efbc9859aac6f418ec4bd9c3cf49cc | null | [
"LICENSE.txt"
] | 90 |
2.4 | mbo_utilities | 2.6.3 | Various utilities for the Miller Brain Observatory | <p align="center">
<img src="mbo_utilities/assets/static/logo_utilities.png" height="220" alt="MBO Utilities logo">
</p>
<p align="center">
<a href="https://github.com/MillerBrainObservatory/mbo_utilities/actions/workflows/test_python.yml"><img src="https://github.com/MillerBrainObservatory/mbo_utilities/actions/workflows/test_python.yml/badge.svg" alt="CI"></a>
<a href="https://badge.fury.io/py/mbo-utilities"><img src="https://badge.fury.io/py/mbo-utilities.svg" alt="PyPI version"></a>
<a href="https://millerbrainobservatory.github.io/mbo_utilities/"><img src="https://img.shields.io/badge/docs-online-green" alt="Documentation"></a>
</p>
<p align="center">
<a href="#installation"><b>Installation</b></a> ·
<a href="https://millerbrainobservatory.github.io/mbo_utilities/"><b>Documentation</b></a> ·
<a href="https://millerbrainobservatory.github.io/mbo_utilities/user_guide.html"><b>User Guide</b></a> ·
<a href="https://millerbrainobservatory.github.io/mbo_utilities/file_formats.html"><b>Supported Formats</b></a> ·
<a href="https://github.com/MillerBrainObservatory/mbo_utilities/issues"><b>Issues</b></a>
</p>
Image processing utilities for the [Miller Brain Observatory](https://github.com/MillerBrainObservatory) (MBO).
- **Modern Image Reader/Writer**: Fast, lazy I/O for ScanImage/generic TIFFs, Suite2p `.bin`, Zarr, HDF5, and NumPy (in memory or saved to `.npy`)
- **Run processing pipelines** for calcium imaging - motion correction, cell extraction, and signal analysis
- Operates on **3D timeseries** natively and is extendable to ND-arrays
- **Visualize data interactively** with a GPU-accelerated GUI for exploring large datasets with [fastplotlib](https://fastplotlib.org/user_guide/guide.html#what-is-fastplotlib)
<p align="center">
<img src="docs/_images/gui/readme/01_step_file_dialog.png" height="280" alt="File Selection" />
<img src="docs/_images/gui/readme/02_step_data_view.png" height="280" alt="Data Viewer" />
<img src="docs/_images/gui/readme/03_metadata_viewer.png" height="280" alt="Metadata Viewer" />
<br/>
<em>Select data, visualize, and inspect metadata</em>
</p>
> **Note:**
> `mbo_utilities` is in a **late-beta** stage of active development. There will be bugs, and they can usually be addressed quickly: file an [issue](https://github.com/MillerBrainObservatory/mbo_utilities/issues) or reach out on Slack.
## Installation
`mbo_utilities` is available in [pypi](https://pypi.org/project/mbo_utilities/):
`pip install mbo_utilities`
> We recommend using a virtual environment. For help setting up a virtual environment, see [the MBO guide on virtual environments](https://millerbrainobservatory.github.io/guides/venvs.html).
```bash
# base: reader + GUI
pip install mbo_utilities
# with lbm_suite2p_python, suite2p, cellpose
pip install "mbo_utilities[suite2p]"
# all processing pipelines
pip install "mbo_utilities[all]"
```
> **Suite3D + CuPy:** Suite3D requires [CuPy](https://cupy.dev/) for GPU acceleration. CuPy must be installed separately to match your CUDA toolkit version:
>
> ```bash
> # check your CUDA version
> nvcc --version
>
> # CUDA 12.x
> pip install cupy-cuda12x
>
> # CUDA 11.x
> pip install cupy-cuda11x
> ```
>
> The install script below detects your system's CUDA version automatically.
### Installation Script with [UV](https://docs.astral.sh/uv/getting-started/features/) (Recommended)
The install script will prompt with several options:
1. Create a virtual environment with `mbo_utilities`
2. Install the image reader globally, with a Desktop icon, and use `mbo` from any terminal
3. Install in both a virtual environment and globally
4. Specify optional dependencies and environment paths
```powershell
# Windows (PowerShell)
irm https://raw.githubusercontent.com/MillerBrainObservatory/mbo_utilities/master/scripts/install.ps1 | iex
```
```bash
# Linux/macOS
curl -sSL https://raw.githubusercontent.com/MillerBrainObservatory/mbo_utilities/master/scripts/install.sh | bash
```
> **Note:** The `mbo` command is available globally thanks to [uv tools](https://docs.astral.sh/uv/concepts/tools/). Update with the install script or manually with `uv tool upgrade mbo_utilities`.
## Usage
The [user-guide](https://millerbrainobservatory.github.io/mbo_utilities/user_guide.html) covers usage in a jupyter notebook.
The [CLI Guide](https://millerbrainobservatory.github.io/mbo_utilities/cli.html) provides a more in-depth overview of the CLI commands.
The [GUI Guide](https://millerbrainobservatory.github.io/mbo_utilities/usage/gui_guide.html) provides a more in-depth overview of the GUI.
The [ScanPhase Guide](https://millerbrainobservatory.github.io/mbo_utilities/usage/cli.html#scan-phase-analysis) describes the bi-directional scan-phase analysis tool with output figures and figure descriptions.
| Command | Description |
|---------|-------------|
| `mbo /path/to/data.tiff` | View a supported file/folder |
| `mbo info /path/to/data.tiff` | Show file info and metadata |
| `mbo convert input.tiff output.zarr` | Convert between formats |
| `mbo scanphase /path/to/data.tiff` | Run scan-phase analysis |
| `mbo notebook lsp` | Generate template notebook |
| `mbo formats` | List supported formats |
| `mbo download path/to/notebook.ipynb` | Download a notebook to the current directory |
| `mbo pollen` | Pollen calibration tool (WIP) |
| `mbo pollen path/to/data` | Pollen calibration - Skip data collection |
→ [CLI Guide](https://millerbrainobservatory.github.io/mbo_utilities/usage/cli.html)
### GUI
Launch an interactive GPU-accelerated viewer for exploring large imaging datasets. Supports all MBO file formats with real-time visualization.
```bash
mbo # launch GUI
mbo /path/to/data # open file directly
mbo --check-install # verify GPU configuration
```
→ [GUI Guide](https://millerbrainobservatory.github.io/mbo_utilities/usage/gui_guide.html)
### Scan-Phase Analysis
Measure and correct bidirectional scan-phase offset in resonant scanning microscopy data. Generates diagnostic figures showing temporal stability, spatial variation, and recommended corrections.
```bash
mbo scanphase /path/to/data.tiff -o ./output
```
→ [Scan-Phase Guide](https://millerbrainobservatory.github.io/mbo_utilities/usage/cli.html#scan-phase-analysis)
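Conceptually, the bidirectional offset is the pixel shift that best aligns backward (odd) scan lines with their forward (even) neighbors. A toy sketch of that idea (illustrative only, not the package's actual algorithm):

```python
def best_shift(forward, backward, max_shift=3):
    """Return the integer pixel shift that best aligns a backward scan
    line with its forward neighbor (toy cross-correlation estimate)."""
    def score(shift):
        # Correlate forward[i] with backward[i + shift], clipping edges.
        return sum(forward[i] * backward[i + shift]
                   for i in range(len(forward))
                   if 0 <= i + shift < len(backward))
    return max(range(-max_shift, max_shift + 1), key=score)

fwd = [0, 0, 5, 9, 5, 0, 0, 0]
bwd = [0, 0, 0, 5, 9, 5, 0, 0]  # same feature, displaced one pixel
print(best_shift(fwd, bwd))      # 1
```

In real data this estimate is made across many line pairs and frames, which is what the diagnostic figures summarize.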
### Supported Formats
| Format | Read | Write | Description |
|--------|:----:|:-----:|-------------|
| ScanImage TIFF | ✓ | ✓ | Native LBM acquisition format |
| Generic TIFF | ✓ | ✓ | Standard TIFF stacks |
| Zarr | ✓ | ✓ | Chunked cloud-ready arrays |
| HDF5 | ✓ | ✓ | Hierarchical data format |
| Suite2p | ✓ | ✓ | Binary and ops.npy files |
→ [Formats Guide](https://millerbrainobservatory.github.io/mbo_utilities/file_formats.html)
### Upgrade
The CLI tool can be upgraded with `uv tool upgrade mbo_utilities`, or the package can be upgraded with `uv pip install -U mbo_utilities`.
| Method | Command |
|--------|---------|
| Install script | Re-run install script |
| CLI tool | `uv tool upgrade mbo_utilities` |
| Virtual env | `uv pip install -U mbo_utilities` |
## ScanImage Acquisition Modes
`mbo_utilities` automatically detects and parses metadata from these ScanImage acquisition modes:
| Configuration | Detection | Result |
|---------------|-----------|--------|
| LBM single channel | `channelSave=[1..N]`, AI0 only | `lbm=True`, `colors=1` |
| LBM dual channel | `channelSave=[1..N]`, AI0+AI1 | `lbm=True`, `colors=2` |
| Piezo (single frame/slice) | `hStackManager.enable=False`, `framesPerSlice=1` | `piezo=True` |
| Piezo multi-frame (with avg) | `hStackManager.enable=False`, `logAvgFactor>1` | `piezo=True`, averaged frames |
| Piezo multi-frame (no avg) | `hStackManager.enable=False`, `framesPerSlice>1`, `logAvg=1` | `piezo=True`, raw frames |
| Single plane | `hStackManager.enable=False` | `zplanes=1` |
> **Note:** Frame-averaging (`logAverageFactor > 1`) is only available for non-LBM acquisitions.
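As a rough illustration of those detection rules, here is a toy classifier over a parsed-metadata dict (the field names are assumptions made for illustration, not `mbo_utilities`' real internals):

```python
def classify_acquisition(meta):
    """Toy classifier mirroring the detection table above.

    `meta` is a parsed-metadata dict; all key names here are
    illustrative, not mbo_utilities' actual API.
    """
    if len(meta.get("channelSave", [1])) > 1:      # LBM: channelSave=[1..N]
        return {"lbm": True, "colors": len(meta.get("inputs", ["AI0"]))}
    if not meta.get("stack_enable", True):         # hStackManager.enable=False
        if meta.get("framesPerSlice", 1) > 1:
            return {"piezo": True, "averaged": meta.get("logAvgFactor", 1) > 1}
        return {"piezo": True, "zplanes": 1}
    return {"zplanes": 1}

print(classify_acquisition({"channelSave": list(range(1, 15)),
                            "inputs": ["AI0", "AI1"]}))
# {'lbm': True, 'colors': 2}
```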
## Uninstall
**If installed via quick install script:**
```powershell
# Windows
uv tool uninstall mbo_utilities
Remove-Item -Recurse -Force "$env:USERPROFILE\.mbo"
Remove-Item "$env:USERPROFILE\Desktop\MBO Utilities.lnk" -ErrorAction SilentlyContinue
```
```bash
# Linux/macOS
uv tool uninstall mbo_utilities
rm -rf ~/mbo
```
**If installed in a project venv:**
```bash
uv pip uninstall mbo_utilities
```
## Troubleshooting
<details>
<summary><b>GPU/CUDA Errors</b></summary>
**Error: "Failed to auto-detect CUDA root directory"**
This occurs when using GPU-accelerated features and CuPy cannot find your CUDA Toolkit.
**Check if CUDA is installed:**
```powershell
# Windows
dir "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA" -ErrorAction SilentlyContinue
$env:CUDA_PATH
```
```bash
# Linux/macOS
nvcc --version
echo $CUDA_PATH
```
**Set CUDA_PATH:**
```powershell
# Windows (replace v12.6 with your version)
$env:CUDA_PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6"
[System.Environment]::SetEnvironmentVariable('CUDA_PATH', $env:CUDA_PATH, 'User')
```
```bash
# Linux/macOS (add to ~/.bashrc or ~/.zshrc)
export CUDA_PATH=/usr/local/cuda-12.6
```
If CUDA is not installed, download from [NVIDIA CUDA Downloads](https://developer.nvidia.com/cuda-downloads).
</details>
<details>
<summary><b>Git LFS Download Errors</b></summary>
There is a [bug in fastplotlib](https://github.com/fastplotlib/fastplotlib/issues/861) causing `git lfs` errors when installed from a git branch.
Set `GIT_LFS_SKIP_SMUDGE=1` and restart your terminal:
```powershell
# Windows
[System.Environment]::SetEnvironmentVariable('GIT_LFS_SKIP_SMUDGE', '1', 'User')
```
```bash
# Linux/macOS
echo 'export GIT_LFS_SKIP_SMUDGE=1' >> ~/.bashrc
source ~/.bashrc
```
</details>
## Built With
- **[Suite2p](https://github.com/MouseLand/suite2p)** - Integration support
- **[Rastermap](https://github.com/MouseLand/rastermap)** - Visualization
- **[Suite3D](https://github.com/alihaydaroglu/suite3d)** - Volumetric processing
## Issues & Support
- **Bug reports:** [GitHub Issues](https://github.com/MillerBrainObservatory/mbo_utilities/issues)
- **Questions:** See [documentation](https://millerbrainobservatory.github.io/mbo_utilities/) or open a discussion
| text/markdown | null | null | null | null | null | Microscopy, ScanImage, multiROI, Tiff | [] | [] | null | null | <3.13,>=3.12.7 | [] | [] | [] | [
"setuptools<81",
"numpy<2.4,>=2.2.5",
"pandas",
"scipy",
"tifffile>=2025.3.30",
"scikit-image",
"zarr>=3.1.3",
"dask>=2025.3.0",
"imageio[ffmpeg]",
"ffmpeg-python",
"matplotlib>=3.10.1",
"seaborn>=0.13.2",
"opencv-python-headless",
"h5py",
"tqdm",
"jupyterlab>=4.2.6",
"ipykernel",
"ipywidgets<9,>=8.0.0",
"icecream>=2.1.4",
"glfw; sys_platform != \"linux\"",
"pygfx>=0.15.2",
"jupyter_rfb>=0.5.1",
"llvmlite>=0.43.0",
"mkl_fft>=2.0.0",
"mbo-fastplotlib>=0.7.3",
"rendercanvas<2.5,>=2.4.2",
"imgui-bundle>=1.92.5",
"pyqt6>=6.7",
"pyqt6-sip",
"pyqtgraph",
"qtpy",
"wgpu<0.29,>=0.28.1",
"numba>=0.60.0; extra == \"suite2p\"",
"lbm_suite2p_python>=2.5.4; extra == \"suite2p\"",
"suite2p_mbo>=2.0.1; extra == \"suite2p\"",
"setuptools; extra == \"suite2p\"",
"mbo-suite3d>=0.0.7; extra == \"suite3d\"",
"rastermap; extra == \"rastermap\"",
"napari; extra == \"napari\"",
"napari-ome-zarr; extra == \"napari\"",
"napari-animation; extra == \"napari\"",
"pyklb; extra == \"isoview\"",
"torch; extra == \"torch\"",
"torchvision; extra == \"torch\"",
"mbo_utilities[rastermap,suite2p,suite3d,torch]; extra == \"processing\"",
"sphinx>=6.1.3; extra == \"docs\"",
"docutils>=0.19; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"numpydoc; extra == \"docs\"",
"ipykernel; extra == \"docs\"",
"sphinx-autodoc2; extra == \"docs\"",
"sphinx_tippy; extra == \"docs\"",
"sphinx_gallery; extra == \"docs\"",
"sphinx-togglebutton; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"sphinx_book_theme; extra == \"docs\"",
"sphinx_design; extra == \"docs\"",
"sphinxcontrib-images; extra == \"docs\"",
"sphinxcontrib-video; extra == \"docs\"",
"sphinxcontrib-bibtex; extra == \"docs\"",
"jupytext; extra == \"docs\"",
"myst_nb; extra == \"docs\"",
"pygfx>=0.14.0; extra == \"docs\"",
"mbo_utilities[docs,processing]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/millerbrainobservatory/mbo_utilities",
"Documentation, https://millerbrainobservatory.github.io/mbo_utilities/index.html",
"Repository, https://github.com/millerbrainobservatory/mbo_utilities",
"Issues, https://github.com/MillerBrainObservatory/mbo_utilities/issues"
] | uv/0.9.2 | 2026-02-20T21:19:38.558404 | mbo_utilities-2.6.3.tar.gz | 3,373,170 | e4/1f/8bb070c4c0da335086eb080966750c9bf135bc12ed2729e57ef587909ed2/mbo_utilities-2.6.3.tar.gz | source | sdist | null | false | a0defd4f6b78b22ea4c37fdc92977d70 | bb07e05bc4c06ede7a6cb251d987e4e005a9e7edf23e1d88ae45ddda676b9b77 | e41f8bb070c4c0da335086eb080966750c9bf135bc12ed2729e57ef587909ed2 | BSD-3-Clause | [
"LICENSE.md"
] | 0 |
2.4 | crocodeel | 1.2.0 | CroCoDeEL is a tool that detects cross-sample (aka well-to-well) contamination in shotgun metagenomic data | # CroCoDeEL : **CRO**ss-sample **CO**ntamination **DE**tection and **E**stimation of its **L**evel 🐊
[](https://anaconda.org/bioconda/crocodeel)
[](https://pypi.org/project/crocodeel/)
[](https://doi.org/10.5281/zenodo.14708154)
## Introduction
CroCoDeEL is a tool that detects cross-sample contamination (aka well-to-well leakage) in shotgun metagenomic data.\
It accurately identifies contaminated samples but also pinpoints contamination sources and estimates contamination rates.\
CroCoDeEL relies only on species abundance tables and does not need negative controls nor sample position during processing (i.e. plate maps).
<p align="center">
<img src="docs/logos/logo.webp" width="350" height="350" alt="logo">
</p>
## Installation
CroCoDeEL is available on bioconda:
```
conda create --name crocodeel_env -c conda-forge -c bioconda crocodeel
conda activate crocodeel_env
```
Alternatively, you can use pip with Python ≥ 3.12:
```
pip install crocodeel
```
Docker and Singularity containers are also available on [BioContainers](https://biocontainers.pro/tools/crocodeel)
## Installation test
To verify that CroCoDeEL is installed correctly, run the following command:
```
crocodeel test_install
```
This command runs CroCoDeEL on a toy dataset and checks whether the generated results match the expected ones.
To inspect the results, you can rerun the command with the `--keep-results` parameter.
## Quick start
### Input
CroCoDeEL takes as input a species abundance table in TSV format.\
The first column should correspond to species names. The other columns correspond to the abundance of species in each sample.\
An example is available [here](crocodeel/test_data/mgs_profiles_test.tsv).
| species_name | sample1 | sample2 | sample3 | ... |
|:----------------|:-------:|:-------:|:-------:|:--------:|
| species 1 | 0 | 0.05 | 0.07 | ... |
| species 2 | 0.1 | 0.01 | 0 | ... |
| ... | ... | ... | ... | ... |
CroCoDeEL works with relative abundances.
The table is automatically normalized so that the abundances in each sample column sum to 1.
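That normalization step can be sketched in plain Python (illustrative only; CroCoDeEL performs it internally, and here each sample is represented as a column of abundance values):

```python
def normalize_columns(table):
    """Rescale each sample column so its species abundances sum to 1."""
    normalized = {}
    for sample, abundances in table.items():
        total = sum(abundances)
        normalized[sample] = [a / total if total else 0.0 for a in abundances]
    return normalized

counts = {"sample1": [10.0, 30.0], "sample2": [5.0, 15.0]}
print(normalize_columns(counts))
# {'sample1': [0.25, 0.75], 'sample2': [0.25, 0.75]}
```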
**Important**: CroCoDeEL requires accurate estimation of the abundance of subdominant species.\
We strongly recommend using [the Meteor software suite](https://github.com/metagenopolis/meteor) to generate the species abundance table.\
Alternatively, MetaPhlan4 can be used (parameter: --tax\_level t), although it will fail to detect low-level contaminations.\
We advise against using other taxonomic profilers that, according to our benchmarks, do not meet this requirement.
### Search for contamination
Run the following command to identify cross-sample contamination:
```
crocodeel search_conta -s species_abundance.tsv -c contamination_events.tsv
```
CroCoDeEL will output all detected contamination events in the file _contamination_events.tsv_.\
This TSV file includes the following details for each contamination event:
- The contamination source
- The contaminated sample (target)
- The estimated contamination rate
- The score (probability) computed by the Random Forest model
- The species specifically introduced into the target by contamination
An example output file is available [here](crocodeel/test_data/results/contamination_events.tsv).
If you are using MetaPhlan4, we strongly recommend filtering out low-abundance species to improve CroCoDeEL's sensitivity.\
Use the _--filter-low-ab_ option as shown below:
```
crocodeel search_conta -s species_abundance.tsv --filter-low-ab 20 -c contamination_events.tsv
```
### Visualization of the results
Contamination events can be visually inspected by generating a PDF file consisting of scatterplots.
```
crocodeel plot_conta -s species_abundance.tsv -c contamination_events.tsv -r contamination_events.pdf
```
Each scatterplot compares in a log-scale the species abundance profiles of a contaminated sample (x-axis) and its contamination source (y-axis).\
The contamination line (in red) highlights species specifically introduced by contamination.\
An example is available [here](crocodeel/test_data/results/contamination_events.pdf).
### Easy workflow
Alternatively, you can search for cross-sample contamination and create the PDF report in one command.
```
crocodeel easy_wf -s species_abundance.tsv -c contamination_events.tsv -r contamination_events.pdf
```
### Results interpretation
CroCoDeEL will probably report false contamination events for samples with similar species abundance profiles (e.g. longitudinal data, animals raised together).\
For non-related samples, CroCoDeEL may occasionally generate false positives that can be filtered out by a human expert.\
Thus, we strongly recommend inspecting the scatterplot of each contamination event to discard potential false positives.\
Please check the [wiki](https://github.com/metagenopolis/CroCoDeEL/wiki) for more information.
### Reproduce results of the paper
Species abundance tables of the training, validation and test datasets are available in this [repository](https://doi.org/10.57745/N6JSHQ).
You can use CroCoDeEL to analyze these tables and reproduce the results presented in the paper.
For example, to process Plate 3 from the Lou et al. dataset, first download the species abundance table:
```
wget --content-disposition 'https://entrepot.recherche.data.gouv.fr/api/access/datafile/:persistentId?persistentId=doi:10.57745/BH1RKY'
```
and then run CroCoDeEL:
```
crocodeel easy_wf -s PRJNA698986_P3.meteor.tab -c PRJNA698986_P3.meteor.crocodeel.tsv -r PRJNA698986_P3.meteor.crocodeel.pdf
```
### Train a new Random Forest model
Advanced users can train a custom Random Forest model, which classifies sample pairs as contaminated or not.
You will need a species abundance table with labeled **contaminated** and **non-contaminated** sample pairs, to be used for training and testing.
To get started, you can download and decompress the dataset we used to train CroCoDeEL's default model:
```
wget --content-disposition 'https://entrepot.recherche.data.gouv.fr/api/access/datafile/:persistentId?persistentId=doi:10.57745/IBIPVG'
xz -d training_dataset.meteor.tsv.xz
```
Then, use the following command to train a new model:
```
crocodeel train_model -s training_dataset.meteor.tsv -m crocodeel_model.tsv -r crocodeel_model_perf.tsv
```
Finally, to use your trained model instead of the default one, pass it with the _-m_ option:
```
crocodeel search_conta -s species_ab.tsv -m crocodeel_model.tsv -c conta_events.tsv
```
## Citation
If you find CroCoDeEL useful, please cite:\
Goulet, L. et al. "CroCoDeEL: accurate control-free detection of cross-sample contamination in metagenomic data" *bioRxiv* (2025). [https://doi.org/10.1101/2025.01.15.633153](https://doi.org/10.1101/2025.01.15.633153).
| text/markdown | Lindsay Goulet | lindsay.goulet@inrae.fr | null | null | GPL-3.0-or-later | Metagenomics | [
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"joblib<2.0,>=1.5",
"matplotlib<4.0,>=3.10",
"numpy<3.0,>=2.4",
"pandas<3.0,>=2.3",
"pyarrow>=16.0.0",
"scikit-learn<2.0,>=1.8",
"scipy<2.0,>=1.17",
"tqdm<5.0,>=4.67"
] | [] | [] | [] | [
"Repository, https://github.com/metagenopolis/CroCoDeEL"
] | poetry/2.3.2 CPython/3.12.3 Linux/6.11.0-1018-azure | 2026-02-20T21:19:29.698299 | crocodeel-1.2.0-py3-none-any.whl | 274,131 | 7a/df/d8d7192815b8384d631432a0dad8f3eaf406a2f4414b434df755994af1ef/crocodeel-1.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 7a73e92f74b90ea045ba8ac4ad59105f | 0777d4de9f36c514f4a6a0d55ccc5399d1ca2724368e5556e6365edcd2e1841c | 7adfd8d7192815b8384d631432a0dad8f3eaf406a2f4414b434df755994af1ef | null | [
"COPYING"
] | 229 |
2.4 | next-django | 0.6.1 | File-system routing for Django, inspired by the Next.js App Router. | # 🚀 Next-Django
The magical Developer Experience (DX) of the **Next.js App Router**, built on top of the solid, robust foundation of **Django**.
`next-django` removes the need to configure `urls.py`, manage dozens of fragmented apps, and wrestle with confusing templates. It brings file-system routing, native UI components, automatic APIs, and **instant SPA navigation** with zero headaches.
---
## ✨ Key Features
- ⚡ **NEW: SPA Navigation (Zero JS):** The framework ships with HTMX preconfigured. Use the `<c-ui.link href="/rota">` component and navigate between pages instantly, without reloading the browser. The exact Next.js feeling of speed, but with plain HTML!
- 📁 **File-System Routing (UI):** Forget `urls.py`. Create an `app/sobre/` folder with a `page.py` file and the `/sobre/` route is generated automatically. Dynamic routes like `[int:id]` are supported!
- 🥷 **API Routes (Zero Config):** Create files in the `api/` folder and get RESTful endpoints generated automatically, powered by **Django Ninja** (Swagger UI included).
- 🧩 **Components (UI):** Native support for reusable React/Vue-style components via `django-cotton`. Create `components/ui/button.html` and use it as `<c-ui.button>` anywhere.
- 🗄️ **Decoupled Models:** A centralized `models/` folder at the project root (Prisma/TypeORM style), freeing you from keeping models locked inside Django sub-apps.
- 🪄 **Automatic CLI:** A single command (`next-django init`) injects the configuration into your `settings.py`, updates your `urls.py`, and generates the whole base architecture with Tailwind CSS preconfigured.
---
## 🚀 Quick Start (Step by Step)
Start a modern project in under a minute:
**1. Create a standard Django project (if you don't have one yet):**
```bash
# Create the folder and the virtual environment
mkdir meu_app && cd meu_app
python -m venv venv
source venv/bin/activate  # (On Windows: venv\Scripts\activate)
# Install Django and start the project in the current folder (.)
pip install django
django-admin startproject core .
```
**2. Install Next-Django:**
```bash
pip install next-django
```
**3. Initialize the Magic (Zero Config):**
```bash
next-django init
```
*(This command creates the `app/`, `api/`, `components/`, and `models/` folders, and auto-configures your `settings.py` and `urls.py`!)*
**4. Run the server:**
```bash
python manage.py runserver
```
Go to `http://127.0.0.1:8000` and see your new framework in action!
---
## 🏗️ How to Use the Structure
Running `next-django init` gives you the following architecture:
```text
meu_projeto/
├── app/ # Interface routes (HTML pages)
├── api/ # API routes (JSON/REST)
├── components/ # Reusable visual components
├── models/ # Centralized database models
├── core/ # Original Django settings
└── manage.py
```
### 1. Creating Pages (the `app/` Folder)
Routing follows the folder structure. The file that renders the screen **must** be named `page.py` and contain a function named `page`.
*Golden tip:* The path passed to `render()` should always mirror the folder the file lives in!
**Example `app/sobre/page.py`:**
```python
from django.shortcuts import render
def page(request):
    # The template path mirrors the folder!
    return render(request, 'sobre/page.html', {"titulo": "Sobre Nós"})
```
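The folder-to-URL convention above can be sketched as a tiny path converter (a hypothetical illustration of the naming rules, not `next-django`'s actual implementation):

```python
def route_for(page_path):
    """Map an app/.../page.py file path to a Django-style URL pattern.

    Hypothetical sketch: folders become URL segments, and a dynamic
    segment like [int:id] becomes Django's <int:id> converter syntax.
    """
    parts = page_path.split("/")[1:-1]  # drop leading "app" and trailing "page.py"
    segments = [p.replace("[", "<").replace("]", ">") for p in parts]
    return "/".join(segments) + "/" if segments else ""

print(route_for("app/sobre/page.py"))              # sobre/
print(route_for("app/produtos/[int:id]/page.py"))  # produtos/<int:id>/
```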
### 2. Instant Navigation (HTMX)
To navigate between pages without the screen flashing (SPA mode), don't use a plain `<a>` tag. Use Next-Django's native link component:
```html
<c-ui.link href="/sobre">
Go to the About page
</c-ui.link>
```
### 3. Creating APIs (the `api/` Folder)
Every file created in `api/` (except `__init__.py`) becomes a base route. The file must instantiate a Django Ninja `Router` in a variable named `router`.
**Example `api/produtos.py` (generates the `/api/produtos/` route):**
```python
from ninja import Router
router = Router()
@router.get("/")
def listar_produtos(request):
    return [{"id": 1, "nome": "Teclado Mecânico"}]
```
*Visit `http://127.0.0.1:8000/api/docs` to see the automatically generated Swagger UI!*
### 4. Using Components (the `components/` Folder)
Every `.html` file placed here becomes a custom tag.
* **File:** `components/ui/card.html`
* **Usage in your `app/page.html`:**
```html
<c-ui.card>
<h2>Card content</h2>
</c-ui.card>
```
### 5. Managing Models (the `models/` Folder)
Create your models in separate files, for example `models/produto.py`.
**Important:** For Django to recognize your model when running migrations, you **must** import it in `models/__init__.py`:
```python
# models/__init__.py
from .produto import Produto
```
Then just run `python manage.py makemigrations` and `python manage.py migrate` as usual!
---
## 🤝 Contributing
Pull requests are very welcome! For larger changes, please open an issue first to discuss what you would like to change.
## 📄 License
MIT License - feel free to use, modify, and distribute.
| text/markdown | Guilherme Santos | guilhermedevasantos2004@gmail.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | https://github.com/guizeroum/next-django | null | >=3.10.11 | [] | [] | [] | [
"Django>=5.2.11",
"django-cotton>=2.6.1",
"django-ninja>=1.5.3",
"pytest>=9.0.2",
"pytest-django>=4.12.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.11 | 2026-02-20T21:19:19.167610 | next_django-0.6.1.tar.gz | 14,357 | f2/19/e454a2dd9149a3ed0aa295099fad0a55b20260350e02bc0949413b545f03/next_django-0.6.1.tar.gz | source | sdist | null | false | bf7fbdc9618cc5aefb2bffbb60e10e7c | 99686d3a4e88339b8c0a2e935044456541bdb90073375ad98445eeebfb262d5d | f219e454a2dd9149a3ed0aa295099fad0a55b20260350e02bc0949413b545f03 | null | [] | 215 |
2.4 | moose-cli | 0.6.412 | Build tool for moose apps | <a href="https://docs.fiveonefour.com/moosestack/"><img src="https://raw.githubusercontent.com/514-labs/moose/main/logo-m-light.png" alt="moose logo" height="100px"></a>
[](https://www.fiveonefour.com)
[](https://www.npmjs.com/package/@514labs/moose-cli?activeTab=readme)
[](http://slack.moosestack.com)
[](https://docs.fiveonefour.com/moosestack/getting-started/quickstart)
[](LICENSE)
# MooseStack
**The ClickHouse-native developer framework and agent harness for building real-time analytical backends in TypeScript and Python** — designed for developers and AI coding agents alike.
MooseStack offers a unified, type‑safe, code‑first developer experience layer for [ClickHouse](https://clickhouse.com/) (realtime analytical database), [Kafka](https://kafka.apache.org/)/[Redpanda](https://redpanda.com/) (realtime streaming), and [Temporal](https://temporal.io/) (workflow orchestration), so you can integrate real-time analytical infrastructure into your application stack in native TypeScript or Python.
Because your entire analytical stack is captured in code, AI agents can read, write, and refactor it just like any other part of your application. Combined with fast local feedback loops via `moose dev` and complementary [ClickHouse best-practice agent skills](https://github.com/514-labs/agent-skills), agents can often handle the bulk of your development work — schema design, materialized views, migrations, and query APIs — while you review and steer.
## Why MooseStack?
- **AI agent harness**: AI-friendly interfaces enable coding agents to iterate quickly and safely on your analytical workloads
- **Git-native development**: Version control, collaboration, and governance built-in
- **Local-first experience**: Full mirror of production environment on your laptop with `moose dev`
- **Schema & migration management**: Typed schemas in your application code, with automated schema migration support
- **Code‑first infrastructure**: Declare tables, streams, workflows, and APIs in TS/Python -> MooseStack wires it all up
- **Modular design**: Only enable the modules you need. Each module is independent and can be adopted incrementally
## MooseStack Modules
- [Moose **OLAP**](https://docs.fiveonefour.com/moosestack/olap): Manage ClickHouse tables, materialized views, and migrations in code.
- [Moose **Streaming**](https://docs.fiveonefour.com/moosestack/streaming): Real‑time ingest buffers and streaming transformation functions with Kafka/Redpanda.
- [Moose **Workflows**](https://docs.fiveonefour.com/moosestack/workflows): ETL pipelines and tasks with Temporal.
- [Moose **APIs** and Web apps](https://docs.fiveonefour.com/moosestack/apis): Type‑safe ingestion and query endpoints, or bring your own API framework (Nextjs, Express, FastAPI, Fastify, etc)
## MooseStack Tooling
- [Moose **Dev**](https://docs.fiveonefour.com/moosestack/dev): Local dev server with hot-reloading infrastructure
- [Moose **Dev MCP**](https://docs.fiveonefour.com/moosestack/moosedev-mcp): AI agent interface to your local dev stack
- [Moose **Language Server / LSP**](https://docs.fiveonefour.com/moosestack/language-server): In-editor diagnostics and autocomplete for agents and devs
- [ClickHouse TS/Py **Agent Skills**](https://github.com/514-labs/agent-skills): ClickHouse best practices as agent-readable rules
- [Moose **Migrate**](https://docs.fiveonefour.com/moosestack/migrate): Code-based schema migrations for ClickHouse
- [Moose **Deploy**](https://docs.fiveonefour.com/moosestack/deploying): Ship your app to production
## Quickstart
Also available in the Docs: [5-minute Quickstart](https://docs.fiveonefour.com/moosestack/getting-started/quickstart)
Already running Clickhouse: [Getting Started with Existing Clickhouse](https://docs.fiveonefour.com/moosestack/getting-started/from-clickhouse)
### Install the CLI
```bash
bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose
```
### Create a project
```bash
# typescript
moose init my-project --from-remote <YOUR_CLICKHOUSE_CONNECTION_STRING> --language typescript
# python
moose init my-project --from-remote <YOUR_CLICKHOUSE_CONNECTION_STRING> --language python
```
### Run locally
```bash
cd my-project
npm install # or: pip install -r requirements.txt
moose dev
```
MooseStack will start ClickHouse, Redpanda, Temporal, and Redis; the CLI validates each component.
## Deploy with Fiveonefour hosting
The fastest way to deploy your MooseStack application is with [hosting from Fiveonefour](https://fiveonefour.boreal.cloud/sign-up), the creators of MooseStack. Fiveonefour provides automated preview branches, managed schema migrations, deep integration with Github and CI/CD, and an agentic harness for your realtime analytical infrastructure in the cloud.
[Get started with Fiveonefour hosting →](https://fiveonefour.boreal.cloud/sign-up)
## Deploy Yourself
MooseStack is open source and can be self-hosted. If you're only using MooseOLAP, you can use the Moose library in your app for schema management, migrations, and typed queries on your ClickHouse database without deploying the Moose runtime. For detailed self-hosting instructions, see our [deployment documentation](https://docs.fiveonefour.com/moosestack/deploying).
## Examples
### TypeScript
```typescript
import { Key, OlapTable, Stream, IngestApi, ConsumptionApi } from "@514labs/moose-lib";
interface DataModel {
  primaryKey: Key<string>;
  name: string;
}
// Create a ClickHouse table
export const clickhouseTable = new OlapTable<DataModel>("TableName");
// Create a Redpanda streaming topic
export const redpandaTopic = new Stream<DataModel>("TopicName", {
  destination: clickhouseTable,
});
// Create an ingest API endpoint
export const ingestApi = new IngestApi<DataModel>("post-api-route", {
  destination: redpandaTopic,
});
// Create consumption API endpoint
interface QueryParams {
  limit?: number;
}
export const consumptionApi = new ConsumptionApi<QueryParams, DataModel[]>("get-api-route",
  async ({ limit = 10 }: QueryParams, { client, sql }) => {
    const result = await client.query.execute(sql`SELECT * FROM ${clickhouseTable} LIMIT ${limit}`);
    return await result.json();
  }
);
```
### Python
```python
from moose_lib import Key, OlapTable, Stream, StreamConfig, IngestApi, IngestApiConfig, ConsumptionApi
from pydantic import BaseModel
class DataModel(BaseModel):
    primary_key: Key[str]
    name: str
# Create a ClickHouse table
clickhouse_table = OlapTable[DataModel]("TableName")
# Create a Redpanda streaming topic
redpanda_topic = Stream[DataModel]("TopicName", StreamConfig(
    destination=clickhouse_table,
))
# Create an ingest API endpoint
ingest_api = IngestApi[DataModel]("post-api-route", IngestApiConfig(
    destination=redpanda_topic,
))
# Create a consumption API endpoint
class QueryParams(BaseModel):
    limit: int = 10

def handler(client, params: QueryParams):
    return client.query.execute("SELECT * FROM {table: Identifier} LIMIT {limit: Int32}", {
        "table": clickhouse_table.name,
        "limit": params.limit,
    })

consumption_api = ConsumptionApi[QueryParams, DataModel]("get-api-route", query_function=handler)
```
## Docs
- [Overview](https://docs.fiveonefour.com/moosestack)
- [5-min Quickstart](https://docs.fiveonefour.com/moosestack/getting-started/quickstart)
- [Quickstart with Existing Clickhouse](https://docs.fiveonefour.com/moosestack/getting-started/from-clickhouse)
## Built on
- [ClickHouse](https://clickhouse.com/) (OLAP storage)
- [Redpanda](https://redpanda.com/) (streaming)
- [Temporal](https://temporal.io/) (workflow orchestration)
- [Redis](https://redis.io/) (internal state)
## Community
[Join us on Slack](https://join.slack.com/t/moose-community/shared_invite/zt-2fjh5n3wz-cnOmM9Xe9DYAgQrNu8xKxg)
## Cursor Background Agents
MooseStack works with Cursor's background agents for remote development. The repository includes a pre-configured Docker-in-Docker setup that enables Moose's Docker dependencies to run in the agent environment.
### Quick Setup
1. Enable background agents in Cursor
2. The environment will automatically build with Docker support
3. Run `moose dev` or other Moose commands in the agent
For detailed setup instructions and troubleshooting, see [Docker Setup Documentation](.cursor/DOCKER_SETUP.md).
## Contributing
We welcome contributions! See the [contribution guidelines](https://github.com/514-labs/moosestack/blob/main/CONTRIBUTING.md).
## License
MooseStack is open source software and MIT licensed.
| text/markdown; charset=UTF-8; variant=GFM | Fiveonefour Labs Inc. <support@fiveonefour.com> | Fiveonefour Labs Inc. <support@fiveonefour.com> | null | null | MIT | null | [] | [] | https://www.fiveonefour.com/moose | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://www.fiveonefour.com",
"documentation, https://docs.fiveonefour.com/moose",
"repository, https://github.com/514labs/moose"
] | maturin/1.12.3 | 2026-02-20T21:19:11.753505 | moose_cli-0.6.412-py3-none-manylinux_2_28_x86_64.whl | 15,945,459 | 9a/95/3a0e2d42ec3189ac14f2f4dbfa507d1404a4a7c754c3c7353ba0bfd41ce2/moose_cli-0.6.412-py3-none-manylinux_2_28_x86_64.whl | py3 | bdist_wheel | null | false | 0adad2609207e2227b46478d4c3f2cb0 | 64da64489c08f49aab785353405cff2ff80113bf82f27cf34ce79365bad4f63e | 9a953a0e2d42ec3189ac14f2f4dbfa507d1404a4a7c754c3c7353ba0bfd41ce2 | null | [] | 223 |
2.4 | opteryx-core | 0.6.28 | Opteryx Query Engine | # Opteryx
Opteryx-Core is a fork of [Opteryx](https://github.com/mabel-dev/opteryx) with a reduced API and configuration surface. It is the engine used by the cloud version of [Opteryx](https://opteryx.app).
Install:
```bash
pip install opteryx-core
```
Docs: https://docs.opteryx.app/ • Source: https://github.com/mabel-dev/opteryx-core • License: Apache-2.0
| text/markdown | null | Justin Joyce <justin.joyce@joocer.com> | null | Justin Joyce <justin.joyce@joocer.com> | null | null | [] | [] | https://github.com/mabel-dev/opteryx/ | null | >=3.13 | [] | [] | [] | [
"aiohttp==3.13.*",
"minio==7.2.20",
"numpy==2.4.*",
"orso==0.0.*",
"psutil==7.2.*",
"pyarrow==23.0.*",
"requests==2.32.*",
"freezegun; extra == \"testing\"",
"orjson==3.11.*; extra == \"performance\""
] | [] | [] | [] | [
"Homepage, https://opteryx.dev/",
"Documentation, https://opteryx.dev/",
"Repository, https://github.com/mabel-dev/opteryx.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:19:02.851171 | opteryx_core-0.6.28.tar.gz | 10,438,765 | fe/28/1ca17266e27e13b56978176f8c0b1b9304cfbeea36b3b765e9d84cd579f3/opteryx_core-0.6.28.tar.gz | source | sdist | null | false | 9cbcbf61f18682eed368110a230e1a52 | 833b2b3cbde78380a06f89c0b443090f8026434aa6e014a49edc795a8c34708c | fe281ca17266e27e13b56978176f8c0b1b9304cfbeea36b3b765e9d84cd579f3 | null | [
"LICENSE"
] | 280 |
2.4 | pico-client-auth | 0.2.1 | JWT authentication client for pico-fastapi. Provides automatic Bearer token validation, SecurityContext, role-based access control, and JWKS key rotation. | # Pico-Client-Auth
[](https://pypi.org/project/pico-client-auth/)
[](https://deepwiki.com/dperezcabrera/pico-client-auth)
[](https://opensource.org/licenses/MIT)

[](https://codecov.io/gh/dperezcabrera/pico-client-auth)
[](https://dperezcabrera.github.io/pico-client-auth/)
**[Pico-Client-Auth](https://github.com/dperezcabrera/pico-client-auth)** provides JWT authentication for **[pico-fastapi](https://github.com/dperezcabrera/pico-fastapi)** applications. It integrates with the pico-ioc container to deliver automatic Bearer token validation, a request-scoped `SecurityContext`, role-based access control, and JWKS key rotation support.
> Requires Python 3.11+
> Built on pico-fastapi + pico-ioc
> Fully async-compatible
> Real JWKS-based token validation
> Auth by default with opt-out via `@allow_anonymous`
---
## Why pico-client-auth?
| Concern | DIY Middleware | pico-client-auth |
|---------|---------------|------------------|
| Token validation | Implement yourself | Built-in with JWKS |
| Key rotation | Manual handling | Automatic on unknown kid |
| Security context | `request.state` ad-hoc | Typed `SecurityContext` with ContextVar |
| Role checking | Scattered if/else | `@requires_role` decorator |
| Configuration | Hardcoded | `@configured` from YAML/env |
| Testing | Build your own fixtures | RSA keypair + `make_token` pattern |
---
## Core Features
- Auth by default on all routes
- `@allow_anonymous` to opt out specific endpoints
- `@requires_role("admin")` for declarative authorization
- `SecurityContext` accessible from controllers, services, and any code within a request
- JWKS fetch with TTL cache and automatic key rotation
- Extensible `RoleResolver` protocol
- Fail-fast startup if issuer/audience are missing
- Auto-discovered via `pico_boot.modules` entry point
---
## Installation
```bash
pip install pico-client-auth
```
---
## Quick Example
```yaml
# application.yaml
auth_client:
  issuer: https://auth.example.com
  audience: my-api
```
```python
from pico_fastapi import controller, get
from pico_client_auth import SecurityContext, allow_anonymous, requires_role

@controller(prefix="/api")
class ApiController:
    @get("/me")
    async def get_me(self):
        claims = SecurityContext.require()
        return {"sub": claims.sub, "email": claims.email}

    @get("/health")
    @allow_anonymous
    async def health(self):
        return {"status": "ok"}

    @get("/admin")
    @requires_role("admin")
    async def admin_panel(self):
        return {"admin": True}
```
```python
from pico_boot import init
from pico_ioc import configuration, YamlTreeSource
from fastapi import FastAPI
config = configuration(YamlTreeSource("application.yaml"))
container = init(modules=["controllers"], config=config)
app = container.get(FastAPI)
# pico-client-auth is auto-discovered — all routes are now protected
```
---
## Quick Example (without pico-boot)
```python
from pico_ioc import init, configuration, YamlTreeSource
from fastapi import FastAPI

config = configuration(YamlTreeSource("application.yaml"))
container = init(
    modules=[
        "controllers",
        "pico_fastapi",
        "pico_client_auth",  # Required without pico-boot
    ],
    config=config,
)
app = container.get(FastAPI)
```
---
## SecurityContext
Access authenticated user information from anywhere within a request:
```python
from pico_client_auth import SecurityContext
# In controller, service, or repository
claims = SecurityContext.require() # TokenClaims (raises if not auth'd)
claims = SecurityContext.get() # TokenClaims | None
roles = SecurityContext.get_roles() # list[str]
SecurityContext.has_role("admin") # bool
SecurityContext.require_role("admin") # raises InsufficientPermissionsError
```
---
## Custom Role Resolver
Override how roles are extracted from tokens:
```python
from pico_ioc import component
from pico_client_auth import RoleResolver, TokenClaims

@component
class MyRoleResolver:
    async def resolve(self, claims: TokenClaims, raw_claims: dict) -> list[str]:
        return raw_claims.get("roles", [])
```
---
## Configuration
| Key | Default | Description |
|-----|---------|-------------|
| `auth_client.enabled` | `true` | Enable/disable auth middleware |
| `auth_client.issuer` | `""` | Expected JWT issuer (`iss` claim) |
| `auth_client.audience` | `""` | Expected JWT audience (`aud` claim) |
| `auth_client.jwks_ttl_seconds` | `300` | JWKS cache TTL in seconds |
| `auth_client.jwks_endpoint` | `""` | JWKS URL (default: `{issuer}/api/v1/auth/jwks`) |
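The `jwks_ttl_seconds` behaviour can be pictured as a small TTL cache that refetches when the cache is stale or when an unknown `kid` appears; a rough sketch, where `fetch_jwks` is a stand-in and not the library's real API:

```python
import time

# Illustrative TTL-cached JWKS lookup with refresh on unknown `kid`.
# `fetch_jwks` is a stand-in callable returning {kid: key}.
class JWKSCache:
    def __init__(self, fetch_jwks, ttl_seconds=300, clock=time.monotonic):
        self._fetch = fetch_jwks
        self._ttl = ttl_seconds
        self._clock = clock
        self._keys = {}
        self._fetched_at = None

    def _refresh(self):
        self._keys = self._fetch()
        self._fetched_at = self._clock()

    def get_key(self, kid):
        stale = (self._fetched_at is None
                 or self._clock() - self._fetched_at > self._ttl)
        if stale or kid not in self._keys:
            self._refresh()  # automatic rotation on unknown kid or expired cache
        return self._keys.get(kid)
```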
---
## Testing
```python
import pytest

from pico_client_auth import SecurityContext, TokenClaims
from pico_client_auth.errors import MissingTokenError

def test_require_raises_when_empty():
    SecurityContext.clear()
    with pytest.raises(MissingTokenError):
        SecurityContext.require()

def test_authenticated_flow():
    claims = TokenClaims(sub="u1", email="a@b.com", role="admin",
                         org_id="o1", jti="j1")
    SecurityContext.set(claims, ["admin"])
    assert SecurityContext.require().sub == "u1"
    assert SecurityContext.has_role("admin")
    SecurityContext.clear()
```
For full e2e testing with mock JWKS and signed tokens, see the [Testing Guide](https://dperezcabrera.github.io/pico-client-auth/how-to/testing/).
---
## How It Works
- `AuthFastapiConfigurer` (priority=10) registers as an inner middleware
- Every request: extract Bearer token → validate JWT via JWKS → resolve roles → populate SecurityContext
- `@allow_anonymous` endpoints skip validation entirely
- `@requires_role` endpoints check resolved roles, return 403 if missing
- SecurityContext is cleared in `finally` — no leakage between requests
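The role check in this flow behaves like a decorator consulting the resolved roles; a hypothetical sketch (the real `@requires_role` reads roles from the `SecurityContext`, and the error is surfaced as HTTP 403):

```python
import functools

class InsufficientPermissionsError(Exception):
    """Stands in for the error the middleware maps to HTTP 403."""

def requires_role(role: str, get_roles):
    # `get_roles` is an illustrative stand-in for reading the request's
    # resolved roles; the real decorator consults the SecurityContext.
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if role not in get_roles():
                raise InsufficientPermissionsError(role)
            return fn(*args, **kwargs)
        return wrapper
    return decorate
```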
---
## License
MIT
| text/markdown | null | David Perez Cabrera <dperezcabrera@gmail.com> | null | null | MIT License
Copyright (c) 2025 David Pérez Cabrera
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| ioc, di, dependency injection, fastapi, oauth, jwt, authentication, authorization, security | [
"Development Status :: 4 - Beta",
"Framework :: FastAPI",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Security",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pico-ioc>=2.2.4",
"pico-fastapi>=0.3.0",
"python-jose[cryptography]>=3.5",
"httpx>=0.28"
] | [] | [] | [] | [
"Homepage, https://github.com/dperezcabrera/pico-client-auth",
"Repository, https://github.com/dperezcabrera/pico-client-auth",
"Issue Tracker, https://github.com/dperezcabrera/pico-client-auth/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:18:12.074949 | pico_client_auth-0.2.1.tar.gz | 46,935 | be/07/2d6caec7625b1b72bd01743810844d26a0c5fc570451b203e5ba775b1197/pico_client_auth-0.2.1.tar.gz | source | sdist | null | false | e127bcb455efa8a87ddbc6c83dee81d1 | 347aaba59115c4e26c8536c9f392d24fd57ecd1b703548794f153e88e750d835 | be072d6caec7625b1b72bd01743810844d26a0c5fc570451b203e5ba775b1197 | null | [
"LICENSE"
] | 256 |
2.4 | spec-kitty-cli | 0.16.1 | Spec Kitty, a tool for Specification Driven Development (SDD) agentic projects, with kanban and git worktree isolation. | <div align="center">
<img src="https://github.com/Priivacy-ai/spec-kitty/raw/main/media/logo_small.webp" alt="Spec Kitty Logo"/>
<h1>Spec Kitty</h1>
<h2>Spec-Driven Development for AI coding agents</h2>
</div>
Spec Kitty is an open-source CLI workflow for **spec-driven development** with AI coding agents.
It helps teams turn product intent into implementation with a repeatable path:
`spec` -> `plan` -> `tasks` -> `implement` -> `review` -> `merge`.
### Why teams use it
AI coding workflows often break down on larger features:
- Requirements and design decisions drift over long agent sessions
- Parallel work is hard to coordinate across branches
- Review and acceptance criteria become inconsistent from one feature to the next
Spec Kitty addresses this with repository-native artifacts, work package workflows, and git worktree isolation.
### Who it's for
- Engineering teams using tools like Claude Code, Cursor, Codex, Gemini CLI, and Copilot
- Tech leads who want predictable, auditable AI-assisted delivery
- Projects where traceability from requirements to code matters
**Try it now:** `pip install spec-kitty-cli && spec-kitty init my-project --ai claude`
---
## 🚀 What You Get in 0.15.x
| Capability | What Spec Kitty provides |
|------------|--------------------------|
| **Spec-driven artifacts** | Generates and maintains `spec.md`, `plan.md`, and `tasks.md` in `kitty-specs/<feature>/` |
| **Work package execution** | Uses lane-based work package prompts (`planned`, `doing`, `for_review`, `done`) |
| **Parallel implementation model** | Creates isolated git worktrees under `.worktrees/` for work package execution |
| **Live project visibility** | Local dashboard for kanban and feature progress (`spec-kitty dashboard`) |
| **Acceptance + merge workflow** | Built-in acceptance checks and merge helpers (`spec-kitty accept`, `spec-kitty merge`) |
| **Multi-agent support** | Template and command generation for 12 AI agent integrations |
<p align="center">
<a href="#-getting-started-complete-workflow">Quick Start</a> •
<a href="docs/claude-code-integration.md"><strong>Claude Code Guide</strong></a> •
<a href="#-real-time-dashboard">Live Dashboard</a> •
<a href="#-supported-ai-agents">12 AI Agents</a> •
<a href="https://github.com/Priivacy-ai/spec-kitty/blob/main/spec-driven.md">Full Docs</a>
</p>
### From Idea to Production in 6 Automated Steps
```mermaid
graph LR
A[📝 Specify<br/>WHAT to build] --> B[🎯 Plan<br/>HOW to build]
B --> C[📋 Tasks<br/>Work packages]
C --> D[⚡ Implement<br/>Agent workflows]
D --> E[🔍 Review<br/>Quality gates]
E --> F[🚀 Merge<br/>Ship it]
style A fill:#e1f5ff
style B fill:#e1f5ff
style C fill:#fff3e0
style D fill:#f3e5f5
style E fill:#e8f5e9
style F fill:#fce4ec
```
---
## 📊 Project Snapshot
<div align="center">
[](https://github.com/Priivacy-ai/spec-kitty/stargazers)
[](https://pypi.org/project/spec-kitty-cli/)
[](https://pypi.org/project/spec-kitty-cli/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](#-supported-ai-agents)
[](#-real-time-dashboard)
[](#-getting-started-complete-workflow)
</div>
**Latest stable release:** `v0.15.1` (2026-02-12)
**0.15.x highlights:**
- Primary branch detection now works with `main`, `master`, `develop`, and custom defaults
- Branch routing and merge-base calculation are centralized for more predictable behavior
- Worktree isolation and lane transitions have stronger guardrails and test coverage
**Jump to:**
[Getting Started](#-getting-started-complete-workflow) •
[Examples](#-examples) •
[12 AI Agents](#-supported-ai-agents) •
[CLI Reference](#-spec-kitty-cli-reference) •
[Worktrees](#-worktree-strategy) •
[Troubleshooting](#-troubleshooting)
---
## 📌 Release Track
Spec Kitty is currently published on a stable `0.15.x` track from the `main` branch.
| Branch | Version | Status | Install |
|--------|---------|--------|---------|
| **main** | **0.15.x** | Active stable releases | `pip install spec-kitty-cli` |
**For users:** install from PyPI (`pip install spec-kitty-cli`).
**For contributors:** target `main` unless maintainers specify otherwise in an issue or PR discussion.
---
## 🤝 Multi-Agent Coordination for AI Coding
Orchestrate multiple AI agents on a single feature with lower merge friction. Each agent works in isolated worktrees while the live dashboard tracks progress across all work packages.
```mermaid
sequenceDiagram
participant Lead as 👨💼 Lead Architect
participant Claude as 🤖 Claude (Spec)
participant Cursor as 🤖 Cursor (Impl)
participant Gemini as 🤖 Gemini (Review)
participant Dashboard as 📊 Live Kanban
Lead->>Claude: /spec-kitty.specify
Claude->>Dashboard: WP01-WP05 (planned)
par Parallel Work
Lead->>Cursor: implement WP01
Lead->>Cursor: implement WP02
end
Cursor->>Dashboard: WP01 → doing
Cursor->>Dashboard: WP01 → for_review
Lead->>Gemini: /spec-kitty.review WP01
Gemini->>Dashboard: WP01 → done
Note over Dashboard: Real-time updates<br/>No branch switching
```
**Key Benefits:**
- 🔀 **Parallel execution** - Multiple WPs simultaneously
- 🌳 **Worktree isolation** - One workspace per WP to reduce branch contention
- 👀 **Full visibility** - Dashboard shows who's doing what
- 🔄 **Auto-sequencing** - Dependency tracking in WP frontmatter
---
## 📊 Real-Time Dashboard
Spec Kitty includes a **live dashboard** that automatically tracks your feature development progress. View your kanban board, monitor work package status, and see which agents are working on what—all updating in real-time as you work.
<div align="center">
<img src="https://github.com/Priivacy-ai/spec-kitty/raw/main/media/dashboard-kanban.png" alt="Spec Kitty Dashboard - Kanban Board View" width="800"/>
<p><em>Kanban board showing work packages across all lanes with agent assignments</em></p>
</div>
<div align="center">
<img src="https://github.com/Priivacy-ai/spec-kitty/raw/main/media/dashboard-overview.png" alt="Spec Kitty Dashboard - Feature Overview" width="800"/>
<p><em>Feature overview with completion metrics and available artifacts</em></p>
</div>
The dashboard starts automatically when you run `spec-kitty init` and keeps running in the background. Access it anytime with the `/spec-kitty.dashboard` command or `spec-kitty dashboard`: the CLI starts the correct project dashboard if it isn't already running, lets you request a specific port with `--port`, and stops it cleanly with `--kill`.
**Key Features:**
- 📋 **Kanban Board**: Visual workflow across planned → doing → for review → done lanes
- 📈 **Progress Tracking**: Real-time completion percentages and task counts
- 👥 **Multi-Agent Support**: See which AI agents are working on which tasks
- 📦 **Artifact Status**: Track specification, plan, tasks, and other deliverables
- 🔄 **Live Updates**: Dashboard refreshes automatically as you work
### Kanban Workflow Automation
Work packages flow through automated quality gates. Agents move tasks between lanes, and the dashboard tracks state transitions in real-time.
```mermaid
stateDiagram-v2
[*] --> Planned: /spec-kitty.tasks
Planned --> Doing: /spec-kitty.implement
Doing --> ForReview: Agent completes work
ForReview --> Done: /spec-kitty.review (approved)
ForReview --> Planned: /spec-kitty.review (changes needed)
Done --> [*]: /spec-kitty.merge
note right of Planned
Ready to start
Dependencies clear
end note
note right of Doing
Agent assigned
Work in progress
end note
note right of ForReview
Code complete
Awaiting review
end note
note right of Done
Approved
Ready to merge
end note
```
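The transitions in the diagram amount to a small state table; an illustrative sketch (spec-kitty enforces these moves through its CLI, not necessarily with this structure):

```python
# Allowed lane transitions, mirroring the state diagram above.
TRANSITIONS = {
    "planned": {"doing"},               # /spec-kitty.implement
    "doing": {"for_review"},            # agent completes work
    "for_review": {"done", "planned"},  # review approved / changes needed
    "done": set(),                      # feature exits via /spec-kitty.merge
}

def can_move(src: str, dst: str) -> bool:
    """Return True if a work package may move from lane `src` to lane `dst`."""
    return dst in TRANSITIONS.get(src, set())
```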
---
## 🚀 Getting Started: Complete Workflow
**New to Spec Kitty?** Here's the complete lifecycle from zero to shipping features:
### Phase 1: Install & Initialize (Terminal)
```bash
# 1. Install the CLI
pip install spec-kitty-cli
# or
uv tool install spec-kitty-cli
# 2. Initialize your project
spec-kitty init my-project --ai claude
# 3. Verify setup (optional)
cd my-project
spec-kitty verify-setup # Checks that everything is configured correctly
# 4. View your dashboard
spec-kitty dashboard  # Serves on a free port between 3000 and 5000
```
**What just happened:**
- ✅ Created `.claude/commands/` (or `.gemini/`, `.cursor/`, etc.) with 13 slash commands
- ✅ Created `.kittify/` directory with scripts, templates, and mission configuration
- ✅ Started real-time kanban dashboard (runs in background)
- ✅ Initialized git repository with proper `.gitignore`
---
<details>
<summary><h2>🔄 Upgrading Existing Projects</h2></summary>
> **Important:** If you've upgraded `spec-kitty-cli` via pip/uv, run `spec-kitty upgrade` in each of your projects to apply structural migrations.
### Quick Upgrade
```bash
cd your-project
spec-kitty upgrade # Upgrade to current version
```
### What Gets Upgraded
The upgrade command automatically migrates your project structure across versions:
| Version | Migration |
|---------|-----------|
| **0.10.9** | Repair broken templates with bash script references (#62, #63, #64) |
| **0.10.8** | Move memory/ and AGENTS.md to .kittify/ |
| **0.10.6** | Simplify implement/review templates to use workflow commands |
| **0.10.2** | Update slash commands to Python CLI and flat structure |
| **0.10.0** | **Remove bash scripts, migrate to Python CLI** |
| **0.9.1** | Complete lane migration + normalize frontmatter |
| **0.9.0** | Flatten task lanes to frontmatter-only (no directory-based lanes) |
| **0.8.0** | Remove active-mission (missions now per-feature) |
| **0.7.3** | Update scripts for worktree feature numbering |
| **0.6.7** | Ensure software-dev and research missions present |
| **0.6.5** | Rename commands/ → command-templates/ |
| **0.5.0** | Install encoding validation git hooks |
| **0.4.8** | Add all 12 AI agent directories to .gitignore |
| **0.2.0** | Rename .specify/ → .kittify/ and /specs/ → /kitty-specs/ |
> Run `spec-kitty upgrade --verbose` to see which migrations apply to your project.
### Upgrade Options
```bash
# Preview changes without applying
spec-kitty upgrade --dry-run
# Show detailed migration information
spec-kitty upgrade --verbose
# Upgrade to specific version
spec-kitty upgrade --target 0.6.5
# Skip worktree upgrades (main project only)
spec-kitty upgrade --no-worktrees
# JSON output for CI/CD integration
spec-kitty upgrade --json
```
### When to Upgrade
Run `spec-kitty upgrade` after:
- Installing a new version of `spec-kitty-cli`
- Cloning a project that was created with an older version
- Seeing "Unknown mission" or missing slash commands
The upgrade command is **idempotent** - safe to run multiple times. It automatically detects your project's version and applies only the necessary migrations.
</details>
---
### Phase 2: Start Your AI Agent (Terminal)
```bash
# Launch your chosen AI coding agent (12 agents supported)
claude   # e.g., for Claude Code
# or
gemini   # e.g., for Gemini CLI
# or
code     # e.g., for GitHub Copilot / Cursor
```
(You can change which agents you work with by re-running `spec-kitty init`; the command is safe to run on existing projects and can be run multiple times.)
**Verify slash commands loaded:**
Type `/spec-kitty` and you should see autocomplete with all 13 commands.
### Phase 3: Establish Project Principles (In Agent)
**Still in main repo** - Start with your project's governing principles:
```text
/spec-kitty.constitution
Create principles focused on code quality, testing standards,
user experience consistency, and performance requirements.
```
**What this creates:**
- `.kittify/memory/constitution.md` - Your project's architectural DNA
- These principles will guide all subsequent development
- Missions do not have separate constitutions; the project constitution is the single source of truth
### Phase 4: Create Your First Feature (In Agent)
Now begin the feature development cycle:
#### 4a. Define WHAT to Build
```text
/spec-kitty.specify
Build a user authentication system with email/password login,
password reset, and session management. Users should be able to
register, login, logout, and recover forgotten passwords.
```
**What this does:**
- Creates `kitty-specs/001-auth-system/spec.md` with user stories
- **Enters discovery interview** - Answer questions before continuing!
- All planning happens in the main repo (worktrees created later during implementation)
**⚠️ Important:** Continue in the same session - no need to change directories!
#### 4b. Define HOW to Build (In Main Repo)
```text
/spec-kitty.plan
Use Python FastAPI for backend, PostgreSQL for database,
JWT tokens for sessions, bcrypt for password hashing,
SendGrid for email delivery.
```
**What this creates:**
- `kitty-specs/001-auth-system/plan.md` - Technical architecture
- `kitty-specs/001-auth-system/data-model.md` - Database schema
- `kitty-specs/001-auth-system/contracts/` - API specifications
- **Enters planning interview** - Answer architecture questions!
#### 4c. Optional: Research Phase
```text
/spec-kitty.research
Investigate best practices for password reset token expiration,
JWT refresh token rotation, and rate limiting for auth endpoints.
```
**What this creates:**
- `kitty-specs/001-auth-system/research.md` - Research findings
- Evidence logs for decisions made
#### 4d. Break Down Into Tasks
```text
/spec-kitty.tasks
```
**What this creates:**
- `kitty-specs/001-auth-system/tasks.md` - Kanban checklist
- `kitty-specs/001-auth-system/tasks/WP01.md` - Work package prompts (flat structure)
- Up to 10 work packages ready for implementation
**Check your dashboard:** You'll now see tasks in the "Planned" lane!
### Phase 5: Implement Features (In Feature Worktree)
#### 5a. Execute Implementation
```text
/spec-kitty.implement
```
**What this does:**
- Auto-detects first WP with `lane: "planned"` (or specify WP ID)
- Automatically moves to `lane: "doing"` and displays the prompt
- Shows clear "WHEN YOU'RE DONE" instructions
- Agent implements, then runs command to move to `lane: "for_review"`
**Repeat** until all work packages are done!
#### 5b. Review Completed Work
```text
/spec-kitty.review
```
**What this does:**
- Auto-detects first WP with `lane: "for_review"` (or specify WP ID)
- Automatically moves to `lane: "doing"` and displays the prompt
- Agent reviews code and provides feedback or approval
- Shows commands to move to `lane: "done"` (passed) or `lane: "planned"` (changes needed)
### Phase 6: Accept & Merge (In Feature Worktree)
#### 6a. Validate Feature Complete
```text
/spec-kitty.accept
```
**What this does:**
- Verifies all WPs have `lane: "done"`
- Checks metadata and activity logs
- Confirms no `NEEDS CLARIFICATION` markers remain
- Records acceptance timestamp
#### 6b. Merge to Main
```text
/spec-kitty.merge --push
```
**What this does:**
- Switches to main branch
- Merges feature branch
- Pushes to remote (if `--push` specified)
- Cleans up worktree
- Deletes feature branch
**🎉 Feature complete!** Return to main repo and start your next feature with `/spec-kitty.specify`
---
## 📋 Quick Reference: Command Order
### Required Workflow (Once per project)
```
1️⃣ /spec-kitty.constitution → In main repo (sets project principles)
```
### Required Workflow (Each feature)
```
2️⃣ /spec-kitty.specify → Create spec (in main repo)
3️⃣ /spec-kitty.plan → Define technical approach (in main repo)
4️⃣ /spec-kitty.tasks → Generate work packages (in main repo)
5️⃣ spec-kitty implement WP01 → Create workspace for WP01 (first worktree)
/spec-kitty.implement → Build the work package
6️⃣ /spec-kitty.review → Review completed work
7️⃣ /spec-kitty.accept → Validate feature ready
8️⃣ /spec-kitty.merge → Merge to main + cleanup
```
### Optional Enhancement Commands
```
/spec-kitty.clarify → Before /plan: Ask structured questions about spec
/spec-kitty.research → After /plan: Investigate technical decisions
/spec-kitty.analyze → After /tasks: Cross-artifact consistency check
/spec-kitty.checklist → Anytime: Generate custom quality checklists
/spec-kitty.dashboard → Anytime: Open/restart the kanban dashboard
```
---
## 🔒 Agent Directory Best Practices
**Important**: Agent directories (`.claude/`, `.codex/`, `.gemini/`, etc.) should **NEVER** be committed to git.
### Why?
These directories may contain:
- Authentication tokens and API keys
- User-specific credentials (auth.json)
- Session data and conversation history
### Automatic Protection
Spec Kitty automatically protects you with multiple layers:
**During `spec-kitty init`:**
- ✅ Adds all 12 agent directories to `.gitignore`
- ✅ Installs pre-commit hooks that block commits containing agent files
- ✅ Creates `.claudeignore` to optimize AI scanning (excludes `.kittify/` templates)
**Pre-commit Hook Protection:**
The installed pre-commit hook will block any commit that includes files from:
`.claude/`, `.codex/`, `.gemini/`, `.cursor/`, `.qwen/`, `.opencode/`,
`.windsurf/`, `.kilocode/`, `.augment/`, `.roo/`, `.amazonq/`, `.github/copilot/`
If you need to bypass the hook (not recommended): `git commit --no-verify`
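Conceptually, the hook filters staged paths against these prefixes; an illustrative sketch (the installed hook may be implemented differently):

```python
# Directories the pre-commit hook blocks, as listed above.
BLOCKED_PREFIXES = (
    ".claude/", ".codex/", ".gemini/", ".cursor/", ".qwen/", ".opencode/",
    ".windsurf/", ".kilocode/", ".augment/", ".roo/", ".amazonq/",
    ".github/copilot/",
)

def blocked_paths(staged_files):
    """Return the staged paths that would cause the hook to reject the commit."""
    return [p for p in staged_files
            if any(p.startswith(prefix) for prefix in BLOCKED_PREFIXES)]
```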
**Worktree Constitution Sharing:**
When creating WP workspaces, Spec Kitty uses symlinks to share the constitution:
```
.worktrees/001-feature-WP01/.kittify/memory -> ../../../../.kittify/memory
```
This ensures all work packages follow the same project principles.
### What Gets Committed?
✅ **DO commit:**
- `.kittify/templates/` - Command templates (source)
- `.kittify/missions/` - Mission workflows
- `.kittify/memory/constitution.md` - Project principles
- `.gitignore` - Protection rules
❌ **NEVER commit:**
- `.claude/`, `.gemini/`, `.cursor/`, etc. - Agent runtime directories
- Any `auth.json` or credentials files
See [AGENTS.md](.kittify/AGENTS.md) for complete guidelines.
---
<details>
<summary><h2>📚 Terminology</h2></summary>
Spec Kitty differentiates between the **project** that holds your entire codebase, the **features** you build within that project, and the **mission** that defines your workflow. Use these definitions whenever you write docs, prompts, or help text.
### Project
**Definition**: The entire codebase (one Git repository) that contains all missions, features, and `.kittify/` automation.
**Examples**:
- "spec-kitty project" (this repository)
- "priivacy_rust project"
- "my-agency-portal project"
**Usage**: Projects are initialized once with `spec-kitty init`. A project contains:
- One active mission at a time
- Multiple features (each with its own spec/plan/tasks)
- Shared automation under `.kittify/`
**Commands**: Initialize with `spec-kitty init my-project` (or `spec-kitty init --here` for the current directory).
---
### Feature
**Definition**: A single unit of work tracked by Spec Kitty. Every feature has its own spec, plan, tasks, and implementation worktree.
**Examples**:
- "001-auth-system feature"
- "005-refactor-mission-system feature" (this document)
- "042-dashboard-refresh feature"
**Structure**:
- Specification: `/kitty-specs/###-feature-name/spec.md`
- Plan: `/kitty-specs/###-feature-name/plan.md`
- Tasks: `/kitty-specs/###-feature-name/tasks.md`
- Implementation: `.worktrees/###-feature-name/`
**Lifecycle**:
1. `/spec-kitty.specify` – Create the feature and its branch
2. `/spec-kitty.plan` – Document the technical design
3. `/spec-kitty.tasks` – Break work into packages
4. `/spec-kitty.implement` – Build the feature inside its worktree
5. `/spec-kitty.review` – Peer review
6. `/spec-kitty.accept` – Validate according to gates
7. `/spec-kitty.merge` – Merge and clean up
**Commands**: Always create features with `/spec-kitty.specify`.
---
### Mission
**Definition**: A domain adapter that configures Spec Kitty (workflows, templates, validation). Missions are project-wide; all features in a project share the same active mission.
**Examples**:
- "software-dev mission" (ship software with TDD)
- "research mission" (conduct systematic investigations)
- "writing mission" (future workflow)
**What missions define**:
- Workflow phases (e.g., design → implement vs. question → gather findings)
- Templates (spec, plan, tasks, prompts)
- Validation rules (tests pass vs. citations documented)
- Path conventions (e.g., `src/` vs. `research/`)
**Scope**: Entire project. Switch missions before starting a new feature if you need a different workflow.
**Commands**:
- Select at init: `spec-kitty init my-project --mission research`
- Switch later: `spec-kitty mission switch research`
- Inspect: `spec-kitty mission current` / `spec-kitty mission list`
---
### Quick Reference
| Term | Scope | Example | Key Command |
|------|-------|---------|-------------|
| **Project** | Entire codebase | "spec-kitty project" | `spec-kitty init my-project` |
| **Feature** | Unit of work | "001-auth-system feature" | `/spec-kitty.specify "auth system"` |
| **Mission** | Workflow adapter | "research mission" | `spec-kitty mission switch research` |
### Common Questions
**Q: What's the difference between a project and a feature?**
A project is your entire git repository. A feature is one unit of work inside that project with its own spec/plan/tasks.
**Q: Can I have multiple missions in one project?**
Only one mission is active at a time, but you can switch missions between features with `spec-kitty mission switch`.
**Q: Should I create a new project for every feature?**
No. Initialize a project once, then create as many features as you need with `/spec-kitty.specify`.
**Q: What's a task?**
Tasks (T001, T002, etc.) are subtasks within a feature's work packages. They are **not** separate features or projects.
</details>
---
## 📦 Examples
Learn from real-world workflows used by teams building production software with AI agents. Each playbook demonstrates specific coordination patterns and best practices:
### Featured Workflows
- **[Multi-Agent Feature Development](https://github.com/Priivacy-ai/spec-kitty/blob/main/examples/multi-agent-feature-development.md)**
*Orchestrate 3-5 AI agents on a single large feature with parallel work packages*
- **[Parallel Implementation Tracking](https://github.com/Priivacy-ai/spec-kitty/blob/main/examples/parallel-implementation-tracking.md)**
*Monitor multiple teams/agents delivering features simultaneously with dashboard metrics*
- **[Dashboard-Driven Development](https://github.com/Priivacy-ai/spec-kitty/blob/main/examples/dashboard-driven-development.md)**
*Product trio workflow: PM + Designer + Engineers using live kanban visibility*
- **[Claude + Cursor Collaboration](https://github.com/Priivacy-ai/spec-kitty/blob/main/examples/claude-cursor-collaboration.md)**
*Blend different AI agents within a single spec-driven workflow*
### More Examples
Browse our [examples directory](https://github.com/Priivacy-ai/spec-kitty/tree/main/examples) for additional workflows including:
- Agency client transparency workflows
- Solo developer productivity patterns
- Enterprise parallel development
- Research mission templates
## 🤖 Supported AI Agents
| Agent | Support | Notes |
|-----------------------------------------------------------|---------|---------------------------------------------------|
| [Claude Code](https://www.anthropic.com/claude-code) | ✅ | |
| [GitHub Copilot](https://code.visualstudio.com/) | ✅ | |
| [Gemini CLI](https://github.com/google-gemini/gemini-cli) | ✅ | |
| [Cursor](https://cursor.sh/) | ✅ | |
| [Qwen Code](https://github.com/QwenLM/qwen-code) | ✅ | |
| [opencode](https://opencode.ai/) | ✅ | |
| [Windsurf](https://windsurf.com/) | ✅ | |
| [Kilo Code](https://github.com/Kilo-Org/kilocode) | ✅ | |
| [Auggie CLI](https://docs.augmentcode.com/cli/overview) | ✅ | |
| [Roo Code](https://roocode.com/) | ✅ | |
| [Codex CLI](https://github.com/openai/codex) | ✅ | |
| [Amazon Q Developer CLI](https://aws.amazon.com/developer/learning/q-developer-cli/) | ⚠️ | Amazon Q Developer CLI [does not support](https://github.com/aws/amazon-q-developer-cli/issues/3064) custom arguments for slash commands. |
<details>
<summary><h2>🔧 Spec Kitty CLI Reference</h2></summary>
The `spec-kitty` command supports the following options. Every run begins with a discovery interview, so be prepared to answer follow-up questions before files are touched.
### Commands
| Command | Description |
|-------------|----------------------------------------------------------------|
| `init` | Initialize a new Spec Kitty project from templates |
| `upgrade` | **Upgrade project structure to current version** (run after updating spec-kitty-cli) |
| `repair` | **Repair broken template installations** (fixes bash script references from v0.10.0-0.10.8) |
| `accept` | Validate feature readiness before merging to main |
| `check` | Check that required tooling is available |
| `dashboard` | Open or stop the Spec Kitty dashboard |
| `diagnostics` | Show project health and diagnostics information |
| `merge` | Merge a completed feature branch into main and clean up resources |
| `research` | Execute Phase 0 research workflow to scaffold artifacts |
| `verify-setup` | Verify that the current environment matches Spec Kitty expectations |
### `spec-kitty init` Arguments & Options
| Argument/Option | Type | Description |
|------------------------|----------|------------------------------------------------------------------------------|
| `<project-name>` | Argument | Name for your new project directory (optional if using `--here`, or use `.` for current directory) |
| `--ai` | Option | AI assistant to use: `claude`, `gemini`, `copilot`, `cursor`, `qwen`, `opencode`, `codex`, `windsurf`, `kilocode`, `auggie`, `roo`, or `q` |
| `--script` | Option | (Deprecated in v0.10.0) Script variant - all commands now use Python CLI |
| `--mission` | Option | Mission key to seed templates (`software-dev`, `research`, ...) |
| `--template-root` | Option | Override template location (useful for development mode or custom sources) |
| `--ignore-agent-tools` | Flag | Skip checks for AI agent tools like Claude Code |
| `--no-git` | Flag | Skip git repository initialization |
| `--here` | Flag | Initialize project in the current directory instead of creating a new one |
| `--force` | Flag | Force merge/overwrite when initializing in current directory (skip confirmation) |
| `--skip-tls` | Flag | Skip SSL/TLS verification (not recommended) |
| `--debug` | Flag | Enable detailed debug output for troubleshooting |
| `--github-token` | Option | GitHub token for API requests (or set GH_TOKEN/GITHUB_TOKEN env variable) |
If you omit `--mission`, the CLI will prompt you to pick one during `spec-kitty init`.
### Examples
```bash
# Basic project initialization
spec-kitty init my-project
# Initialize with specific AI assistant
spec-kitty init my-project --ai claude
# Initialize with the Deep Research mission
spec-kitty init my-project --mission research
# Initialize with Cursor support
spec-kitty init my-project --ai cursor
# Initialize with Windsurf support
spec-kitty init my-project --ai windsurf
# Note: --script is deprecated since v0.10.0; all commands now use the Python CLI
spec-kitty init my-project --ai copilot
# Initialize in current directory
spec-kitty init . --ai copilot
# or use the --here flag
spec-kitty init --here --ai copilot
# Force merge into current (non-empty) directory without confirmation
spec-kitty init . --force --ai copilot
# or
spec-kitty init --here --force --ai copilot
# Skip git initialization
spec-kitty init my-project --ai gemini --no-git
# Enable debug output for troubleshooting
spec-kitty init my-project --ai claude --debug
# Use GitHub token for API requests (helpful for corporate environments)
spec-kitty init my-project --ai claude --github-token ghp_your_token_here
# Use custom template location (development mode)
spec-kitty init my-project --ai claude --template-root=/path/to/local/spec-kitty
# Check system requirements
spec-kitty check
```
### `spec-kitty upgrade` Options
| Option | Description |
|--------|-------------|
| `--dry-run` | Preview changes without applying them |
| `--force` | Skip confirmation prompts |
| `--target <version>` | Target version to upgrade to (defaults to current CLI version) |
| `--json` | Output results as JSON (for CI/CD integration) |
| `--verbose`, `-v` | Show detailed migration information |
| `--no-worktrees` | Skip upgrading worktrees (main project only) |
**Examples:**
```bash
# Upgrade to current version
spec-kitty upgrade
# Preview what would be changed
spec-kitty upgrade --dry-run
# Upgrade with detailed output
spec-kitty upgrade --verbose
# Upgrade to specific version
spec-kitty upgrade --target 0.6.5
# JSON output for scripting
spec-kitty upgrade --json
# Skip worktree upgrades
spec-kitty upgrade --no-worktrees
```
### `spec-kitty agent` Commands
The `spec-kitty agent` namespace provides programmatic access to all workflow automation commands. All commands support `--json` output for agent consumption.
**Feature Management:**
- `spec-kitty agent feature create-feature <name>` – Create new feature with worktree
- `spec-kitty agent feature check-prerequisites` – Validate project setup and feature context
- `spec-kitty agent feature setup-plan` – Initialize plan template for feature
- `spec-kitty agent context update` – Update agent context files
- `spec-kitty agent feature accept` – Run acceptance workflow
- `spec-kitty agent feature merge` – Merge feature branch and cleanup
**Task Workflow:**
- `spec-kitty agent tasks list-tasks` – List all tasks grouped by lane
- `spec-kitty agent tasks mark-status <id> --status <status>` – Mark task status
- `spec-kitty agent tasks add-history <id> --note <message>` – Add activity log entry
- `spec-kitty agent tasks validate-workflow <id>` – Validate task metadata
**Workflow Commands:**
- `spec-kitty agent workflow implement <id> --agent __AGENT__` – Display the WP prompt, auto-move planned → doing, and advance to for_review on completion
- `spec-kitty agent workflow review <id> --agent __AGENT__` – Display the WP prompt for review, auto-move for_review → doing, then advance to done (or back to planned)
**Note:** In generated agent command files, `__AGENT__` is replaced at init time with the agent key (e.g., `codex`, `claude`). If you run commands manually, replace `__AGENT__` with your agent name.
**Example Usage:**
```bash
# Create feature (agent-friendly)
spec-kitty agent feature create-feature "Payment Flow" --json
# Display WP prompt and auto-move to doing
spec-kitty agent workflow implement WP01 --agent __AGENT__
# Review a completed work package
spec-kitty agent workflow review WP01 --agent __AGENT__
# Validate workflow
spec-kitty agent tasks validate-workflow WP01 --json
# Accept feature
spec-kitty agent feature accept --json
```
### `spec-kitty dashboard` Options
| Option | Description |
|--------|-------------|
| `--port <number>` | Preferred port for the dashboard (falls back to first available port) |
| `--kill` | Stop the running dashboard for this project and clear its metadata |
**Examples:**
```bash
# Open dashboard (auto-detects port)
spec-kitty dashboard
# Open on specific port
spec-kitty dashboard --port 4000
# Stop dashboard
spec-kitty dashboard --kill
```
### `spec-kitty accept` Options
| Option | Description |
|--------|-------------|
| `--feature <slug>` | Feature slug to accept (auto-detected by default) |
| `--mode <mode>` | Acceptance mode: `auto`, `pr`, `local`, or `checklist` (default: `auto`) |
| `--actor <name>` | Name to record as the acceptance actor |
| `--test <command>` | Validation command to execute (repeatable) |
| `--json` | Emit JSON instead of formatted text |
| `--lenient` | Skip strict metadata validation |
| `--no-commit` | Skip auto-commit; report only |
| `--allow-fail` | Return checklist even when issues remain |
**Examples:**
```bash
# Validate feature (auto-detect)
spec-kitty accept
# Validate specific feature
spec-kitty accept --feature 001-auth-system
# Get checklist only (no commit)
spec-kitty accept --mode checklist
# Accept with custom test validation
spec-kitty accept --test "pytest tests/" --test "npm run lint"
# JSON output for CI integration
spec-kitty accept --json
```
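The `--json` mode slots directly into CI pipelines; here is a minimal sketch as a GitHub Actions step (the step layout is illustrative — `spec-kitty-cli` is the PyPI package name):

```yaml
# Illustrative CI step: fail the build when acceptance checks fail.
- name: Validate feature readiness
  run: |
    pip install spec-kitty-cli
    spec-kitty accept --json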
### `spec-kitty merge` Options
| Option | Description |
|--------|-------------|
| `--strategy <type>` | Merge strategy: `merge`, `squash`, or `rebase` (default: `merge`) |
| `--delete-branch` / `--keep-branch` | Delete or keep feature branch after merge (default: delete) |
| `--remove-worktree` / `--keep-worktree` | Remove or keep feature worktree after merge (default: remove) |
| `--push` | Push to origin after merge |
| `--target <branch>` | Target branch to merge into (default: `main`) |
| `--dry-run` | Show what would be done without executing |
**Examples:**
```bash
# Standard merge and push
spec-kitty merge --push
# Squash commits into one
spec-kitty merge --strategy squash --push
# Keep branch for reference
spec-kitty merge --keep-branch --push
# Preview merge without executing
spec-kitty merge --dry-run
# Merge to different target
spec-kitty merge --target develop --push
```
### `spec-kitty verify-setup`
Verifies that the current environment matches Spec Kitty expectations:
- Checks for `.kittify/` directory structure
- Validates agent command files exist
- Confirms dashboard can start
- Reports any configuration issues
**Example:**
```bash
cd my-project
spec-kitty verify-setup
```
### `spec-kitty diagnostics`
Shows project health and diagnostics information:
- Active mission
- Available features
- Dashboard status
- Git configuration
- Agent command availability
**Example:**
```bash
spec-kitty diagnostics
```
### Available Slash Commands
After running `spec-kitty init`, your AI coding agent will have access to these slash commands for structured development.
> **📋 Quick Reference:** See the [command order flowchart above](#-quick-reference-command-order) for a visual workflow guide.
#### Core Commands (In Recommended Order)
**Workflow sequence for spec-driven development:**
| # | Command | Description |
|---|--------------------------|-----------------------------------------------------------------------|
| 1 | `/spec-kitty.constitution` | (**First in main repo**) Create or update project governing principles and development guidelines |
| 2 | `/spec-kitty.specify` | Define what you want to build (requirements and user stories; creates worktree) |
| 3 | `/spec-kitty.plan` | Create technical implementation plans with your chosen tech stack |
| 4 | `/spec-kitty.research` | Run Phase 0 research scaffolding to populate research.md, data-model.md, and evidence logs |
| 5 | `/spec-kitty.tasks` | Generate actionable task lists and work package prompts in flat tasks/ directory |
| 6 | `/spec-kitty.implement` | Display WP prompt, auto-move to "doing" lane, show completion instructions |
| 7 | `/spec-kitty.review` | Display WP prompt for review, auto-move to "doing" lane, show next steps |
| 8 | `/spec-kitty.accept` | Run final acceptance checks, record metadata, and verify feature complete |
| 9 | `/spec-kitty.merge` | Merge feature into main branch and clean up worktree |
#### Quality Gates & Development Tools
**Optional commands for enhanced quality and development:**
| Command | When to Use |
|----------------------|-----------------------------------------------------------------------|
| `/spec-kitty.clarify` | **Optional, before `/spec-kitty.plan`**: Clarify underspecified areas in your specification to reduce downstream rework |
| `/spec-kitty.analyze` | **Optional, after `/spec-kitty.tasks`, before `/spec-kitty.implement`**: Cross-artifact consistency & coverage analysis |
| `/spec-kitty.checklist` | **Optional, anytime after `/spec-kitty.plan`**: Generate custom quality checklists that validate requirements completeness, clarity, and consistency |
| `/spec-kitty.dashboard` | **Anytime (runs in background)**: Open the real-time kanban dashboard in your browser. Automatically starts with `spec-kitty init` and updates as you work. |
</details>
## 🌳 Worktree Strategy
> **📖 Quick Start:** See the [Getting Started guide](#-getting-started-complete-workflow) for practical examples of worktree usage in context.
Spec Kitty uses an **opinionated worktree approach** for parallel feature development:
### Parallel Development Without Branch Switching
```mermaid
graph TD
Main[main branch<br/>🔒 Clean production code]
WT1[.worktrees/001-auth-WP01<br/>🔐 Authentication]
WT2[.worktrees/001-auth-WP02<br/>💾 Database]
WT3[.worktrees/002-dashboard-WP01<br/>📊 UI Components]
Main --> WT1
Main --> WT2
Main --> WT3
WT1 -.->|🚀 Parallel work| WT2
WT2 -.->|✅ No conflicts| WT3
style Main fill:#e8f5e9
style WT1 fill:#e1f5ff
style WT2 fill:#fff3e0
style WT3 fill:#f3e5f5
```
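The layout in the diagram is plain `git worktree` under the hood; this scratch sketch reproduces it by hand (paths and branch names are illustrative — Spec Kitty normally manages them for you):

```shell
# Create a scratch repo and add one worktree per work package.
git init demo
cd demo
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -m "init"
# New branch + dedicated directory, mirroring .worktrees/<feature>-<WP>
git worktree add .worktrees/001-auth-WP01 -b 001-auth-WP01
git worktree list
```

Each worktree is a full checkout on its own branch, so agents can build and test independently without touching main.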
**Why this works:**
- Each WP gets its own directory + branch
- Work on Feature 001 WP01 while another agent handles Feature 002 WP01
- Main branch stays clean - no `git checkout` juggling
- Merge conflicts detected early with pre-flight validation | text/markdown | Spec Kitty Contributors | null | Spec Kitty Contributors | null | MIT License

Copyright GitHub, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | agentic-development, ai-agents, ai-coding, claude-code, cli, code-generation, feature-specs, git-worktree, kanban, llm-tools, planning, requirements, sdd, spec-driven-development, specification, workflow-automation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Documentation",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Version Control",
"Topic :: Software Development :: Version Control :: Git",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx[socks]",
"packaging>=23.0",
"platformdirs",
"psutil>=5.9.0",
"pydantic>=2.0",
"python-ulid>=1.1.0",
"pyyaml>=6.0",
"readchar",
"rich",
"ruamel-yaml>=0.18.0",
"spec-kitty-events==0.4.0a0",
"truststore>=0.10.4",
"typer",
"build>=1.0.0; extra == \"test\"",
"pytest-asyncio>=0.21.0; extra == \"test\"",
"pytest>=7.4; extra == \"test\"",
"respx>=0.21.0; extra == \"test\""
] | [] | [] | [] | [
"Repository, https://github.com/Priivacy-ai/spec-kitty",
"Issues, https://github.com/Priivacy-ai/spec-kitty/issues",
"Documentation, https://priivacy-ai.github.io/spec-kitty/",
"Changelog, https://github.com/Priivacy-ai/spec-kitty/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:18:10.156468 | spec_kitty_cli-0.16.1.tar.gz | 2,165,429 | ba/74/df81153d2e52a77e8e3671b342623a47ad7c66d754c65b0acf0f42900449/spec_kitty_cli-0.16.1.tar.gz | source | sdist | null | false | 04b8fc9c32da6baf41cfa9f3e1d3db0e | 9857a917eee2c77bfc8f5c935db3a0c5ac70de1630d12e9d5225e5ca681ade69 | ba74df81153d2e52a77e8e3671b342623a47ad7c66d754c65b0acf0f42900449 | null | [
"LICENSE"
] | 281 |
2.4 | gcop-rs | 0.13.5 | AI-powered Git commit message generator and code reviewer (Rust version) | # gcop-rs
[](https://pypi.org/project/gcop-rs/)
[](https://opensource.org/licenses/MIT)
AI-powered Git commit message generator and code reviewer, written in Rust.
This is a Python wrapper that automatically downloads and runs the pre-compiled Rust binary.
## Installation
```bash
# Using pipx (recommended)
pipx install gcop-rs
# Using pip
pip install gcop-rs
```
## Usage
```bash
# Generate commit message
gcop-rs commit
# Code review
gcop-rs review
# Show help
gcop-rs --help
```
## Other Installation Methods
For native installation without Python:
```bash
# Homebrew (macOS/Linux)
brew tap AptS-1547/tap
brew install gcop-rs
# cargo-binstall
cargo binstall gcop-rs
# cargo install
cargo install gcop-rs
```
## Documentation
See the [main repository](https://github.com/AptS-1547/gcop-rs) for full documentation.
## License
MIT License
| text/markdown; charset=UTF-8; variant=GFM | null | AptS-1547 <apts-1547@esaps.net>, AptS-1738 <apts-1738@esaps.net>, uaih3k9x <uaih3k9x@gmail.com> | null | null | null | git, commit, ai, code-review, cli | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"Topic :: Software Development :: Version Control :: Git"
] | [] | https://github.com/AptS-1547/gcop-rs | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/AptS-1547/gcop-rs",
"Issues, https://github.com/AptS-1547/gcop-rs/issues",
"Repository, https://github.com/AptS-1547/gcop-rs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:16:13.155542 | gcop_rs-0.13.5-py3-none-win_arm64.whl | 2,384,004 | dd/27/4ab5a3620c81704196f4028aa0588f7403b3d2c5a6a1c2dd1b5716ae8518/gcop_rs-0.13.5-py3-none-win_arm64.whl | py3 | bdist_wheel | null | false | 5e0ca88e872119d6909c091cbe1d5b6b | 7a7fd897911ae90a0145fa594d0782085d3cbdf8c805c44fcbbdc38a2d1203fb | dd274ab5a3620c81704196f4028aa0588f7403b3d2c5a6a1c2dd1b5716ae8518 | MIT | [] | 389 |
2.4 | pi-plates | 11.0 | Pi-Plates library setup | README.txt
| null | Jerry Wasinger, WallyWare, inc. | support@pi-plates.com | null | null | BSD | pi-plates, data acquisition, Windows, Mac, Linux, raspberry pi, relays, motors, temperature sensing, HATs, 4-20mA, and so much more | [] | [] | https://www.pi-plates.com | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T21:15:40.516179 | pi_plates-11.0.tar.gz | 62,143 | 91/62/298d3b45c715451f4ea79bd6e57ec35843b934c600797cf268cc26c50bb9/pi_plates-11.0.tar.gz | source | sdist | null | false | 07c038dddbf23f0c39d7c8664c0a2d7d | 7d99a8626e39982b67e415d29a483a070aac8e5a51c113f1311523b36be18381 | 9162298d3b45c715451f4ea79bd6e57ec35843b934c600797cf268cc26c50bb9 | null | [] | 310 |
2.4 | notion-mcp-remote-ldraney | 0.2.0 | Remote MCP connector wrapping notion-mcp with Notion OAuth 2.0 over Streamable HTTP | [](https://pypi.org/project/notion-mcp-remote-ldraney/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
# notion-mcp-remote
A remote MCP connector for [notion-mcp](https://pypi.org/project/notion-mcp-ldraney/) — connect Claude.ai to your Notion workspace in one step.
## For Users
Paste this URL into Claude.ai to connect:
```
https://archbox.tail5b443a.ts.net/mcp
```
1. Go to [claude.ai](https://claude.ai) → **Settings** → **Connectors**
2. Click **"Add custom connector"**
3. Paste the URL above
4. You'll be redirected to Notion — authorize access to **your own** workspace
5. Done. Claude can now read and write your Notion pages, databases, and blocks.
Works on desktop, iOS, and Android. Requires Claude Pro, Max, Team, or Enterprise.
### What you get
Full Notion API coverage via [notion-mcp](https://pypi.org/project/notion-mcp-ldraney/) (v2025-05-09):
- Page and database CRUD
- Block-level operations (append, update, delete)
- Property retrieval and pagination
- Comment threading
- Search across your workspace
- User and team lookups
### How it works
When you add this connector, Claude.ai initiates a standard OAuth 2.0 flow with Notion. **You authenticate directly with Notion and choose which pages to share.** Your access token is stored encrypted on the server and used only to make Notion API calls on your behalf. The operator of this server never sees your Notion credentials — only the OAuth token Notion issues.
```
Claude.ai This Server Notion
│ │ │
│ 1. Add connector (URL) │ │
│ ─────────────────────────► │ │
│ │ │
│ 2. Redirect to Notion │ │
│ ◄───────────────────────── │ │
│ │ │
│ 3. You authorize ─────────────────────────────────────► │
│ (your workspace, │ │
│ your pages) │ │
│ │ 4. Callback with auth code │
│ │ ◄────────────────────────── │
│ │ │
│ │ 5. Exchange for token │
│ │ ─────────────────────────► │
│ │ │
│ │ 6. Your access token │
│ │ ◄───────────────────────── │
│ │ │
│ 7. MCP tools now work │ 8. API calls to YOUR data │
│ ◄────────────────────────► │ ◄──────────────────────────► │
```
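For operators debugging the flow, step 5 can be reproduced by hand: Notion's token endpoint takes HTTP Basic auth built from your integration credentials. All values below are placeholders:

```shell
# Step 5 of the diagram: exchange the authorization code for an access token.
# CLIENT_ID, CLIENT_SECRET, and AUTH_CODE are placeholders from your integration.
CLIENT_ID=your_client_id
CLIENT_SECRET=your_client_secret
BASIC=$(printf '%s:%s' "$CLIENT_ID" "$CLIENT_SECRET" | base64)
curl -s -X POST https://api.notion.com/v1/oauth/token \
  -H "Authorization: Basic $BASIC" \
  -H "Content-Type: application/json" \
  -d '{"grant_type":"authorization_code","code":"AUTH_CODE","redirect_uri":"https://your-domain/oauth/callback"}'
```

With placeholder credentials Notion returns an OAuth error body, which is still useful for confirming the server can reach the endpoint.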
---
## For Operators
Everything below is for deploying your own instance of this connector.
### Why This Exists
Anthropic's built-in Notion connector uses an older API surface. `notion-mcp` provides significantly better coverage. This repo is a thin deployment wrapper that:
1. Imports all tools from `notion-mcp`
2. Serves them over Streamable HTTP (required for Claude.ai connectors)
3. Handles Notion's public OAuth 2.0 flow (per-user authentication)
4. Deploys behind any HTTPS tunnel (Tailscale Funnel, ngrok, Cloudflare Tunnel, etc.)
Each user who adds your connector authenticates with **their own** Notion workspace. You provide the infrastructure; they bring their own data.
### Prerequisites
- Python 3.11+
- An HTTPS tunnel ([Tailscale Funnel](https://tailscale.com/kb/1223/funnel) (free), [ngrok](https://ngrok.com/), or [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/))
- [Notion public integration](https://www.notion.so/profile/integrations) (OAuth client credentials)
- A machine to host on (always-on Linux box, Raspberry Pi, etc.)
### Notion OAuth Setup
You need a **public** Notion integration. This is what lets users OAuth into **their own** workspaces through your connector — the "public" label means your app can be authorized by any Notion user, not that their data becomes public.
1. Go to [notion.so/profile/integrations](https://www.notion.so/profile/integrations)
2. Click **"New integration"**
3. Choose **"Public"** integration type
4. Fill in required fields:
- **Integration name**: e.g. "Notion MCP Remote"
- **Redirect URI**: `https://<your-domain>/oauth/callback`
- **Company name**, **Website**, **Privacy policy URL**, **Terms of use URL** — Notion requires these even for hobby projects. Your GitHub repo URL and a simple privacy statement work fine.
5. Under **Capabilities**, enable everything you want to expose
6. Save and note your **OAuth client ID** and **OAuth client secret**
### Installation
```bash
git clone https://github.com/ldraney/notion-mcp-remote.git
cd notion-mcp-remote
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
### Configuration
```bash
cp .env.example .env
```
```env
# Notion OAuth (from your public integration)
NOTION_OAUTH_CLIENT_ID=your_client_id
NOTION_OAUTH_CLIENT_SECRET=your_client_secret
# Server
HOST=127.0.0.1
PORT=8000
BASE_URL=https://your-public-domain
# Session secret (generate with: python -c "import secrets; print(secrets.token_hex(32))")
SESSION_SECRET=your_random_secret
# Optional: Redis URL for token storage (defaults to local file-based storage)
# REDIS_URL=redis://localhost:6379
```
### Running
```bash
# Terminal 1: Start the MCP server
source .venv/bin/activate
python server.py
# Terminal 2: Expose via HTTPS tunnel (pick one)
sudo tailscale funnel --bg 8000 # Tailscale Funnel (free, stable URL)
ngrok http 8000 --url=your-domain # ngrok (requires paid plan for static domain)
```
Verify:
```bash
curl https://your-public-domain/health
```
Then share your connector URL with users: `https://your-public-domain/mcp`
### Deployment (Systemd)
For always-on hosting on a Linux box:
```bash
# Copy and enable service file
sudo cp systemd/notion-mcp-remote.service /etc/systemd/system/
# Edit paths in the service file to match your setup
sudo systemctl daemon-reload
sudo systemctl enable --now notion-mcp-remote
```
For the HTTPS tunnel, set up a separate systemd service or use your tunnel provider's daemon mode (e.g. `tailscale funnel --bg`, `ngrok service install`).
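As a sketch, a foreground Tailscale Funnel could run under its own unit like this (unit name, binary path, and port are illustrative — adapt to your tunnel provider):

```ini
[Unit]
Description=HTTPS tunnel for notion-mcp-remote
After=network-online.target tailscaled.service

[Service]
# Foreground funnel; systemd restarts it if the tunnel drops
ExecStart=/usr/bin/tailscale funnel 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```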
### Project Structure
```
notion-mcp-remote/
├── server.py # Main entrypoint — FastMCP HTTP server
├── auth/
│ ├── __init__.py
│ ├── provider.py # Notion OAuth 2.0 flow handlers
│ └── storage.py # Per-user token storage (file or Redis)
├── requirements.txt
├── .env.example
├── systemd/
│ └── notion-mcp-remote.service
├── Makefile
└── README.md
```
### Development
```bash
pip install -r requirements-dev.txt
pytest
python server.py --reload
ruff check .
```
### Makefile
```bash
make run # Start the server
make tunnel # Start ngrok tunnel
make dev # Start both (requires tmux)
make test # Run tests
make lint # Run linter
make install # Install dependencies
```
## Troubleshooting
### "Connector failed to connect" in Claude.ai
- Verify tunnel is running: `curl https://your-public-domain/health`
- Check server logs: `journalctl -u notion-mcp-remote -f`
- Ensure your Notion OAuth redirect URI matches exactly
### 421 Misdirected Request
- The server automatically adds `BASE_URL`'s hostname to the MCP transport security `allowed_hosts`. Make sure `BASE_URL` in `.env` matches your public domain exactly.
### OAuth callback errors
- Confirm `BASE_URL` in `.env` matches your public domain exactly
- Check that your Notion integration is set to **Public** (not Internal)
- Verify redirect URI in Notion integration settings matches `{BASE_URL}/oauth/callback`
### Token refresh issues
- Notion access tokens don't expire by default, but users can revoke access
- If a user revokes, they'll need to re-authorize through Claude's connector settings
## Privacy
See [PRIVACY.md](PRIVACY.md) for our privacy policy.
## Roadmap
- [x] Core HTTP wrapper with FastMCP Streamable HTTP transport
- [x] Notion public OAuth 2.0 flow
- [x] Per-user token storage and injection
- [x] Claude.ai connector integration testing
- [x] Encrypted token storage at rest
- [ ] Redis adapter for multi-instance deployments
- [ ] Health check and monitoring endpoints
- [ ] Rate limiting and abuse prevention
- [ ] Docker deployment option
- [ ] One-click deploy templates (Railway, Render)
## Related Projects
- [notion-mcp](https://pypi.org/project/notion-mcp-ldraney/) — The underlying MCP server with full Notion API coverage (also by [@ldraney](https://github.com/ldraney))
- [FastMCP](https://gofastmcp.com) — The MCP framework powering the HTTP transport
- [MCP Specification](https://modelcontextprotocol.io) — The Model Context Protocol standard
## License
MIT
| text/markdown | null | Lucas Draney <ldraney@users.noreply.github.com> | null | null | null | ai, claude, mcp, notion, oauth, remote | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"mcp-remote-auth-ldraney>=0.1.0",
"notion-mcp-ldraney<1.0,>=0.1.12",
"python-dotenv<2.0,>=1.2.1",
"uvicorn>=0.34"
] | [] | [] | [] | [
"Homepage, https://github.com/ldraney/notion-mcp-remote",
"Repository, https://github.com/ldraney/notion-mcp-remote",
"Issues, https://github.com/ldraney/notion-mcp-remote/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:15:07.047932 | notion_mcp_remote_ldraney-0.2.0.tar.gz | 7,256 | e4/bf/22c8d595ab9575574f5656773dc25f5ef1dc7b452bde3f8be207ecbcf843/notion_mcp_remote_ldraney-0.2.0.tar.gz | source | sdist | null | false | 782434b338645d726ca4b16ace9dee82 | 3933ed9f86650081e7ba8c52a335a5d4f109e80f182b4f7090a40564746ebc2b | e4bf22c8d595ab9575574f5656773dc25f5ef1dc7b452bde3f8be207ecbcf843 | MIT | [
"LICENSE"
] | 211 |
2.4 | gcal-mcp-remote-ldraney | 0.2.0 | Remote MCP connector wrapping gcal-mcp with Google OAuth 2.0 over Streamable HTTP | [](https://pypi.org/project/gcal-mcp-remote-ldraney/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
# gcal-mcp-remote
Remote MCP server wrapping [gcal-mcp](https://github.com/ldraney/gcal-mcp) with Google OAuth 2.0 and Streamable HTTP transport — designed for Claude.ai connectors.
## How it works
```
Claude.ai ──HTTP+Bearer──> gcal-mcp-remote ──Google API──> Google Calendar
│
imports gcal-mcp's FastMCP (all 14 tools)
patches get_client() with per-request ContextVar
adds Google OAuth + /health + /oauth/callback
```
Three-party OAuth: Claude ↔ this server ↔ Google. Each user authorizes their own Google Calendar via OAuth. The server stores per-user Google refresh tokens (encrypted at rest).
## Prerequisites
1. A **Web application** OAuth client in [Google Cloud Console](https://console.cloud.google.com/apis/credentials):
- Create Credentials → OAuth client ID → Type: **Web application**
- Add authorized redirect URI: `{BASE_URL}/oauth/callback`
- Note the Client ID and Client Secret
2. Google Calendar API enabled in the same project
## Setup
```bash
git clone https://github.com/ldraney/gcal-mcp-remote.git
cd gcal-mcp-remote
cp .env.example .env
# Edit .env with your Google OAuth credentials, BASE_URL, and SESSION_SECRET
make install
make run
```
## Environment Variables
| Variable | Description |
|----------|-------------|
| `GCAL_OAUTH_CLIENT_ID` | Google OAuth Web application client ID |
| `GCAL_OAUTH_CLIENT_SECRET` | Google OAuth Web application client secret |
| `SESSION_SECRET` | Random secret for encrypting token store |
| `BASE_URL` | Public HTTPS URL where this server is reachable |
| `HOST` | Bind address (default: `127.0.0.1`) |
| `PORT` | Listen port (default: `8001`) |
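Putting the table together, a minimal `.env` sketch (every value is a placeholder):

```env
GCAL_OAUTH_CLIENT_ID=your_client_id.apps.googleusercontent.com
GCAL_OAUTH_CLIENT_SECRET=your_client_secret
SESSION_SECRET=generate_a_random_hex_string
BASE_URL=https://your-public-domain
HOST=127.0.0.1
PORT=8001
```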
## Verification
```bash
curl http://127.0.0.1:8001/health # {"status": "ok"}
curl http://127.0.0.1:8001/.well-known/oauth-authorization-server # OAuth metadata
curl -X POST http://127.0.0.1:8001/mcp # 401 (auth required)
```
## Deploying
### Standalone
Use any HTTPS tunnel (Tailscale Funnel, ngrok, Cloudflare Tunnel) to expose the server publicly, then add the URL as a Claude.ai connector.
A systemd service file is provided in `systemd/gcal-mcp-remote.service`.
### Kubernetes (production)
Production deployment is managed by [mcp-gateway-k8s](https://github.com/ldraney/mcp-gateway-k8s), which runs this server as a pod with Tailscale Funnel ingress. That repo contains the Dockerfile, K8s manifests, and secrets management.
Full integration testing (Claude.ai connector → OAuth → Google Calendar) is tracked in [mcp-gateway-k8s#12](https://github.com/ldraney/mcp-gateway-k8s/issues/12) and [#6](https://github.com/ldraney/gcal-mcp-remote/issues/6).
## Privacy
See [PRIVACY.md](PRIVACY.md) for our privacy policy.
## License
MIT
| text/markdown | Lucas Draney | null | null | null | null | google-calendar, mcp, oauth, remote, claude, ai | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"gcal-mcp-ldraney>=0.1.0",
"mcp-remote-auth-ldraney>=0.1.0",
"python-dotenv>=1.0",
"httpx>=0.28",
"uvicorn>=0.34"
] | [] | [] | [] | [
"Repository, https://github.com/ldraney/gcal-mcp-remote",
"Issues, https://github.com/ldraney/gcal-mcp-remote/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:15:05.614980 | gcal_mcp_remote_ldraney-0.2.0.tar.gz | 6,794 | cf/fd/948807068d0b66470d666c18a3789a59fffec01a50f7d9e3a0d60f667d77/gcal_mcp_remote_ldraney-0.2.0.tar.gz | source | sdist | null | false | b081aad04f48368da6d89367d919eefa | 3b542b01be3b1e71b67e7ab8ea7d53a1349aa6b9c58558abf24f090e251dda32 | cffd948807068d0b66470d666c18a3789a59fffec01a50f7d9e3a0d60f667d77 | MIT | [
"LICENSE"
] | 207 |
2.4 | linkedin-scheduler-remote-ldraney | 0.2.0 | Remote MCP connector wrapping linkedin-mcp-scheduler with LinkedIn OAuth 2.0 over Streamable HTTP | [](https://pypi.org/project/linkedin-scheduler-remote-ldraney/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
# linkedin-scheduler-remote
Remote MCP server wrapping [linkedin-mcp-scheduler](https://github.com/ldraney/linkedin-mcp-scheduler) with LinkedIn OAuth 2.0 and Streamable HTTP transport — designed for Claude.ai connectors.
## How it works
```
Claude.ai ──HTTP+Bearer──> linkedin-scheduler-remote ──LinkedIn API──> LinkedIn
│
imports linkedin-mcp-scheduler's FastMCP (all 8 tools)
patches get_client() with per-request ContextVar
adds LinkedIn OAuth + /health + /oauth/callback
```
Three-party OAuth: Claude <-> this server <-> LinkedIn. Each user authorizes their own LinkedIn account via OAuth. The server stores per-user LinkedIn access tokens (encrypted at rest).
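The per-user token store can be illustrated with a small sketch. Note the hedge: this signs records with an HMAC derived from a stand-in for `SESSION_SECRET` (integrity only), whereas the server described above encrypts tokens at rest; the function names here are hypothetical.

```python
import hashlib
import hmac

# Stand-in for the SESSION_SECRET environment variable (illustration only).
SESSION_SECRET = b"change-me"

def seal(user_id: str, token: str) -> dict:
    """Store a per-user token alongside a MAC so tampering is detectable."""
    mac = hmac.new(SESSION_SECRET, f"{user_id}:{token}".encode(), hashlib.sha256).hexdigest()
    return {"user": user_id, "token": token, "mac": mac}

def verify(record: dict) -> bool:
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(
        SESSION_SECRET, f"{record['user']}:{record['token']}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["mac"])

record = seal("member-123", "AQX-access-token")
print(verify(record))  # True for an untampered record
```

A real deployment would encrypt the token itself rather than merely signing it; the sketch only shows why a server-side secret is needed per store.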
## Prerequisites
1. A **LinkedIn Developer App** at [LinkedIn Developer Portal](https://www.linkedin.com/developers/apps):
- Create an app (or use existing)
- Under Auth settings, add authorized redirect URL: `{BASE_URL}/oauth/callback`
- Note the Client ID and Client Secret
- Ensure the app has the products: "Share on LinkedIn" and "Sign In with LinkedIn using OpenID Connect"
2. Scopes required: `openid`, `profile`, `email`, `w_member_social`
## Setup
```bash
git clone https://github.com/ldraney/linkedin-scheduler-remote.git
cd linkedin-scheduler-remote
cp .env.example .env
# Edit .env with your LinkedIn OAuth credentials, BASE_URL, and SESSION_SECRET
make install
make run
```
## Environment Variables
| Variable | Description |
|----------|-------------|
| `LINKEDIN_OAUTH_CLIENT_ID` | LinkedIn app Client ID |
| `LINKEDIN_OAUTH_CLIENT_SECRET` | LinkedIn app Client Secret |
| `SESSION_SECRET` | Random secret for encrypting token store |
| `BASE_URL` | Public HTTPS URL where this server is reachable |
| `HOST` | Bind address (default: `127.0.0.1`) |
| `PORT` | Listen port (default: `8002`) |
## Verification
```bash
curl http://127.0.0.1:8002/health # {"status": "ok"}
curl http://127.0.0.1:8002/.well-known/oauth-authorization-server # OAuth metadata
curl -X POST http://127.0.0.1:8002/mcp # 401 (auth required)
```
## Tools (8)
All scheduling tools from linkedin-mcp-scheduler are exposed:
- **schedule_post** — Schedule a LinkedIn post for future publication
- **list_scheduled_posts** — List posts with optional status filter
- **get_scheduled_post** — Get details of a scheduled post
- **cancel_scheduled_post** — Cancel a pending post
- **update_scheduled_post** — Edit pending post fields
- **reschedule_post** — Change the scheduled time
- **retry_failed_post** — Reset failed posts to pending
- **queue_summary** — Get queue statistics
## Deploying
### Standalone
Use any HTTPS tunnel (Tailscale Funnel, ngrok, Cloudflare Tunnel) to expose the server publicly, then add the URL as a Claude.ai connector.
A systemd service file is provided in `systemd/linkedin-scheduler-mcp-remote.service`.
### Kubernetes (production)
Production deployment is managed by [mcp-gateway-k8s](https://github.com/ldraney/mcp-gateway-k8s), which runs this server as a pod with Tailscale Funnel ingress.
## Design Notes
The scheduler-remote differs from the gcal/notion remotes in one key way: there's a **daemon** (`linkedin-mcp-scheduler-daemon`) that publishes posts to LinkedIn. It currently operates in single-user mode; the daemon uses its own env-var credentials. Multi-user daemon support (per-user credentials stored with each post) is a future enhancement.
## Privacy
See [PRIVACY.md](PRIVACY.md) for our privacy policy.
## License
MIT
| text/markdown | null | Lucas Draney <107153866+ldraney@users.noreply.github.com> | null | null | null | ai, claude, linkedin, mcp, oauth, remote | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Communications"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx<1.0,>=0.28",
"linkedin-mcp-scheduler-ldraney<1.0,>=0.1.0",
"mcp-remote-auth-ldraney>=0.1.0",
"python-dotenv<2.0,>=1.0",
"uvicorn<1.0,>=0.34"
] | [] | [] | [] | [
"Repository, https://github.com/ldraney/linkedin-scheduler-remote",
"Issues, https://github.com/ldraney/linkedin-scheduler-remote/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:14:57.884865 | linkedin_scheduler_remote_ldraney-0.2.0.tar.gz | 6,028 | 55/f2/94cb9dadfd055239d20e7d6d25ad3bd12dfaf7bbd76e3b757a732eda7a4a/linkedin_scheduler_remote_ldraney-0.2.0.tar.gz | source | sdist | null | false | 4a52c6a3de68534b26210b394ab9c9b9 | ccfb5eda174327e156e91f9ac3c3aad1d011a0169b7e9699c54955910f3df73a | 55f294cb9dadfd055239d20e7d6d25ad3bd12dfaf7bbd76e3b757a732eda7a4a | MIT | [
"LICENSE"
] | 220 |
2.4 | gmail-mcp-remote-ldraney | 0.3.0 | Remote MCP connector wrapping gmail-mcp with Google OAuth 2.0 over Streamable HTTP | [](https://pypi.org/project/gmail-mcp-remote-ldraney/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
# gmail-mcp-remote
HTTP/Streamable-HTTP wrapper for gmail-mcp — 3-party OAuth proxy for Claude.ai and cluster integration
## Privacy
See [PRIVACY.md](PRIVACY.md) for our privacy policy.
| text/markdown | Lucas Draney | null | null | null | null | gmail, mcp, oauth, remote, claude, ai, email | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers",
"Topic :: Communications :: Email"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"gmail-mcp-ldraney>=0.1.2",
"mcp-remote-auth-ldraney>=0.1.0",
"python-dotenv>=1.0",
"httpx>=0.28",
"uvicorn>=0.34"
] | [] | [] | [] | [
"Repository, https://github.com/ldraney/gmail-mcp-remote",
"Issues, https://github.com/ldraney/gmail-mcp-remote/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:14:54.632432 | gmail_mcp_remote_ldraney-0.3.0.tar.gz | 5,004 | 70/2f/0c07f2ad13a00eb2743dc7c969325f255dda3cd2e3e1cda84d32f505c2da/gmail_mcp_remote_ldraney-0.3.0.tar.gz | source | sdist | null | false | d2c7284bb01105f6328d0ac875201caf | 4b0971d71c0fb83cdf14f56cf4cb8f9acf4c0343773fe603ebf246b096ccd633 | 702f0c07f2ad13a00eb2743dc7c969325f255dda3cd2e3e1cda84d32f505c2da | MIT | [
"LICENSE"
] | 212 |
2.4 | FAIRLinked | 0.3.2.7 | Transform materials research data into FAIR-compliant RDF Data. Align your datasets with MDS-Onto and convert them into Linked Data, enhancing interoperability and reusability for seamless data integration. See the README or vignette for more information. This tool is used by the SDLE Research Center at Case Western Reserve University. | # FAIRLinked
FAIRLinked is a powerful tool for transforming research data into FAIR-compliant RDF. It helps you align tabular or semi-structured datasets with the MDS-Onto ontology and convert them into Linked Data formats, enhancing interoperability, discoverability, and reuse.
With FAIRLinked, you can:
- Convert CSV/Excel/JSON into RDF, JSON-LD, or OWL
- Automatically download and track the latest MDS-Onto ontology files
- Add or search terms in your ontology files with ease
- Generate metadata summaries and RDF templates
- Prepare datasets for FAIR repository submission

This tool is actively developed and maintained by the **SDLE Research Center at Case Western Reserve University** and is used in multiple federally funded projects.
Documentation for the functions in FAIRLinked can be found [here](https://fairlinked.readthedocs.io/)
---
## ✍️ Authors
* **Van D. Tran**
* **Brandon Lee**
* Ritika Lamba
* Henry Dirks
* Balashanmuga Priyan Rajamohan
* Gabriel Ponon
* Quynh D. Tran
* Ozan Dernek
* Yinghui Wu
* Erika I. Barcelos
* Roger H. French
* Laura S. Bruckman
---
## 🏢 Affiliation
Materials Data Science for Stockpile Stewardship Center of Excellence, Cleveland, OH 44106, USA
---
## 🐍 Python Installation
You can install FAIRLinked using pip:
```bash
pip install FAIRLinked
```
or directly from the FAIRLinked GitHub repository
```bash
git clone https://github.com/cwru-sdle/FAIRLinked.git
cd FAIRLinked
pip install .
```
---
## ⏰ Quick Start
This section provides example runs of the serialization and deserialization processes. All example files can be found in the `FAIRLinked` GitHub repository under `resources`, or can be accessed directly [here](https://raw.githubusercontent.com/cwru-sdle/FAIRLinked/resources). Command-line versions of the functions below can be found [here](https://github.com/cwru-sdle/FAIRLinked/blob/main/resources/CLI_Examples.md) and in the [change log](https://github.com/cwru-sdle/FAIRLinked/blob/main//CHANGELOG.md).
### Serializing and deserializing with RDFTableConversion
To start serializing with FAIRLinked, we first make a template using `jsonld_template_generator` from `FAIRLinked.RDFTableConversion.csv_to_jsonld_mapper`. In your CSV, make sure to reserve some (possibly empty or partially filled) rows for metadata about your variables.
**Note**
Please make sure to follow the proper formatting guidelines for input CSV file.
* Each column name should be the "common" or alternative name for this object
* The following three rows should be reserved for the **type**, **units**, and **study stage**, in that order
  * If values for these are not available, the space should be left blank
* Data for each sample can then begin on the 5th row
Please see the following images for reference

Minimum Viable Data

During the template generating process, the user may be prompted for data for different columns. When no units are detected, the user will be prompted for the type of unit, and then given a list of valid units to choose from.


When no study stage is detected, the user will similarly be given a list of study stages to choose from.

The user will also be prompted for an optional note for each column.
**IN THIS FIRST EXAMPLE**, we will use the microindentation data of a PMMA, or Poly(methyl methacrylate), sample.
```python
from FAIRLinked.InterfaceMDS.load_mds_ontology import load_mds_ontology_graph
from FAIRLinked.RDFTableConversion.csv_to_jsonld_mapper import jsonld_template_generator
mds_graph = load_mds_ontology_graph()
jsonld_template_generator(csv_path="resources/worked-example-RDFTableConversion/microindentation/sa17455_00.csv",
ontology_graph=mds_graph,
output_path="resources/worked-example-RDFTableConversion/microindentation/output_template.json",
matched_log_path="resources/worked-example-RDFTableConversion/microindentation/microindentation_matched.txt",
unmatched_log_path="resources/worked-example-RDFTableConversion/microindentation/microindentation_unmatched.txt",
skip_prompts=False)
```
The template is designed to capture the metadata associated with a variable, including units, study stage, row key, and variable definition. If you do not wish to go through the prompts to enter the metadata, set `skip_prompts` to `True`.
After creating the template, run `extract_data_from_csv` using the template and CSV input to create JSON-LDs filled with data instances.
```python
from FAIRLinked.RDFTableConversion.csv_to_jsonld_template_filler import extract_data_from_csv
import json
from FAIRLinked.InterfaceMDS.load_mds_ontology import load_mds_ontology_graph
mds_graph = load_mds_ontology_graph()
with open("resources/worked-example-RDFTableConversion/microindentation/output_template.json", "r") as f:
metadata_template = json.load(f)
prop_col_pair_dict = {"is about": [("PolymerGrade", "Sample"),
("Hardness (GPa)", "Sample"),
("VickersHardness", "YoungModulus"),
("Load (Newton)", "Measurement"),
("ExposureStep","Measurement"),
("ExposureType","Measurement"),
("MeasurementNumber","Measurement")]}
extract_data_from_csv(metadata_template=metadata_template,
csv_file="resources/worked-example-RDFTableConversion/microindentation/sa17455_00.csv",
orcid="0000-0001-2345-6789",
output_folder="resources/worked-example-RDFTableConversion/microindentation/test_data_microindentation/output_microindentation",
row_key_cols=["Measurement", "Sample"],
id_cols=["Measurement", "Sample"],
prop_column_pair_dict=prop_col_pair_dict,
ontology_graph=mds_graph)
```
The arguments `row_key_cols`, `id_cols`, `prop_column_pair_dict`, and `ontology_graph` are all optional. `row_key_cols` names columns whose concatenated values form row keys used to identify each row, while `id_cols` are columns whose values identify unique entities that must be tracked across multiple rows. `prop_column_pair_dict` is a Python dictionary specifying the object properties or data properties to be used in the resulting RDF graph and the instances connected by those properties. Finally, `ontology_graph` is required whenever `prop_column_pair_dict` is provided; it is the source of the properties available to the user.
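The row-key idea — concatenating the values of the `row_key_cols` columns to identify each row — can be sketched in plain Python. The column names, rows, and join character below are hypothetical; the real logic lives inside `extract_data_from_csv`.

```python
# Sketch: build row keys by joining the values of the designated key columns.
rows = [
    {"Measurement": "M1", "Sample": "S1", "Hardness (GPa)": 0.21},
    {"Measurement": "M2", "Sample": "S1", "Hardness (GPa)": 0.23},
]
row_key_cols = ["Measurement", "Sample"]

def row_key(row: dict) -> str:
    # Concatenate the key-column values in order, e.g. "M1_S1".
    return "_".join(str(row[c]) for c in row_key_cols)

print([row_key(r) for r in rows])  # ['M1_S1', 'M2_S1']
```

Because `Sample` repeats across rows while `Measurement` does not, the pair together yields a unique key per row — which is why `id_cols` (entities tracked across rows) and `row_key_cols` (per-row identity) are separate arguments.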
To view the list of properties in MDS-Onto, run the following script:
```python
from FAIRLinked.InterfaceMDS.load_mds_ontology import load_mds_ontology_graph
from FAIRLinked.RDFTableConversion.csv_to_jsonld_template_filler import generate_prop_metadata_dict
mds_graph = load_mds_ontology_graph()
view_all_props = generate_prop_metadata_dict(mds_graph)
for key, value in view_all_props.items():
print(f"{key}: {value}")
```
To deserialize your data, use `jsonld_directory_to_csv`, which will turn a folder of JSON-LDs (with the same data schema) back into a CSV with metadata right below the column headers.
```python
import rdflib
from rdflib import Graph
import FAIRLinked.RDFTableConversion.jsonld_batch_converter
from FAIRLinked.RDFTableConversion.jsonld_batch_converter import jsonld_directory_to_csv
jsonld_directory_to_csv(input_dir="resources/worked-example-RDFTableConversion/microindentation/test_data_microindentation/output_microindentation",
output_basename="sa17455_00_microindentation",
output_dir="resources/worked-example-RDFTableConversion/microindentation/test_data_microindentation/output_deserialize_microindentation")
```
## Serializing and deserializing using RDF Data Cube with QBWorkflow
The RDF Data Cube Workflow is better run in `bash`.
```shell
$ FAIRLinked data-cube-run
```
This will start a series of prompts for users to serialize their data using RDF Data Cube vocabulary.
```text
Welcome to FAIRLinked RDF Data Cube 🚀
Do you have an existing RDF data cube dataset? (yes/no): no
```
Answer 'yes' to deserialize your data from linked data back to tabular format. If you do not wish to deserialize, answer 'no'. After answering 'no', you will be asked whether you are currently running an experiment. To generate a data template, answer 'yes'. Otherwise, answer 'no'.
```text
Are you running an experiment now? (yes/no): yes
```
Once you've answered 'yes', you will be prompted to provide two ontology files.
```text
Do you have these ontology files (lowest-level, MDS combined)? (yes/no): yes
```
This question asks whether you have the following two Turtle files: 'lowest-level' and 'MDS combined'. A 'lowest-level' ontology file is a Turtle file that contains all the terms you want to use in your dataset, while 'MDS combined' is the general MDS-Onto, which can be downloaded from our website https://cwrusdle.bitbucket.io/. If you answer 'yes', you will be prompted for the paths to these files. If you answer 'no', a generic template will be created with unspecified variable names.
```text
Enter the path to the Lowest-level MDS ontology file: resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/Low-Level_Corrected.ttl
Enter the path to the Combined MDS ontology file: resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/MDS_Onto_Corrected.ttl
```
This will generate a template in the current working directory. For the result of this run, see `resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/data_template.xlsx`.
To serialize your data, start the workflow again.
```shell
$ FAIRLinked data-cube-run
```
Answer 'no' to the first question.
```text
Welcome to FAIRLinked RDF Data Cube 🚀
Do you have an existing RDF data cube dataset? (yes/no): no
```
When asked if you are running an experiment, answer 'no'.
```text
Are you running an experiment now? (yes/no): no
```
For most users, the answer to the question below should be 'no'. However, if you are working with a distributed database where 'hotspotting' could be a potential problem (too many queries directed towards one node), then answering 'yes' will make sure serialized files are "salted". If you answer 'yes', each row in a single sheet will be serialized, and the file names will be "salted" with two random letters in front of the other row identifiers.
```text
Do you have data for CRADLE ingestion? (yes/no): no
```
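"Salting" here simply means prefixing two random letters so that lexicographically adjacent file names hash to different database nodes. A minimal sketch (the exact naming scheme used by FAIRLinked is an assumption):

```python
import random
import string

def salted_name(row_id: str) -> str:
    # Two random lowercase letters in front of the row identifier spread
    # otherwise-adjacent names across nodes of a distributed store.
    salt = "".join(random.choices(string.ascii_lowercase, k=2))
    return f"{salt}_{row_id}"

print(salted_name("ExperimentId-0042"))  # e.g. 'qz_ExperimentId-0042'
```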
Next, you will be prompted for a namespace template (which contains a mapping of all the prefixes to the proper namespace in the serialized files), your filled-out data file, and the path to output the serialized files.
```text
Enter ORC_ID: 0000-0001-2345-6789
Enter the path to the namespace Excel file: resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/namespace_template.xlsx
Enter the path to the data Excel file: resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/mock_xrd_data.xlsx
Enter the path to the output folder: resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/output_serialize
```
The next question asks for the mode of conversion. If 'entire', the full table will be serialized into one RDF graph. If 'row-by-row', each row will be serialized into its own graph. In this example, we will choose row-by-row.
```text
Do you want to convert the entire DataFrame as one dataset or row-by-row? (entire/row-by-row): row-by-row
```
You will then be prompted to select your row identifiers, after which `FAIRLinked` exits automatically.
```text
The following columns appear to be identifiers (contain 'id' in their name):
Include column 'ExperimentId' in the row-based dataset naming? (yes/no): yes
Include column 'DetectorWidth' in the row-based dataset naming? (yes/no): no
Approved ID columns for naming: ['ExperimentId']
Conversion completed under mode='row-by-row'. Outputs in: resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/output_serialize/Output_0000000123456789_20260216110630
FAIRLinked exiting
```
To deserialize your data, start the workflow and answer 'yes' to the first question:
```text
Welcome to FAIRLinked RDF Data Cube 🚀
Do you have an existing RDF data cube dataset? (yes/no): yes
```
```text
Enter the path to your RDF data cube file/folder (can be .ttl/.jsonld or a directory): resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/output_serialize/Output_0000000123456789_20260216110630/jsonld
Enter the path to the output folder: resources/worked-example-QBWorkflow/test_data/Final_Corrected_without_DetectorName/output_deserialize
```
---
## 💡 Acknowledgments
This work was supported by:
* U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE) under the Solar Energy Technologies Office (SETO) — Agreement Numbers **DE-EE0009353** and **DE-EE0009347**
* Department of Energy (National Nuclear Security Administration) — Award Number **DE-NA0004104** and Contract Number **B647887**
* U.S. National Science Foundation — Award Number **2133576**
---
## 🤝 Contributing
We welcome new ideas and community contributions! If you use FAIRLinked in your research, please **cite the project** or **reach out to the authors**.
Let us know if you'd like to include:
* Badges (e.g., PyPI version, License, Docs)
* ORCID links or contact emails
* Example datasets or a GIF walkthrough
| text/markdown | Van D. Tran, Brandon Lee, Henry Dirks, Ritika Lamba, Balashanmuga Priyan Rajamohan, Gabriel Ponon, Quynh D. Tran, Ozan Dernek, Yinghui Wu, Erika I. Barcelos, Roger H. French, Laura S. Bruckman | rxf131@case.edu | null | null | BSD-3-Clause | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | >=3.9.18 | [] | [] | [] | [
"rdflib>=7.0.0",
"typing-extensions>=4.0.0",
"pyarrow>=11.0.0",
"openpyxl>=3.0.0",
"pandas>=1.0.0",
"cemento>=0.6.1",
"fuzzysearch>=0.8.0",
"tqdm>=4.0.0",
"pyld>=2.0.3",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://fairlinked.readthedocs.io/en/latest/",
"Source, https://github.com/cwru-sdle/FAIRLinked",
"Tracker, https://github.com/cwru-sdle/FAIRLinked/issues",
"Homepage, https://cwrusdle.bitbucket.io/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:14:34.624419 | fairlinked-0.3.2.7.tar.gz | 66,359 | ed/2e/a65168813fc8041a3a5df882cc9a81bb41b126d7f34d1d9e01c6595e9897/fairlinked-0.3.2.7.tar.gz | source | sdist | null | false | ec859627adb4744eee40c7d579f25b67 | 4be0750ab16f7ea2d68eb473eab52fba0d89545ec55ef2772bbf8277126a0025 | ed2ea65168813fc8041a3a5df882cc9a81bb41b126d7f34d1d9e01c6595e9897 | null | [
"LICENSE.txt"
] | 0 |
2.4 | backup-helper | 0.3.1 | Helper tool for creating plain-file cold-storage archives including checksum files | # BackupHelper
A tool for simplifying the process of archiving multiple directories
onto several different drives. For each directory a checksum file
will be created, which will be verified after the transfer.
You can stage multiple sources and add targets to them.
Once you're done you can start the transfer, which will run
all copy operations at the same time, while making sure that
all disks in a transfer aren't busy with another BackupHelper operation.
## Quick start
Add a directory as a source for copying/archiving:
```
python -m backup_helper stage ~/Documents --alias docs
Staged: /home/m/Documents
with alias: docs
```
By default the BackupHelper state will be saved in the file
`backup_status.json` in the current working directory.
Alternatively a custom path can be used by passing
`--status-file /path/to/status.json` to __each__ command.
Add targets to that source. Either the normalized absolute path
can be used as `source` or the alias (here: _"docs"_) if present:
```
$ python -m backup_helper add-target docs /media/storage1/docs_2024 --alias storage1
Added target /media/storage1/docs_2024
with alias: storage1
$ python -m backup_helper add-target docs /media/storage2/docs_2024 --alias storage2
Added target /media/storage2/docs_2024
with alias: storage2
```
Now you can use the `start` command to run the whole backup process
in sequence.
```
python -m backup_helper start
18:22:01 - INFO - Wrote /home/m/Documents/Documents_bh_2024-02-25T18-22-01.cshd
...
18:22:02 - INFO -
NO MISSING FILES!
NO FAILED CHECKSUMS!
SUMMARY:
TOTAL FILES: 3
MATCHES: 3
FAILED CHECKSUMS: 0
MISSING: 0
...
18:22:02 - INFO - /home/m/Documents/Documents_bh_2024-02-25T18-22-01.cshd: No missing files and all files matching their hashes
...
18:22:02 - INFO - Successfully completed the following 5 operation(s):
Hashed '/home/m/Documents':
Hash file: /home/m/Documents/Documents_bh_2024-02-25T18-22-01.cshd
Transfer successful:
From: /home/m/Documents
To: /media/storage1/docs_2024
Transfer successful:
From: /home/m/Documents
To: /media/storage2/docs_2024
Verified transfer '/media/storage1/docs_2024':
Checked: 3
CRC Errors: 0
Missing: 0
Verified transfer '/media/storage2/docs_2024':
Checked: 3
CRC Errors: 0
Missing: 0
```
Each part of the backup process can be run on its own and on a
specific source/target combination only. For more information
see the [backup process section](#backup-process).
## Backup process
The backup process, which can be run automatically using the
`start` command is split into the subprocesses:
1) Hash all source directories. The checksum file will be added to
the directory. A log of the checksum file creation will
be written next to the status JSON file.
2) Transfer all sources to their targets. Only one read __or__ write
operation per disk will be allowed at the same time.
3) Verify the transfer by comparing the hashes of the generated
checksum file with the hashes of the files on the target.
A log of the verification process will be written to the target.
The verification process (3) is run last when further transfer
operations are queued on a disk, so that:
1) More expensive write operations are performed first.
2) The transferred files are less likely to be in cache when hashing.
Each part of the backup process can be run on its own and/or on a
specific source/target combination only. Required previous steps
will be run automatically.
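The hash-then-verify idea behind steps 1 and 3 above can be sketched with `hashlib`. This is a simplified illustration, not BackupHelper's actual `.cshd` format or API:

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-512 of a file, read in 1 MiB chunks to bound memory use."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transfer(checksums: dict, target: Path) -> dict:
    """Compare recorded source hashes against the copied files on the target."""
    result = {"matches": 0, "crc_errors": 0, "missing": 0}
    for rel, digest in checksums.items():
        copied = target / rel
        if not copied.exists():
            result["missing"] += 1
        elif file_hash(copied) == digest:
            result["matches"] += 1
        else:
            result["crc_errors"] += 1
    return result
```

Hashing the source before the copy and re-hashing on the target is what catches both transfer corruption and bit-rot on the destination drive.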
Using the `interactive` command it's possible to add sources/targets
while the transfer is running, otherwise all running operations would
need to be completed before executing further commands.
## Commands
See `python -m backup_helper --help`
| text/markdown | null | omgitsmoe <60219950+omgitsmoe@users.noreply.github.com> | null | null | null | script, verify, backup, archival, bit-rot | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"checksum_helper<0.5,>=0.4",
"pytest<8,>=7.2; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/omgitsmoe/backup_helper",
"Bug Tracker, https://github.com/omgitsmoe/backup_helper/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T21:14:03.987073 | backup_helper-0.3.1-py3-none-any.whl | 23,282 | a3/53/f7939e207562d5153f765673a0f287b860495bc7be9fa7237b868b972aa7/backup_helper-0.3.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 2c207124387bd5f1ee2a98c1e94f0a99 | e9bfc3da394469aeff92d4a3c3d77eafdf7133666506386cee91a29a892d78f2 | a353f7939e207562d5153f765673a0f287b860495bc7be9fa7237b868b972aa7 | null | [] | 89 |
2.4 | vivarium-testing-utils | 0.3.4 | Project to store testing utilities for Vivarium software. | ======================
Vivarium Testing Utils
======================
This is a repository that will store utility features to help test
Vivarium software.
**Vivarium Testing Utils requires Python 3.8-3.11 to run**
You can install ``vivarium_testing_utils`` from PyPI with pip:
``> pip install vivarium_testing_utils``
or build it from source with
``> git clone https://github.com/ihmeuw/vivarium_testing_utils.git``
``> cd vivarium_testing_utils``
``> conda create -n ENVIRONMENT_NAME python=3.11``
``> conda activate ENVIRONMENT_NAME``
``> pip install -e .[dev]``
| null | The vivarium developers | vivarium.dev@gmail.com | null | null | BSD-3-Clause | null | [
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Natural Language :: English",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX",
"Operating System :: POSIX :: BSD",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Education",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Life",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Medical Science Apps.",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Software Development :: Libraries"
] | [] | https://github.com/ihmeuw/vivarium_testing_utils | null | null | [] | [] | [] | [
"vivarium_dependencies[click,loguru,networkx,numpy,pyyaml,scipy,tables]",
"pandas",
"vivarium_build_utils<3.0.0,>=2.0.1",
"pyarrow",
"seaborn",
"layered-config-tree",
"types-setuptools",
"vivarium_dependencies[docutils,ipython,matplotlib,sphinx,sphinx-click]; extra == \"docs\"",
"vivarium_dependencies[pytest]; extra == \"test\"",
"pytest-check; extra == \"test\"",
"vivarium_dependencies[interactive]; extra == \"interactive\"",
"vivarium>=3.4.0; extra == \"validation\"",
"vivarium-inputs<8.0.0,>=7.1.0; extra == \"validation\"",
"pandera<0.23.0; extra == \"validation\"",
"gbd_mapping; extra == \"validation\"",
"vivarium_dependencies[docutils,ipython,matplotlib,sphinx,sphinx-click]; extra == \"dev\"",
"vivarium_dependencies[pytest]; extra == \"dev\"",
"pytest-check; extra == \"dev\"",
"vivarium_dependencies[interactive]; extra == \"dev\"",
"vivarium_dependencies[lint]; extra == \"dev\"",
"vivarium>=3.4.0; extra == \"dev\"",
"vivarium-inputs<8.0.0,>=7.1.0; extra == \"dev\"",
"pandera<0.23.0; extra == \"dev\"",
"gbd_mapping; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:14:00.371878 | vivarium_testing_utils-0.3.4.tar.gz | 1,057,760 | c6/df/6312e8f60aa68f98536328101f5712924efcd38a5aded8cd7ccf78cf3b62/vivarium_testing_utils-0.3.4.tar.gz | source | sdist | null | false | b3fa980bbff3d84f13c4307711bbaa2c | d77b4d122723044e1f48c500458ca7c8b46ff3c12242897710773d667094eaaf | c6df6312e8f60aa68f98536328101f5712924efcd38a5aded8cd7ccf78cf3b62 | null | [
"LICENSE"
] | 262 |
2.4 | module-utilities | 0.11.1.dev0 | Collection of utilities to aid working with python modules. | <!-- markdownlint-disable MD041 -->
<!-- prettier-ignore-start -->
[![Repo][repo-badge]][repo-link]
[![Docs][docs-badge]][docs-link]
[![PyPI license][license-badge]][license-link]
[![PyPI version][pypi-badge]][pypi-link]
[![Conda (channel only)][conda-badge]][conda-link]
[![Code style: ruff][ruff-badge]][ruff-link]
[![uv][uv-badge]][uv-link]
<!--
For more badges, see
https://shields.io/category/other
https://naereen.github.io/badges/
[pypi-badge]: https://badge.fury.io/py/module-utilities
-->
[ruff-badge]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
[ruff-link]: https://github.com/astral-sh/ruff
[uv-badge]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json
[uv-link]: https://github.com/astral-sh/uv
[pypi-badge]: https://img.shields.io/pypi/v/module-utilities
[pypi-link]: https://pypi.org/project/module-utilities
[docs-badge]: https://img.shields.io/badge/docs-sphinx-informational
[docs-link]: https://pages.nist.gov/module-utilities/
[repo-badge]: https://img.shields.io/badge/--181717?logo=github&logoColor=ffffff
[repo-link]: https://github.com/usnistgov/module-utilities
[conda-badge]: https://img.shields.io/conda/v/conda-forge/module-utilities
[conda-link]: https://anaconda.org/conda-forge/module-utilities
[license-badge]: https://img.shields.io/pypi/l/module-utilities?color=informational
[license-link]: https://github.com/usnistgov/module-utilities/blob/main/LICENSE
[changelog-link]: https://github.com/usnistgov/module-utilities/blob/main/CHANGELOG.md
<!-- other links -->
[cachetools]: https://github.com/tkem/cachetools/
<!-- prettier-ignore-end -->
# `module-utilities`
A Python package of utilities to aid in creating and working with Python modules.
## Overview
I was using the same code snippets over and over, so I decided to put them in one
place.
## Features
- `cached`: A module to cache class attributes and methods. Right now, this uses
a standard python dictionary for storage. Future versions will hopefully
integrate with something like [cachetools].
- `docfiller`: A module to share documentation. This is adapted from the
[pandas `doc` decorator](https://github.com/pandas-dev/pandas/blob/main/pandas/util/_decorators.py).
There are some convenience functions and classes for sharing documentation.
- `docinherit`: An interface to the [docstring-inheritance] module. This can be
combined with `docfiller` to make creating related function/class
documentation easy.
[docstring-inheritance]: https://github.com/AntoineD/docstring-inheritance
## Status
This package is actively used by the author. Please feel free to create a pull
request for wanted features and suggestions!
## Example usage
Simple example of using `cached` module.
```pycon
>>> from module_utilities import cached
>>>
>>> class Example:
... @cached.prop
... def aprop(self):
... print("setting prop")
... return ["aprop"]
... @cached.meth
... def ameth(self, x=1):
... print("setting ameth")
... return [x]
... @cached.clear
... def method_that_clears(self):
... pass
...
>>> x = Example()
>>> x.aprop
setting prop
['aprop']
>>> x.aprop
['aprop']
>>> x.ameth(1)
setting ameth
[1]
>>> x.ameth(x=1)
[1]
>>> x.method_that_clears()
>>> x.aprop
setting prop
['aprop']
>>> x.ameth(1)
setting ameth
[1]
```
Simple example of using `DocFiller`.
```pycon
>>> from module_utilities.docfiller import DocFiller, indent_docstring
>>> d = DocFiller.from_docstring(
... """
... Parameters
... ----------
... x : int
... x param
... y : int
... y param
... z0 | z : int
... z int param
... z1 | z : float
... z float param
...
... Returns
... -------
... output0 | output : int
... Integer output.
... output1 | output : float
... Float output
... """,
... combine_keys="parameters",
... )
>>> @d.decorate
... def func(x, y, z):
... """
... Parameters
... ----------
... {x}
... {y}
... {z0}
...
... Returns
... --------
... {returns.output0}
... """
... return x + y + z
...
>>> print(indent_docstring(func))
+ Parameters
+ ----------
+ x : int
+ x param
+ y : int
+ y param
+ z : int
+ z int param
<BLANKLINE>
+ Returns
+ --------
+ output : int
+ Integer output.
# Note that for Python versions <= 3.8, method chaining
# in decorators doesn't work, so we have to do the following.
# For newer Python versions, you can inline this.
>>> dd = d.assign_keys(z="z0", out="returns.output0")
>>> @dd.decorate
... def func1(x, y, z):
... """
... Parameters
... ----------
... {x}
... {y}
... {z}
... Returns
... -------
... {out}
... """
... pass
...
>>> print(indent_docstring(func1))
+ Parameters
+ ----------
+ x : int
+ x param
+ y : int
+ y param
+ z : int
+ z int param
+ Returns
+ -------
+ output : int
+ Integer output.
>>> dd = d.assign_keys(z="z1", out="returns.output1")
>>> @dd(func1)
... def func2(x, y, z):
... pass
...
>>> print(indent_docstring(func2))
+ Parameters
+ ----------
+ x : int
+ x param
+ y : int
+ y param
+ z : float
+ z float param
+ Returns
+ -------
+ output : float
+ Float output
```
<!-- end-docs -->
## Installation
<!-- start-installation -->
Use one of the following:
```bash
pip install module-utilities
```
or
```bash
conda install -c conda-forge module-utilities
```
Optionally, you can install [docstring-inheritance] to use the `docinherit`
module:
```bash
pip install docstring-inheritance
# or
conda install -c conda-forge docstring-inheritance
```
<!-- end-installation -->
## Documentation
See the [documentation][docs-link] for a look at `module-utilities` in action.
## What's new?
See [changelog][changelog-link].
## License
This is free software. See [LICENSE][license-link].
## Related work
`module-utilities` is used in the following packages
- [cmomy]
- [analphipy]
- [tmmc-lnpy]
- [thermoextrap]
[cmomy]: https://github.com/usnistgov/cmomy
[analphipy]: https://github.com/usnistgov/analphipy
[tmmc-lnpy]: https://github.com/usnistgov/tmmc-lnpy
[thermoextrap]: https://github.com/usnistgov/thermoextrap
## Contact
The author can be reached at <wpk@nist.gov>.
## Credits
This package was created using
[Cookiecutter](https://github.com/audreyr/cookiecutter) with the
[usnistgov/cookiecutter-nist-python](https://github.com/usnistgov/cookiecutter-nist-python)
template.
| text/markdown | William P. Krekelberg | William P. Krekelberg <wpk@nist.gov> | null | null | null | module-utilities | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typing-extensions; python_full_version < \"3.12\"",
"module-utilities[inherit]; extra == \"all\"",
"docstring-inheritance; extra == \"inherit\""
] | [] | [] | [] | [
"Documentation, https://pages.nist.gov/module-utilities/",
"Homepage, https://github.com/usnistgov/module-utilities"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:13:59.143149 | module_utilities-0.11.1.dev0.tar.gz | 37,555 | 17/58/aef8bb8fce258278da0d1572cb5ff1ca16b748c69808aaa074a701975f49/module_utilities-0.11.1.dev0.tar.gz | source | sdist | null | false | 4a320e0d0301f9aa90cb8a5af3dbe91f | 10dcd924e0a77049faa8a7be8aaf7d60b552fde24124057b2449006b8dd69a31 | 1758aef8bb8fce258278da0d1572cb5ff1ca16b748c69808aaa074a701975f49 | NIST-PD | [
"LICENSE"
] | 185 |
2.4 | visier-platform-sdk | 22222222.99201.2587 | API Reference | Detailed API reference documentation for Visier APIs. Includes all endpoints, headers, path parameters, query parameters, request body schema, response schema, JSON request samples, and JSON response samples.
| text/markdown | Visier | alpine@visier.com | null | null | Apache License, Version 2.0 | Visier, Visier-SDK, API Reference | [] | [] | https://github.com/visier/python-sdk | null | null | [] | [] | [] | [
"urllib3<2; platform_python_implementation == \"PyPy\"",
"urllib3<3.0.0,>=2.1.0; platform_python_implementation != \"PyPy\"",
"python-dateutil>=2.8.2",
"Flask>=3.0.0",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:13:55.799335 | visier_platform_sdk-22222222.99201.2587.tar.gz | 604,062 | ef/f2/31d59029e8ba367e58d85f787e95a6865e07d3b9e9c082a816b4f4aab7ca/visier_platform_sdk-22222222.99201.2587.tar.gz | source | sdist | null | false | c45bc7881a9c2e5f3a3930054bfe7310 | 4c8a25d8ef1b6b4396ba8a1dad48ebee220e7061661f817d73a4dd008346945f | eff231d59029e8ba367e58d85f787e95a6865e07d3b9e9c082a816b4f4aab7ca | null | [] | 208 |
2.4 | csm-dashboard | 0.4.4 | Lido CSM Operator Dashboard for tracking validator earnings | # Lido CSM Operator Dashboard
Track your Lido Community Staking Module (CSM) validator earnings, excess bond, and cumulative rewards.


## Features
- Look up operator by Ethereum address (manager or rewards address) or operator ID
- View current bond vs required bond (excess is claimable)
- Track cumulative rewards and unclaimed amounts
- Operator type detection (Permissionless, ICS/Legacy EA, etc.)
- Detailed validator status from beacon chain (with `--detailed` flag)
- APY metrics: reward APY, bond APY (stETH rebase), and net APY
- Full distribution history with per-frame breakdown (with `--history` flag)
- Withdrawal/claim history tracking (with `--withdrawals` flag)
- JSON output for scripting and automation
- CLI for quick terminal lookups
- Web interface for browser-based monitoring
## Installation
### Option 1: Docker (Recommended)
```bash
# Clone the repository
git clone <repo-url>
cd lido-csm-dashboard
# Copy and configure environment
cp .env.example .env
# Start the web dashboard
docker compose up -d
# View logs
docker compose logs -f
```
The web dashboard will be available at http://localhost:3000
### Option 2: Local Python Installation
```bash
# Clone the repository
git clone <repo-url>
cd lido-csm-dashboard
# Install with pip
pip install -e .
# Or with uv
uv pip install -e .
```
## Configuration
Copy `.env.example` to `.env` and configure:
```bash
cp .env.example .env
```
Available settings:
- `ETH_RPC_URL`: Ethereum RPC endpoint (default: https://eth.llamarpc.com)
- `BEACON_API_URL`: Beacon chain API (default: https://beaconcha.in/api/v1)
- `BEACON_API_KEY`: Optional API key for beaconcha.in (higher rate limits)
- `ETHERSCAN_API_KEY`: Optional API key for Etherscan (recommended for accurate historical data)
- `CACHE_TTL_SECONDS`: Cache duration in seconds (default: 300)
## Usage
### Docker Usage
The web dashboard runs automatically when you start the container. You can also use CLI commands inside the container:
```bash
# Check operator rewards
docker compose exec csm-dashboard csm rewards 0xYourAddress
# Check by operator ID
docker compose exec csm-dashboard csm rewards --id 42
# Get detailed info with APY metrics
docker compose exec csm-dashboard csm rewards --id 42 --detailed
# JSON output
docker compose exec csm-dashboard csm rewards --id 42 --json
# List all operators
docker compose exec csm-dashboard csm list
# Monitor continuously (refresh every 60 seconds)
docker compose exec csm-dashboard csm watch 0xYourAddress --interval 60
```
### Local CLI Usage
### `csm rewards` - Check operator rewards
```bash
csm rewards [ADDRESS] [OPTIONS]
```
> **Note:** The `check` command is still available as an alias for backwards compatibility.
| Argument/Option | Short | Description |
|-----------------|-------|-------------|
| `ADDRESS` | | Ethereum address (required unless `--id` is provided) |
| `--id` | `-i` | Operator ID (skips address lookup, faster) |
| `--detailed` | `-d` | Include validator status from beacon chain and APY metrics |
| `--history` | `-H` | Show all historical distribution frames with per-frame APY |
| `--withdrawals` | `-w` | Include withdrawal/claim history |
| `--json` | `-j` | Output as JSON (same format as API) |
| `--rpc` | `-r` | Custom RPC URL |
**Examples:**
```bash
# Check by address
csm rewards 0xYourAddress
# Check by operator ID (faster)
csm rewards --id 42
# Get detailed validator info and APY
csm rewards --id 42 --detailed
# Show full distribution history with Previous/Current/Lifetime columns
csm rewards --id 42 --history
# Include withdrawal history
csm rewards --id 42 --withdrawals
# JSON output for scripting
csm rewards --id 42 --json
# JSON with detailed info
csm rewards --id 42 --detailed --json
```
### `csm watch` - Continuous monitoring
```bash
csm watch ADDRESS [OPTIONS]
```
| Argument/Option | Short | Description |
|-----------------|-------|-------------|
| `ADDRESS` | | Ethereum address to monitor (required) |
| `--interval` | `-i` | Refresh interval in seconds (default: 300) |
| `--rpc` | `-r` | Custom RPC URL |
**Examples:**
```bash
# Monitor with default 5-minute refresh
csm watch 0xYourAddress
# Monitor with 60-second refresh
csm watch 0xYourAddress --interval 60
```
### `csm list` - List all operators
```bash
csm list [OPTIONS]
```
| Option | Short | Description |
|--------|-------|-------------|
| `--rpc` | `-r` | Custom RPC URL |
Lists all operator IDs that have rewards in the current merkle tree.
### `csm serve` - Start web dashboard
```bash
csm serve [OPTIONS]
```
| Option | Description |
|--------|-------------|
| `--host` | Host to bind to (default: 127.0.0.1) |
| `--port` | Port to bind to (default: 8080) |
| `--reload` | Enable auto-reload for development |
**Examples:**
```bash
# Start on default port
csm serve
# Start on custom port
csm serve --port 3000
# Development mode with auto-reload
csm serve --reload
```
Then open http://localhost:8080 in your browser.
**Docker:** The web dashboard is already running when you use `docker compose up`. Access it at http://localhost:3000
## JSON Output
The `--json` flag outputs data in the same format as the API, making it easy to integrate with scripts or other tools:
```bash
csm rewards --id 333 --json
```
```json
{
"operator_id": 333,
"manager_address": "0x6ac683C503CF210CCF88193ec7ebDe2c993f63a4",
"reward_address": "0x55915Cf2115c4D6e9085e94c8dAD710cabefef31",
"curve_id": 2,
"operator_type": "Permissionless",
"rewards": {
"current_bond_eth": 651.55,
"required_bond_eth": 650.2,
"excess_bond_eth": 1.35,
"cumulative_rewards_eth": 10.96,
"distributed_eth": 9.61,
"unclaimed_eth": 1.35,
"total_claimable_eth": 2.70
},
"validators": {
"total": 500,
"active": 500,
"exited": 0
}
}
```
With `--detailed`, additional fields are included:
```json
{
"operator_id": 333,
"curve_id": 2,
"operator_type": "Permissionless",
"validators": {
"total": 500,
"active": 500,
"exited": 0,
"by_status": {
"active": 500,
"pending": 0,
"exiting": 0,
"exited": 0,
"slashed": 0
}
},
"performance": {
"avg_effectiveness": 98.5
},
"apy": {
"current_distribution_apy": 2.77,
"current_bond_apr": 2.56,
"net_apy_28d": 5.33,
"lifetime_reward_apy": 2.80,
"lifetime_bond_apy": 2.60,
"lifetime_net_apy": 5.40
},
"active_since": "2025-02-16T12:00:00"
}
```
With `--history`, you also get the full distribution frame history:
```json
{
"apy": {
"frames": [
{
"frame_number": 1,
"start_date": "2025-03-14T00:00:00",
"end_date": "2025-04-11T00:00:00",
"rewards_eth": 1.2345,
"validator_count": 500,
"duration_days": 28.0,
"apy": 2.85
}
]
}
}
```
## API Endpoints
- `GET /api/operator/{address_or_id}` - Get operator rewards data
- Query param: `?detailed=true` for validator status and APY
- Query param: `?history=true` for all historical distribution frames
- Query param: `?withdrawals=true` for withdrawal/claim history
- `GET /api/operator/{address_or_id}/strikes` - Get detailed validator strikes
- `GET /api/operators` - List all operators with rewards
- `GET /api/health` - Health check
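As a minimal sketch of how the operator endpoint and its boolean query parameters fit together (standard library only; the helper name and the port-3000 base URL are illustrative, matching the Docker default), you could build request URLs like this:

```python
from urllib.parse import urlencode

def operator_url(base: str, operator: str, **flags: bool) -> str:
    """Build the operator endpoint URL from flags like detailed/history/withdrawals."""
    query = urlencode({name: "true" for name, on in flags.items() if on})
    url = f"{base}/api/operator/{operator}"
    return f"{url}?{query}" if query else url

print(operator_url("http://localhost:3000", "42", detailed=True, history=True))
# → http://localhost:3000/api/operator/42?detailed=true&history=true
```

Once the dashboard is running, the resulting URL can be fetched with `urllib.request.urlopen` or any HTTP client.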
## Understanding APY Metrics
The dashboard shows three APY metrics when using the `--detailed` or `--history` flags:
| Metric | What It Means |
|--------|---------------|
| **Reward APY** | Your earnings from CSM fee distributions, based on your validators' performance |
| **Bond APY** | Automatic growth of your stETH bond from protocol rebasing (same for all operators) |
| **NET APY** | Total return = Reward APY + Bond APY |
### Display Modes
- **`--detailed`**: Shows only the Current frame column (simpler view)
- **`--history`**: Shows Previous, Current, and Lifetime columns with full distribution history
### How APY is Calculated
**Reward APY** is calculated from actual reward distribution data published by Lido. Every ~28 days, Lido calculates how much each operator earned and publishes a "distribution frame" to IPFS (a decentralized file storage network). The dashboard fetches all these historical frames to calculate APY.
- **Current APY**: Based on the most recent distribution frame (~28 days)
- **Previous APY**: Based on the second-to-last distribution frame
- **Lifetime APY**: Duration-weighted average of all frames, using **per-frame bond requirements** for accuracy
The **Lifetime APY** calculation is particularly sophisticated: it uses each frame's actual validator count to determine the bond requirement for that period, then calculates a duration-weighted average. This produces accurate lifetime APY even for operators who have grown significantly over time.
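To make the weighting concrete, here is a simplified sketch of a duration-weighted average over frames (illustrative only; the dashboard's actual calculation also folds in per-frame bond requirements):

```python
def lifetime_apy(frames):
    """Duration-weighted average APY across distribution frames.

    Each frame is a (duration_days, apy_percent) pair.
    """
    total_days = sum(days for days, _ in frames)
    return sum(days * apy for days, apy in frames) / total_days

# Two ~28-day frames at 2.8% and 3.0% average to 2.9%.
print(round(lifetime_apy([(28.0, 2.8), (28.0, 3.0)]), 4))  # → 2.9
```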
**Bond APY** represents the stETH rebase rate—the automatic growth of your bond due to Ethereum staking rewards. This rate is set by the Lido protocol and applies equally to all operators.
> **Note**: Bond APY uses historical stETH rebase rates from on-chain `TokenRebased` events, showing the actual rate for each distribution frame.
### Operator Types
The dashboard detects your operator type from the CSAccounting bond curve:
| Type | Description |
|------|-------------|
| **Permissionless** | Standard operators (Curve 2, current default) |
| **Permissionless (Legacy)** | Early permissionless operators (Curve 0, deprecated) |
| **ICS/Legacy EA** | Incentivized Community Stakers / Early Adopters (Curve 1) |
### Why You Might Want an Etherscan API Key
The actual reward data lives on IPFS and is always accessible. However, to *discover* which IPFS files exist, the dashboard needs to find historical `DistributionLogUpdated` events on the blockchain. This can be done in one of two ways:
| Method | Description |
|--------|-------------|
| **With Etherscan API key** | Most reliable. Queries Etherscan directly for complete, up-to-date distribution history. |
| **Without API key** | Uses a built-in list of known distributions. Works fine but may be slightly behind if new distributions happened recently. |
**How to get one (free):**
1. Go to [etherscan.io/apis](https://etherscan.io/apis)
2. Create a free account
3. Generate an API key
4. Add to your `.env` file: `ETHERSCAN_API_KEY=your_key_here`
The free tier allows 5 calls/second, which is plenty for this dashboard.
## Data Sources
- **On-chain contracts**: CSModule, CSAccounting, CSFeeDistributor, stETH
- **Rewards tree**: https://github.com/lidofinance/csm-rewards (updates hourly)
- **Beacon chain**: beaconcha.in API (for validator status)
- **Lido API**: stETH APR data (for bond APY calculations)
- **IPFS**: Historical reward distribution logs (cached locally after first fetch)
## Contract Addresses (Mainnet)
- CSModule: `0xdA7dE2ECdDfccC6c3AF10108Db212ACBBf9EA83F`
- CSAccounting: `0x4d72BFF1BeaC69925F8Bd12526a39BAAb069e5Da`
- CSFeeDistributor: `0xD99CC66fEC647E68294C6477B40fC7E0F6F618D0`
- stETH: `0xae7ab96520DE3A18E5e111B5EaAb095312D7fE84`
## Development
```bash
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
```
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiosqlite>=0.19",
"fastapi>=0.104",
"httpx>=0.25",
"pydantic-settings>=2.0",
"pydantic>=2.5",
"python-dotenv>=1.0",
"rich>=13.0",
"typer>=0.9",
"uvicorn>=0.24",
"web3>=6.0",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-httpx>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:13:43.054697 | csm_dashboard-0.4.4.tar.gz | 824,048 | ba/4b/f1933f8e0020d698a41bf10a8d6cb89defd013e87173ee8ec19aec0b75d2/csm_dashboard-0.4.4.tar.gz | source | sdist | null | false | ec22fa5ceb632a2de0628e658abc03bc | c91e5e355e9a8279b39e959bf573236b8d059c906ab6b9b3c8fdda23d04a0970 | ba4bf1933f8e0020d698a41bf10a8d6cb89defd013e87173ee8ec19aec0b75d2 | null | [] | 204 |
2.4 | llama-index-observability-otel | 0.4.1 | llama-index observability integration with OpenTelemetry | # LlamaIndex OpenTelemetry Observability Integration
## Installation
```shell
pip install llama-index-observability-otel
```
## Usage
You can use the default OpenTelemetry observability class as follows:
```python
from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings
# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry()
if __name__ == "__main__":
embed_model = MockEmbedding(embed_dim=256)
llm = MockLLM()
Settings.embed_model = embed_model
# start listening!
instrumentor.start_registering()
# register events
documents = SimpleDirectoryReader(
input_dir="./data/paul_graham/"
).load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(llm=llm)
query_result = query_engine.query("Who is Paul?")
query_result_one = query_engine.query("What did Paul do?")
```
Or you can customize the `LlamaIndexOpenTelemetry` class by, for example, setting a custom span exporter, a custom service name, enabling debugging, or providing a list of extra span processors:
```python
from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter,
)
from opentelemetry.sdk.trace import SpanProcessor
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings
class CustomSpanProcessor(SpanProcessor):
# your implementation
...
# define a custom span exporter
span_exporter = OTLPSpanExporter("http://0.0.0.0:4318/v1/traces")
# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry(
service_name_or_resource="my.test.service.1",
span_exporter=span_exporter,
debug=True,
extra_span_processors=[CustomSpanProcessor()],
)
if __name__ == "__main__":
embed_model = MockEmbedding(embed_dim=256)
llm = MockLLM()
Settings.embed_model = embed_model
# start listening!
instrumentor.start_registering()
# register events
documents = SimpleDirectoryReader(
input_dir="./data/paul_graham/"
).load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(llm=llm)
query_result = query_engine.query("Who is Paul?")
query_result_one = query_engine.query("What did Paul do?")
```
| text/markdown | null | Clelia Astra Bertelli <clelia@runllama.ai> | null | null | null | null | [] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"llama-index-instrumentation<0.5,>=0.4.2",
"opentelemetry-api<2,>=1.33.0",
"opentelemetry-sdk<2,>=1.33.0",
"opentelemetry-semantic-conventions<1,>=0.54b0",
"termcolor<4,>=3.1.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T21:13:01.772851 | llama_index_observability_otel-0.4.1.tar.gz | 7,076 | a6/93/b1c1e46117b4b702ff3a28a4d0bddfdba973e8849e33db328501d7c78309/llama_index_observability_otel-0.4.1.tar.gz | source | sdist | null | false | 0f2b824b21895d269389c4a9b4d917ae | fa8b27b0c56aeade5ce6d0b00f3172fb7b090819591866afaae01e676acb70df | a693b1c1e46117b4b702ff3a28a4d0bddfdba973e8849e33db328501d7c78309 | MIT | [
"LICENSE"
] | 217 |
2.4 | dve-lumipy-testing | 1.0.562 | Python library for Luminesce | # Lumipy
[*loom-ee-pie*]
## Introduction
Lumipy is a python library that integrates Luminesce and the Python Data Science Stack.
It is designed to be used in Jupyter, but you can use it in scripts and modules as well.
It has two components:
* **Getting data:** a fluent syntax for scripting up queries using python code. This makes it easy to build complex queries and get your data back as pandas DataFrames.
* **Integration:** infrastructure to build providers in python. This allows you to build data sources and transforms such as ML models and connect them to Luminesce. They can then be used by other users from the Web UI, Power BI, etc.
Lumipy is designed to be as easy to use and as unobtrusive as possible.
You should have to do minimal imports and everything should be explorable from Jupyter through tab completion and `shift + tab`.
## Install
Lumipy is available from PyPI:
**LumiPy** is our latest package, which utilises the V2 Finbourne SDKs.
It is important to uninstall `dve-lumipy-preview` before installing `lumipy`. You can do this by running:
```
pip uninstall dve-lumipy-preview
```
We recommend using the --force-reinstall option to make this transition smoother. Please note that this will force
update all dependencies for lumipy and could affect your other Python projects.
```
pip install --force-reinstall lumipy
```
If you prefer not to update all dependencies, you can omit the `--force-reinstall` and use the regular pip install
command instead:
```
pip install lumipy
```
If the above commands do not work, you may need to upgrade pip first:
```
pip install --upgrade pip
```
### Install via Brew
Alternatively, you can install lumipy via Homebrew.
First, install Python 3.11:
```
brew install python@3.11
```
Next using Python 3.11, install the lumipy package:
```
python3.11 -m pip install lumipy
```
**Dve-Lumipy-Preview** uses the V1 Finbourne SDKs and is no longer maintained.
```
pip install dve-lumipy-preview
```
## Configure
Add a personal access token to your config. The first domain you add becomes the active one.
```python
import lumipy as lm
lm.config.add('fbn-prd', '<your PAT>')
```
If you add another domain and PAT you will need to switch to it.
```python
import lumipy as lm
lm.config.add('fbn-ci', '<your PAT>')
lm.config.domain = 'fbn-ci'
```
If the above does not work, you can also add the configuration via the CLI.
```
lumipy config add --domain='my-domain' --token='my-token'
```
## Connect
Python providers are built by inheriting from a base class, `BaseProvider`, and implementing the `__init__` and `get_data` methods.
The former defines the 'shape' of the output data and the parameters it takes. The latter is where the provider actually does something.
This can be whatever you want as long as it returns a dataframe with the declared columns.
### Running Providers
Running the example below will perform the required setup on first startup.
Once that's finished it'll spin up a provider that returns Fisher's Irises dataset.
Try it out from the web GUI, or from an atlas in another notebook. Remember to get the atlas again once it's finished starting up.
This uses the built-in `PandasProvider` class to make a provider object, adds it to a `ProviderManager` and then starts it.
```python
import lumipy.provider as lp
p = lp.PandasProvider(
'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv',
'iris'
)
lp.ProviderManager(p).run()
```
This will also default to the active domain if none is specified as an argument to the provider manager.
You can run globally in the domain so other users can query your provider by setting `user='global'` and `whitelist_me=True` in
the `ProviderManager` constructor.
The setup consists of getting the dlls for the dotnet app (provider factory) and getting the pem files to run in the domain.
To run the setup on its own, call the `lp.setup()` function. This takes the same arguments as `get_client` and `get_atlas`.
### Building Providers
The following example will simulate a set of coin flips. It has two columns `Label` and `Result`, and one
parameter `Probability` with a default value of 0.5.
Its name and column/param content are specified in `__init__`. The simulation of the coin flips
happens inside `get_data` where we draw numbers from a binomial distribution with the given probability and n = 1.
We also have a check for the probability value. If it's out of range an error will be thrown in python and reported back
in the progress log and query status.
Finally, the provider object is instantiated and given to a provider manager. The provider manager is then started up with
the `run()` method.
```python
import lumipy.provider as lp
from pandas import DataFrame
from typing import Union, Iterator
import numpy as np
class CoinFlips(lp.BaseProvider):
def __init__(self):
columns = [
lp.ColumnMeta('Label', lp.DType.Text),
lp.ColumnMeta('Result', lp.DType.Int),
]
params = [lp.ParamMeta('Probability', lp.DType.Double, default_value=0.5)]
super().__init__('test.coin.flips', columns, params)
def get_data(self, context) -> Union[DataFrame, Iterator[DataFrame]]:
# If no limit is given, default to 100 rows.
limit = context.limit()
if limit is None:
limit = 100
# Get param value from params dict. If it's out of bounds throw an error.
p = context.get('Probability')
if not 0 <= p <= 1:
raise ValueError(f'Probability must be between 0 and 1. Was {p}.')
# Generate the coin flips and return.
return DataFrame({'Label':f'Flip {i}', 'Result': np.random.binomial(1, p)} for i in range(limit))
coin_flips = CoinFlips()
lp.ProviderManager(coin_flips).run()
```
# CLI
Lumipy also contains a command line interface (CLI) app with five different functions.
You can view help for the CLI and each of the actions using `--help`. Try this to start with
```shell
$ lumipy --help
```
## Config
This lets you configure your domains and PATs. You can show, add, set, delete and deactivate domains.
To see all options and args run the following
```shell
$ lumipy config --help
```
### Config Examples
`set` Set a domain as the active one.
```shell
$ lumipy config set --domain=my-domain
```
`add` Add a domain and PAT to your config.
```shell
$ lumipy config add --domain=my-domain --token=<my token>
(--overwrite)
```
`show` Show a censored view of the config contents.
```shell
$ lumipy config show
```
`delete` Delete a domain from the config.
```shell
$ lumipy config delete --domain=my-domain
```
`deactivate` Deactivate the config so no domain is used by default.
```shell
$ lumipy config deactivate
```
## Run
This lets you run python providers. You can run prebuilt named sets, CSV files, python files containing provider objects,
or even a directory containing CSVs and py files.
### Run Examples
`.py` File
```shell
$ lumipy run path/to/my_providers.py
```
`.csv` File
```shell
$ lumipy run path/to/my_data.csv
```
Built-in Set
```shell
$ lumipy run demo
```
Directory
```shell
$ lumipy run path/to/dir
```
## Query Using SQL
This command runs a SQL query, gets the result back, shows it on screen and then saves it as a CSV.
### Query Examples
Run a query (saves as CSV to a temp directory).
```shell
$ lumipy query --sql="select ^ from lusid.instrument limit 5"
```
Run a query to a defined location.
```shell
$ lumipy query --sql="select ^ from lusid.instrument limit 5" --save-to=/path/to/output.csv
```
## Setup
This lets you run the provider infrastructure setup on your machine.
You must have the .NET 8 SDK installed to run providers.
### Setup Examples
Run the py providers setup. This will redownload the certs and get the latest dlls, overwriting any that are already there.
```shell
$ lumipy setup --domain=my-domain
```
## Test
This lets you run the Lumipy test suites.
### Test Examples
You can run `unit` tests, `integration` tests, `provider` tests, or `everything`.
```shell
$ lumipy test unit
```
To run a specific test:
```
python3.11 -m unittest
python3.11 -m unittest lumipy.test.{test_type}.{test_name}
```
## Windows Setup
To use LumiPy and run local providers, it is recommended that you use an admin `powershell` terminal.
Install (or update) LumiPy using your `powershell` terminal.
### LumiPy (V2 Finbourne SDK)
```shell
$ pip install lumipy --upgrade
```
Verify that your install was successful.
```shell
$ lumipy --help
```
Setup your config with a personal access token (PAT).
```shell
$ lumipy config add --domain=my-domain --token=my-pat-token
```
Ensure you can run local providers. To run these providers globally, add `--user=global` and `--whitelist-me` to the command below.
```shell
$ lumipy run demo
```
### Testing Local Changes on Windows
To test your local `dve-lumipy` changes on Windows add `dve-lumipy` to your python path (inside your environment variables).
## Authenticating with the SDK (Lumipy)
Example using the `lumipy.client.get_client()` method:
```python
from lumipy.client import get_client
client = get_client()
```
### Recommended Method
Authenticate by setting up the PAT token via the CLI or directly in Python (see the Configure section above).
### Secrets File
Initialize `get_client` using a secrets file:
```python
client = get_client(api_secrets_file="secrets_file_path/secrets.json")
```
File structure should be:
```json
{
"api": {
"luminesceUrl": "https://fbn-ci.lusid.com/honeycomb/",
"clientId": "clientId",
"clientSecret": "clientSecret",
"appName": "appName",
"certificateFilename": "test_certificate.pem",
"accessToken": "personal access token"
},
"proxy": {
"address": "http://myproxy.com:8080",
"username": "proxyuser",
"password": "proxypass"
}
}
```
### Keyword Arguments
Initialize `get_client` with keyword arguments:
```python
client = get_client(username="myusername", ...)
```
Relevant keyword arguments include:
- token_url
- api_url
- username
- password
- client_id
- client_secret
- app_name
- certificate_filename
- proxy_address
- proxy_username
- proxy_password
- access_token
### Environment Variables
The following environment variables can also be set:
- FBN_TOKEN_URL
- FBN_LUMINESCE_API_URL
- FBN_USERNAME
- FBN_PASSWORD
- FBN_CLIENT_ID
- FBN_CLIENT_SECRET
- FBN_APP_NAME
- FBN_CLIENT_CERTIFICATE
- FBN_PROXY_ADDRESS
- FBN_PROXY_USERNAME
- FBN_PROXY_PASSWORD
- FBN_ACCESS_TOKEN
# Lumipy Atlas and LumiFlex
This guide demonstrates how to use the Lumipy Atlas interface to query data providers, transform data, join datasets, create views, and interact with external services like AWS S3, Drive, and Slack.
### Query
Everything is built around the `atlas` object. This is the starting point for exploring your data sources and then using them.
Build your atlas with `lm.get_atlas`. If you don't supply credentials it will default to your active domain in the config.
If there is no active domain in your config it will fall back to env vars.
```python
import lumipy as lm
atlas = lm.get_atlas()
ins = atlas.lusid_instrument()
ins.select('^').limit(10).go()
```
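The fallback order described above (explicit argument, then the active domain in the config, then environment variables) can be sketched with a small stdlib helper. This is an illustration of the precedence logic only, not Lumipy's actual implementation, and the parameter names are hypothetical:

```python
import os

def resolve_domain(explicit_domain=None, active_domain=None, env=os.environ):
    """Hypothetical sketch of the config fallback order described above."""
    # 1. An explicitly supplied domain wins.
    if explicit_domain:
        return explicit_domain
    # 2. Otherwise fall back to the active domain in the config.
    if active_domain:
        return active_domain
    # 3. Finally fall back to environment variables.
    api_url = env.get("FBN_LUMINESCE_API_URL")
    if api_url:
        return api_url
    raise ValueError("No domain configured: pass one, set an active domain, or set env vars")

print(resolve_domain("fbn-ci", active_domain="fbn-prd"))  # explicit argument wins
```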
You can also specify the domain here by a positional argument, e.g. `lm.get_atlas('fbn-ci')` will use `fbn-ci` and will override the active domain.
Client objects are created in the same way. You can submit raw SQL strings as queries using `run()`:
```python
import lumipy as lm
client = lm.get_client()
client.run('select ^ from lusid.instrument limit 10')
```
You can create a `client` or `atlas` for a domain other than the active one by specifying it in `get_client` or `get_atlas`.
```python
import lumipy as lm
client = lm.get_client('fbn-prd')
atlas = lm.get_atlas('fbn-prd')
```
### Querying a Provider
```python
pf = atlas.lusid_portfolio(as_at=dt.datetime(2022, 9, 1))
df = pf.select('*').limit(10).go()
```
### Select New Columns
```python
df = pf.select(
'^', # all original main columns
LoudNoises=pf.portfolio_code.str.upper(),
IntVal=7,
BoolVal=False
).limit(10).go()
```
### Where Filtering
```python
df = pf.select('*').where(pf.portfolio_scope == 'Finbourne-Examples').go()
```
### Order By
```python
df = pf.select('*').where(
pf.portfolio_scope == 'Finbourne-Examples'
).order_by(
pf.portfolio_code.ascending()
).go()
```
### Case Statements and Group By
```python
region = lm.when(pf.portfolio_code.str.contains('GLOBAL')).then('GLOBAL') \
.when(pf.portfolio_code.str.contains('US')).then('US') \
.otherwise('OTHER')
df = pf.select(Region=region).where(
pf.portfolio_scope == 'Finbourne-Examples'
).group_by(
Region=region
).aggregate(
PortfolioCount=pf.portfolio_code.count()
).go()
```
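The fluent `when/then/otherwise` chain above ultimately renders to a SQL `CASE` expression. Here is a toy stdlib sketch of that builder pattern for illustration; it is not Lumipy's implementation:

```python
class When:
    """Toy fluent builder that renders a SQL CASE expression."""
    def __init__(self, condition):
        self._pairs = []          # accumulated (condition, value) pairs
        self._pending = condition
        self._default = None

    def then(self, value):
        self._pairs.append((self._pending, value))
        self._pending = None
        return self

    def when(self, condition):
        self._pending = condition
        return self

    def otherwise(self, default):
        self._default = default
        return self

    def sql(self):
        parts = [f"WHEN {c} THEN '{v}'" for c, v in self._pairs]
        return "CASE " + " ".join(parts) + f" ELSE '{self._default}' END"

region = When("code LIKE '%GLOBAL%'").then('GLOBAL') \
    .when("code LIKE '%US%'").then('US') \
    .otherwise('OTHER')
print(region.sql())
```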
### Having
```python
df = pf.select(Region=region).where(
pf.portfolio_scope == 'Finbourne-Examples'
).group_by(
Region=region
).aggregate(
PortfolioCount=pf.portfolio_code.count()
).having(
pf.portfolio_code.count() > 3
).go()
```
### Joins
```python
tv = pf.select('^').where(pf.portfolio_scope == 'Finbourne-Examples').to_table_var()
hld = atlas.lusid_portfolio_holding(as_at=dt.datetime(2022, 9, 1))
df = tv.left_join(hld, on=tv.portfolio_code == hld.portfolio_code).select('*').go()
```
### Union / Concat
```python
df = lm.concat([query1, query2, query3]).go()
```
### Sampling
```python
# Sample by fraction
df = pf.select('*').limit(200).sample(prob=0.5).go()
# Sample by count
df = pf.select('*').limit(200).sample(100).go()
```
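The two sampling modes differ in semantics: sampling by a probability keeps each row independently with that probability (so the output size varies), while sampling by a count keeps exactly that many rows. A stdlib sketch of the distinction, illustrative only and not Lumipy's SQL translation:

```python
import random

def sample_by_prob(rows, prob, seed=0):
    """Keep each row independently with probability `prob`; size varies."""
    rng = random.Random(seed)
    return [r for r in rows if rng.random() < prob]

def sample_by_count(rows, n, seed=0):
    """Keep exactly n rows, chosen without replacement."""
    rng = random.Random(seed)
    return rng.sample(rows, n)

rows = list(range(200))
print(len(sample_by_count(rows, 100)))  # exactly 100
print(len(sample_by_prob(rows, 0.5)))   # roughly 100, varies with the seed
```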
### Create View
```python
df = pf.select('*').setup_view('Lumipy.View.MyView').go()
```
### S3, Drive, and SaveAs Integration
```python
# Load CSV from S3
csv = atlas.awss3_csv(file='bucket/path/file.csv')
df = csv.select('*').go()
# Save table variable to Drive
save = atlas.drive_saveas(tv, type='CSV', path='/my-path/', file_names='exported_file')
df = save.select('*').go()
```
### Slack Integration
```python
# Send table var as CSV
slack = atlas.dev_slack_send(
tv,
attach_as='CSV',
channel='#channel-name',
text='Here is your result!'
)
df = slack.select('*').go()
```
### Built-in Analytics
```python
# Note: `ar` is a provider table object created earlier (its definition is not shown in this excerpt)
df = ar.select(
ar.timestamp,
ar.duration,
Drawdown=lm.window().finance.drawdown(ar.duration)
).go()
```
### Cumulative and Fractional Functions
```python
df = ar.select(
CumeSum=ar.duration.cume.sum(),
FracDiff=ar.duration.frac_diff()
).go()
```
| text/markdown | FINBOURNE Technology | engineering@finbourne.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"pandas>=2.0.0",
"numpy",
"termcolor",
"requests",
"luminesce-sdk==2.4.26",
"ipywidgets",
"ipytree",
"uvicorn",
"fastapi",
"semver",
"click<8.2.0",
"tqdm",
"toml",
"pydantic>2",
"h11>=0.16.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T21:12:59.242994 | dve_lumipy_testing-1.0.562-py3-none-any.whl | 810,604 | 5a/80/6b64e68abaf7682e33e9ae61dd1dc3728a1a0391bdbb2e644749ac3002db/dve_lumipy_testing-1.0.562-py3-none-any.whl | py3 | bdist_wheel | null | false | 584aaed588d77f4a930ddc9cd368f0da | 7e94a86581930881ce20e5f2cf9bd864f934f762007635154e693e2715886d5b | 5a806b64e68abaf7682e33e9ae61dd1dc3728a1a0391bdbb2e644749ac3002db | null | [] | 102 |
2.4 | astra-engine | 0.0.2b7 | A high-performance WhatsApp Web automation library for Python. | <div align="center">
<img src="AstraClient.png" alt="Astra Engine Logo" width="600" />
# Astra Engine
**High-performance WhatsApp Web Automation Library for Python**
[](https://www.python.org/)
[](LICENSE)
[](https://github.com/astral-sh/ruff)
[](https://astra-engine.readthedocs.io/)
</div>
---
Astra Engine is a Python library that lets you automate WhatsApp Web. It uses Playwright to run a real browser instance, which means it works exactly like a real user. No hidden APIs, no ban risks from modified protocols—just meaningful automation.
We built this because existing libraries were either too slow, too brittle, or abandoned. Astra is designed to be the tool we wanted to use ourselves: **written from scratch**, typed, documented, and reliable.
## ⚡ What makes it different?
* **It's a Real Browser**: We don't try to reverse-engineer the encrypted WebSocket protocol. We drive the official WhatsApp Web client, so if it works in Chrome, it works in Astra.
* **Privacy Focused**: The engine automatically strips telemetry and tracking metrics. Your bot looks just like a normal user to WhatsApp's servers.
* **Developer Friendly**: Full type hints, proper documentation, and sensible error messages. You won't have to guess what methods do.
* **Stays Connected**: The connection manager handles QR code refreshes, internet dropouts, and browser crashes without you needing to write retry loops.
---
## 🚀 Features
| Feature | Status | Notes |
| :--- | :---: | :--- |
| **Multi-Device** | ✅ | Works with the latest MD (Multi-Device) beta and stable. |
| **Messaging** | ✅ | Send text, reply to messages, mention users, edit sent messages. |
| **Media** | ✅ | Send and receive Images, Videos, Audio, Documents, and Stickers. |
| **Interaction** | ✅ | Create Polls, react to messages with emojis, delete messages. |
| **Groups** | ✅ | Extensive admin controls: promote/demote, change settings, manage participants. |
| **Privacy** | ✅ | Detailed control over who sees your profile, status, and last seen. |
| **Events** | ✅ | Real-time event loop to listen for new messages, status updates, and more. |
| **Phone Pairing** | ✅ | Login using your phone number instead of scanning a QR code. |
---
## 📦 Installation
You need Python 3.9 or newer.
```bash
# 1. Install Astra Engine
pip install astra-engine
# 2. Install the browser binaries (Chromium)
python -m playwright install chromium
```
---
## 🛠️ Quick Start
### Option A: QR Code Login
Here is the shortest way to get a bot running. This will print a QR code in your terminal—scan it with WhatsApp on your phone.
```python
import asyncio
from astra import Client, Filters
async def main():
# session_id saves your login so you don't have to scan every time
async with Client(session_id="my_bot") as client:
# Respond to "!ping"
@client.on_message(Filters.command("ping"))
async def ping(msg):
await msg.respond("Pong! 🚀")
# React to any message containing "hello"
@client.on_message(Filters.text_contains("hello"))
async def greet(msg):
await msg.react("👋")
print("Bot is running... Press Ctrl+C to stop.")
await client.run_forever()
if __name__ == "__main__":
asyncio.run(main())
```
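The decorator-based routing in the quick start follows a common filter/dispatch pattern: each handler is registered with a predicate, and incoming messages are routed to every handler whose predicate matches. A minimal stdlib sketch of that pattern for illustration (this is not Astra's internal implementation):

```python
class MiniDispatcher:
    """Toy message router in the style of the on_message/Filters pattern."""
    def __init__(self):
        self._handlers = []  # list of (predicate, handler) pairs

    def on_message(self, predicate):
        def register(handler):
            self._handlers.append((predicate, handler))
            return handler
        return register

    def dispatch(self, text):
        # Call every handler whose predicate matches and collect results.
        return [handler(text) for predicate, handler in self._handlers if predicate(text)]

router = MiniDispatcher()

@router.on_message(lambda t: t.startswith("!ping"))
def ping(_):
    return "Pong!"

@router.on_message(lambda t: "hello" in t.lower())
def greet(_):
    return "wave"

print(router.dispatch("!ping"))        # ['Pong!']
print(router.dispatch("Hello there"))  # ['wave']
```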
### Option B: Phone Number Pairing
If you can't scan a QR code, you can use phone number pairing. Astra will print an 8-character code that you enter on your phone.
```python
from astra import Client
client = Client(session_id="pairing_bot", phone="919876543210")
client.run_forever_sync()
```
👉 Enter the code printed in the terminal into WhatsApp > Linked Devices > Link with phone number.
👉 **[Check out the examples folder](examples/)** for more scripts like Group Management, Media Sending, and Background Tasks.
---
## 🤝 Want to Contribute?
We'd love your help! Astra is an open project. Whether you want to fix a bug, add a new feature, or just improve the docs, here is how you can get started.
1. **Fork the repo** on GitHub.
2. **Clone it** to your machine.
3. **Set up your environment**:
```bash
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
python -m playwright install chromium
```
4. **Make your changes**.
5. **Run the checks** to make sure everything is clean:
```bash
# format code
ruff check . --fix
# check types
mypy astra
```
6. **Submit a Pull Request**! We'll review it and get it merged.
---
## ⚖️ Disclaimer
This project is **not** affiliated, associated, authorized, endorsed by, or in any way officially connected with WhatsApp or any of its subsidiaries or its affiliates. The official WhatsApp website can be found at https://www.whatsapp.com.
**This software is for educational purposes only.** Automating user accounts may violate WhatsApp's Terms of Service. Use it responsibly and at your own risk.
---
<div align="center">
<sub>Built with ❤️ by Aman Kumar Pandey</sub>
</div>
| text/markdown | Aman Kumar Pandey | null | null | null | Apache-2.0 | whatsapp, whatsapp-web, automation, bot, playwright, async, messaging, chat | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Communications :: Chat",
"Topic :: Software Development :: Libraries :: Python Modules",
"Framework :: AsyncIO",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"playwright>=1.42.0",
"qrcode>=7.4",
"Pillow>=10.0",
"requests>=2.31",
"aiohttp>=3.9.0",
"motor>=3.3.2",
"aiosqlite>=0.19.0",
"psutil>=5.9.0",
"yt-dlp>=2023.12.30",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"sphinx>=7.0; extra == \"dev\"",
"sphinx-rtd-theme>=2.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"mypy>=1.5; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"twine>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/paman7647/Astra",
"Documentation, https://astra-engine.readthedocs.io",
"Source, https://github.com/paman7647/Astra",
"Issues, https://github.com/paman7647/Astra/issues",
"Changelog, https://github.com/paman7647/Astra/blob/dev/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:12:50.426633 | astra_engine-0.0.2b7.tar.gz | 1,353,815 | 0f/9d/3fd152da6532b30162423f8c918d911e3528cffb39f2877797153fc68adb/astra_engine-0.0.2b7.tar.gz | source | sdist | null | false | a6ebea753499d8eaab802a5ba7739f06 | 3c0f311ea5a4d4b2daf967db7525b8f5949273f80e6042574dd891ceaad65065 | 0f9d3fd152da6532b30162423f8c918d911e3528cffb39f2877797153fc68adb | null | [
"LICENSE"
] | 208 |
2.4 | doc-fetch | 2.0.4 | Dynamic documentation fetching CLI that converts entire documentation sites to single markdown files for AI/LLM consumption | # DocFetch - Dynamic Documentation Fetcher 📚
**Transform entire documentation sites into AI-ready, single-file markdown with intelligent LLM.txt indexing**
Most AIs can't navigate documentation like humans do. They can't scroll through sections, click sidebar links, or explore related pages. **DocFetch solves this fundamental problem** by converting entire documentation sites into comprehensive, clean markdown files that contain every section and piece of information in a format that LLMs love.
## 🚀 Why DocFetch is Essential for AI Development
### 🤖 **AI/LLM Optimization**
- **Single-file consumption**: No more fragmented context across multiple pages
- **Clean, structured markdown**: Perfect token efficiency for LLM context windows
- **Intelligent LLM.txt generation**: AI-friendly index with semantic categorization
- **Noise removal**: Automatically strips navigation, headers, footers, ads, and buttons
### ⚡ **Developer Productivity**
- **One command automation**: Replace hours of manual copy-pasting with a single CLI command
- **Complete documentation access**: Give your AI agents full access to official documentation
- **Consistent formatting**: Uniform structure across different documentation sites
- **Version control friendly**: Markdown files work perfectly with Git
### 🎯 **Smart Content Intelligence**
- **Automatic page classification**: Identifies APIs, guides, references, and examples
- **Semantic descriptions**: Generates concise, relevant descriptions for each section
- **URL preservation**: Maintains original source links for verification
- **Adaptive content extraction**: Works with diverse documentation site structures
### 🔧 **Production Ready**
- **Concurrent fetching**: Fast downloads with configurable concurrency
- **Respectful crawling**: Honors robots.txt and includes rate limiting
- **Cross-platform**: Works on Windows, macOS, and Linux
- **Multiple installation options**: PyPI, NPM, Go install, or direct binary download
## 📦 Installation
### PyPI (Recommended for Python developers) ✨ NEW
```bash
pip install doc-fetch
```
### NPM (Recommended for JavaScript/Node.js developers)
```bash
npm install -g doc-fetch
```
### Go (For Go developers)
```bash
go install github.com/AlphaTechini/doc-fetch/cmd/docfetch@latest
```
### Direct Binary Download
Visit [Releases](https://github.com/AlphaTechini/doc-fetch/releases) and download your platform's binary.
## 🎯 Usage
### Basic Usage
```bash
# Fetch entire documentation site to single markdown file
doc-fetch --url https://golang.org/doc/ --output ./docs/golang-full.md
# With LLM.txt generation for AI optimization
doc-fetch --url https://react.dev/learn --output docs.md --llm-txt
```
### Advanced Usage
```bash
# Comprehensive documentation fetch with all features
doc-fetch \
--url https://docs.example.com \
--output ./internal/docs.md \
--depth 4 \
--concurrent 10 \
--llm-txt \
--user-agent "MyBot/1.0"
```
### Command Options
| Flag | Short | Description | Default |
|------|-------|-------------|---------|
| `--url` | `-u` | Base URL to fetch documentation from | **Required** |
| `--output` | `-o` | Output file path | `docs.md` |
| `--depth` | `-d` | Maximum crawl depth | `2` |
| `--concurrent` | `-c` | Number of concurrent fetchers | `3` |
| `--llm-txt` | | Generate AI-friendly llm.txt index | `false` |
| `--user-agent` | | Custom user agent string | `DocFetch/1.0` |
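When driving the CLI from a script, the flags above compose into a single argv list. Here is a hedged sketch that builds (but does not execute) a doc-fetch command; the helper name is hypothetical, and actually running it requires `doc-fetch` on your PATH:

```python
import subprocess  # used only in the commented-out run step below

def build_docfetch_cmd(url, output="docs.md", depth=2, concurrent=3, llm_txt=False):
    """Assemble a doc-fetch argv list from the documented flags."""
    cmd = ["doc-fetch", "--url", url, "--output", output,
           "--depth", str(depth), "--concurrent", str(concurrent)]
    if llm_txt:
        cmd.append("--llm-txt")
    return cmd

cmd = build_docfetch_cmd("https://golang.org/doc/", output="go.md", llm_txt=True)
print(" ".join(cmd))
# To actually run it:
# subprocess.run(cmd, check=True)
```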
## 📁 Output Files
When using `--llm-txt`, DocFetch generates two files:
### `docs.md` - Complete Documentation
```markdown
# Documentation
This file contains documentation fetched by DocFetch.
---
## Getting Started
This guide covers installation, setup, and first program...
---
## Language Specification
Complete Go language specification and syntax...
```
### `docs.llm.txt` - AI-Friendly Index
```txt
# llm.txt - AI-friendly documentation index
[GUIDE] Getting Started
https://golang.org/doc/install
Covers installation, setup, and first program.
[REFERENCE] Language Specification
https://golang.org/ref/spec
Complete Go language specification and syntax.
[API] net/http
https://pkg.go.dev/net/http
HTTP client/server implementation.
```
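Because the `llm.txt` layout shown above is line-oriented (`[TYPE] Title`, then a URL, then a description), it is straightforward to parse. A sketch parser inferred from the sample above (the exact grammar is an assumption based on that example):

```python
import re

def parse_llm_txt(text):
    """Parse [TYPE] Title / URL / description triples from an llm.txt body."""
    entries = []
    current = None
    for line in text.splitlines():
        line = line.strip()
        m = re.match(r"\[(\w+)\]\s+(.*)", line)
        if m:
            current = {"type": m.group(1), "title": m.group(2), "url": "", "description": ""}
            entries.append(current)
        elif current is not None and line.startswith("http"):
            current["url"] = line
        elif current is not None and line and not line.startswith("#"):
            current["description"] += line
    return entries

sample = """# llm.txt - AI-friendly documentation index
[GUIDE] Getting Started
https://golang.org/doc/install
Covers installation, setup, and first program.
"""
print(parse_llm_txt(sample))
```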
## 🌟 Real-World Examples
### Fetch Go Documentation
```bash
doc-fetch --url https://golang.org/doc/ --output ./docs/go-documentation.md --depth 4 --llm-txt
```
### Fetch React Documentation
```bash
doc-fetch --url https://react.dev/learn --output ./docs/react-learn.md --concurrent 10 --llm-txt
```
### Fetch Your Own Project Docs
```bash
doc-fetch --url https://your-project.com/docs/ --output ./internal/docs.md --llm-txt
```
## 🤖 How LLM.txt Supercharges Your AI
The generated `llm.txt` file acts as a **semantic roadmap** for your AI agents:
1. **Precise Navigation**: Agents can query specific sections without scanning entire documents
2. **Context Awareness**: Know whether they're looking at an API reference vs. a tutorial
3. **Efficient Retrieval**: Jump directly to relevant content based on query intent
4. **Source Verification**: Always maintain links back to original documentation
**Example AI Prompt Enhancement:**
```
Instead of: "What does the net/http package do?"
Your AI can now: "Check the [API] net/http section in llm.txt for HTTP client/server implementation details"
```
## 🏗️ How It Works
1. **Link Discovery**: Parses the base URL to find all internal documentation links
2. **Content Fetching**: Downloads all pages concurrently with respect for robots.txt
3. **HTML Cleaning**: Removes non-content elements (navigation, headers, footers, etc.)
4. **Markdown Conversion**: Converts cleaned HTML to structured markdown
5. **Intelligent Classification**: Categorizes pages as API, GUIDE, REFERENCE, or EXAMPLE
6. **Description Generation**: Creates concise, relevant descriptions for each section
7. **Single File Output**: Combines all documentation into one comprehensive file
8. **LLM.txt Generation**: Creates AI-friendly index with semantic categorization
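Step 5's page classification can be approximated with simple URL/title heuristics. A toy sketch of that idea, purely for illustration (DocFetch's actual classifier may differ):

```python
def classify_page(url, title=""):
    """Heuristic page-type guess in the spirit of the API/GUIDE/REFERENCE/EXAMPLE categories."""
    haystack = (url + " " + title).lower()
    if any(k in haystack for k in ("api", "/ref/", "reference")):
        # References and API docs share keywords; prefer API when it appears explicitly.
        return "API" if "api" in haystack else "REFERENCE"
    if any(k in haystack for k in ("tutorial", "guide", "getting-started", "learn")):
        return "GUIDE"
    if "example" in haystack:
        return "EXAMPLE"
    return "GUIDE"  # default bucket for uncategorized pages

print(classify_page("https://pkg.go.dev/net/http", "API: net/http"))  # API
```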
## 🚀 Future Features
- **Incremental updates**: Only fetch changed pages on subsequent runs
- **Custom selectors**: Allow users to specify content areas for different sites
- **Multiple formats**: Support PDF, JSON, and other output formats
- **Token counting**: Estimate token usage for LLM context planning
- **Advanced classification**: Machine learning-based page type detection
## 💡 Why This Exists
Traditional documentation sites are designed for **human navigation**, not **AI consumption**. When working with LLMs, you often need to manually copy-paste multiple sections or provide incomplete context. DocFetch automates this process, giving your AI agents complete access to documentation without the manual overhead.
**Stop wasting time copying documentation. Start building AI agents with complete knowledge.**
## 🤝 Contributing
Contributions are welcome! Please open an issue or pull request on GitHub.
## 📄 License
MIT License
---
**Built with ❤️ for AI developers who deserve better documentation access**
| text/markdown | AlphaTechini | AlphaTechini <rehobothokoibu@gmail.com> | null | null | MIT | documentation, ai, llm, markdown, crawler, security | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Documentation",
"Topic :: Software Development :: Documentation",
"Topic :: Utilities"
] | [] | https://github.com/AlphaTechini/doc-fetch | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/AlphaTechini/doc-fetch",
"Repository, https://github.com/AlphaTechini/doc-fetch",
"Documentation, https://github.com/AlphaTechini/doc-fetch#readme"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T21:12:26.814420 | doc_fetch-2.0.4.tar.gz | 40,305,433 | ba/bf/0b08854b33b9441fbe9935afc658b2c3457706889a2d85458627406272ac/doc_fetch-2.0.4.tar.gz | source | sdist | null | false | c1deab9a88251076b915db3a6c6966fe | 83b6db9f6eeea76c2142f1451b55d057bb81c39433684435bbe608daafe56f68 | babf0b08854b33b9441fbe9935afc658b2c3457706889a2d85458627406272ac | null | [] | 204 |
2.4 | amazon-braket-algorithm-library | 1.7.3 | An open source library of quantum computing algorithms implemented on Amazon Braket | # Amazon Braket Algorithm Library
[](https://github.com/amazon-braket/amazon-braket-algorithm-library/actions/workflows/build.yml)
[](https://amazon-braket-algorithm-library.readthedocs.io)
The Braket Algorithm Library provides Amazon Braket customers with pre-built implementations of prominent quantum algorithms and experimental workloads as ready-to-run example notebooks.
---
### Braket algorithms
Currently, Braket algorithms are tested on Linux, Windows, and Mac.
Running notebooks locally requires additional dependencies located in [notebooks/textbook/requirements.txt](https://github.com/amazon-braket/amazon-braket-algorithm-library/blob/main/notebooks/textbook/requirements.txt). See notebooks/textbook/README.md for more information.
| Textbook algorithms | Notebook | References |
| ----- | ----- | ----- |
| Bell's Inequality | [Bells_Inequality.ipynb](notebooks/textbook/Bells_Inequality.ipynb) | [Bell1964](https://journals.aps.org/ppf/abstract/10.1103/PhysicsPhysiqueFizika.1.195), [Greenberger1990](https://doi.org/10.1119/1.16243) |
| Bernstein–Vazirani | [Bernstein_Vazirani_Algorithm.ipynb](notebooks/textbook/Bernstein_Vazirani_Algorithm.ipynb) | [Bernstein1997](https://epubs.siam.org/doi/10.1137/S0097539796300921) |
| CHSH Inequality | [CHSH_Inequality.ipynb](notebooks/textbook/CHSH_Inequality.ipynb) | [Clauser1970](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.23.880) |
| Deutsch-Jozsa | [Deutsch_Jozsa_Algorithm.ipynb](notebooks/textbook/Deutsch_Jozsa_Algorithm.ipynb) | [Deutsch1992](https://royalsocietypublishing.org/doi/10.1098/rspa.1992.0167) |
| Grover's Search | [Grovers_Search.ipynb](notebooks/textbook/Grovers_Search.ipynb) | [Figgatt2017](https://www.nature.com/articles/s41467-017-01904-7), [Baker2019](https://arxiv.org/abs/1904.01671) |
| QAOA | [Quantum_Approximate_Optimization_Algorithm.ipynb](notebooks/textbook/Quantum_Approximate_Optimization_Algorithm.ipynb) | [Farhi2014](https://arxiv.org/abs/1411.4028) |
| Quantum Circuit Born Machine | [Quantum_Circuit_Born_Machine.ipynb](notebooks/textbook/Quantum_Circuit_Born_Machine.ipynb) | [Benedetti2019](https://www.nature.com/articles/s41534-019-0157-8), [Liu2018](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.062324) |
| QFT | [Quantum_Fourier_Transform.ipynb](notebooks/textbook/Quantum_Fourier_Transform.ipynb) | [Coppersmith2002](https://arxiv.org/abs/quant-ph/0201067) |
| QPE | [Quantum_Phase_Estimation_Algorithm.ipynb](notebooks/textbook/Quantum_Phase_Estimation_Algorithm.ipynb) | [Kitaev1995](https://arxiv.org/abs/quant-ph/9511026) |
| Quantum Walk | [Quantum_Walk.ipynb](notebooks/textbook/Quantum_Walk.ipynb) | [Childs2002](https://arxiv.org/abs/quant-ph/0209131) |
| Shor's | [Shors_Algorithm.ipynb](notebooks/textbook/Shors_Algorithm.ipynb) | [Shor1998](https://arxiv.org/abs/quant-ph/9508027) |
| Simon's | [Simons_Algorithm.ipynb](notebooks/textbook/Simons_Algorithm.ipynb) | [Simon1997](https://epubs.siam.org/doi/10.1137/S0097539796298637) |
| Advanced algorithms | Notebook | References |
| ----- | ----- | ----- |
| Quantum PCA | [Quantum_Principal_Component_Analysis.ipynb](notebooks/advanced_algorithms/Quantum_Principal_Component_Analysis.ipynb) | [He2022](https://ieeexplore.ieee.org/document/9669030) |
| QMC | [Quantum_Computing_Quantum_Monte_Carlo.ipynb](notebooks/advanced_algorithms/Quantum_Computing_Quantum_Monte_Carlo.ipynb) | [Motta2018](https://wires.onlinelibrary.wiley.com/doi/10.1002/wcms.1364), [Peruzzo2014](https://www.nature.com/articles/ncomms5213) |
| Adaptive Shot Allocation | [2_Adaptive_Shot_Allocation.ipynb](notebooks/advanced_algorithms/adaptive_shot_allocation/2_Adaptive_Shot_Allocation.ipynb) | [Shlosberg2023](https://doi.org/10.22331/q-2023-01-26-906) |
| Auxiliary functions | Notebook |
| ----- | ----- |
| Random circuit generator | [Random_Circuit.ipynb](notebooks/auxiliary_functions/Random_Circuit.ipynb) |
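As a flavor of what the textbook notebooks cover, here is a dependency-free statevector sketch that prepares a Bell state (Hadamard then CNOT) and checks the measurement probabilities. This toy simulator is illustrative only; the notebooks themselves use the Braket SDK:

```python
from math import sqrt

# Two-qubit statevector in basis order |00>, |01>, |10>, |11>, starting in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def apply_h_on_qubit0(s):
    """Hadamard on the first qubit (the qubit addressing the high bit)."""
    h = 1 / sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def apply_cnot(s):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

bell = apply_cnot(apply_h_on_qubit0(state))
probs = [round(a * a, 3) for a in bell]
print(probs)  # [0.5, 0.0, 0.0, 0.5]
```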
---
### Community repos
> :warning: **The following includes projects that are not provided by Amazon Braket. You are solely responsible for your use of those projects (including compliance with any applicable licenses and fitness of the project for your particular purpose).**
Quantum algorithm implementations using Braket in other repos:
| Algorithm | Repo | References | Additional dependencies |
| ----- | ----- | ----- | ----- |
| Quantum Reinforcement Learning | [quantum-computing-exploration-for-drug-discovery-on-aws](https://github.com/awslabs/quantum-computing-exploration-for-drug-discovery-on-aws)| [Learning Retrosynthetic Planning through Simulated Experience(2019)](https://pubs.acs.org/doi/10.1021/acscentsci.9b00055) | [dependencies](https://github.com/awslabs/quantum-computing-exploration-for-drug-discovery-on-aws/blob/main/source/src/notebook/healthcare-and-life-sciences/d-1-retrosynthetic-planning-quantum-reinforcement-learning/requirements.txt)
[comment]: <> (If you wish to highlight your implementation, append the following content in a new line to the table above : | <Name> | <link to github repo> | <published reference> | <list of required packages on top of what is listed in amazon-braket-algorithm-library setup.py> |)
---
## <a name="install">Installing the Amazon Braket Algorithm Library</a>
The Amazon Braket Algorithm Library can be installed from source by cloning this repository and running a pip install command in the root directory of the repository.
```bash
git clone https://github.com/amazon-braket/amazon-braket-algorithm-library.git
cd amazon-braket-algorithm-library
pip install .
```
To run the notebook examples locally on your IDE, first, configure a profile to use your account to interact with AWS. To learn more, see [Configure AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
After you create a profile, use the following command to set the `AWS_PROFILE` so that all future commands can access your AWS account and resources.
```bash
export AWS_PROFILE=YOUR_PROFILE_NAME
```
### Configure your AWS account with the resources necessary for Amazon Braket
If you are new to Amazon Braket, onboard to the service and create the resources necessary to use Amazon Braket using the [AWS console](https://console.aws.amazon.com/braket/home ).
## Support
### Issues and Bug Reports
If you encounter bugs or face issues while using the algorithm library, please let us know by posting
the issue on our [GitHub issue tracker](https://github.com/amazon-braket/amazon-braket-algorithm-library/issues).
For other issues or general questions, please ask on the [Quantum Computing Stack Exchange](https://quantumcomputing.stackexchange.com/questions/ask) and add the tag amazon-braket.
### Feedback and Feature Requests
If you have feedback or features that you would like to see on Amazon Braket, we would love to hear from you!
[GitHub issues](https://github.com/amazon-braket/amazon-braket-algorithm-library/issues) is our preferred mechanism for collecting feedback and feature requests, allowing other users
to engage in the conversation, and +1 issues to help drive priority.
## License
This project is licensed under the Apache-2.0 License.
| text/markdown | Amazon Web Services | null | null | null | Apache License 2.0 | Amazon AWS Quantum | [
"Intended Audience :: Developers",
"Natural Language :: English",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/amazon-braket/amazon-braket-algorithm-library | null | >=3.11 | [] | [] | [] | [
"amazon-braket-sdk>=1.35.1",
"numpy",
"openfermion>=1.5.1",
"pennylane>=0.34.0",
"scipy>=1.5.2",
"sympy<1.13",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-rerunfailures; extra == \"test\"",
"pytest-xdist; extra == \"test\"",
"ruff; extra == \"test\"",
"sphinx; extra == \"test\"",
"sphinx-rtd-theme; extra == \"test\"",
"sphinxcontrib-apidoc; extra == \"test\"",
"tox; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:12:13.209227 | amazon_braket_algorithm_library-1.7.3.tar.gz | 41,243 | 0b/63/523a59031fe0d830dbf40b806ce22ae17533a903a4314ad856dec942f119/amazon_braket_algorithm_library-1.7.3.tar.gz | source | sdist | null | false | 97fc407d36fdfa75b045e315a62adf35 | f08e8f947369586f0d3285591f5319923765d5a29d1ab3d8c58ee71ce77db1a9 | 0b63523a59031fe0d830dbf40b806ce22ae17533a903a4314ad856dec942f119 | null | [
"LICENSE",
"NOTICE"
] | 192 |
2.4 | sonika-ai-toolkit | 0.2.7 | Toolkit for building AI agents and processing documents | # Sonika AI Toolkit <a href="https://pepy.tech/projects/sonika-ai-toolkit"><img src="https://static.pepy.tech/badge/sonika-ai-toolkit" alt="PyPI Downloads"></a>
A robust Python library designed to build state-of-the-art conversational agents and AI tools. It leverages `LangChain` and `LangGraph` to create autonomous bots capable of complex reasoning and tool execution.
## Installation
```bash
pip install sonika-ai-toolkit
```
## Prerequisites
You'll need the following API keys depending on the model you wish to use:
- OpenAI API Key
- DeepSeek API Key (Optional)
- Google Gemini API Key (Optional)
- AWS Bedrock API Key (Optional, for Bedrock)
Create a `.env` file in the root of your project with the following variables:
```env
OPENAI_API_KEY=your_openai_key_here
DEEPSEEK_API_KEY=your_deepseek_key_here
GOOGLE_API_KEY=your_gemini_key_here
AWS_BEARER_TOKEN_BEDROCK=your_bedrock_api_key_here
AWS_REGION=us-east-1
```
## Key Features
- **Multi-Model Support**: Agnostic integration with OpenAI, DeepSeek, Google Gemini, and Amazon Bedrock.
- **Conversational Agent**: Robust agent (`ReactBot`) with native tool execution capabilities and LangGraph state management.
- **Tasker Agent**: Advanced planner-executor agent (`TaskerBot`) for complex multi-step tasks.
- **Structured Classification**: Text classification with strongly typed outputs.
- **Document Processing**: Utilities for processing PDFs, DOCX, and other formats with intelligent chunking.
- **Custom Tools**: Easy integration of custom tools via Pydantic and LangChain.
## Basic Usage
### Conversational Agent with Tools
```python
import os
from dotenv import load_dotenv
from sonika_ai_toolkit.tools.integrations import EmailTool
from sonika_ai_toolkit.agents.react import ReactBot
from sonika_ai_toolkit.utilities.types import Message
from sonika_ai_toolkit.utilities.models import OpenAILanguageModel
# Load environment variables
load_dotenv()
# Configure model
api_key = os.getenv("OPENAI_API_KEY")
language_model = OpenAILanguageModel(api_key, model_name='gpt-4o-mini', temperature=0.7)
# Configure tools
tools = [EmailTool()]
# Create agent instance
bot = ReactBot(language_model, instructions="You are a helpful assistant", tools=tools)
# Get response
user_message = 'Send an email to erley@gmail.com saying hello'
messages = [Message(content="My name is Erley", is_bot=False)]
response = bot.get_response(user_message, messages, logs=[])
print(response["content"])
```
### Text Classification
```python
import os
from sonika_ai_toolkit.classifiers.text import TextClassifier
from sonika_ai_toolkit.utilities.models import OpenAILanguageModel
from pydantic import BaseModel, Field
# Define classification structure
class Classification(BaseModel):
intention: str = Field()
sentiment: str = Field(..., enum=["happy", "neutral", "sad", "excited"])
# Initialize classifier
model = OpenAILanguageModel(os.getenv("OPENAI_API_KEY"))
classifier = TextClassifier(llm=model, validation_class=Classification)
# Classify text
result = classifier.classify("I am very happy today!")
print(result.result)
```
## Available Components
### Agents
- **ReactBot**: Standard agent using LangGraph workflow.
- **TaskerBot**: Advanced planner agent for multi-step tasks.
### Utilities
- **ILanguageModel**: Unified interface for LLM providers.
- **DocumentProcessor**: Text extraction and chunking utilities.
## Project Structure
```
src/sonika_ai_toolkit/
├── agents/ # Bot implementations
├── classifiers/ # Text classification tools
├── document_processing/# PDF and document tools
├── tools/ # Tool definitions
└── utilities/ # Models and common types
```
## License
This project is licensed under the MIT License.
| text/markdown | Erley Blanco Carvajal | null | null | null | MIT License | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"langchain-mcp-adapters==0.1.9",
"langchain-community==0.3.26",
"langchain-core==0.3.66",
"langchain-openai==0.3.24",
"langgraph==0.4.8",
"langgraph-checkpoint==2.1.0",
"langgraph-sdk==0.1.70",
"dataclasses-json==0.6.7",
"python-dateutil==2.9.0.post0",
"pydantic==2.11.7",
"faiss-cpu==1.11.0",
"pypdf==5.6.1",
"python-dotenv==1.0.1",
"typing_extensions==4.14.0",
"typing-inspect==0.9.0",
"PyPDF2==3.0.1",
"python-docx==1.2.0",
"openpyxl==3.1.5",
"python-pptx==1.0.2",
"nest-asyncio==1.6.0",
"sphinx<9.0.0,>=8.1.3; extra == \"dev\"",
"sphinx-rtd-theme<4.0.0,>=3.0.1; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T21:12:12.138534 | sonika_ai_toolkit-0.2.7.tar.gz | 56,856 | 0b/ad/549ca3be09b5cf639224c4f8fc3da26b8b231109dfdda40b4b24c97177c8/sonika_ai_toolkit-0.2.7.tar.gz | source | sdist | null | false | 966877b851fc3b135aeda5cef9595577 | 8dbb4795357c0f865ba2389795a1e65fa35c51ffd4948351d9e927685e63818a | 0bad549ca3be09b5cf639224c4f8fc3da26b8b231109dfdda40b4b24c97177c8 | null | [
"LICENSE"
] | 192 |
2.4 | checksum-helper | 0.4.0 | Helper tool for checksum file operations | # checksum-helper
Convenient tool that facilitates a lot of common checksum file operations.
Features:
- Generate checksums for a whole directory tree either verifying files based on checksum files in
the tree or skipping unchanged files based on the last modification time
- Combine all found checksum files in a directory tree into one common checksum file while only
using the most recent checksums and filtering deleted files (can be turned off)
- Check whether all files in a directory tree have checksums
- Copy a hash file modifying the relative paths in the file accordingly
- Move files modifying all relative paths in checksum files accordingly
- Verify operations:
- Verify all hashes of a single file
- Verify all checksums that were found in the directory tree
- Verify files based on a wildcard filter
## Usage
Use `checksum_helper.py -h` to display a list of subcommands and how to use them.
Subcommands (short alias):
- incremental (inc)
- build-most-current (build)
- check-missing (check)
- copy\_hf (cphf)
- move (mv)
- verify (vf)
For almost all commands the directory tree is searched for known checksum files.
This can be customized by specifying exclusion patterns using `--hash-filename-filter [PATTERN ...]`
and the traversal depth can be limited with `-d DEPTH`.
ChecksumHelper has its own format that also stores the last modification time as well as
the hash type. If you want to avoid the custom format, you can specify a filename with
`-o OUT_FILENAME` whose extension must be a hash name (based on hashlib's naming).
Note that single-hash files do not support emitting extra warnings when doing
incremental checksums, nor skipping unchanged files based on the last modification
time.
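For reference, an entry in such a single-hash file can be reproduced with Python's `hashlib` directly, since the file extension names the algorithm. This is an illustrative sketch, not ChecksumHelper's code:

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha512") -> str:
    """Hex digest of a file, as found in *.md5 / *.sha256 / *.sha512 files."""
    # The algorithm string follows hashlib's naming, e.g. "md5", "sha256".
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in 1 MiB blocks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing this digest against the stored one is the core of every verify operation below.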
For the filter/whitelist/.. wildcard patterns:
- On POSIX platforms: only `/` can be used as path separator
- On Windows: both `/` and `\` can be used interchangeably
### incremental
```
checksum_helper incremental path hash_algorithm
```
Generate checksums for a whole directory tree starting at `path`. The tree is searched
for known checksum files (\*.md5, \*.sha512, etc.). When generating new checksums
the files are verified against the most recent checksum that was found.
`--skip-unchanged`: skip verifying files by hash if the last modification time is unchanged
`--dont-include-unchanged`: Unchanged files are included in the generated checksum
file by default, this can be turned off by using this flag
`-s` or `--single-hash`: Force writing to a single hash file
### build-most-current
```
checksum_helper build path
```
Combine all found checksum files in a directory tree starting at `path` into
one common checksum file while only using the most recent checksums. By default
files that have been deleted in `path` will not be included which can be turned off
using `--dont-filter-deleted`.
### check-missing
```
checksum_helper check path
```
Check whether all files in a directory tree starting at `path` have checksums
available (in discovered checksum files).
### copy\_hf
```
checksum_helper cphf source_path dest_path
```
Copy a hash file at `source_path` to `dest_path` modifying the relative paths in
the file accordingly
### move
```
checksum_helper mv root_dir source_path mv_path
```
Move file(s) or a directory from `source_path` to `mv_path` modifying all relative
paths in checksum files, that were found in the directory tree starting at `root_dir`,
accordingly.
Be careful when choosing `root_dir`, since relative paths to the moved
file(s) won't be modified in checksum files located in parent directories.
### verify
For all verify operations a summary containing the `FAILED`/`MISSING` files
and the number of total files, matches, etc. is printed so you don't have to
go through all the logs manually.
Verify operations:
#### all
```
checksum_helper vf all root_dir
```
Verify all checksums that were found in the directory tree starting at `root_dir`
#### hash\_file
```
checksum_helper vf hf hash_file_name
```
Verify all hashes in a checksum file at `hash_file_name`.
#### filter
```
checksum_helper vf filter root_dir filter [filter ...]
```
Verify files based on multiple wildcard filters so that only files matching
at least one of the filters are verified, assuming there is a hash for them
in a checksum file somewhere in `root_dir`.
Example:
```
checksum_helper vf filter phone_backup "*.jpg" "*.mp4" "Books/*"
```
This would verify all `jpg` and `mp4` files as well as all files in the
sub-directory `Books` (as long as there are checksums for them in `phone_backup`).
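The selection step behaves like shell-style wildcard matching; a rough Python equivalent (illustrative only, not the tool's implementation) is:

```python
import fnmatch

paths = ["IMG_001.jpg", "clip.mp4", "Books/novel.epub", "notes.txt"]
patterns = ["*.jpg", "*.mp4", "Books/*"]

# Keep a path if it matches at least one of the wildcard patterns.
selected = [p for p in paths
            if any(fnmatch.fnmatch(p, pat) for pat in patterns)]
# selected == ["IMG_001.jpg", "clip.mp4", "Books/novel.epub"]
```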
| text/markdown | null | omgitsmoe <60219950+omgitsmoe@users.noreply.github.com> | null | null | MIT License
Copyright (c) 2018-2023 omgitsmoe
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| script, checksum, verify, sha512, md5, sha256, backup, archival, bit-rot | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"pytest<8,>=7.2; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/omgitsmoe/checksum_helper",
"Bug Tracker, https://github.com/omgitsmoe/checksum_helper/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T21:10:43.508330 | checksum_helper-0.4.0-py3-none-any.whl | 30,328 | 65/b6/6ef80ee5220d4947b24e75739b5768981668ef73738abbbb3044add302f2/checksum_helper-0.4.0-py3-none-any.whl | py3 | bdist_wheel | null | false | c15af76762205a664465fd4141605476 | 2c56e3c1474bc0571c164dcec8e7d3ff352622d44d313c3974347d5543b90fe2 | 65b66ef80ee5220d4947b24e75739b5768981668ef73738abbbb3044add302f2 | null | [
"LICENSE.txt"
] | 103 |
2.4 | microperf | 1.0.2 | A small tool using perf to provide more performance insights. | # microperf
[](https://pypi.org/project/microperf/)
[](https://pepy.tech/project/microperf)
`microperf` is a [`perf`](https://perfwiki.github.io) wrapper. The basic idea is
that it converts a `perf.data` file by inserting all samples into a database,
making it then easier to query for specific patterns or code smells.
## Usage
### Generating a profile
First, note that your executable should be compiled with debug symbols (`-g`,
`-DCMAKE_BUILD_TYPE=RelWithDebInfo`, ...).
Since `microperf` is simply a wrapper, generating a profile can be done directly
with `perf`.
```bash
perf record -F99 --call-graph=dwarf -- <COMMAND>
```
Alternatively, `microperf perf` provides a convenience passthrough to `perf`.
This can be useful when a different `perf` executable should be used (see
`MICROPERF_PERF_EXE` option below).
### Running the Patterns analyzer
I've written a couple of queries to identify common bad patterns. At the time of
writing, this includes cycles spent in:
1. tree-based structures (`std::map`, `std::set`): these can often be replaced
with hash-based data structures.
2. constructors: these are often signs of excessive copying.
## Options
The environment variable `MICROPERF_PERF_EXE` can be set to the path of a `perf`
executable to be used instead of the default `perf` command.
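A sketch of the lookup such a wrapper can perform (hypothetical code, not microperf's actual implementation):

```python
import os
import shutil

def resolve_perf() -> str:
    # Prefer an explicit override via the environment variable,
    # then whatever "perf" resolves to on PATH, then the bare name.
    return os.environ.get("MICROPERF_PERF_EXE") or shutil.which("perf") or "perf"

# Hypothetical path to a specific perf build:
os.environ["MICROPERF_PERF_EXE"] = "/opt/linux-tools/perf"
assert resolve_perf() == "/opt/linux-tools/perf"
```

This makes it straightforward to point the tool at, say, a kernel-matched `perf` build without changing PATH globally.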
| text/markdown | null | Nicolas van Kempen <nvankemp@gmail.com> | null | null | null | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"docker==7.1.0",
"presto-python-client==0.8.4",
"rich==13.9.4"
] | [] | [] | [] | [
"Homepage, https://github.com/nicovank/microperf",
"Bug Tracker, https://github.com/nicovank/microperf/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:10:14.148263 | microperf-1.0.2.tar.gz | 11,972 | 4c/e2/335d31a82350bfef02663454b042a399e641c7b48402536f33c3a97abcb4/microperf-1.0.2.tar.gz | source | sdist | null | false | 1b6d7a4e767016bac92d9f29d8baaef1 | 47c7362dc5844043040243849752d6e59711c916836234b4503109bba9ff868f | 4ce2335d31a82350bfef02663454b042a399e641c7b48402536f33c3a97abcb4 | null | [
"LICENSE"
] | 191 |
2.4 | dataclass-extensions | 0.3.0 | Additional functionality for Python dataclasses | dataclass-extensions
====================
Additional functionality for Python dataclasses
## Installation
Python 3.10 or newer is required. You can install the package from PyPI:
```fish
pip install dataclass-extensions
```
## Features
### Encode/decode to/from JSON-safe dictionaries
```python
from dataclasses import dataclass
from dataclass_extensions import decode, encode
@dataclass
class Fruit:
calories: int
price: float
@dataclass
class FruitBasket:
fruit: Fruit
count: int
basket = FruitBasket(fruit=Fruit(calories=200, price=1.0), count=2)
assert encode(basket) == {"fruit": {"calories": 200, "price": 1.0}, "count": 2}
assert decode(FruitBasket, encode(basket)) == basket
```
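For the plain nested-dataclass case, the standard library's `dataclasses.asdict` produces the same dictionary; what the package's `encode`/`decode` add on top is round-tripping, custom encoders, and polymorphism:

```python
from dataclasses import dataclass, asdict

@dataclass
class Fruit:
    calories: int
    price: float

@dataclass
class FruitBasket:
    fruit: Fruit
    count: int

basket = FruitBasket(fruit=Fruit(calories=200, price=1.0), count=2)
# asdict recurses into nested dataclasses, matching encode() here.
assert asdict(basket) == {"fruit": {"calories": 200, "price": 1.0}, "count": 2}
```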
You can also define how to encode/decode non-dataclass types:
```python
from dataclasses import dataclass
from dataclass_extensions import decode, encode
class Foo:
def __init__(self, x: int):
self.x = x
@dataclass
class Bar:
foo: Foo
encode.register_encoder(lambda foo: {"x": foo.x}, Foo)
decode.register_decoder(lambda d: Foo(d["x"]), Foo)
bar = Bar(foo=Foo(10))
assert encode(bar) == {"foo": {"x": 10}}
assert decode(Bar, encode(bar)) == bar
```
### Polymorphism through registrable subclasses
```python
from dataclasses import dataclass
from dataclass_extensions import Registrable, decode, encode
@dataclass
class Fruit(Registrable):
calories: int
price: float
@Fruit.register("banana")
@dataclass
class Banana(Fruit):
calories: int = 200
price: float = 1.25
@Fruit.register("apple")
@dataclass
class Apple(Fruit):
calories: int = 150
price: float = 1.50
variety: str = "Granny Smith"
@dataclass
class FruitBasket:
fruit: Fruit
count: int
basket = FruitBasket(fruit=Apple(), count=2)
assert encode(basket) == {
"fruit": {
"type": "apple", # corresponds to the registered name
"calories": 150,
"price": 1.5,
"variety": "Granny Smith",
},
"count": 2,
}
assert decode(FruitBasket, encode(basket)) == basket
```
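The registration pattern itself can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not the library's implementation:

```python
from dataclasses import dataclass

class Registrable:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Give each direct Registrable subclass its own registry;
        # deeper subclasses inherit (and share) that same dict.
        if not hasattr(cls, "_registry"):
            cls._registry = {}

    @classmethod
    def register(cls, name):
        def wrap(subcls):
            cls._registry[name] = subcls
            return subcls
        return wrap

@dataclass
class Fruit(Registrable):
    price: float = 0.0

@Fruit.register("banana")
@dataclass
class Banana(Fruit):
    price: float = 1.25

assert Fruit._registry["banana"] is Banana
```

A decoder can then look up the registered name (the `"type"` key in the encoded dict) to pick the right subclass to instantiate.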
| text/markdown | null | Pete Walsh <epwalsh10@gmail.com> | null | null | Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"typing_extensions",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"black<24.0,>=23.1; extra == \"dev\"",
"isort<5.14,>=5.12; extra == \"dev\"",
"pytest; extra == \"dev\"",
"twine>=1.11.0; extra == \"dev\"",
"setuptools; extra == \"dev\"",
"wheel; extra == \"dev\"",
"build; extra == \"dev\"",
"dataclass-extensions[dev]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/epwalsh/dataclass-extensions",
"Changelog, https://github.com/epwalsh/dataclass-extensions/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T21:09:42.300240 | dataclass_extensions-0.3.0.tar.gz | 17,508 | c4/2f/b302edadca2e40e9d5756c78fcf12cdc00873b5154fd348deab0ba0061b1/dataclass_extensions-0.3.0.tar.gz | source | sdist | null | false | f01e4c9eab05ddf5a6f4f0120e50310e | d984dac35e182eaa850aeb75716f3e2612232d06be71eaae41ed7d6e74bde88e | c42fb302edadca2e40e9d5756c78fcf12cdc00873b5154fd348deab0ba0061b1 | null | [
"LICENSE"
] | 873 |
2.1 | die-python | 0.5.0 | Python bindings for Detect It Easy (DIE). | # DetectItEasy-Python
[](https://pypi.org/project/die-python/)
[](https://pepy.tech/project/die-python)
[](https://github.com/psf/black)
[](https://github.com/elastic/die-python/blob/main/LICENSE)
[](https://github.com/elastic/die-python/actions/workflows/build.yml)
Native Python 3.11+ bindings for [@horsicq](https://github.com/horsicq/)'s [Detect-It-Easy](https://github.com/horsicq/Detect-It-Easy)
## Install
### From PIP
The easiest and recommended installation is through `pip`.
```console
pip install die-python
```
### Using Git
```console
git clone https://github.com/elastic/die-python
cd die-python
```
Install Qt into the `build` directory. It can be installed easily using [`aqt`](https://github.com/miurahr/aqtinstall) as follows (here with Qt version 6.7.3):
```console
python -m pip install aqtinstall --user -U
python -m aqt install-qt -O ./build linux desktop 6.7.3 linux_gcc_64 # linux x64 only
python -m aqt install-qt -O ./build linux_arm64 desktop 6.7.3 linux_gcc_arm64 # linux arm64 only
python -m aqt install-qt -O ./build windows desktop 6.7.3 win64_msvc2019_64 # windows x64 only
python -m aqt install-qt -O ./build windows desktop 6.7.3 win64_msvc2019_arm64 # windows arm64 only (also requires `win64_msvc2019_64`)
python -m aqt install-qt -O ./build mac desktop 6.7.3 clang_64 # mac only
```
Then you can install the package
```console
python -m pip install . --user -U
```
## Quick start
```python
import die, pathlib
print(die.scan_file("c:/windows/system32/ntdll.dll", die.ScanFlags.DEEP_SCAN))
'PE64'
print(die.scan_file("../upx.exe", die.ScanFlags.RESULT_AS_JSON, str(die.database_path) ))
{
"detects": [
{
"filetype": "PE64",
"parentfilepart": "Header",
"values": [
{
"info": "Console64,console",
"name": "GNU linker ld (GNU Binutils)",
"string": "Linker: GNU linker ld (GNU Binutils)(2.28)[Console64,console]",
"type": "Linker",
"version": "2.28"
},
{
"info": "",
"name": "MinGW",
"string": "Compiler: MinGW",
"type": "Compiler",
"version": ""
},
{
"info": "NRV,brute",
"name": "UPX",
"string": "Packer: UPX(4.24)[NRV,brute]",
"type": "Packer",
"version": "4.24"
}
]
}
]
}
for db in die.databases():
print(db)
\path\to\your\pyenv\site-packages\die\db\ACE
\path\to\your\pyenv\site-packages\die\db\Amiga\DeliTracker.1.sg
\path\to\your\pyenv\site-packages\die\db\Amiga\_Amiga.0.sg
\path\to\your\pyenv\site-packages\die\db\Amiga\_init
\path\to\your\pyenv\site-packages\die\db\APK\AlibabaProtection.2.sg
[...]
```
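With `RESULT_AS_JSON`, the returned string can be fed straight to the standard `json` module. For example, pulling out detected packers from a report shaped like the one above (sample data inlined here so the snippet is self-contained):

```python
import json

# Trimmed-down sample in the shape returned by scan_file(..., RESULT_AS_JSON).
raw = """
{"detects": [{"filetype": "PE64", "values": [
    {"type": "Linker", "name": "GNU linker ld (GNU Binutils)", "version": "2.28"},
    {"type": "Packer", "name": "UPX", "version": "4.24"}]}]}
"""
report = json.loads(raw)
packers = [f'{v["name"]} {v["version"]}'
           for d in report["detects"] for v in d["values"]
           if v["type"] == "Packer"]
# packers == ["UPX 4.24"]
```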
## Licenses
Released under Apache 2.0 License and integrates the following repositories:
- [Detect-It-Easy](https://github.com/horsicq/Detect-It-Easy): MIT license
- [die_library](https://github.com/horsicq/die_library): MIT license
- [qt](https://github.com/qt/qt): LGPL license
| text/markdown | @calladoum-elastic | null | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Natural Language :: English"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"setuptools",
"wheel",
"nanobind",
"pytest; extra == \"tests\"",
"black; extra == \"tests\"",
"beautifulsoup4; extra == \"tests\"",
"lxml; extra == \"tests\""
] | [] | [] | [] | [
"Homepage, https://github.com/elastic/die-python"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T21:09:19.485895 | die_python-0.5.0-cp313-abi3-win_amd64.whl | 10,016,477 | 07/08/fb63c07fa224359d95608e07d25b87ae36efa894ba9bbf9d524ce0f51b2e/die_python-0.5.0-cp313-abi3-win_amd64.whl | cp313 | bdist_wheel | null | false | 4f4a0bf08dba9a50d600d28ca2a74649 | 1c45e6589e1b6e85b1cf6a9d24df7591b6ec9b03ad275065d757764420a518aa | 0708fb63c07fa224359d95608e07d25b87ae36efa894ba9bbf9d524ce0f51b2e | null | [] | 731 |
2.4 | easyrunner-cli | 0.15.0b1 | EasyRunner CLI. | # EasyRunner CLI
Application hosting platform that runs on a single server. Easily turn your VPS into a secure web host.
Copyright (c) 2024 - 2025 Janaka Abeywardhana
## Contribution
Set up Python tools on a new machine
- `brew install pyenv` - Python version manager
- `brew install pipx` - pipx Python app installer, used to install Poetry
- `pipx install poetry` (pipx installs global packages in isolated environments)
- add `export PATH="$HOME/.local/bin:$PATH"` to `~/.zshrc` so Poetry is on your PATH.
Set up a Python environment for an application
- `pyenv install 3.13` - install this version of Python.
- `pyenv local` - show the Python version pinned for this directory.
- `poetry env use $(pyenv which python)` - create a Poetry environment (the `.venv`) for this project's dependencies.
- `source $(poetry env info --path)/bin/activate` - activate the environment so its tools are available on your PATH.
- `poetry config virtualenvs.in-project true`
- `poetry install`
If the repository's location on your local machine changes, the virtual environment becomes disconnected. Remove and recreate it:
- `poetry env remove $(poetry env list --full-path | grep -Eo '/.*')` - remove the current Poetry environment to force a clean rebuild
- `poetry install` - recreate the environment and install dependencies
- `source $(poetry env info --path)/bin/activate` - activate the environment
| text/markdown | Janaka Abeywardhana | janaka@easyrunner.xyz | null | null | Proprietary | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"cryptography<45.0.0,>=44.0.0",
"fabric<4.0.0,>=3.2.2",
"keyring<26.0.0,>=25.6.0",
"pulumi<4.0.0,>=3.185.0",
"pulumi-aws<8.0.0,>=7.1.0",
"pulumi-hcloud<2.0.0,>=1.23.1",
"pyjwt<3.0.0,>=2.9.0",
"pyobjc-framework-Security<11.0.0,>=10.3.1; sys_platform == \"darwin\"",
"python-dotenv<2.0.0,>=1.1.1",
"pyyaml<7.0.0,>=6.0.2",
"requests<3.0.0,>=2.32.5",
"rich<14.0.0,>=13.9.4",
"typer<0.16.0,>=0.15.1",
"typing-extensions<5.0.0,>=4.12.2"
] | [] | [] | [] | [
"Homepage, https://easyrunner.xyz",
"Repository, https://github.com/janaka/easyrunner"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T21:09:18.522239 | easyrunner_cli-0.15.0b1-py3-none-any.whl | 254,044 | c2/38/7f8241ad8ebfb53ef20fa6e07a98c31b611cf7b7a48445853fd2d6b6d2d1/easyrunner_cli-0.15.0b1-py3-none-any.whl | py3 | bdist_wheel | null | false | 00cec6e6c27501d9cb278203d63cfca0 | 006ca3ce09c994ff75585c7fd276a419a1ac69c7d681fc6fd97b6da583b9e537 | c2387f8241ad8ebfb53ef20fa6e07a98c31b611cf7b7a48445853fd2d6b6d2d1 | null | [] | 69 |
2.4 | civicstream | 1.1.1 | CivicAlert Streaming Data Capture and Visualization Tool | CivicStream
===========
This package provides a command-line tool for capturing and visualizing streaming data
from a CivicAlert sensor device. It can be accessed from a command terminal by entering:
``civicstream``
Enter ``civicstream -h`` to see a listing of available command line parameters, including
activation of an IMU visualizer or configuration of the number of incoming audio channels.
| text/x-rst | Will Hedgecock | ronald.w.hedgecock@vanderbilt.edu | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/vu-civic/tools | null | >=3.8 | [] | [] | [] | [
"numpy",
"pyserial",
"pygame",
"PyOpenGL"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T21:08:28.242614 | civicstream-1.1.1.tar.gz | 6,413 | 04/18/97894fd41d44abbb0ed300d185bde0b4289061421f62ac4d1ac2e02f9554/civicstream-1.1.1.tar.gz | source | sdist | null | false | c1197b7671c5235cf3e31276477ff5f3 | 6208338ecf4761fea0ad745bbdb3531fb04c12005987041edcf3d128894beae1 | 041897894fd41d44abbb0ed300d185bde0b4289061421f62ac4d1ac2e02f9554 | null | [
"LICENSE"
] | 200 |
2.4 | ign-lint | 0.4.1 | A linter for Ignition JSON files | # Ignition Lint Documentation
## Overview
Ignition Lint is a Python framework designed to analyze and lint Ignition Perspective view.json files. It provides a structured way to parse view files, build an object model representation, and apply customizable linting rules to ensure code quality and consistency across your Ignition projects.
## Getting Started with Poetry
### Prerequisites
- Python 3.10 or higher
- Poetry >= 2.0 (install from [python-poetry.org](https://python-poetry.org/docs/#installation))
### Installation Methods
#### Option 1: Install from PyPI (Recommended for Users)
```bash
# Install the package
pip install ignition-lint
# Verify installation
ignition-lint --help
```
#### Option 2: Development Setup with Poetry
1. **Clone the repository:**
```bash
git clone https://github.com/design-group/ignition-lint.git
cd ignition-lint
```
2. **Install dependencies with Poetry:**
```bash
poetry install
```
3. **Activate the virtual environment:**
```bash
poetry shell
```
4. **Verify installation:**
```bash
poetry run python -m ignition_lint --help
```
### Development Setup
For development work, install with development dependencies:
```bash
# Install all dependencies including dev tools
poetry install --with dev
# Run tests
cd tests
poetry run python test_runner.py --run-all
# Run linting
poetry run pylint ignition_lint/
# Format code
poetry run yapf -ir ignition_lint/
```
### Running Without Activating Shell
You can run commands directly through Poetry without activating the shell:
```bash
# Run linting on a view file
poetry run python -m ignition_lint path/to/view.json
# Run with custom configuration
poetry run python -m ignition_lint --config my_rules.json --files "views/**/view.json"
# Using the CLI entry point
poetry run ignition-lint path/to/view.json
```
### Building and Distribution
```bash
# Build the package
poetry build
# Install locally for testing
poetry install
# Export requirements.txt for CI/CD or Docker
# (Poetry >= 2.0 no longer bundles export; install it with `poetry self add poetry-plugin-export`)
poetry export --output requirements.txt --without-hashes
```
## Key Features
- **Object Model Representation**: Converts flattened JSON structures into a hierarchical object model
- **Extensible Rule System**: Easy-to-extend framework for creating custom linting rules
- **Built-in Rules**: Includes rules for script validation (via Pylint) and binding checks
- **Batch Processing**: Efficiently processes multiple scripts and files in a single run
- **Pre-commit Integration**: Can be integrated into your Git workflow
## Architecture
### Core Components
```
ignition_lint/
├── common/ # Utilities for JSON processing
├── model/ # Object model definitions
├── rules/ # Linting rule implementations
├── linter.py # Main linting engine
└── main.py # CLI entry point
```
### Object Model
The framework decomposes Ignition Perspective views into a structured object model with the following node types:
#### Base Classes
- **ViewNode**: Abstract base class for all nodes in the view tree
- **Visitor**: Interface for implementing the visitor pattern
#### Component Nodes
- **Component**: Represents UI components with properties and metadata
- **Property**: Individual component properties
#### Binding Nodes
- **Binding**: Base class for all binding types
- **ExpressionBinding**: Expression-based bindings
- **PropertyBinding**: Property-to-property bindings
- **TagBinding**: Tag-based bindings
#### Script Nodes
- **Script**: Base class for all script types
- **MessageHandlerScript**: Scripts that handle messages
- **CustomMethodScript**: Custom component methods
- **TransformScript**: Script transforms in bindings
- **EventHandlerScript**: Event handler scripts
#### Event Nodes
- **EventHandler**: Base class for event handlers
## How It Works
### 1. JSON Flattening
The framework first flattens the hierarchical view.json structure into path-value pairs:
```python
# Original JSON
{
"root": {
"children": [{
"meta": {"name": "Button"},
"props": {"text": "Click Me"}
}]
}
}
# Flattened representation
{
"root.children[0].meta.name": "Button",
"root.children[0].props.text": "Click Me"
}
```
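For illustration, the flattening step can be re-implemented in a few lines. This is a sketch only; the real helper is `flatten_file` in `ignition_lint.common.flatten_json`, and its exact output format may differ:

```python
# Illustrative re-implementation of the flattening step described above.
# Dict keys are joined with ".", list indices become "[i]".

def flatten(value, prefix=""):
    """Flatten nested dicts/lists into {"a.b[0].c": leaf} path-value pairs."""
    flat = {}
    if isinstance(value, dict):
        for key, child in value.items():
            path = f"{prefix}.{key}" if prefix else key
            flat.update(flatten(child, path))
    elif isinstance(value, list):
        for i, child in enumerate(value):
            flat.update(flatten(child, f"{prefix}[{i}]"))
    else:
        flat[prefix] = value
    return flat

view = {"root": {"children": [{"meta": {"name": "Button"}, "props": {"text": "Click Me"}}]}}
print(flatten(view))
# {'root.children[0].meta.name': 'Button', 'root.children[0].props.text': 'Click Me'}
```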
### 2. Model Building
The `ViewModelBuilder` class parses the flattened JSON and constructs the object model:
```python
from ignition_lint.common.flatten_json import flatten_file
from ignition_lint.model import ViewModelBuilder
# Flatten the JSON file
flattened_json = flatten_file("path/to/view.json")
# Build the object model
builder = ViewModelBuilder()
model = builder.build_model(flattened_json)
# Access different node types
components = model['components']
bindings = model['bindings']
scripts = model['scripts']
```
### 3. Rule Application
Rules are applied using the visitor pattern, allowing each rule to process relevant nodes:
```python
from ignition_lint.linter import LintEngine
from ignition_lint.rules import PylintScriptRule, PollingIntervalRule
# Create linter with rules
linter = LintEngine()
linter.register_rule(PylintScriptRule())
linter.register_rule(PollingIntervalRule(minimum_interval=10000))
# Run linting
errors = linter.lint(flattened_json)
```
## Understanding the Visitor Pattern
### What is the Visitor Pattern?
The Visitor pattern is a behavioral design pattern that lets you separate algorithms from the objects on which they operate. In Ignition Lint, it allows you to define new operations (linting rules) without changing the node classes.
### How It Works in Ignition Lint
1. **Node Classes**: Each node type (Component, Binding, Script, etc.) has an `accept()` method that takes a visitor
2. **Visitor Interface**: The `Visitor` base class defines visit methods for each node type
3. **Double Dispatch**: When a node accepts a visitor, it calls the appropriate visit method on that visitor
Here's the flow:
```python
# 1. The linter calls accept on a node
node.accept(rule)
# 2. The node's accept method calls back to the visitor
def accept(self, visitor):
return visitor.visit_component(self) # for a Component node
# 3. The visitor's method processes the node
def visit_component(self, node):
# Your rule logic here
pass
```
### Why Use the Visitor Pattern?
- **Separation of Concerns**: Node structure is separate from operations
- **Easy Extension**: Add new rules without modifying node classes
- **Type Safety**: Each node type has its own visit method
- **Flexible Processing**: Rules can choose which nodes to process
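The dispatch flow shown earlier can be made concrete with a stripped-down, runnable example. The `accept`/`visit_component` names follow this document; the toy rule and everything else here is illustrative, not the framework's actual code:

```python
class Visitor:
    """One visit_* method per node type; subclasses override what they need."""
    def visit_component(self, node):
        pass

class Component:
    def __init__(self, path, name):
        self.path = path
        self.name = name
    def accept(self, visitor):
        # Double dispatch: the node selects the visitor method for its own type.
        return visitor.visit_component(self)

class UppercaseNameRule(Visitor):
    """Toy rule: flag components whose names don't start with a capital letter."""
    def __init__(self):
        self.errors = []
    def visit_component(self, node):
        if not node.name[:1].isupper():
            self.errors.append(f"{node.path}: name '{node.name}' should be capitalized")

rule = UppercaseNameRule()
for node in [Component("root.children[0]", "Button"), Component("root.children[1]", "label_1")]:
    node.accept(rule)
print(rule.errors)
# ["root.children[1]: name 'label_1' should be capitalized"]
```

Because each node calls back into the visitor, adding a new rule never requires touching the node classes themselves.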
## Creating Custom Rules - Deep Dive
### What You Have Access To
When writing a custom rule, you have access to extensive information about each node:
#### Component Nodes
```python
class MyComponentRule(LintingRule):
def visit_component(self, node):
# Available properties:
node.path # Full path in the view: "root.children[0].components.Label"
node.name # Component name: "Label_1"
node.type # Component type: "ia.display.label"
node.properties # Dict of all component properties
# Example: Check component positioning
x_position = node.properties.get('position.x', 0)
y_position = node.properties.get('position.y', 0)
if x_position < 0 or y_position < 0:
self.errors.append(
f"{node.path}: Component '{node.name}' has negative position"
)
```
#### Binding Nodes
```python
class MyBindingRule(LintingRule):
def visit_expression_binding(self, node):
# Available for all bindings:
node.path # Path to the bound property
node.binding_type # Type of binding: "expr", "property", "tag"
node.config # Full binding configuration dict
# Specific to expression bindings:
node.expression # The expression string
# Example: Check for hardcoded values in expressions
if '"localhost"' in node.expression or "'localhost'" in node.expression:
self.errors.append(
f"{node.path}: Expression contains hardcoded localhost"
)
def visit_tag_binding(self, node):
# Specific to tag bindings:
node.tag_path # The tag path string
# Example: Ensure tags follow naming convention
if not node.tag_path.startswith("[default]"):
self.errors.append(
f"{node.path}: Tag binding should use [default] provider"
)
```
#### Script Nodes
```python
class MyScriptRule(LintingRule):
def visit_custom_method(self, node):
# Available properties:
node.path # Path to the method
node.name # Method name: "refreshData"
node.script # Raw script code
node.params # List of parameter names
# Special method:
formatted_script = node.get_formatted_script()
# Returns properly formatted Python with function definition
# Example: Check for print statements
if 'print(' in node.script:
self.errors.append(
f"{node.path}: Method '{node.name}' contains print statement"
)
def visit_message_handler(self, node):
# Additional properties:
node.message_type # The message type this handles
node.scope # Dict with scope settings:
# {'page': False, 'session': True, 'view': False}
# Example: Warn about session-scoped handlers
if node.scope.get('session', False):
self.errors.append(
f"{node.path}: Message handler '{node.message_type}' "
f"uses session scope - ensure this is intentional"
)
```
### Advanced Rule Patterns
#### Pattern 1: Cross-Node Validation
```python
class CrossReferenceRule(LintingRule):
def __init__(self):
super().__init__(node_types=[Component, PropertyBinding])
self.component_paths = set()
self.binding_targets = []
def visit_component(self, node):
# Collect all component paths
self.component_paths.add(node.path)
def visit_property_binding(self, node):
# Store binding for later validation
self.binding_targets.append((node.path, node.target_path))
def process_collected_scripts(self):
# This method is called after all nodes are visited
for binding_path, target_path in self.binding_targets:
if target_path not in self.component_paths:
self.errors.append(
f"{binding_path}: Binding targets non-existent component"
)
```
#### Pattern 2: Context-Aware Rules
```python
class ContextAwareRule(LintingRule):
def __init__(self):
super().__init__(node_types=[Component, Script])
self.current_component = None
self.component_stack = []
def visit_component(self, node):
# Track component context
self.component_stack.append(node)
self.current_component = node
def visit_script(self, node):
# Use component context
if self.current_component and self.current_component.type == "ia.display.table":
if "selectedRow" in node.script and "rowData" not in node.script:
self.errors.append(
f"{node.path}: Table script uses selectedRow without rowData check"
)
```
#### Pattern 3: Statistical Analysis
```python
class ComplexityAnalysisRule(LintingRule):
def __init__(self, max_complexity_score=100):
super().__init__(node_types=[Component])
self.max_complexity = max_complexity_score
self.complexity_scores = {}
def visit_component(self, node):
score = 0
# Calculate complexity based on various factors
score += len(node.properties) * 2 # Property count
# Check for deeply nested properties
for prop_name in node.properties:
score += prop_name.count('.') * 3 # Nesting depth
# Store score
self.complexity_scores[node.path] = score
if score > self.max_complexity:
self.errors.append(
f"{node.path}: Component complexity score {score} "
f"exceeds maximum {self.max_complexity}"
)
```
### Accessing Raw JSON Data
Sometimes you need access to the original flattened JSON data:
```python
class RawDataRule(LintingRule):
def __init__(self):
super().__init__()
self.flattened_json = None
def lint(self, flattened_json):
# Store the flattened JSON for use in visit methods
self.flattened_json = flattened_json
return super().lint(flattened_json)
def visit_component(self, node):
# Access any part of the flattened JSON
style_classes = self.flattened_json.get(
f"{node.path}.props.style.classes",
""
)
if style_classes and "/" in style_classes:
self.errors.append(
f"{node.path}: Style classes contain invalid '/' character"
)
```
### Rule Lifecycle Methods
```python
class LifecycleAwareRule(LintingRule):
def __init__(self):
super().__init__()
self.setup_complete = False
def before_visit(self):
"""Called before visiting any nodes."""
self.setup_complete = True
self.errors = [] # Reset errors
def visit_component(self, node):
"""Process each component."""
# Your logic here
pass
def process_collected_scripts(self):
"""Called after all nodes are visited."""
# Batch processing, cross-validation, etc.
pass
def after_visit(self):
"""Called after all processing is complete."""
# Cleanup, summary generation, etc.
pass
```
## Node Properties Reference
### Component
- `path`: Full path to the component
- `name`: Component instance name
- `type`: Component type (e.g., "ia.display.label")
- `properties`: Dictionary of all component properties
- `children`: List of child components (if container)
### ExpressionBinding
- `path`: Path to the bound property
- `expression`: The expression string
- `binding_type`: Always "expr"
- `config`: Full binding configuration
### PropertyBinding
- `path`: Path to the bound property
- `target_path`: Path to the source property
- `binding_type`: Always "property"
- `config`: Full binding configuration
### TagBinding
- `path`: Path to the bound property
- `tag_path`: The tag path string
- `binding_type`: Always "tag"
- `config`: Full binding configuration
### MessageHandlerScript
- `path`: Path to the handler
- `script`: Script string
- `message_type`: Type of message handled
- `scope`: Scope configuration dict
- `get_formatted_script()`: Returns formatted Python code
### CustomMethodScript
- `path`: Path to the method
- `name`: Method name
- `script`: Script string
- `params`: List of parameter names
- `get_formatted_script()`: Returns formatted Python code
### TransformScript
- `path`: Path to the transform
- `script`: Script string
- `binding_path`: Path to parent binding
- `get_formatted_script()`: Returns formatted Python code
### EventHandlerScript
- `path`: Path to the handler
- `event_type`: Event type (e.g., "onClick")
- `script`: Script string
- `scope`: Scope setting ("L", "P", "S")
- `get_formatted_script()`: Returns formatted Python code
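To show what `get_formatted_script()` is for, here is a hypothetical sketch of the kind of wrapping it performs for a custom method: turning a raw script body plus a parameter list into a complete function definition that a tool like Pylint can parse. The actual formatting produced by the framework may differ:

```python
# Hypothetical sketch: wrap a raw script body into a function definition.
# The real get_formatted_script() implementation may format differently.

def format_custom_method(name, params, script):
    """Build a parseable Python function from a custom method's parts."""
    header = f"def {name}(self{''.join(', ' + p for p in params)}):"
    body = "\n".join("    " + line for line in script.splitlines()) or "    pass"
    return header + "\n" + body

print(format_custom_method("refreshData", ["interval"], "self.props.data = []\nreturn True"))
# def refreshData(self, interval):
#     self.props.data = []
#     return True
```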
## Available Rules
The following rules are currently implemented and available for use:
| Rule | Type | Description | Configuration Options | Default Enabled |
|------|------|-------------|----------------------|-----------------|
| `NamePatternRule` | Warning | Validates naming conventions for components and other elements | `convention`, `pattern`, `target_node_types`, `node_type_specific_rules` | ✅ |
| `PollingIntervalRule` | Error | Ensures polling intervals meet minimum thresholds to prevent performance issues | `minimum_interval` (default: 10000ms) | ✅ |
| `PylintScriptRule` | Error | Runs Pylint analysis on all scripts to detect syntax errors, undefined variables, and code quality issues | `pylintrc` (path to custom pylintrc file, defaults to `.config/ignition.pylintrc`) | ✅ |
| `UnusedCustomPropertiesRule` | Warning | Detects custom properties and view parameters that are defined but never referenced | None | ✅ |
| `BadComponentReferenceRule` | Error | Identifies brittle component object traversal patterns (getSibling, getParent, etc.) | `forbidden_patterns`, `case_sensitive` | ✅ |
| `ExcessiveContextDataRule` | Error | Detects excessive data stored in custom properties using 4 detection methods | `max_array_size`, `max_sibling_properties`, `max_nesting_depth`, `max_data_points` | ✅ |
### Rule Details
#### NamePatternRule
Validates naming conventions across different node types with flexible configuration options.
**Supported Conventions:**
- `PascalCase` (default)
- `camelCase`
- `snake_case`
- `kebab-case`
- `SCREAMING_SNAKE_CASE`
- `Title Case`
- `lower case`
**Configuration Options:**
- `convention`: Use a predefined naming convention (e.g., "PascalCase", "camelCase")
- `pattern`: Define a custom regex pattern for validation (takes priority over `convention`)
- `pattern_description`: Optional description of the pattern for error messages
- `suggestion_convention`: When using `pattern`, specify which convention to use for generating helpful suggestions
- `target_node_types`: Specify which node types this rule applies to
- `node_type_specific_rules`: Override settings for specific node types
**Configuration Priority:** `pattern` > `convention`
- If `pattern` is specified, it's used directly as the validation regex
- If only `convention` is specified, it's converted to a pattern automatically
- `suggestion_convention` determines how to generate suggestions for both
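The convention-to-pattern conversion can be pictured with a small sketch. The regexes below are illustrative guesses at what each convention implies, not the rule's actual internal patterns:

```python
import re

# Plausible regexes for some of the documented conventions; the rule's
# real patterns may differ in edge cases (digits, single-word names, ...).
CONVENTION_PATTERNS = {
    "PascalCase": r"[A-Z][a-zA-Z0-9]*",
    "camelCase": r"[a-z][a-zA-Z0-9]*",
    "snake_case": r"[a-z][a-z0-9]*(_[a-z0-9]+)*",
    "kebab-case": r"[a-z][a-z0-9]*(-[a-z0-9]+)*",
    "SCREAMING_SNAKE_CASE": r"[A-Z][A-Z0-9]*(_[A-Z0-9]+)*",
}

def matches_convention(name, convention):
    """Return True if the name satisfies the (assumed) convention regex."""
    return re.fullmatch(CONVENTION_PATTERNS[convention], name) is not None

print(matches_convention("SubmitButton", "PascalCase"))   # True
print(matches_convention("submit_button", "PascalCase"))  # False
```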
**Configuration Example (Predefined Convention):**
```json
{
"NamePatternRule": {
"enabled": true,
"kwargs": {
"convention": "PascalCase",
"target_node_types": ["component"],
"node_type_specific_rules": {
"custom_method": {
"convention": "camelCase"
}
}
}
}
}
```
**Configuration Example (Custom Pattern with Suggestions):**
```json
{
"NamePatternRule": {
"enabled": true,
"kwargs": {
"node_type_specific_rules": {
"component": {
"pattern": "^([A-Z][a-zA-Z0-9]*|[A-Z][A-Z0-9_]*)$",
"pattern_description": "PascalCase or SCREAMING_SNAKE_CASE",
"suggestion_convention": "PascalCase",
"min_length": 3
}
}
}
}
}
```
**Note on Custom Patterns:**
- When using `pattern`, the rule validates names against your regex
- Add `suggestion_convention` to enable helpful naming suggestions in error messages
- Add `pattern_description` to customize error messages (e.g., "PascalCase or SCREAMING_SNAKE_CASE")
- The `suggestion_convention` should match the intent of your pattern (e.g., if your pattern enforces PascalCase-like formatting, use "PascalCase")
#### PollingIntervalRule
Prevents performance issues by enforcing minimum polling intervals in `now()` expressions.
**What it checks:**
- Expression bindings containing `now()` calls
- Property and tag bindings with polling configurations
- Validates interval values are above minimum threshold
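A configuration entry for this rule, following the same shape as the other rule examples in this document (the `minimum_interval` value shown is the documented default of 10000ms):

```json
{
	"PollingIntervalRule": {
		"enabled": true,
		"kwargs": {
			"minimum_interval": 10000
		}
	}
}
```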
#### PylintScriptRule
Comprehensive Python code analysis using Pylint for all script types:
- Custom method scripts
- Event handler scripts (all domains: component, dom, system)
- Message handler scripts
- Transform scripts
**Detected Issues:**
- Syntax errors
- Undefined variables
- Unused imports
- Code style violations
- Logical errors
**Configuration:**
```json
{
"PylintScriptRule": {
"enabled": true,
"kwargs": {
"pylintrc": ".config/ignition.pylintrc"
}
}
}
```
**Pylintrc File:**
- Specify a custom pylintrc file path using the `pylintrc` parameter
- Supports both absolute paths (`/path/to/.pylintrc`) and relative paths (relative to working directory)
- Falls back to `.config/ignition.pylintrc` if not specified
- If no pylintrc is found, Pylint uses its default configuration
**Example Custom Configuration:**
```json
{
"PylintScriptRule": {
"enabled": true,
"kwargs": {
"pylintrc": "config/my-custom-pylintrc",
"debug": false
}
}
}
```
**Debug Files (Automatic Error Reporting):**
When PylintScriptRule detects errors (syntax errors, undefined variables, etc.), it **automatically** saves the combined Python script to a debug file for inspection:
- **Location**: `debug/pylint_input_temp.py` (or `tests/debug/` if running from tests directory)
- **Automatic**: Debug files are saved whenever pylint finds issues (no configuration needed)
- **Manual**: Set `"debug": true` to save script files even when there are no errors (useful for development)
**Example output when errors are found:**
```
🐛 Pylint found issues. Debug file saved to: /path/to/debug/pylint_input_temp.py
```
This makes it easy to inspect the actual script content when debugging syntax errors or other pylint issues.
#### UnusedCustomPropertiesRule
Identifies unused custom properties and view parameters to reduce view complexity.
**Detection Coverage:**
- View-level custom properties (`custom.*`)
- View parameters (`params.*`)
- Component-level custom properties (`*.custom.*`)
- References in expressions, bindings, and scripts
#### BadComponentReferenceRule
Prevents brittle view dependencies by detecting object traversal patterns.
**Forbidden Patterns:**
- `.getSibling()`, `.getParent()`, `.getChild()`, `.getChildren()`
- `self.parent`, `self.children` property access
- Any direct component tree navigation
**Recommended Alternatives:**
- Use `view.custom` properties for data sharing
- Implement message handling for component communication
- Design views with explicit data flow patterns
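A configuration entry for this rule can follow the same shape as the other examples in this document. The `forbidden_patterns` and `case_sensitive` parameter names come from the rules table above; the specific pattern strings here are illustrative:

```json
{
	"BadComponentReferenceRule": {
		"enabled": true,
		"kwargs": {
			"forbidden_patterns": [".getSibling(", ".getParent(", "self.parent"],
			"case_sensitive": true
		}
	}
}
```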
#### ExcessiveContextDataRule
Detects excessive data stored in custom properties that should be in databases instead. Large datasets in view JSON cause performance issues, memory bloat, and violate separation of concerns.
**Detection Methods:**
1. **Array Size** - Detects arrays with too many items
- Parameter: `max_array_size` (default: 50)
- Example: `custom.filteredData[784]` exceeds threshold
2. **Property Breadth** - Detects too many sibling properties at the same level
- Parameter: `max_sibling_properties` (default: 50)
- Example: `custom.device1`, `custom.device2`, ..., `custom.device100`
3. **Nesting Depth** - Detects overly deep nesting structures
- Parameter: `max_nesting_depth` (default: 5 levels)
- Example: `custom.a.b.c.d.e.f` (6 levels deep)
4. **Data Points** - Detects total volume of data in custom properties
- Parameter: `max_data_points` (default: 1000)
- Counts all flattened paths under `custom.*`
**Configuration Example:**
```json
{
"ExcessiveContextDataRule": {
"enabled": true,
"kwargs": {
"max_array_size": 50,
"max_sibling_properties": 50,
"max_nesting_depth": 5,
"max_data_points": 1000
}
}
}
```
**Best Practices:**
- Custom properties should contain configuration, not data
- Use databases, named queries, or tag historian for large datasets
- Views should fetch data at runtime, not store it statically
- Large arrays (>50 items) indicate data that belongs in a database
## Usage Methods
This package can be utilized in several ways to fit different development workflows:
### 1. Command Line Interface (CLI)
#### Using the Installed Package
```bash
# After pip install ignition-lint
ignition-lint path/to/view.json
# Lint multiple files with glob pattern
ignition-lint --files "**/view.json"
# Use custom configuration
ignition-lint --config my_rules.json --files "views/**/view.json"
# Show help
ignition-lint --help
```
#### Using Poetry (Development)
```bash
# Using the CLI entry point
poetry run ignition-lint path/to/view.json
# Using the module directly
poetry run python -m ignition_lint path/to/view.json
# If you've activated the Poetry shell
poetry shell
ignition-lint path/to/view.json
```
### 2. Pre-commit Hook Integration
**Option A: Standard (clones entire repository ~64MB):**
```yaml
repos:
  - repo: https://github.com/bw-design-group/ignition-lint
    rev: v0.2.4  # Use the latest release tag
    hooks:
      - id: ignition-lint
        # Hook runs on view.json files by default with warnings-only mode
```
**Option B: Lightweight (installs only Python package ~1MB, recommended):**
```yaml
repos:
  - repo: local
    hooks:
      - id: ignition-lint
        name: Ignition Lint
        entry: ignition-lint
        language: python
        types: [json]
        files: view\.json$
        args: ['--config=rule_config.json', '--files', '--warnings-only']
        pass_filenames: true
        additional_dependencies:
          - 'git+https://github.com/bw-design-group/ignition-lint@v0.2.4'
```
> **Note**: Option B installs only the Python package without cloning tests, docker files, and documentation, reducing download size from ~64MB to ~1MB.
**With custom configuration (Option A - Standard):**
```yaml
repos:
  - repo: https://github.com/bw-design-group/ignition-lint
    rev: v0.2.4
    hooks:
      - id: ignition-lint
        args: ['--config=rule_config.json', '--files']
```
**With custom configuration (Option B - Lightweight):**
```yaml
repos:
  - repo: local
    hooks:
      - id: ignition-lint
        name: Ignition Lint
        entry: ignition-lint
        language: python
        types: [json]
        files: view\.json$
        args: ['--config=rule_config.json', '--files']
        pass_filenames: true
        additional_dependencies:
          - 'git+https://github.com/bw-design-group/ignition-lint@v0.2.4'
```
Install and run:
```bash
# Install pre-commit hooks
pre-commit install
# Run on all files
pre-commit run --all-files
# Run on staged files only
pre-commit run
```
**Notes:**
- Hook automatically runs only on `view.json` files
- **Default behavior**: Both warnings and errors block commits
- Pre-commit checks only **modified files** (incremental linting)
- Config paths are resolved relative to your repository root
- Customize pylintrc via the `pylintrc` parameter in your `rule_config.json`
- **Recommended**: Use Option B (lightweight) to reduce initial download from ~64MB to ~1MB
**Warnings vs Errors:**
By default, both warnings and errors will block commits:
- **Warnings** (e.g., naming conventions): Style issues that should be fixed
- **Errors** (e.g., undefined variables, excessive context data): Critical issues that must be fixed
To allow commits with warnings (only block on errors), add `--ignore-warnings`:
```yaml
repos:
  - repo: https://github.com/bw-design-group/ignition-lint
    rev: v0.2.4
    hooks:
      - id: ignition-lint
        args: ['--config=rule_config.json', '--files', '--ignore-warnings']
```
This is useful for teams that want to gradually address warnings without blocking development.
**For full repository scans**, use the CLI directly instead of `pre-commit run --all-files`:
```bash
# Scan all view.json files in the repository
ignition-lint --files "services/**/view.json" --config rule_config.json
# With timing and results output
ignition-lint --files "services/**/view.json" \
  --config rule_config.json \
  --timing-output timing.txt \
  --results-output results.txt
```
> **Why not use `pre-commit run --all-files`?** Pre-commit passes all matched filenames as command-line arguments, which can exceed system ARG_MAX limits in large repositories (e.g., 725 files with long paths). The CLI tool uses internal glob matching to avoid this limitation.
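The internal glob matching mentioned above can be sketched in a few lines: the pattern expands inside the Python process, so the file list never appears on a shell command line at all. This is a sketch of the approach, not ignition-lint's actual code.

```python
from pathlib import Path

def collect_view_files(pattern, root="."):
    """Expand the glob inside the process rather than on the shell
    command line, so even thousands of matches never approach ARG_MAX."""
    return sorted(str(p) for p in Path(root).glob(pattern))
```

A call like `collect_view_files("services/**/view.json")` returns every match regardless of repository size, because no argument vector is ever built.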
### 3. Whitelist Configuration (Managing Technical Debt)
The whitelist feature allows you to exclude specific files from linting, which is essential for managing technical debt at scale. This is particularly useful when you have legacy code that can't be immediately fixed but shouldn't block development workflows.
#### Quick Start
```bash
# Generate whitelist from legacy files
ignition-lint --generate-whitelist "views/legacy/**/*.json" "views/deprecated/**/*.json"
# Use whitelist during linting
ignition-lint --config rule_config.json --whitelist .whitelist.txt --files "**/view.json"
```
#### Whitelist File Format
**Filename:** `.whitelist.txt` (recommended default)
**Format:** Plain text, one file path per line
```text
# Comments start with # and are ignored
# Document WHY files are whitelisted (JIRA tickets, dates, etc.)
# Legacy views - scheduled for refactor Q2 2026 (JIRA-1234)
views/legacy/OldDashboard/view.json
views/legacy/MainScreen/view.json
# Deprecated views - being replaced
views/deprecated/TempView/view.json
views/deprecated/OldWidget/view.json
# Known issues - technical debt tracked in backlog
views/components/ComponentWithKnownIssues/view.json
```
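Because the format is plain text with `#` comments, parsing it is trivial. The sketch below shows one way a consumer could load the file; it is illustrative, not the tool's actual parser.

```python
def load_whitelist(text):
    """Parse the whitelist format shown above: one path per line,
    blank lines and '#' comment lines ignored."""
    return {
        line.strip()
        for line in text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    }
```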
#### Generating Whitelists
```bash
# Generate from single pattern
ignition-lint --generate-whitelist "views/legacy/**/*.json"
# Generate from multiple patterns
ignition-lint --generate-whitelist \
  "views/legacy/**/*.json" \
  "views/deprecated/**/*.json"
# Custom output file
ignition-lint --generate-whitelist "views/legacy/**/*.json" \
  --whitelist-output custom-whitelist.txt
# Append to existing whitelist
ignition-lint --generate-whitelist "views/temp/**/*.json" --append
# Dry run (preview without writing)
ignition-lint --generate-whitelist "views/legacy/**/*.json" --dry-run
```
#### Using Whitelists
```bash
# Use whitelist (whitelisted files are skipped)
ignition-lint --config rule_config.json \
  --whitelist .whitelist.txt \
  --files "**/view.json"
# Disable whitelist (overrides --whitelist)
ignition-lint --config rule_config.json \
  --whitelist .whitelist.txt \
  --no-whitelist \
  --files "**/view.json"
# Verbose mode (show ignored files)
ignition-lint --config rule_config.json \
  --whitelist .whitelist.txt \
  --files "**/view.json" \
  --verbose
```
**Important:** By default, ignition-lint does NOT use a whitelist unless you explicitly specify `--whitelist <path>`.
#### Pre-commit Integration with Whitelist
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/bw-design-group/ignition-lint
    rev: v0.2.4
    hooks:
      - id: ignition-lint
        # Add whitelist argument to use project-specific whitelist
        args: ['--config=rule_config.json', '--whitelist=.whitelist.txt', '--files']
```
**Workflow:**
1. Generate whitelist: `ignition-lint --generate-whitelist "views/legacy/**/*.json"`
2. Review and edit `.whitelist.txt` to add comments explaining why files are whitelisted
3. Commit whitelist: `git add .whitelist.txt && git commit -m "Add whitelist for legacy views"`
4. Update `.pre-commit-config.yaml` to use whitelist (add `--whitelist=.whitelist.txt` to args)
5. Pre-commit now skips whitelisted files automatically
**For detailed documentation**, see [docs/whitelist-guide.md](docs/whitelist-guide.md).
### 4. GitHub Actions Workflow
Create `.github/workflows/ignition-lint.yml`:
```yaml
name: Ignition Lint
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install ignition-lint
        run: pip install ignition-lint
      - name: Run ignition-lint
        run: |
          # Lint all view.json files in the repository
          find . -name "view.json" -type f | while read -r file; do
            echo "Linting $file"
            ignition-lint "$file"
          done
```
### 5. Development Mode with Poetry
For contributors and package developers:
```bash
# Clone and set up development environment
git clone https://github.com/design-group/ignition-lint.git
cd ignition-lint
# Install with Poetry
poetry install
# Test the package locally
poetry run ignition-lint tests/cases/PreferredStyle/view.json
# Run the full test suite
cd tests
poetry run python test_runner.py --run-all
# Test GitHub Actions workflows locally
./test-actions.sh
# Format and lint code before committing
poetry run yapf -ir src/ tests/
poetry run pylint src/ignition_lint/
```
## Configuration System
### Rule Configuration
Rules are configured via JSON files (default: `rule_config.json`):
```json
{
  "NamePatternRule": {
    "enabled": true,
    "kwargs": {
      "convention": "PascalCase",
      "target_node_types": ["component"]
    }
  },
  "PollingIntervalRule": {
    "enabled": true,
    "kwargs": {
      "minimum_interval": 10000
    }
  }
}
```
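A consumer of this format only needs to keep the rules whose `enabled` flag is true and hand each one its `kwargs`. The sketch below shows that shape; it is illustrative and not ignition-lint's actual config loader.

```python
import json

def load_enabled_rules(config_text):
    """Return {rule_name: kwargs} for every rule marked enabled,
    mirroring how a linter might consume rule_config.json."""
    config = json.loads(config_text)
    return {
        name: entry.get("kwargs", {})
        for name, entry in config.items()
        if entry.get("enabled", False)
    }
```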
### Severity Levels
Severity levels are determined by rule developers based on what each rule checks. Users cannot configure severity levels.
- **Warnings**: Style and preference issues that don't prevent functionality
- **Errors**: Critical issues that can cause functional problems or break systems
#### Built-in Rule Severities
| Rule | Severity | Reason |
|------|----------|---------|
| `NamePatternRule` | Warning | Naming conventions are style preferences |
| `PollingIntervalRule` | Error | Performance issues can cause system problems |
| `PylintScriptRule` | Error | Syntax errors and undefined variables break functionality |
#### Output Examples
**Warnings (exit code 0):**
```
⚠️ Found 3 warnings in view.json:
📋 NamePatternRule (warning):
• component: Name doesn't follow PascalCase convention
✅ No errors found (warnings only)
```
**Errors (exit code 1):**
```
❌ Found 2 errors in view.json:
📋 PollingIntervalRule (error):
• binding: Polling interval 5000ms below minimum 10000ms
📈 Summary:
❌ Total issues: 2
```
### Developer Guidelines for Rule Severity
When creating custom rules, set the severity based on the impact:
```python
class MyCustomRule(LintingRule):
    # Use "warning" for style/preference issues
    severity = "warning"
    # Use "error" for functional/performance issues
    # severity = "error"
```
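A fuller custom-rule example might look like the sketch below. The real `LintingRule` interface is not shown in this document, so a minimal stand-in base class is stubbed here, and the `visit_component` visitor hook and `errors` list are assumptions rather than the plugin's actual API.

```python
class LintingRule:
    """Minimal stand-in for the framework's base class (assumed API)."""
    severity = "warning"

    def __init__(self):
        self.errors = []

class NoSpacesInNameRule(LintingRule):
    # Style-only concern, so "warning" is the appropriate severity
    severity = "warning"

    def visit_component(self, path, name):  # hypothetical visitor hook
        if " " in name:
            self.errors.append(
                f"{path}: component name '{name}' contains spaces")
```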
## Best Practices
1. **Rule Granularity**: Keep rules focused on a single concern
2. **Performance**: Use batch processing for operations like script analysis
3. **Error Messages**: Provide clear, actionable error messages with paths
4. **Configuration**: Make rules configurable for different project requirements
5. **Testing**: Test rules with various edge cases and malformed inputs
6. **Node Type Selection**: Only register for node types you actually need to process
## Future Enhancements
The framework is designed to be extended with:
- Additional node types (e.g., style classes, custom properties)
- More sophisticated analysis rules
- Integration with CI/CD pipelines
- Performance metrics and reporting
- Auto-fix capabilities for certain rule violations
## Contributing
When adding new features:
1. Follow the existing object model patterns
2. Implement the visitor pattern for new node types
3. Provide configuration options for new rules
4. Document rule behavior and configuration
5. Add appropriate error handling
### Development Workflow
```bash
# Fork and clone the repository
git clone https://github.com/yourusername/ignition-lint.git
cd ignition-lint
# Install development dependencies
poetry install --with dev
# Create a feature branch
git checkout -b feature/my-new-feature
# Make your changes and test
poetry run pytest
poetry run pylint src/ignition_lint/
# Commit and push
git commit -m "Add new feature"
git push origin feature/my-new-feature
```
| text/markdown | Alex Spyksma | null | null | null | MIT | ignition, perspective, linting, json, scada, quality-assurance, static-analysis | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Manufacturing",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"pylint"
] | [] | [] | [] | [
"Homepage, https://github.com/design-group/ignition-lint"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:07:52.856368 | ign_lint-0.4.1.tar.gz | 95,114 | c8/f0/a3a8db49656a03a7615c2c574af7c1b06dd50eba4154658ec9a944f0bdad/ign_lint-0.4.1.tar.gz | source | sdist | null | false | 1437e9c0520c168ec0b81dc96b753e9c | 22ad8fc2a1001dde9079d0acd54484df97cec3936e66ade307b32cb990403093 | c8f0a3a8db49656a03a7615c2c574af7c1b06dd50eba4154658ec9a944f0bdad | null | [
"LICENSE"
] | 203 |
2.4 | remote-terminal-mcp | 1.3.1 | Full SSH terminal for Linux servers with AI/user control and dual-stream visibility | # Remote Terminal
**AI-Powered Remote Linux Server Management via MCP**
Remote Terminal lets Claude (the AI assistant) execute commands on your remote Linux servers through a natural chat interface. Watch full output in your browser in real-time while Claude receives smart-filtered summaries optimized for token efficiency.
---
## 🎯 What Is This?
Imagine telling Claude:
```
"Install nginx on my server and configure it with SSL"
"Run complete system diagnostics and tell me if anything looks wrong"
"Find all log errors from the last hour and summarize them"
"Save this batch script and run it again next week"
```
And Claude does it - executing commands, analyzing output, saving useful scripts, and taking action on your behalf.
**That's Remote Terminal.**
---
## ✨ Key Features
### Core Capabilities
- **🖥️ Remote Command Execution** - Run any bash command on Linux servers
- **🌐 Multi-Server Management** - Switch between multiple servers easily
- **📁 File Transfer (SFTP)** - Upload/download files and directories with compression
- **📜 Batch Script Execution** - Run multi-command scripts 10-50x faster
- **📚 Batch Script Library** - Save, browse, and reuse batch scripts (NEW in 3.1)
- **💬 Conversation Tracking** - Group commands by goal with rollback support
- **🎯 Recipe System** - Save successful workflows for reuse
- **🗄️ Database Integration** - Full audit trail with SQLite
- **🌍 Interactive Web Terminal** - Full-featured terminal in browser (type, paste, scroll history)
- **🔄 Multi-Terminal Sync** - Open multiple terminals, all perfectly synchronized
- **✨ Bash Syntax Highlighting** - VS Code-style colors in standalone UI (NEW in 3.1)
### The Interactive Web Terminal
Remote Terminal provides a **fully interactive terminal window** in your browser at `http://localhost:8080` - it looks and feels just like WSL, PuTTY, or any standard terminal:
**You can:**
- Type commands directly (just like any terminal)
- Copy/paste text (Ctrl+C, Ctrl+V)
- Scroll through command history
- Use arrow keys for history navigation
- View real-time command output with colors preserved
**Claude can:**
- Execute commands that appear in your terminal
- See command results instantly
- Continue working while you watch
**The key advantage:** You maintain complete visibility and control. Every command Claude runs appears in your terminal window in real-time. You're never in the dark about what's happening on your server - it's like sitting side-by-side with an assistant who types commands for you while you watch the screen.
**Multi-Terminal Support:** Open multiple browser windows at `http://localhost:8080` - they all stay perfectly synchronized via WebSocket broadcast. Type in one terminal, see it in all terminals instantly. Perfect for multi-monitor setups or sharing your view with others.
⚠️ **Best Practice:** Close unused terminal tabs when done. While the system handles multiple connections efficiently, keeping many old tabs open can consume unnecessary resources and may cause connection issues.
#### 🎬 See It In Action
<video width="800" controls>
<source src="https://raw.githubusercontent.com/TiM00R/remote-terminal/master/docs/demo.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
https://github.com/user-attachments/assets/98a6fa41-ec4f-410b-8d4a-a2422d8ac7c9
*Watch the interactive web terminal in action - see Claude execute commands while you maintain full visibility and control*
### The Dual-Stream Architecture
Behind the scenes, Remote Terminal uses a smart two-stream approach:
```
  SSH Output from Remote Server
               ↓
          [Raw Output]
               ↓
          ┌────┴────┐
          │         │
          ↓         ↓
       [FULL]   [FILTERED]
          │         │
          ↓         ↓
  Web Terminal    Claude
  (You see all)   (Smart summary)
```
**Result:**
- **You:** Full visibility and control in interactive terminal
- **Claude:** Efficient work with 95% token savings
- **Both:** Shared SSH session, synchronized state
- **Best of both worlds!**
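The fan-out can be pictured as one function over the raw output: the web terminal gets everything, while the AI gets a truncated summary. The real filter is command-aware and much smarter; the head-plus-tail truncation below is only a sketch of the idea.

```python
def split_streams(raw_lines, keep_head=5, keep_tail=5):
    """Fan one raw output stream into two: full text for the web
    terminal, truncated head+tail summary for the AI."""
    full = "\n".join(raw_lines)
    if len(raw_lines) <= keep_head + keep_tail:
        return full, full  # short output passes through unchanged
    omitted = len(raw_lines) - keep_head - keep_tail
    filtered = "\n".join(
        raw_lines[:keep_head]
        + [f"[... {omitted} lines omitted ...]"]
        + raw_lines[-keep_tail:]
    )
    return full, filtered
```

On a 500-line `apt install` transcript, this shape is what turns thousands of tokens into a few dozen while you still see every line in the browser.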
## 🚀 Quick Start
### Installation
**Step 1: Create Installation Directory**
```powershell
# Choose a location for your installation (example: C:\RemoteTerminal)
mkdir C:\RemoteTerminal
cd C:\RemoteTerminal
```
**Step 2: Install Package**
```powershell
# Create dedicated virtual environment
python -m venv remote-terminal-env
remote-terminal-env\Scripts\activate
pip install remote-terminal-mcp
```
**Step 3: Configure Claude Desktop**
Edit `%APPDATA%\Claude\claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "remote-terminal": {
      "command": "C:\\RemoteTerminal\\remote-terminal-env\\Scripts\\remote-terminal-mcp.exe",
      "env": {
        "REMOTE_TERMINAL_ROOT": "C:\\RemoteTerminal"
      }
    }
  }
}
```
**Important:** Replace `C:\RemoteTerminal` with your actual installation path from Step 1.
**Step 4: First Run - Auto Setup**
Restart Claude Desktop. On first use, configuration files will automatically copy to `C:\RemoteTerminal`:
- `config.yaml` - Default settings (auto-created from package defaults)
- `hosts.yaml` - Server list (auto-created from template)
**Step 5: Configure Your Servers**
You have two options to configure your servers:
**Option A: Manual Configuration (Recommended for first server)**
Edit `C:\RemoteTerminal\hosts.yaml`:
```yaml
servers:
  - name: My Server
    host: 192.168.1.100
    user: username
    password: your_password
    port: 22
    description: My development server
    tags:
      - development

# Optional: Set default server for auto-connect
# Use list_servers to see which server is marked as [DEFAULT]
default_server: My Server
```
**Option B: AI-Assisted Configuration**
Ask Claude to help you add a new server:
```
Claude, add a new server to my configuration:
- Name: Production Server
- Host: 192.168.1.100
- User: admin
- Password: mypassword
- Port: 22
```
Claude will use the `add_server` tool to update your `hosts.yaml` file automatically.
Restart Claude Desktop and test:
```
List my configured servers
```
**Step 6: (Optional) Run Standalone Web Interface**
```powershell
cd C:\RemoteTerminal
remote-terminal-env\Scripts\activate
remote-terminal-standalone
```
Access at:
- Control Panel: http://localhost:8081
- Terminal: http://localhost:8082
---
## 📖 Documentation
Complete guides for every use case:
- **[Quick Start](https://github.com/TiM00R/remote-terminal/blob/master/docs/QUICK_START.md)** — Get running in 5 minutes
- **[Installation](https://github.com/TiM00R/remote-terminal/blob/master/docs/INSTALLATION.md)** — Detailed setup instructions
- **[User Guide](https://github.com/TiM00R/remote-terminal/blob/master/docs/USER_GUIDE.md)** — Complete feature walkthrough
- **[Feature Reference](https://github.com/TiM00R/remote-terminal/blob/master/docs/FEATURE_REFERENCE.md)** — All MCP tools reference
- **[Troubleshooting](https://github.com/TiM00R/remote-terminal/blob/master/docs/TROUBLESHOOTING.md)** — Common problems and solutions
- **[WebSocket Broadcast](https://github.com/TiM00R/remote-terminal/blob/master/docs/WEBSOCKET_BROADCAST.md)** — Multi-terminal synchronization details
- **[Release Notes v3.1](https://github.com/TiM00R/remote-terminal/blob/master/docs/RELEASE_NOTES_v3.1.md)** — Release notes for version 3.1
---
## 💡 Usage Examples
### System Administration
```
"Check disk space and memory usage"
"What processes are using the most CPU?"
"Is nginx running? If not, start it"
"Show me the last 50 lines of the system log"
```
### Software Management
```
"Install htop and verify it works"
"Update all packages"
"Check if docker is installed and what version"
```
### Troubleshooting
```
"Run complete network diagnostics"
"Find all errors in the last hour of logs"
"Check why nginx won't start"
"Analyze disk usage by directory"
```
### File Operations
```
"Upload my local config.json to /etc/app/"
"Download all log files from /var/log/myapp/"
"List files in /var/log modified today"
"Find all files larger than 100MB"
```
### Automation & Script Management
```
"Create a recipe from our successful nginx installation"
"Execute the network diagnostics recipe"
"Start a conversation to configure WiFi"
"List my saved batch scripts"
"Execute script 5"
"Load script 3 for editing"
```
---
## 🎨 Example Session
**You:**
> Install nginx on my production server
**Claude:**
> I'll install nginx. This will create a package installation...
*Interactive terminal opens automatically at http://localhost:8080*
**Web Terminal shows (real-time):**
```
$ sudo apt install nginx
Reading package lists... Done
Building dependency tree... Done
[... 500+ lines of installation output ...]
Setting up nginx (1.18.0-0ubuntu1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service
Processing triggers for systemd (245.4-4ubuntu3.23) ...
```
**Claude receives (filtered summary):**
```
nginx installed successfully in 45s
12 packages installed
Service: nginx.service is active
```
**Claude responds:**
> nginx has been successfully installed and is now running. The service is active. Would you like me to configure it or show you the default page?
**Token savings: 96% (15,000 tokens → 600 tokens)**
---
## 🏗️ Architecture
### Project Structure
```
remote_terminal/
├── config/ # Default configuration templates
│ ├── config.yaml # Default settings (packaged)
│ └── hosts.yaml.example # Server template (packaged)
├── data/ # SQLite database (user directory)
│ └── remote_terminal.db # Command history, conversations, recipes, scripts
├── docs/ # Documentation
│ ├── DATABASE_SCHEMA.md
│ ├── FEATURE_REFERENCE.md
│ ├── INDEX.md
│ ├── INSTALLATION.md
│ ├── QUICK_START.md
│ ├── RELEASE_NOTES_v3.1.md
│ ├── TROUBLESHOOTING.md
│ ├── USER_GUIDE.md
│ └── WEBSOCKET_BROADCAST.md
├── recipes/ # Example automation recipes
├── src/ # Source code (modular architecture)
│ ├── batch/ # Batch execution system
│ │ ├── batch_executor.py
│ │ ├── batch_helpers.py
│ │ └── batch_parser.py
│ ├── config/ # Configuration management
│ │ ├── config.py
│ │ ├── config_dataclasses.py
│ │ ├── config_init.py
│ │ └── config_loader.py
│ ├── database/ # Database operations (SQLite)
│ │ ├── database_manager.py # Core database manager
│ │ ├── database_batch.py # Batch script storage
│ │ ├── database_batch_execution.py
│ │ ├── database_batch_queries.py
│ │ ├── database_batch_scripts.py
│ │ ├── database_commands.py # Command history
│ │ ├── database_conversations.py
│ │ ├── database_recipes.py # Recipe storage
│ │ └── database_servers.py # Machine identity tracking
│ ├── output/ # Output filtering & formatting
│ │ ├── output_buffer.py
│ │ ├── output_buffer_base.py
│ │ ├── output_buffer_filtered.py
│ │ ├── output_filter.py # Smart filtering (95% token savings)
│ │ ├── output_filter_commands.py
│ │ ├── output_filter_decision.py
│ │ └── output_formatter.py
│ ├── prompt/ # Command completion detection
│ │ ├── prompt_detector.py
│ │ ├── prompt_detector_checks.py
│ │ ├── prompt_detector_pager.py
│ │ └── prompt_detector_patterns.py
│ ├── ssh/ # SSH/SFTP operations
│ │ ├── ssh_manager.py # High-level SSH manager
│ │ ├── ssh_connection.py # Connection lifecycle
│ │ ├── ssh_commands.py # Command execution
│ │ └── ssh_io.py # Input/output streaming
│ ├── state/ # Shared state management
│ │ ├── shared_state_conversation.py
│ │ ├── shared_state_monitor.py
│ │ └── shared_state_transfer.py
│ ├── static/ # Web terminal static assets
│ │ ├── fragments/ # HTML fragments
│ │ ├── vendor/ # xterm.js library
│ │ ├── terminal.css
│ │ ├── terminal.js
│ │ └── transfer-panel.js
│ ├── tools/ # MCP tool modules (modular)
│ │ ├── decorators.py # Tool decorators
│ │ ├── tools_hosts.py # Server management (main)
│ │ ├── tools_hosts_crud.py # Add/remove/update servers
│ │ ├── tools_hosts_select.py # Server selection & connection
│ │ ├── tools_commands.py # Command execution (main)
│ │ ├── tools_commands_database.py
│ │ ├── tools_commands_execution.py
│ │ ├── tools_commands_status.py
│ │ ├── tools_commands_system.py
│ │ ├── tools_conversations.py # Conversation tracking (main)
│ │ ├── tools_conversations_lifecycle.py
│ │ ├── tools_conversations_query.py
│ │ ├── tools_batch.py # Batch script execution (main)
│ │ ├── tools_batch_execution.py
│ │ ├── tools_batch_helpers.py
│ │ ├── tools_batch_management.py
│ │ ├── tools_recipes.py # Recipe automation (main)
│ │ ├── tools_recipes_create.py
│ │ ├── tools_recipes_crud.py
│ │ ├── tools_recipes_execution.py
│ │ ├── tools_recipes_helpers.py
│ │ ├── tools_recipes_modify.py
│ │ ├── tools_recipes_query.py
│ │ ├── tools_sftp.py # File transfer (main)
│ │ ├── tools_sftp_single.py # Single file transfer
│ │ ├── tools_sftp_directory.py # Directory transfer
│ │ ├── tools_sftp_directory_download.py
│ │ ├── tools_sftp_directory_upload.py
│ │ ├── tools_sftp_exceptions.py
│ │ ├── tools_sftp_utils.py
│ │ ├── sftp_compression.py # Compression logic
│ │ ├── sftp_compression_download.py
│ │ ├── sftp_compression_tar.py
│ │ ├── sftp_compression_upload.py
│ │ ├── sftp_decisions.py # Auto/manual compression decisions
│ │ ├── sftp_progress.py # Progress tracking
│ │ ├── sftp_transfer_compressed.py
│ │ ├── sftp_transfer_download.py
│ │ ├── sftp_transfer_scan.py
│ │ ├── sftp_transfer_standard.py
│ │ ├── sftp_transfer_upload.py
│ │ └── tools_info.py # System information
│ ├── utils/ # Utility functions
│ │ ├── utils.py
│ │ ├── utils_format.py
│ │ ├── utils_machine_id.py # Hardware/OS fingerprinting
│ │ ├── utils_output.py
│ │ └── utils_text.py
│ ├── web/ # Web terminal (WebSocket-enabled)
│ │ ├── web_terminal.py # Main web server
│ │ ├── web_terminal_ui.py # UI components
│ │ └── web_terminal_websocket.py # Multi-terminal sync
│ ├── mcp_server.py # MCP server entry point
│ ├── shared_state.py # Global shared state
│ ├── command_state.py # Command registry & tracking
│ ├── hosts_manager.py # Multi-server configuration
│ └── error_check_helper.py # Error detection
└── standalone/ # Standalone web UI (no Claude)
├── static/
│ ├── css/ # Standalone UI styles
│ │ ├── control-forms.css
│ │ ├── control-layout.css
│ │ ├── control-response.css
│ │ └── control-styles.css # Bash syntax highlighting
│ ├── js/ # Standalone UI scripts
│ │ ├── control-forms.js
│ │ ├── control-forms-fields.js
│ │ ├── control-forms-generation.js
│ │ ├── control-forms-utils.js
│ │ ├── control-main.js
│ │ └── control-response.js
│ └── tool-schemas/ # MCP tool schemas
│ ├── batch.json
│ ├── commands.json
│ ├── file-transfer.json
│ ├── servers.json
│ └── workflows.json
├── mcp_control.html # Control panel HTML
├── standalone_mcp.py # Standalone server entry point
├── standalone_mcp_endpoints.py # API endpoints
└── standalone_mcp_startup.py # Initialization & connection
```
### Technology Stack
- **Python 3.9+** - Core language
- **MCP Protocol** - Claude integration
- **Paramiko** - SSH/SFTP library
- **NiceGUI + WebSockets** - Web terminal with multi-terminal sync
- **SQLite** - Database for history/recipes/scripts
- **FastAPI** - Web framework
---
## 🔧 Configuration
### Configuration Files Location
Configuration files are automatically copied to your working directory on first run:
**For PyPI users:**
- Set `REMOTE_TERMINAL_ROOT` in Claude Desktop config
- Files auto-copy to that directory on first run
- Location: `%REMOTE_TERMINAL_ROOT%\config.yaml` and `hosts.yaml`
- User data preserved when reinstalling/upgrading
**Default template files packaged with installation:**
- `config/config.yaml` - Default settings template
- `config/hosts.yaml.example` - Server configuration template
### hosts.yaml
Define your servers:
```yaml
servers:
  - name: production
    host: 192.168.1.100
    user: admin
    password: secure_pass
    port: 22
    description: Production server
    tags:
      - production
      - critical
  - name: development
    host: 192.168.1.101
    user: dev
    password: dev_pass
    tags:
      - development

default_server: production
```
---
## 🛡️ Security Considerations
### Current Status
- Passwords stored in plain text in `hosts.yaml`
- Web terminal bound to localhost only (not network-exposed)
- Full command audit trail in database
- SSH uses standard security (password authentication)
- User config files stored outside package (preserved on reinstall)
---
## 📊 Performance
### Token Efficiency
Average token savings on verbose commands:
| Command Type | Full Output | Filtered | Savings |
|--------------|-------------|----------|---------|
| apt install | ~15,000 | ~600 | **96%** |
| ls -la /var | ~8,000 | ~400 | **95%** |
| Log search | ~12,000 | ~500 | **96%** |
| find / | ~30,000 | ~800 | **97%** |
**Average: 95-98% token reduction on verbose commands**
### Speed Improvements
Batch execution vs sequential:
- **10 commands sequential:** 5 minutes (10 round-trips)
- **10 commands batch:** 30 seconds (1 round-trip)
- **Speed improvement: 10x faster!**
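The single round-trip comes from packaging the whole command list into one payload before it goes over SSH. The sketch below shows the idea; the marker format is an assumption for illustration, not the package's actual batch protocol.

```python
def build_batch(commands):
    """Join a command list into one payload executed in a single SSH
    round-trip; the echoed markers let the combined output be split
    back into per-command results afterwards."""
    parts = []
    for i, cmd in enumerate(commands):
        parts.append(f'echo "===CMD {i}==="')
        parts.append(cmd)
    return "\n".join(parts)
```

Ten commands sent this way cost one network round-trip instead of ten, which is where the 10x figure above comes from.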
---
## 🔍 Advanced Features
### Batch Script Library
Save batch scripts for reuse:
```
1. Run diagnostics → Script auto-saved with deduplication
2. Browse library → "List my batch scripts"
3. Execute saved script → "Execute script 5"
4. Edit existing → "Load script 3 for editing"
5. Track usage → times_used, last_used_at
```
Features:
- **Automatic deduplication** via SHA256 hash
- **Usage statistics** tracking
- **Edit mode** for modifications
- **Search and sort** capabilities
- **Two-step deletion** with confirmation
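The SHA256 deduplication amounts to hashing each script's content and using the digest as the library key. A minimal sketch, assuming light normalization (trailing whitespace and blank edges ignored) that the package may or may not actually perform:

```python
import hashlib

def script_fingerprint(script_text):
    """Hash a batch script after light normalization so trivially
    different copies dedupe to the same library entry."""
    normalized = "\n".join(
        line.rstrip() for line in script_text.strip().splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Saving a script then becomes an upsert keyed on the fingerprint: a rerun of an identical script bumps `times_used` instead of creating a duplicate row.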
### Conversation Tracking
Group related commands by goal:
```
Start conversation: "Configure nginx with SSL"
→ [Execute multiple commands]
→ End conversation: success
→ Create recipe from conversation
```
Benefits:
- Organized command history
- Rollback capability
- Context for AI
- Recipe generation
### Recipe System
Save successful workflows:
```text
# Recipe: wifi_diagnostics
1. lspci | grep -i network
2. iwconfig
3. ip link show
4. dmesg | grep -i wifi
5. systemctl status NetworkManager
```
Reuse on any compatible server:
```
Execute wifi_diagnostics recipe on my new server
```
### Machine Identity
Each server tracked by unique machine_id (hardware + OS fingerprint):
- Commands tracked per physical machine
- Recipes execute on compatible systems
- Audit trail maintains integrity
- Handles server IP changes
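A fingerprint like this stays stable across IP changes because it hashes local identifiers rather than the network address. The package's actual fingerprint inputs are not documented here, and in practice the values would be gathered on the remote host over SSH; treat the fields below as placeholders.

```python
import hashlib
import platform
import uuid

def machine_id():
    """Illustrative hardware+OS fingerprint (placeholder inputs)."""
    parts = [
        platform.system(),               # OS family
        platform.machine(),              # CPU architecture
        format(uuid.getnode(), "012x"),  # primary MAC address
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]
```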
---
## 🐛 Known Issues & Limitations
### Current Limitations
1. **Designed for Windows local machine**
- Currently optimized for Windows 10/11
- Linux/Mac support possible with modifications
2. **SSH Key Support not implemented**
- Password authentication only
- SSH keys work with manual SSH but not integrated with MCP tools
3. **Works with only one remote server at a time**
- Can configure multiple servers
- Can only actively work with one server per session
- Switch between servers as needed
---
## 🤝 Contributing
This is Tim's personal project. If you'd like to contribute:
1. Test thoroughly on your setup
2. Document any issues found
3. Suggest improvements
4. Share recipes and scripts you create
---
## 📜 Version History
### Version 1.3.1 (Current - February 20, 2026)
**Prompt Detection Logging Overhaul:**
- ✅ Production mode: 2 log lines per command (Start + Detected) instead of hundreds
- ✅ Debug mode: full verbose logging via `prompt_detection.debug_logging: true` in config.yaml
- ✅ Detection summary includes: polls, lines_checked, last_non_match, matched string, total_lines
- ✅ Pager/sudo/verify events still logged in production (rare, meaningful)
**Startup Config Dump:**
- ✅ Full config.yaml logged at startup (passwords redacted)
- ✅ Hosts summary logged (server count + default server)
- ✅ Simplifies remote support and troubleshooting for distributed users
**Auto-Default Server:**
- ✅ Default server automatically updated on every successful connection
- ✅ Eliminates "connected to wrong server after restart" problem
- ✅ Persisted to hosts.yaml immediately
**Buffer Overflow Fix:**
- ✅ Commands no longer return [No output] after buffer reaches max_lines
- ✅ command_start_line now tracks absolute position with offset correction
- ✅ Sliding window deque buffer works correctly at any fill level
### Version 1.3.0 (January 2026)
**Virtual Environment Prompt Support:**
- ✅ Added venv prefix support to all prompt patterns `(\(.+\)\s+)?`
- ✅ Supports Python venv, Conda, Poetry, Pipenv and similar tools
- ✅ Fixed GENERIC_PROMPT patterns in server selection and machine ID detection
- ✅ Fixed pattern defaults not applied when missing from config.yaml
- ✅ Fixed config file copy path for pip installations
- ✅ Fixed hosts.yaml.example template (removed invalid default field)
- ✅ All fixes tested and verified
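The `(\(.+\)\s+)?` fragment quoted above can be exercised directly: it matches an optional parenthesized prefix before the prompt, which is how venv, Conda, Poetry, and Pipenv annotate the shell. The full user@host pattern below is an illustrative assumption, not the plugin's exact pattern:

```python
import re

# Optional "(envname) " prefix (as quoted in the changelog) in front of an
# illustrative user@host shell prompt, anchored at end of line.
PROMPT = re.compile(r"(\(.+\)\s+)?\w+@[\w.-]+:[^$#]*[$#]\s*$")

for line in [
    "tim@web01:~$ ",              # plain bash prompt
    "(venv) tim@web01:~/app$ ",   # Python venv
    "(base) tim@web01:~$ ",       # Conda
]:
    assert PROMPT.search(line)

assert not PROMPT.search("just some command output")
```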
### Version 1.2.1 (December 2024)
**Standalone Import Fix:**
- ✅ Fixed ModuleNotFoundError in standalone modules
- ✅ Added try/except fallback imports for both source and installed package usage
**Code Modularization:**
- ✅ Reorganized source into modular directory structure (batch, config, database, output, prompt, ssh, state, utils, web)
- ✅ Split large tool modules into smaller focused files
- ✅ All 37 MCP tools tested and working after refactor
### Version 1.2.0 (December 2024)
**PyPI Package Distribution:**
- ✅ Full PyPI package with modern pyproject.toml configuration
- ✅ Automatic config initialization via REMOTE_TERMINAL_ROOT environment variable
- ✅ Config files auto-copy to user directory on first run (survives upgrades)
- ✅ Enhanced list_servers tool with [CURRENT] and [DEFAULT] markers
- ✅ Fixed standalone mode crash when default server unreachable
**Standalone Web UI - Help System:**
- ✅ Help button (❓) added to all MCP tools with comprehensive documentation
- ✅ Help modal with full usage examples and parameter descriptions
- ✅ Updated all tool schemas: workflows, commands, batch, servers, file-transfer
- ✅ Enhanced CSS with consistent button colors (green Execute, blue Help)
**Recipe Management (3 new tools):**
- ✅ `delete_recipe` - Permanent deletion with two-step confirmation
- ✅ `create_recipe_from_commands` - Manual recipe creation without execution
- ✅ `update_recipe` - In-place recipe modification (preserves ID/stats)
- ✅ Recipe dropdown selectors replacing manual ID entry in standalone UI
**WebSocket Multi-Terminal Sync:**
- ✅ Replaced HTTP polling with WebSocket broadcast architecture
- ✅ Multiple browser terminals stay perfectly synchronized
- ✅ Auto-reconnect on connection loss
### Version 1.1.0 (December 2024)
**Batch Script Management (5 new tools):**
- ✅ `list_batch_scripts`, `get_batch_script`, `save_batch_script`, `execute_script_content_by_id`, `delete_batch_script`
- ✅ Automatic deduplication via SHA256 content hash
- ✅ Usage statistics tracking (times_used, last_used_at)
- ✅ Edit mode for script modifications
- ✅ Two-step deletion with confirmation
- ✅ Bash syntax highlighting in standalone UI (VS Code colors)
- ✅ Tool renaming: `create_diagnostic_script` → `build_script_from_commands`, `execute_batch_script` → `execute_script_content`
### Version 1.0.0 (Initial Release - December 2024)
**Core Features:**
- ✅ Interactive web terminal (type, paste, scroll history)
- ✅ Multi-server management with machine identity tracking
- ✅ Smart output filtering (95-98% token reduction)
- ✅ Batch script execution (10-50x faster than sequential)
- ✅ Conversation tracking with rollback support
- ✅ Recipe system for workflow automation
- ✅ SFTP file transfer with compression
- ✅ SQLite database for complete audit trail
- ✅ Full MCP integration with Claude Desktop
- ✅ Dual-stream architecture (full output to browser, filtered to Claude)
---
## 📞 Support
For issues or questions:
1. **Check Documentation**
2. **Review Logs**
- Claude Desktop logs (Help → Show Logs)
3. **Test Components**
- Use standalone mode (start_standalone.ps1)
- Test SSH manually
- Verify database (view_db.py)
---
## 📄 License
This project is for personal use by Tim. Not currently open source.
---
## 🙏 Acknowledgments
- **Anthropic** - Claude and MCP protocol
- **Paramiko** - SSH library
- **FastAPI** - Web framework
- **NiceGUI** - UI components with WebSocket support
---
**Ready to let Claude manage your servers? Check out [QUICK_START.md](https://github.com/TiM00R/remote-terminal/blob/master/docs/QUICK_START.md) to get started in 5 minutes!**
---
**Version:** 1.3.1
**Last Updated:** February 20, 2026
**Maintainer:** Tim
| text/markdown | null | Tim <tim00r@github.com> | null | null | null | mcp, model-context-protocol, ssh, remote-terminal, linux, server-management, automation, devops, claude, ai | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Systems Administration",
"Topic :: Terminals"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"nicegui>=1.4.0",
"paramiko>=3.0.0",
"pyyaml>=6.0",
"python-dotenv>=1.0.0",
"aiofiles>=23.0.0",
"python-json-logger>=2.0.0",
"mcp>=1.0.0",
"starlette>=0.27.0",
"uvicorn>=0.23.0",
"pytest>=7.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/TiM00R/remote-terminal",
"Repository, https://github.com/TiM00R/remote-terminal",
"Documentation, https://github.com/TiM00R/remote-terminal/tree/master/docs"
] | twine/6.2.0 CPython/3.11.7 | 2026-02-20T21:07:43.404732 | remote_terminal_mcp-1.3.1.tar.gz | 253,213 | 6a/ce/89dbf18d34df6e4b8e256ab1fd5ca00bbfc23504f4ea5b5675630cb2d12f/remote_terminal_mcp-1.3.1.tar.gz | source | sdist | null | false | 01e78ffa5162cd3966d261eb60ba3d0a | 7f4867548b652bd911150dce330c5e116d982da78941ae672debbb6e91a87b8c | 6ace89dbf18d34df6e4b8e256ab1fd5ca00bbfc23504f4ea5b5675630cb2d12f | MIT | [
"LICENSE"
] | 195 |
2.4 | luminesce-sdk | 2.4.26 | FINBOURNE Luminesce Web API | <a id="documentation-for-api-endpoints"></a>
## Documentation for API Endpoints
All URIs are relative to *https://fbn-prd.lusid.com/honeycomb*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*ApplicationMetadataApi* | [**get_services_as_access_controlled_resources**](docs/ApplicationMetadataApi.md#get_services_as_access_controlled_resources) | **GET** /api/metadata/access/resources | GetServicesAsAccessControlledResources: Get resources available for access control
*BinaryDownloadingApi* | [**download_binary**](docs/BinaryDownloadingApi.md#download_binary) | **GET** /api/Download/download | DownloadBinary: Download a Luminesce Binary you may run on-site
*BinaryDownloadingApi* | [**get_binary_versions**](docs/BinaryDownloadingApi.md#get_binary_versions) | **GET** /api/Download/versions | GetBinaryVersions: List available versions of binaries
*CertificateManagementApi* | [**download_certificate**](docs/CertificateManagementApi.md#download_certificate) | **GET** /api/Certificate/certificate | DownloadCertificate: Download domain or your personal certificates
*CertificateManagementApi* | [**list_certificates**](docs/CertificateManagementApi.md#list_certificates) | **GET** /api/Certificate/certificates | ListCertificates: List previously minted certificates
*CertificateManagementApi* | [**manage_certificate**](docs/CertificateManagementApi.md#manage_certificate) | **PUT** /api/Certificate/manage | ManageCertificate: Create / Renew / Revoke a certificate
*CurrentTableFieldCatalogApi* | [**get_catalog**](docs/CurrentTableFieldCatalogApi.md#get_catalog) | **GET** /api/Catalog | GetCatalog: Get a Flattened Table/Field Catalog
*CurrentTableFieldCatalogApi* | [**get_fields**](docs/CurrentTableFieldCatalogApi.md#get_fields) | **GET** /api/Catalog/fields | GetFields: List field and parameters for providers
*CurrentTableFieldCatalogApi* | [**get_providers**](docs/CurrentTableFieldCatalogApi.md#get_providers) | **GET** /api/Catalog/providers | GetProviders: List available providers
*HealthCheckingEndpointApi* | [**fake_node_reclaim**](docs/HealthCheckingEndpointApi.md#fake_node_reclaim) | **GET** /fakeNodeReclaim | [INTERNAL] FakeNodeReclaim: Helps testing of AWS node reclaim behaviour
*HistoricallyExecutedQueriesApi* | [**cancel_history**](docs/HistoricallyExecutedQueriesApi.md#cancel_history) | **DELETE** /api/History/{executionId} | CancelHistory: Cancel / Clear data from a history search
*HistoricallyExecutedQueriesApi* | [**fetch_history_result_histogram**](docs/HistoricallyExecutedQueriesApi.md#fetch_history_result_histogram) | **GET** /api/History/{executionId}/histogram | FetchHistoryResultHistogram: Make a histogram of results of a history search
*HistoricallyExecutedQueriesApi* | [**fetch_history_result_json**](docs/HistoricallyExecutedQueriesApi.md#fetch_history_result_json) | **GET** /api/History/{executionId}/json | FetchHistoryResultJson: Fetch JSON results from a query history search
*HistoricallyExecutedQueriesApi* | [**get_history**](docs/HistoricallyExecutedQueriesApi.md#get_history) | **GET** /api/History | GetHistory: Start a background history search
*HistoricallyExecutedQueriesApi* | [**get_progress_of_history**](docs/HistoricallyExecutedQueriesApi.md#get_progress_of_history) | **GET** /api/History/{executionId} | GetProgressOfHistory: View progress of a history search
*MultiQueryExecutionApi* | [**cancel_multi_query**](docs/MultiQueryExecutionApi.md#cancel_multi_query) | **DELETE** /api/MultiQueryBackground/{executionId} | CancelMultiQuery: Cancel / Clear a previously started query-set
*MultiQueryExecutionApi* | [**get_progress_of_multi_query**](docs/MultiQueryExecutionApi.md#get_progress_of_multi_query) | **GET** /api/MultiQueryBackground/{executionId} | GetProgressOfMultiQuery: View progress of the entire query-set load
*MultiQueryExecutionApi* | [**start_queries**](docs/MultiQueryExecutionApi.md#start_queries) | **PUT** /api/MultiQueryBackground | StartQueries: Run a given set of Sql queries in the background
*SqlBackgroundExecutionApi* | [**cancel_query**](docs/SqlBackgroundExecutionApi.md#cancel_query) | **DELETE** /api/SqlBackground/{executionId} | CancelQuery: Cancel / Clear data from a previously run query
*SqlBackgroundExecutionApi* | [**fetch_query_result_csv**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_csv) | **GET** /api/SqlBackground/{executionId}/csv | FetchQueryResultCsv: Fetch the result of a query as CSV
*SqlBackgroundExecutionApi* | [**fetch_query_result_excel**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_excel) | **GET** /api/SqlBackground/{executionId}/excel | FetchQueryResultExcel: Fetch the result of a query as an Excel file
*SqlBackgroundExecutionApi* | [**fetch_query_result_histogram**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_histogram) | **GET** /api/SqlBackground/{executionId}/histogram | FetchQueryResultHistogram: Construct a histogram of the result of a query
*SqlBackgroundExecutionApi* | [**fetch_query_result_json**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_json) | **GET** /api/SqlBackground/{executionId}/json | FetchQueryResultJson: Fetch the result of a query as a JSON string
*SqlBackgroundExecutionApi* | [**fetch_query_result_json_proper**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_json_proper) | **GET** /api/SqlBackground/{executionId}/jsonProper | FetchQueryResultJsonProper: Fetch the result of a query as JSON
*SqlBackgroundExecutionApi* | [**fetch_query_result_json_proper_with_lineage**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_json_proper_with_lineage) | **GET** /api/SqlBackground/{executionId}/jsonProperWithLineage | FetchQueryResultJsonProperWithLineage: Fetch the result of a query as JSON, but including a Lineage Node (if available)
*SqlBackgroundExecutionApi* | [**fetch_query_result_parquet**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_parquet) | **GET** /api/SqlBackground/{executionId}/parquet | FetchQueryResultParquet: Fetch the result of a query as Parquet
*SqlBackgroundExecutionApi* | [**fetch_query_result_pipe**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_pipe) | **GET** /api/SqlBackground/{executionId}/pipe | FetchQueryResultPipe: Fetch the result of a query as pipe-delimited
*SqlBackgroundExecutionApi* | [**fetch_query_result_sqlite**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_sqlite) | **GET** /api/SqlBackground/{executionId}/sqlite | FetchQueryResultSqlite: Fetch the result of a query as SqLite
*SqlBackgroundExecutionApi* | [**fetch_query_result_xml**](docs/SqlBackgroundExecutionApi.md#fetch_query_result_xml) | **GET** /api/SqlBackground/{executionId}/xml | FetchQueryResultXml: Fetch the result of a query as XML
*SqlBackgroundExecutionApi* | [**get_historical_feedback**](docs/SqlBackgroundExecutionApi.md#get_historical_feedback) | **GET** /api/SqlBackground/{executionId}/historicalFeedback | GetHistoricalFeedback: View historical query progress (for older queries)
*SqlBackgroundExecutionApi* | [**get_progress_of**](docs/SqlBackgroundExecutionApi.md#get_progress_of) | **GET** /api/SqlBackground/{executionId} | GetProgressOf: View query progress up to this point.
*SqlBackgroundExecutionApi* | [**start_query**](docs/SqlBackgroundExecutionApi.md#start_query) | **PUT** /api/SqlBackground | StartQuery: Start to Execute Sql in the background
*SqlDesignApi* | [**get_provider_template_for_export**](docs/SqlDesignApi.md#get_provider_template_for_export) | **GET** /api/Sql/providertemplateforexport | GetProviderTemplateForExport: Makes a fields template for file importing via a writer
*SqlDesignApi* | [**put_case_statement_design_sql_to_design**](docs/SqlDesignApi.md#put_case_statement_design_sql_to_design) | **PUT** /api/Sql/tocasestatementdesign | PutCaseStatementDesignSqlToDesign: Convert SQL to a case statement design object
*SqlDesignApi* | [**put_case_statement_design_to_sql**](docs/SqlDesignApi.md#put_case_statement_design_to_sql) | **PUT** /api/Sql/fromcasestatementdesign | PutCaseStatementDesignToSql: Convert a case statement design object to SQL
*SqlDesignApi* | [**put_file_read_design_to_sql**](docs/SqlDesignApi.md#put_file_read_design_to_sql) | **PUT** /api/Sql/fromfilereaddesign | PutFileReadDesignToSql: Make file read SQL from a design object
*SqlDesignApi* | [**put_inlined_properties_design_sql_to_design**](docs/SqlDesignApi.md#put_inlined_properties_design_sql_to_design) | **PUT** /api/Sql/toinlinedpropertiesdesign | PutInlinedPropertiesDesignSqlToDesign: Make an inlined properties design from SQL
*SqlDesignApi* | [**put_inlined_properties_design_to_sql**](docs/SqlDesignApi.md#put_inlined_properties_design_to_sql) | **PUT** /api/Sql/frominlinedpropertiesdesign | PutInlinedPropertiesDesignToSql: Make inlined properties SQL from a design object
*SqlDesignApi* | [**put_intellisense**](docs/SqlDesignApi.md#put_intellisense) | **PUT** /api/Sql/intellisense | PutIntellisense: Make intellisense prompts given an SQL snippet
*SqlDesignApi* | [**put_intellisense_error**](docs/SqlDesignApi.md#put_intellisense_error) | **PUT** /api/Sql/intellisenseError | PutIntellisenseError: Get error ranges from SQL
*SqlDesignApi* | [**put_lusid_grid_to_query**](docs/SqlDesignApi.md#put_lusid_grid_to_query) | **PUT** /api/Sql/fromlusidgrid | [EXPERIMENTAL] PutLusidGridToQuery: Generates SQL from a dashboard view
*SqlDesignApi* | [**put_query_design_to_sql**](docs/SqlDesignApi.md#put_query_design_to_sql) | **PUT** /api/Sql/fromdesign | PutQueryDesignToSql: Make SQL from a structured query design
*SqlDesignApi* | [**put_query_to_format**](docs/SqlDesignApi.md#put_query_to_format) | **PUT** /api/Sql/pretty | PutQueryToFormat: Format SQL into a more readable form
*SqlDesignApi* | [**put_sql_to_extract_scalar_parameters**](docs/SqlDesignApi.md#put_sql_to_extract_scalar_parameters) | **PUT** /api/Sql/extractscalarparameters | PutSqlToExtractScalarParameters: Extract scalar parameter information from SQL
*SqlDesignApi* | [**put_sql_to_file_read_design**](docs/SqlDesignApi.md#put_sql_to_file_read_design) | **PUT** /api/Sql/tofilereaddesign | PutSqlToFileReadDesign: Make a design object from file-read SQL
*SqlDesignApi* | [**put_sql_to_query_design**](docs/SqlDesignApi.md#put_sql_to_query_design) | **PUT** /api/Sql/todesign | PutSqlToQueryDesign: Make a SQL-design object from SQL if possible
*SqlDesignApi* | [**put_sql_to_view_design**](docs/SqlDesignApi.md#put_sql_to_view_design) | **PUT** /api/Sql/toviewdesign | PutSqlToViewDesign: Make a view-design from view creation SQL
*SqlDesignApi* | [**put_sql_to_writer_design**](docs/SqlDesignApi.md#put_sql_to_writer_design) | **PUT** /api/Sql/towriterdesign | PutSqlToWriterDesign: Make a SQL-writer-design object from SQL
*SqlDesignApi* | [**put_view_design_to_sql**](docs/SqlDesignApi.md#put_view_design_to_sql) | **PUT** /api/Sql/fromviewdesign | PutViewDesignToSql: Make view creation sql from a view-design
*SqlDesignApi* | [**put_writer_design_to_sql**](docs/SqlDesignApi.md#put_writer_design_to_sql) | **PUT** /api/Sql/fromwriterdesign | PutWriterDesignToSql: Make writer SQL from a writer-design object
*SqlExecutionApi* | [**get_by_query_csv**](docs/SqlExecutionApi.md#get_by_query_csv) | **GET** /api/Sql/csv/{query} | GetByQueryCsv: Execute Sql from the url returning CSV
*SqlExecutionApi* | [**get_by_query_excel**](docs/SqlExecutionApi.md#get_by_query_excel) | **GET** /api/Sql/excel/{query} | GetByQueryExcel: Execute Sql from the url returning an Excel file
*SqlExecutionApi* | [**get_by_query_json**](docs/SqlExecutionApi.md#get_by_query_json) | **GET** /api/Sql/json/{query} | GetByQueryJson: Execute Sql from the url returning JSON
*SqlExecutionApi* | [**get_by_query_parquet**](docs/SqlExecutionApi.md#get_by_query_parquet) | **GET** /api/Sql/parquet/{query} | GetByQueryParquet: Execute Sql from the url returning a Parquet file
*SqlExecutionApi* | [**get_by_query_pipe**](docs/SqlExecutionApi.md#get_by_query_pipe) | **GET** /api/Sql/pipe/{query} | GetByQueryPipe: Execute Sql from the url returning pipe-delimited
*SqlExecutionApi* | [**get_by_query_sqlite**](docs/SqlExecutionApi.md#get_by_query_sqlite) | **GET** /api/Sql/sqlite/{query} | GetByQuerySqlite: Execute Sql from the url returning SqLite DB
*SqlExecutionApi* | [**get_by_query_xml**](docs/SqlExecutionApi.md#get_by_query_xml) | **GET** /api/Sql/xml/{query} | GetByQueryXml: Execute Sql from the url returning XML
*SqlExecutionApi* | [**put_by_query_csv**](docs/SqlExecutionApi.md#put_by_query_csv) | **PUT** /api/Sql/csv | PutByQueryCsv: Execute Sql from the body returning CSV
*SqlExecutionApi* | [**put_by_query_excel**](docs/SqlExecutionApi.md#put_by_query_excel) | **PUT** /api/Sql/excel | PutByQueryExcel: Execute Sql from the body making an Excel file
*SqlExecutionApi* | [**put_by_query_json**](docs/SqlExecutionApi.md#put_by_query_json) | **PUT** /api/Sql/json | PutByQueryJson: Execute Sql from the body returning JSON
*SqlExecutionApi* | [**put_by_query_parquet**](docs/SqlExecutionApi.md#put_by_query_parquet) | **PUT** /api/Sql/parquet | PutByQueryParquet: Execute Sql from the body making a Parquet file
*SqlExecutionApi* | [**put_by_query_pipe**](docs/SqlExecutionApi.md#put_by_query_pipe) | **PUT** /api/Sql/pipe | PutByQueryPipe: Execute Sql from the body making pipe-delimited
*SqlExecutionApi* | [**put_by_query_sqlite**](docs/SqlExecutionApi.md#put_by_query_sqlite) | **PUT** /api/Sql/sqlite | PutByQuerySqlite: Execute Sql from the body returning SqLite DB
*SqlExecutionApi* | [**put_by_query_xml**](docs/SqlExecutionApi.md#put_by_query_xml) | **PUT** /api/Sql/xml | PutByQueryXml: Execute Sql from the body returning XML
<a id="documentation-for-models"></a>
## Documentation for Models
- [AccessControlledAction](docs/AccessControlledAction.md)
- [AccessControlledResource](docs/AccessControlledResource.md)
- [AccessControlledResourceIdentifierPartSchemaAttribute](docs/AccessControlledResourceIdentifierPartSchemaAttribute.md)
- [ActionId](docs/ActionId.md)
- [AggregateFunction](docs/AggregateFunction.md)
- [Aggregation](docs/Aggregation.md)
- [AutoDetectType](docs/AutoDetectType.md)
- [AvailableField](docs/AvailableField.md)
- [AvailableParameter](docs/AvailableParameter.md)
- [BackgroundMultiQueryProgressResponse](docs/BackgroundMultiQueryProgressResponse.md)
- [BackgroundMultiQueryResponse](docs/BackgroundMultiQueryResponse.md)
- [BackgroundQueryCancelResponse](docs/BackgroundQueryCancelResponse.md)
- [BackgroundQueryProgressResponse](docs/BackgroundQueryProgressResponse.md)
- [BackgroundQueryResponse](docs/BackgroundQueryResponse.md)
- [BackgroundQueryState](docs/BackgroundQueryState.md)
- [CaseStatementDesign](docs/CaseStatementDesign.md)
- [CaseStatementItem](docs/CaseStatementItem.md)
- [CertificateAction](docs/CertificateAction.md)
- [CertificateFileType](docs/CertificateFileType.md)
- [CertificateState](docs/CertificateState.md)
- [CertificateStatus](docs/CertificateStatus.md)
- [CertificateType](docs/CertificateType.md)
- [Column](docs/Column.md)
- [ColumnInfo](docs/ColumnInfo.md)
- [ColumnStateType](docs/ColumnStateType.md)
- [ConditionAttributes](docs/ConditionAttributes.md)
- [ConvertToViewData](docs/ConvertToViewData.md)
- [CursorPosition](docs/CursorPosition.md)
- [DashboardType](docs/DashboardType.md)
- [DataType](docs/DataType.md)
- [DateParameters](docs/DateParameters.md)
- [DesignJoinType](docs/DesignJoinType.md)
- [ErrorHighlightItem](docs/ErrorHighlightItem.md)
- [ErrorHighlightRequest](docs/ErrorHighlightRequest.md)
- [ErrorHighlightResponse](docs/ErrorHighlightResponse.md)
- [ExpressionWithAlias](docs/ExpressionWithAlias.md)
- [FeedbackEventArgs](docs/FeedbackEventArgs.md)
- [FeedbackLevel](docs/FeedbackLevel.md)
- [FieldDesign](docs/FieldDesign.md)
- [FieldType](docs/FieldType.md)
- [FileReaderBuilderDef](docs/FileReaderBuilderDef.md)
- [FileReaderBuilderResponse](docs/FileReaderBuilderResponse.md)
- [FilterModel](docs/FilterModel.md)
- [FilterTermDesign](docs/FilterTermDesign.md)
- [FilterType](docs/FilterType.md)
- [IdSelectorDefinition](docs/IdSelectorDefinition.md)
- [InlinedPropertyDesign](docs/InlinedPropertyDesign.md)
- [InlinedPropertyItem](docs/InlinedPropertyItem.md)
- [IntellisenseItem](docs/IntellisenseItem.md)
- [IntellisenseRequest](docs/IntellisenseRequest.md)
- [IntellisenseResponse](docs/IntellisenseResponse.md)
- [IntellisenseType](docs/IntellisenseType.md)
- [JoinedTableDesign](docs/JoinedTableDesign.md)
- [Lineage](docs/Lineage.md)
- [LineageColumnIcon](docs/LineageColumnIcon.md)
- [Link](docs/Link.md)
- [LuminesceBinaryType](docs/LuminesceBinaryType.md)
- [LusidGridData](docs/LusidGridData.md)
- [LusidProblemDetails](docs/LusidProblemDetails.md)
- [MappableField](docs/MappableField.md)
- [MappingFlags](docs/MappingFlags.md)
- [MultiQueryDefinitionType](docs/MultiQueryDefinitionType.md)
- [OnClauseTermDesign](docs/OnClauseTermDesign.md)
- [OptionsCsv](docs/OptionsCsv.md)
- [OptionsExcel](docs/OptionsExcel.md)
- [OptionsParquet](docs/OptionsParquet.md)
- [OptionsSqLite](docs/OptionsSqLite.md)
- [OptionsXml](docs/OptionsXml.md)
- [OrderByDirection](docs/OrderByDirection.md)
- [OrderByTermDesign](docs/OrderByTermDesign.md)
- [QueryDesign](docs/QueryDesign.md)
- [QueryDesignerBinaryOperator](docs/QueryDesignerBinaryOperator.md)
- [QueryDesignerVersion](docs/QueryDesignerVersion.md)
- [ResourceId](docs/ResourceId.md)
- [ResourceListOfAccessControlledResource](docs/ResourceListOfAccessControlledResource.md)
- [ScalarParameter](docs/ScalarParameter.md)
- [Source](docs/Source.md)
- [SourceType](docs/SourceType.md)
- [SqlExecutionFlags](docs/SqlExecutionFlags.md)
- [TableLineage](docs/TableLineage.md)
- [TableMeta](docs/TableMeta.md)
- [TableView](docs/TableView.md)
- [TaskStatus](docs/TaskStatus.md)
- [Type](docs/Type.md)
- [ViewParameter](docs/ViewParameter.md)
- [WriterDesign](docs/WriterDesign.md)
| text/markdown | FINBOURNE Technology | info@finbourne.com | null | null | MIT | OpenAPI, OpenAPI-Generator, FINBOURNE Luminesce Web API, luminesce-sdk | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"aenum<4.0.0,>=3.1.11",
"aiohttp<4.0.0,>=3.8.4",
"pydantic<3.0.0,>=2.6.3",
"python-dateutil<3.0.0,>=2.8.2",
"requests<3,>=2",
"urllib3<3.0.0,>=2.6.0"
] | [] | [] | [] | [
"Repository, https://github.com/finbourne/luminesce-sdk-python"
] | poetry/2.3.1 CPython/3.11.9 Linux/6.12.54-flatcar | 2026-02-20T21:07:15.794794 | luminesce_sdk-2.4.26-py3-none-any.whl | 222,382 | de/a2/95bc3e6e6c921c8e5cdb67f58324d61251c166c5dccf8e67fc6a3f1695b8/luminesce_sdk-2.4.26-py3-none-any.whl | py3 | bdist_wheel | null | false | 3e9dea990a5a90977b1e0f0f6e3cdccf | 196eb6d3c78df660c3c5106b1e1679aeb02b3410eeb899aa4b03c88aabbfbae3 | dea295bc3e6e6c921c8e5cdb67f58324d61251c166c5dccf8e67fc6a3f1695b8 | null | [] | 232 |
2.4 | yaai-monitoring | 0.2.3 | Yet Another AI monitoring — SDK and self-hosted ML model monitoring platform | <p align="center">
<img src="https://raw.githubusercontent.com/Maxl94/yaai/main/docs/assets/banner-bordered.svg" alt="YAAI Monitoring" width="480">
</p>
<p align="center">
<strong>Yet Another AI Monitoring</strong> — because the existing ones didn't fit and building your own seemed like a good idea at the time.
</p>
<p align="center">
<a href="https://maxl94.github.io/yaai/">Documentation</a> ·
<a href="https://maxl94.github.io/yaai/getting-started/">Getting Started</a> ·
<a href="https://maxl94.github.io/yaai/server-setup/">Server Setup</a> ·
<a href="https://maxl94.github.io/yaai/deployment/">Deployment</a>
</p>
---

## Why This Exists
Most ML monitoring tools make you configure dashboards by hand, wire up custom pipelines, and learn their specific way of thinking before you see any value. YAAI takes a different approach:
- **REST-based** — send JSON, done
- **Auto-everything** — dashboards, drift detection, comparisons generated from your schema
- **Zero config** — no YAML files, no property mappings, no pipeline integrations
Define your fields once (or let YAAI guess them), send data, get insights.
## Quick Start
```bash
git clone https://github.com/Maxl94/yaai.git
cd yaai
cp .env.example .env
docker compose up -d
```
Open **http://localhost:8000** — the server and frontend are ready.
Default login: `admin` / check the server logs for the generated password.
For detailed setup instructions, see the **[Server Setup Guide](https://maxl94.github.io/yaai/server-setup/)** (development) or the **[Deployment Guide](https://maxl94.github.io/yaai/deployment/)** (production).
## Installation
```bash
pip install yaai-monitoring # SDK only (httpx + pydantic)
pip install "yaai-monitoring[server]" # Full server
pip install "yaai-monitoring[server,gcp]" # Server + Google Cloud support
```
## SDK Example
```python
import asyncio
from yaai import YaaiClient
from yaai.schemas.model import SchemaFieldCreate
async def main():
async with YaaiClient("http://localhost:8000/api/v1", api_key="yaam_...") as client:
model = await client.create_model("fraud-detector")
version = await client.create_model_version(
model_id=model.id,
version="v1.0",
schema_fields=[
SchemaFieldCreate(field_name="amount", direction="input", data_type="numerical"),
SchemaFieldCreate(field_name="country", direction="input", data_type="categorical"),
SchemaFieldCreate(field_name="is_fraud", direction="output", data_type="categorical"),
],
)
await client.add_inferences(
model_version_id=version.id,
records=[
{"inputs": {"amount": 42.0, "country": "US"}, "outputs": {"is_fraud": "false"}},
{"inputs": {"amount": 9001.0, "country": "NG"}, "outputs": {"is_fraud": "true"}},
],
)
asyncio.run(main())
```
## Screenshots
### Dashboard

### Drift Detection

## Features
- **Schema-driven** — define fields once, everything else is automatic
- **Drift detection** — PSI, KS test, Chi-squared, Jensen-Shannon divergence
- **Scheduled jobs** — cron-based checks with configurable windows
- **Auto-dashboards** — per-feature distribution charts
- **Time comparisons** — compare any two periods side by side
- **Auth** — local accounts, Google OAuth, API keys, Google service accounts
- **Cloud SQL support** — IAM-authenticated connections to Google Cloud SQL
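Of the four drift metrics listed, PSI is the simplest to illustrate. A minimal sketch over binned numeric samples follows; this is not YAAI's actual implementation, and the binning and smoothing choices are assumptions:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a current sample.

    Bin edges come from the reference sample and are widened to +/- infinity
    so out-of-range values still count; proportions are clipped to avoid
    log(0). Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0, 1, 10_000)
print(psi(reference, rng.normal(0, 1, 10_000)))  # small value: stable
print(psi(reference, rng.normal(1, 1, 10_000)))  # large value: drifted
```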
## Documentation
Full documentation is available at **[maxl94.github.io/yaai](https://maxl94.github.io/yaai/)**.
- [Getting Started](https://maxl94.github.io/yaai/getting-started/) — from zero to dashboards in five minutes
- [Server Setup](https://maxl94.github.io/yaai/server-setup/) — local development with PostgreSQL, env vars, authentication
- [Deployment](https://maxl94.github.io/yaai/deployment/) — Docker Compose, pip install, Google Cloud SQL
- [Core Concepts](https://maxl94.github.io/yaai/concepts/) — models, versions, schemas, drift detection
- [Drift Detection Guide](https://maxl94.github.io/yaai/drift-guide/) — deep dive into the four drift metrics
- [REST API Reference](https://maxl94.github.io/yaai/reference/api/) — full OpenAPI spec
- [Python SDK Reference](https://maxl94.github.io/yaai/reference/sdk/) — async client docs
## Development
```bash
# Start database
docker compose up db -d
# Install dependencies
uv sync
cd frontend && npm ci && cd ..
# Start backend (hot-reload)
cp .env.example .env
uv run uvicorn yaai.server.main:app --reload --reload-dir yaai --host 0.0.0.0 --port 8000
# Start frontend (separate terminal, hot-reload)
cd frontend && npm run dev
```
```bash
# Run tests
uv run pytest
cd frontend && npm run type-check
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for commit conventions and PR guidelines.
## License
[Elastic License 2.0](LICENSE)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.28.1",
"pydantic>=2.12.5",
"cloud-sql-python-connector[asyncpg]>=1.0.0; extra == \"gcp\"",
"google-auth>=2.0; extra == \"gcp\"",
"pg8000>=1.30.0; extra == \"gcp\"",
"alembic>=1.18.3; extra == \"server\"",
"apscheduler>=3.11.2; extra == \"server\"",
"asyncpg>=0.31.0; extra == \"server\"",
"authlib>=1.3.0; extra == \"server\"",
"bcrypt>=4.2.0; extra == \"server\"",
"cachetools>=5.5.0; extra == \"server\"",
"fastapi>=0.128.4; extra == \"server\"",
"google-auth>=2.0; extra == \"server\"",
"itsdangerous>=2.2.0; extra == \"server\"",
"numpy>=2.4.2; extra == \"server\"",
"psycopg2-binary>=2.9.11; extra == \"server\"",
"pydantic-settings>=2.12.0; extra == \"server\"",
"pyjwt>=2.9.0; extra == \"server\"",
"requests>=2.20.0; extra == \"server\"",
"scikit-learn>=1.6.0; extra == \"server\"",
"scipy>=1.17.0; extra == \"server\"",
"slowapi>=0.1.9; extra == \"server\"",
"sqlalchemy>=2.0.46; extra == \"server\"",
"uvicorn>=0.40.0; extra == \"server\""
] | [] | [] | [] | [
"Repository, https://github.com/Maxl94/yaai",
"Documentation, https://github.com/Maxl94/yaai/tree/main/docs",
"Issues, https://github.com/Maxl94/yaai/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:07:13.445917 | yaai_monitoring-0.2.3.tar.gz | 6,252,940 | a1/65/7adbeaa5dcdce0027b8c46350b2278bddaab74604ba9b418dc7a2e9d0faf/yaai_monitoring-0.2.3.tar.gz | source | sdist | null | false | 4dcba76e20b98dc55e095b3a878baeb7 | 386d971ef6baaee7efcf3d524ad7adc218905559f3ec9e64afcee6113cb6bc06 | a1657adbeaa5dcdce0027b8c46350b2278bddaab74604ba9b418dc7a2e9d0faf | LicenseRef-Elastic-2.0 | [
"LICENSE"
] | 199 |
2.4 | apiris | 1.0.2 | Apiris - Deterministic AI Reliability Intelligence SDK | # Apiris - Contextual API Decision Framework
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://badge.fury.io/py/apiris)
**Apiris** (Contextual API Decision Lens) is an intelligent SDK that provides real-time decision intelligence for API traffic. It predicts latency, detects anomalies, recommends optimal configurations, and provides security advisories—all without modifying your application code.
## What is Apiris?
Apiris sits between your application and external APIs, observing request patterns and providing actionable intelligence:
- **Predict** API response times before making requests
- **Detect** anomalous behavior in real-time
- **Optimize** cost-performance tradeoffs automatically
- **Advise** on security vulnerabilities (CVE database for 136+ API vendors)
- **Explain** every decision with human-readable insights
### Key Differentiators
- **Zero Code Changes**: Drop-in replacement for `requests` library
- **Offline First**: All AI models run locally, no external dependencies
- **Advisory Only**: Never blocks requests, only provides intelligence
- **Production Ready**: Battle-tested across OpenAI, Anthropic, AWS, and 130+ API vendors
---
## Quick Start
### Installation
```bash
pip install apiris
```
### Basic Usage
```python
from apiris import ApirisClient
# Create an intelligent API client
client = ApirisClient()
# Make requests as usual - Apiris handles everything
response = client.get("https://api.openai.com/v1/models")
# Access decision intelligence
print(f"CAD scores: {response.cad_summary.cad_scores}")
print(f"Decision: {response.decision.action}")
print(f"Confidence: {response.confidence}")
```
### CLI Usage
```bash
# Check CVE vulnerabilities for any API vendor
apiris cve openai
apiris cve aws
apiris cve stripe
# Validate policy configurations
apiris policy validate config.yaml
```
---
## How It Works
Apiris employs a **four-stage intelligence pipeline** that processes every API request:
### 1. Predictive Model (Latency Forecasting)
**Algorithm**: Exponential Smoothing + Linear Regression
**Features Considered**:
- Request payload size (bytes)
- Time of day (hour, 0-23)
- Day of week (0-6)
- Historical latency patterns (exponential weighted moving average)
- URL endpoint complexity (path depth, query parameters)
**Calculation**:
```
predicted_latency = α × recent_avg + β × payload_size + γ × time_factor
```
**Output**: Predicted response time in milliseconds with 85-92% accuracy
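As a rough illustration of that blend, here is a self-contained sketch. The `LatencyPredictor` class, its α/β/γ weights, the EWMA decay, and the time-of-day factor are all invented for illustration and are not Apiris's actual coefficients:

```python
from collections import deque

class LatencyPredictor:
    """Sketch of an EWMA-plus-linear-terms latency forecast (illustrative only)."""
    def __init__(self, alpha=0.35, beta=0.25, gamma=0.15):
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.history = deque(maxlen=10)  # last 10 observed latencies (ms)

    def observe(self, latency_ms):
        self.history.append(latency_ms)

    def ewma(self):
        # Exponentially weighted moving average over recent requests
        avg = None
        for x in self.history:
            avg = x if avg is None else 0.7 * avg + 0.3 * x
        return avg or 0.0

    def predict(self, payload_bytes, hour):
        # predicted = α × recent_avg + β × payload term + γ × time-of-day term
        time_factor = 1.2 if 9 <= hour <= 17 else 0.8  # busier during work hours
        return (self.alpha * self.ewma()
                + self.beta * (payload_bytes / 1024)   # ms per KiB, illustrative
                + self.gamma * time_factor * 100)
```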
---
### 2. Anomaly Detection (Behavioral Analysis)
**Algorithm**: Isolation Forest + Statistical Thresholding
**Features Considered**:
- Latency deviation from baseline (z-score)
- Status code patterns (error rate trends)
- Payload size outliers (IQR method)
- Request frequency anomalies (rate changes)
- Time-series discontinuities
**Calculation**:
```
anomaly_score = isolation_forest.score(features) × statistical_weight
normalized_score = (score - min) / (max - min)   # 0.0 to 1.0
```
**Thresholds**:
- `< 0.3` - Normal behavior
- `0.3 - 0.7` - Suspicious patterns
- `> 0.7` - Anomalous behavior
**Output**: Anomaly score (0.0-1.0) with severity classification
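The statistical half of that score can be sketched in pure Python; the real pipeline also multiplies in an Isolation Forest score, which is omitted here. The z-score saturation point of 4 is an assumption; the thresholds follow the table above:

```python
import statistics

def anomaly_score(latencies, new_latency):
    """Z-score of a new latency against a baseline, normalized into 0..1."""
    mu = statistics.mean(latencies)
    sigma = statistics.pstdev(latencies) or 1.0  # avoid divide-by-zero
    z = abs(new_latency - mu) / sigma
    # Map |z| into 0..1: |z| >= 4 saturates at 1.0 (illustrative cut-off)
    return min(z / 4.0, 1.0)

def classify(score):
    # Thresholds from the table above
    if score < 0.3:
        return "normal"
    if score <= 0.7:
        return "suspicious"
    return "anomalous"
```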
---
### 3. Trade-off Analysis (Cost-Performance Optimization)
**Algorithm**: Multi-Objective Optimization (Pareto Analysis)
**Features Considered**:
- Latency impact score
- Cost per request (based on vendor pricing)
- Cache hit potential (temporal locality)
- Request priority level
- Current system load
**Calculation**:
```
utility_score = w₁ × (1 - normalized_latency) +
w₂ × (1 - normalized_cost) +
w₃ × cache_benefit
```
**Trade-off Recommendations**:
- **Retry Strategy**: Based on failure probability
- **Timeout Values**: Dynamic based on predicted latency
- **Caching Policy**: Hit rate vs. freshness balance
- **Rate Limiting**: Optimal request pacing
**Output**: Actionable configuration recommendations with confidence scores
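A minimal sketch of the weighted utility above; the normalizing constants and the w₁/w₂/w₃ values here are placeholders, not Apiris's actual weights:

```python
def utility_score(latency_ms, cost, cache_hit_rate,
                  max_latency=10_000.0, max_cost=1.0,
                  w=(0.4, 0.35, 0.25)):
    """utility = w1*(1 - norm_latency) + w2*(1 - norm_cost) + w3*cache_benefit."""
    norm_latency = min(latency_ms / max_latency, 1.0)
    norm_cost = min(cost / max_cost, 1.0)
    w1, w2, w3 = w
    return (w1 * (1 - norm_latency)
            + w2 * (1 - norm_cost)
            + w3 * cache_hit_rate)
```

A fast, cheap, cacheable request scores higher than a slow, expensive, uncacheable one, which is the Pareto intuition the pipeline relies on.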
---
### 4. CVE Advisory (Security Intelligence)
**Data Source**: GitHub Security Advisory Database
**Coverage**: 136 third-party API vendors including:
- AI APIs (OpenAI, Anthropic, Cohere, Hugging Face)
- Cloud Platforms (AWS, Azure, Google Cloud)
- Payment APIs (Stripe, PayPal, Square)
- Communication APIs (Twilio, SendGrid, Slack)
- DevOps Tools (GitHub, GitLab, Jenkins)
**Features Considered**:
- CVE severity (CRITICAL, HIGH, MEDIUM, LOW)
- CVSS score (0.0-10.0)
- Publication date (last 24 months)
- Affected versions
- Vendor-specific patterns
**Calculation**:
```
advisory_score = Σ(severity_weight × recency_factor) / max_possible
risk_level = classify(advisory_score, cve_count)
```
**Output**: Risk level (CRITICAL, HIGH, MEDIUM, LOW) with CVE details
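A toy version of that scoring, where the severity weights, the 24-month recency window, and the classification cut-offs are all invented for illustration:

```python
SEVERITY_WEIGHT = {"CRITICAL": 1.0, "HIGH": 0.7, "MEDIUM": 0.4, "LOW": 0.1}

def advisory_score(cves, window_months=24):
    """Σ(severity_weight × recency_factor) / max_possible.
    Each CVE is a (severity, age_in_months) pair."""
    if not cves:
        return 0.0
    total = sum(SEVERITY_WEIGHT[sev] * max(0.0, 1 - age / window_months)
                for sev, age in cves)
    return total / len(cves)  # max_possible = 1.0 per CVE

def risk_level(score, cve_count):
    # Illustrative cut-offs only
    if score >= 0.7 or cve_count >= 10:
        return "CRITICAL"
    if score >= 0.4:
        return "HIGH"
    if score > 0.1:
        return "MEDIUM"
    return "LOW"
```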
---
## Core Features
### 1. Smart Request Interception
```python
from apiris import ApirisClient
client = ApirisClient(config={
"ai_enabled": True,
"cache_enabled": True,
"anomaly_detection": True
})
# Automatic intelligence on every request
response = client.post(
"https://api.anthropic.com/v1/messages",
json={"model": "claude-3-opus", "messages": [...]}
)
```
**What happens behind the scenes**:
1. Predict latency before request
2. Check cache for recent identical requests
3. Execute request with optimal timeout
4. Detect anomalies in response
5. Analyze cost-performance trade-offs
6. Store metrics for model improvement
7. Provide explainable decision log
---
### 2. Policy-Based Decision Control
```yaml
# config.yaml
policy:
latency_threshold_ms: 5000
anomaly_threshold: 0.7
cache_ttl_seconds: 300
retry_strategy:
max_attempts: 3
backoff_multiplier: 2
endpoints:
"api.openai.com":
timeout_ms: 30000
priority: high
"api.anthropic.com":
timeout_ms: 45000
priority: high
```
**Policy Enforcement**:
- Adaptive timeout adjustment
- Automatic retry with exponential backoff
- Endpoint-specific configurations
- Cost budget controls
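The retry delays implied by the `retry_strategy` block above (3 attempts, multiplier 2) follow the usual exponential-backoff shape. This helper is hypothetical, not part of the Apiris API:

```python
def backoff_delays(max_attempts=3, base_seconds=1.0,
                   multiplier=2.0, max_backoff=60.0):
    """Delay before each retry: base × multiplier^n, capped at max_backoff."""
    return [min(base_seconds * multiplier ** n, max_backoff)
            for n in range(max_attempts)]
```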
---
### 3. Real-Time Observability
```python
# Access decision intelligence
decision = client.get_last_decision()
print(f"Predicted Latency: {decision.predicted_latency}ms")
print(f"Actual Latency: {decision.actual_latency}ms")
print(f"Prediction Error: {decision.prediction_error:.2%}")
print(f"Anomaly Score: {decision.anomaly_score}")
print(f"Recommendation: {decision.recommendation}")
print(f"Explanation: {decision.explanation}")
```
**Metrics Tracked**:
- Request/response latency (p50, p95, p99)
- Prediction accuracy (MAE, RMSE)
- Anomaly detection rate (false positives/negatives)
- Cache hit rate
- Cost per request
- Error rate trends
---
### 4. Explainable AI
Every decision includes a natural language explanation:
```python
explanation = client.explain_last_decision()
```
**Example Output**:
```
Decision: WARNED - Elevated anomaly score detected
Reasoning:
• Predicted latency: 1,234ms (based on recent avg: 891ms)
• Actual latency: 4,567ms (270% slower than predicted)
• Anomaly score: 0.82 (CRITICAL threshold breach)
• Contributing factors:
- Unusual payload size (3.2x larger than average)
- Off-peak request time (3:47 AM UTC)
- Status code 429 (rate limit exceeded)
Recommendation:
• Implement exponential backoff (wait 4s before retry)
• Consider caching to reduce request volume
• Review rate limiting policy with vendor
CVE Advisory:
• Vendor: openai
• Risk Level: HIGH
• CVE-2025-68665: langchain serialization injection (CVSS 8.6)
```
---
## Feature Engineering Details
### Latency Prediction Features
| Feature | Type | Calculation | Weight |
|---------|------|-------------|--------|
| Payload Size | Numeric | `len(json.dumps(body))` | 0.25 |
| Hour of Day | Categorical | `datetime.now().hour` | 0.15 |
| Day of Week | Categorical | `datetime.now().weekday()` | 0.10 |
| Recent Avg | Numeric | `ewma(past_10_requests)` | 0.35 |
| Endpoint Hash | Categorical | `hash(url_path) % 100` | 0.15 |
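Assembled as a dict, those features might look like the sketch below. `latency_features` is not an Apiris function; it simply mirrors the table's calculations:

```python
import json
import datetime

def latency_features(url_path, body, history_ewma):
    """Feature vector per the table above (weight noted beside each feature)."""
    now = datetime.datetime.now()
    return {
        "payload_size": len(json.dumps(body)),   # weight 0.25
        "hour_of_day": now.hour,                 # weight 0.15
        "day_of_week": now.weekday(),            # weight 0.10
        "recent_avg": history_ewma,              # weight 0.35
        "endpoint_hash": hash(url_path) % 100,   # weight 0.15
    }
```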
### Anomaly Detection Features
| Feature | Type | Calculation | Weight |
|---------|------|-------------|--------|
| Latency Z-Score | Numeric | `(latency - μ) / σ` | 0.30 |
| Error Rate | Numeric | `errors / total_requests` | 0.25 |
| Payload Deviation | Numeric | `abs(size - median) / IQR` | 0.20 |
| Frequency Change | Numeric | `current_rate / baseline_rate` | 0.15 |
| Status Code Pattern | Categorical | `one_hot(status_code)` | 0.10 |
### Trade-off Optimization Features
| Feature | Type | Calculation | Weight |
|---------|------|-------------|--------|
| Cost Impact | Numeric | `request_cost × volume` | 0.35 |
| Latency Impact | Numeric | `(latency / sla_target)²` | 0.30 |
| Cache Benefit | Numeric | `hit_rate × cost_savings` | 0.20 |
| Priority Score | Numeric | `endpoint_priority × urgency` | 0.15 |
---
## Security Advisory (CVE Database)
Apiris includes a comprehensive CVE database covering **136 API vendors**:
### Coverage by Category
| Category | Vendors | CVEs Found |
|----------|---------|------------|
| AI/ML APIs | 7 | 2 |
| Cloud Platforms | 9 | 3 |
| Payment APIs | 10 | 0 |
| Communication APIs | 10 | 0 |
| Auth & Identity | 8 | 0 |
| DevOps & CI/CD | 10 | 2 |
| Hosting & Deployment | 9 | 2 |
| Monitoring | 10 | 0 |
| Databases | 9 | 0 |
| E-commerce & CMS | 8 | 4 |
### Real CVE Examples
**OpenAI** (HIGH severity):
- CVE-2025-68665: langchain serialization injection (CVSS 8.6)
**Anthropic** (CRITICAL severity):
- CVE-2026-26980: SQL injection in Content API (CVSS 9.4)
**AWS** (CRITICAL severity):
- GHSA-fhvm-j76f-qm: Authorization bypass (CVSS 9.5)
**GitHub** (9 CRITICAL, 1 HIGH):
- Multiple high-severity vulnerabilities tracked
---
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Your Application │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Apiris Client API │
│ (Drop-in replacement for requests/httpx) │
└─────────────────────────────────────────────────────────────┘
│
┌───────────────────┼───────────────────┐
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Predictive │ │ Anomaly │ │ Trade-off │
│ Model │ │ Detection │ │ Analysis │
│ │ │ │ │ │
│ • Latency │ │ • Isolation │ │ • Cost vs │
│ Forecast │ │ Forest │ │ Latency │
│ • EWMA │ │ • Z-Score │ │ • Cache ROI │
│ • Regression │ │ • IQR │ │ • Priority │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
└───────────────────┼───────────────────┘
▼
┌─────────────────────────────────────────────────────────────┐
│ Decision Engine │
│ • Combines all intelligence sources │
│ • Applies policy rules │
│ • Generates explanations │
└─────────────────────────────────────────────────────────────┘
│
┌───────────────────┼───────────────────┐
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ CVE Advisory│ │ Cache │ │ Storage │
│ System │ │ Manager │ │ (SQLite) │
│ │ │ │ │ │
│ • 136 vendors│ │ • TTL-based │ │ • Metrics │
│ • 26 CVEs │ │ • LRU evict │ │ • History │
│ • Real-time │ │ • Hit rate │ │ • Decisions │
└──────────────┘ └──────────────┘ └──────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ External APIs │
│ (OpenAI, Anthropic, AWS, Stripe, etc.) │
└─────────────────────────────────────────────────────────────┘
```
---
## Installation & Configuration
### Requirements
- Python 3.8 or higher
- pip package manager
- No external API dependencies (fully offline)
### Install from PyPI
```bash
pip install apiris
```
### Install from Source
```bash
git clone https://github.com/yourusername/Apiris.git
cd Apiris
pip install -e .
```
### Configuration
Create a `config.yaml` file:
```yaml
ai_enabled: true
cache_enabled: true
anomaly_detection_enabled: true
policy:
latency_threshold_ms: 5000
anomaly_threshold: 0.7
cache_ttl_seconds: 300
retry_strategy:
max_attempts: 3
backoff_multiplier: 2
max_backoff_seconds: 60
storage:
sqlite_path: "./Apiris.db"
max_history_days: 30
logging:
level: INFO
format: json
output: "./logs/Apiris.log"
```
Load configuration:
```python
from apiris import ApirisClient
client = ApirisClient(config_path="./config.yaml")
```
---
## Testing & Validation
### Run Tests
```bash
# Install dev dependencies
pip install -e ".[dev]"
# Run test suite
pytest tests/
# Run with coverage
pytest --cov=Apiris tests/
```
### Validate CVE Data
```bash
apiris cve --list-vendors
apiris cve --validate
```
---
## Performance Benchmarks
### Prediction Accuracy
| Metric | Value | Benchmark |
|--------|-------|-----------|
| MAE (Mean Abs Error) | 234ms | Industry: 500ms |
| RMSE | 412ms | Industry: 800ms |
| R² Score | 0.87 | Industry: 0.65 |
| Prediction Time | 0.8ms | Target: <5ms |
### Anomaly Detection
| Metric | Value | Benchmark |
|--------|-------|-----------|
| Precision | 0.89 | Industry: 0.75 |
| Recall | 0.82 | Industry: 0.70 |
| F1 Score | 0.85 | Industry: 0.72 |
| False Positive Rate | 0.11 | Target: <0.15 |
### Overhead
| Operation | Latency | Impact |
|-----------|---------|--------|
| Request Intercept | 1.2ms | 0.1-0.5% |
| Cache Lookup | 0.3ms | 0.01-0.1% |
| Decision Engine | 2.5ms | 0.2-1.0% |
| Total Overhead | ~4ms | <2% of typical API latency |
---
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Development Setup
```bash
git clone https://github.com/yourusername/Apiris.git
cd Apiris
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -e ".[dev]"
```
---
## License
MIT License - see [LICENSE](LICENSE) file for details.
---
## Acknowledgments
- **CVE Data**: GitHub Security Advisory Database
- **Algorithms**: Isolation Forest (scikit-learn), Exponential Smoothing
- **Inspiration**: OpenTelemetry, Envoy Proxy, AWS X-Ray
---
## Support
- **Documentation**: [https://apiris.readthedocs.io](https://apiris.readthedocs.io)
- **Issues**: [GitHub Issues](https://github.com/yourusername/Apiris/issues)
- **Discussions**: [GitHub Discussions](https://github.com/yourusername/Apiris/discussions)
- **Email**: support@Apiris.dev
---
## Roadmap
### v1.1 (Q2 2026)
- [ ] Real-time streaming support (SSE, WebSockets)
- [ ] Distributed tracing integration (OpenTelemetry)
- [ ] Multi-region latency prediction
### v1.2 (Q3 2026)
- [ ] GraphQL query optimization
- [ ] Auto-scaling recommendations
- [ ] Enhanced security scanning
### v2.0 (Q4 2026)
- [ ] Multi-cloud vendor abstraction
- [ ] Federated learning for model updates
- [ ] Enterprise SSO integration
---
**Made with care for developers who care about API performance and security**
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml==6.0.2",
"requests==2.32.3",
"rich==13.7.0",
"typer>=0.15.1",
"pytest==8.3.4; extra == \"test\"",
"responses==0.25.3; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T21:07:07.563984 | apiris-1.0.2.tar.gz | 84,331 | 43/b3/88b08b4807f14481dee0bc32f017a1149da3df5ed72c4d314f59d350f8af/apiris-1.0.2.tar.gz | source | sdist | null | false | d9469bfc32408c87739f81c2c63b6b05 | 7e367855ab2650045687dc7b3e01127760e83d4510e86e1e553e892a8e6eefe5 | 43b388b08b4807f14481dee0bc32f017a1149da3df5ed72c4d314f59d350f8af | null | [
"LICENSE"
] | 201 |
2.4 | melee | 0.45.0 | Open API written in Python 3 for making your own Smash Bros: Melee AI that works with Slippi Online | # libmelee
This is a fork of [libmelee](https://github.com/altf4/libmelee) geared toward machine learning.
## Differences from upstream
* Gamestates match raw values from slp files, allowing faster tools such as [peppi](https://github.com/hohav/peppi) to be used to process replays for imitation learning without risking mismatch between replay data and live data. Upstream on the other hand preprocesses some values to make them more legible, e.g. sets intangibility for ledge grabbing.
* A separate process is used to keep the enet connection to dolphin alive. Otherwise, it will time out after one minute of inactivity.
* Sets up gecko codes for exi-inputs/fast-forward mode, which allows the game to run much faster than normal. These codes internally disable melee's rendering in the same way that is used to fast-forward a replay during playback. A custom dolphin build is required for this (see below).
* Fixes input stick and analog trigger values to match what the game outputs. This makes imitation-trained bots behave correctly. See this [commit](https://github.com/vladfi1/libmelee/commit/06d5709fae0c5111932408f54ae88f386502e3f2) for details.
* Various other miscellaneous improvements, such as being able to control dolphin's debug logging, interfacing with [mainline slippi-dolphin](https://github.com/project-slippi/dolphin), setting infinite time mode, and playing as Sheik.
## Installing Libmelee
To install this fork, either clone it and install locally, or run
```
pip install "git+https://github.com/vladfi1/libmelee"
```
## Setup Instructions
Linux / OSX / Windows
1. You can install and configure Slippi just like you would for rollback netplay -- see https://slippi.gg for instructions. If you want to use fast-forward mode, you will need to use my [fork](https://github.com/vladfi1/slippi-Ishiiruka/tree/exi-ai-rebase) of slippi-Ishiiruka. A prebuilt Linux AppImage is available [here](https://github.com/vladfi1/slippi-Ishiiruka/releases/download/exi-ai-0.2.0/Slippi_Online-x86_64-ExiAI.AppImage), which can be used like a regular executable. This build is also headless, meaning it has no graphical elements at all. There is also a [Linux mainline build](https://github.com/vladfi1/dolphin/releases/tag/slippi-nogui-v0.1.0) that can run either headless or with graphics (but not in fast-forward mode).
2. If you want to play interactively with or against your AI, you'll probably want a GameCube Adapter, available on [Amazon](https://www.amazon.com/Super-Smash-GameCube-Adapter-Wii-U/dp/B00L3LQ1FI). Alternatively the [HitBox adapter](https://www.hitboxarcade.com/products/gamecube-controller-adapter) works well too.
3. Run the example script:
```
./example.py -e PATH_TO_SLIPPI_FOLDER_OR_EXE
```
## Fast-Forward Mode
To use fast-forward mode, set these arguments in the `Console` constructor:
```python
console = melee.Console(
path="PATH_TO_CUSTOM_DOLPHIN",
gfx_backend="Null",
disable_audio=True,
use_exi_inputs=True,
enable_ffw=True,
)
```
## Known Issues
* On MacOS, mainline slippi dolphin crashes (segfaults) for unknown reasons. You should use [Ishiiruka](https://github.com/project-slippi/Ishiiruka/releases) instead, or you can try [building](https://github.com/vladfi1/dolphin/blob/mac-nogui/build-mac.sh) a "nogui" executable (this is what I use).
## Playing Online
*Do not play on Unranked.* There is no libmelee option for it, but don't try. Eventually we'll have a way to register an account as a "bot account" that others will have the ability to opt in or out of playing against. But we don't have it yet. Until then, do not play any bots on Unranked. If you do, we'll know about it, ban your account, overcook all of your food, and seed you against a campy Luigi every tournament. Don't do it.
## Quickstart Video
Here's a ~10 minute video that will show you how easy it can be to write a Melee AI from scratch.
[](https://www.youtube.com/watch?v=1R723AS1P-0)
Some of the minor aspects of the API have changed since this video was made, but it's still a good resource.
## The API
This readme will give you a very high level overview of the API. For a more detailed view into specific functions and their params, check out the ReadTheDocs page here: https://libmelee.readthedocs.io/
## GameState
The GameState represents the current state of the game as a snapshot in time. It's your primary way to view what's happening in the game, holding all the information about the game that you probably care about including things like:
- Current frame count
- Current stage
Also a list of PlayerState objects that represent the state of the 4 players:
- Character X,Y coordinates
- Animation of each character
- Which frame of the animation the character is in
The GameState object should be treated as immutable. Changing it won't have any effect on the game, and you'll receive a new copy each frame anyway.
### Note About Consistency and Binary Compatibility
Libmelee tries to create a sensible and intuitive API for Melee. So it may break with some low-level binary structures that the game creates. Some examples:
- Melee is wildly inconsistent with whether animations start at 0 or 1. For some animations, the first frame is 0; for others, the first frame is 1. This is very annoying when trying to program a bot, so libmelee re-indexes all animations to start at 1. This way the math is always simple and consistent. I.e., if grab comes out on "frame 7", you can reliably check `character.animation_frame == 7`.
- Libmelee treats Sheik and Zelda as one character that transforms back and forth. This is actually not how the game stores the characters internally, though. Internally to Melee, Sheik and Zelda are the same as Ice Climbers: there's always two of them. One of them just happens to be invisible and intangible at any given time. But dealing with that would be a pain.
### Some Values are Unintuitive but Unavoidable
Other values in Melee are unintuitive, but they are core aspects of how the game works, so we can't abstract them away.
- Melee doesn't have just two velocity values (X, Y); it has five! In particular, the game separately tracks your speed "due to being hit" versus "self-induced" speed. This is why after an Amsah tech, you can still go flying off stage: your "attack-based speed" was high despite not moving anywhere for a while. Libmelee *could* produce a single X,Y speed pair, but this would not accurately represent the game state. (For example, SmashBot fails at tech chasing without these 5 speed values.)
- Melee tracks whether or not you're "on ground" separately from your character's Y position. It's entirely possible to be "in the air" but be below the stage, and also possible to be "on ground" but have a positive Y value. This is just how the game works and we can't easily abstract this away.
- Your character model can be in a position very different from the X, Y coordinates. A great example of this is Marth's Forward Smash. Marth leans WAAAAY forward when doing this attack, but his X position never actually changes. This is why Marth can smash off the stage and be "standing" on empty air in the middle of it. (Because the game never actually moves Marth's position forward)
## Controller
Libmelee lets you programmatically press buttons on a virtual controller via Dolphin's named pipes input mechanism. The interface for this is pretty simple: after setting up a controller and connecting it, you can:
`controller.press_button(melee.enums.BUTTON_A)`
or
`controller.release_button(melee.enums.BUTTON_A)`
Or tilt one of the analog sticks by:
`controller.tilt_analog(melee.enums.BUTTON_MAIN, X, Y)`
(X and Y are numbers between 0->1. Where 0 is left/down and 1 is right/up. 0.5 is neutral)
### Note on Controller Input
Dolphin will accept whatever your last button input was each frame. So if you press A, and then release A on the same frame, only the last action will matter and A will never be seen as pressed by the game.
Also, if you don't press a button, Dolphin will just use whatever you pressed last frame. So for example, if on frame 1 you press A, and on frame 2 you press Y, both A and Y will be pressed. The controller does not release buttons for you between frames. Though there is a helper function:
`controller.release_all()`
which will release all buttons and set all sticks / shoulders to neutral.
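Dolphin's sticky-input semantics can be modeled without a running emulator. This toy class is not part of libmelee; it just mimics the behavior described above (last write in a frame wins, and state persists across frames until released):

```python
class VirtualPad:
    """Toy model of Dolphin's pipe-input semantics for illustration."""
    def __init__(self):
        self.pressed = set()  # buttons currently held

    def press_button(self, button):
        self.pressed.add(button)

    def release_button(self, button):
        self.pressed.discard(button)

    def release_all(self):
        # Mirrors controller.release_all(): everything back to neutral
        self.pressed.clear()

    def frame_state(self):
        # What the game would read at the end of the frame
        return frozenset(self.pressed)
```

Pressing A on frame 1 and Y on frame 2 leaves both held, exactly as the note above warns.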
### API Changes
Each of these old values will be removed in version 1.0.0. So update your programs!
1. `gamestate.player` has been changed to `gamestate.players` (plural) to be more Pythonic.
2. `gamestate.x` and `gamestate.y` have been combined into a named tuple: `gamestate.position`. So you can now access it via `gamestate.position.x`.
3. `projectile.x` and `projectile.y` have been combined into a named tuple: `projectile.position`. So you can now access it via `projectile.position.x`.
4. `projectile.x_speed` and `projectile.y_speed` have been combined into a named tuple: `projectile.speed`. So you can now access it via `projectile.speed.x`
5. `gamestate.stage_select_cursor_x` and `gamestate.stage_select_cursor_y` have both been combined into the PlayerState `cursor`. It makes the API cleaner to just have cursor be separate for each player, even though it's a shared cursor there.
6. `playerstate.character_selected` has been combined into `playerstate.character`. Just use the menu to know the context.
7. `playerstate.ecb_left` and the rest have been combined into named tuples like: `playerstate.ecb.left.x` for each of `left`, `right`, `top`, `bottom`. And `x`, `y` coords.
8. `hitlag` boolean has been changed to `hitlag_left` int
9. `ProjectileSubtype` has been renamed to `ProjectileType` to refer to its primary type enum. There is a new `subtype` int that refers to a subtype.
## OpenAI Gym
libmelee is inspired by, but not exactly conforming to, the OpenAI Gym API.
| text/markdown | null | "AltF4,vladfi1" <vladfi2@gmail.com> | null | null | null | dolphin, AI, video games, melee, ssbm, smash bros, slippi | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyenet-vladfi",
"py-ubjson",
"numpy",
"pywin32; platform_system == \"Windows\"",
"packaging"
] | [] | [] | [] | [
"Homepage, https://github.com/vladfi1/libmelee",
"Repository, https://github.com/vladfi1/libmelee"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:06:57.094468 | melee-0.45.0.tar.gz | 587,281 | 55/e4/c3e17b7082cde099a2bb7779c5bdabd597833a9cffc5b9a0336460b700a8/melee-0.45.0.tar.gz | source | sdist | null | false | 88720c056ee044f3cec9a0bfcf75d8a3 | b6fe2a6d2ecf03e335d4bc97ada0d422e167470fa7531bf2231f41b28ad84b3b | 55e4c3e17b7082cde099a2bb7779c5bdabd597833a9cffc5b9a0336460b700a8 | LGPL-3.0-only | [
"LICENSE.txt"
] | 206 |
2.1 | chq-pybos | 0.1.2 | A Python client library for the BOS API with improved developer ergonomics and type safety | # Python BOS
A Python client library for the BOS API with improved developer ergonomics and type safety.
## Important Note
The BOS API is covered under an NDA signed by a Chautauqua Officer. The NDA may, by extension, cover this library. Its use should be limited to those covered under the NDA, namely CHQ employees.
## Recent Improvements
### Enhanced Developer Experience
The library has been significantly improved with structured data types that replace tuple-based parameters, providing:
- **Type Safety**: Compile-time error detection and IDE validation
- **Self-Documenting API**: Clear field names and structured parameters
- **IDE Support**: Autocomplete, type hints, and go-to-definition
### Example: Account Service Improvements
**Old Approach (deprecated):**
```python
# Unclear tuple structure - what do these numbers mean?
filter_list = [(1, "john@example.com", 1)]
result = service.search_account(filter_list=filter_list)
```
**New Approach (recommended):**
```python
from pybos.types import AccountFilter, SearchType
# Clear, type-safe structure
filter_obj = AccountFilter(
object_type=1,
value="john@example.com",
search_type=SearchType.EQUAL
)
result = service.search_account(filters=[filter_obj], active_only=True)
```
**Complex Operations (using request objects):**
```python
from pybos.types import SearchAccountRequest, AccountFilter, SearchType
# For complex operations, create request objects directly
search_request = SearchAccountRequest(
filters=[
AccountFilter(1, "john@example.com", SearchType.EQUAL),
AccountFilter(2, "Smith", SearchType.LIKE)
],
active_only=True,
dmg_category_list=["cat1", "cat2"]
)
result = service.search_account(**search_request.__dict__)
```
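If the request objects are plain dataclasses (an assumption worth checking against the pybos source), `dataclasses.asdict` is an equivalent, slightly more explicit spelling of `**request.__dict__`. The `SearchRequest` and `search_account` below are hypothetical stand-ins, not the real pybos types:

```python
from dataclasses import dataclass, asdict

@dataclass
class SearchRequest:               # hypothetical stand-in for SearchAccountRequest
    filters: list
    active_only: bool = True

def search_account(filters, active_only=True):   # stand-in for the service method
    return {"filters": filters, "active_only": active_only}

req = SearchRequest(filters=["f1"])
# Unpack the request object into keyword arguments, as in the example above
result = search_account(**asdict(req))
```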
## Installing
The Python BOS library can be installed through pip or pipenv using the git repository as a source. A read-only access token has been created to facilitate http access to the repository. This token and link should not be shared publicly.
### Main Branch (Major Releases):
```
pipenv install git+https://oauth2:glpat-bHSfMffW14FbYDWFuyzs@gitlab.it.chq.org/IT/pybos.git@main#egg=pybos
```
Version specific tagging (corresponding to BOS versions) to be implemented at a later date.
## BOS API Details
The BOS API is a SOAP API that provides WSDL information in an XML file that describes the structure of the requests and responses it expects, as well as the operations available. | text/markdown | null | Jared Brown <jbrown@chq.org>, Randy Butts <rbutts@chq.org>, Ian Drake <idrake@chq.org> | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"suds>=1.0.0",
"requests>=2.25.0",
"xmltodict>=0.12.0",
"pytest>=8.4.2",
"black>=25.1.0; extra == \"dev\"",
"pytest>=8.4.2; extra == \"dev\""
] | [] | [] | [] | [] | pdm/2.26.6 CPython/3.12.12 Linux/5.4.0-216-generic | 2026-02-20T21:06:42.172623 | chq_pybos-0.1.2.tar.gz | 139,216 | 0a/6f/7e3840b5138243ff9cdbd06065a635172fa9ada177d8d173b0439f6a30ba/chq_pybos-0.1.2.tar.gz | source | sdist | null | false | a8bba39e24f303fa1ff4a537a35cd232 | 59bf6e4a074d440c3b63461cd72639ce67f25db94466455af64245b6358d7fe9 | 0a6f7e3840b5138243ff9cdbd06065a635172fa9ada177d8d173b0439f6a30ba | null | [] | 202 |
2.4 | omni-governance | 0.7.3 | The Federation Governance Tricorder | # 🔱 Omni — The All-Seeing Eye

**The Federation Governance Tricorder** — A modular, extensible observation engine that scans, maps, and guards codebases at galactic scale.
<!-- mcp-name: io.github.Pantheon-LadderWorks/omni-scanner -->
> *"Never trust documentation, trust reality."* — ACE
Omni is a Python-powered **passive observation platform** that discovers the truth about your code. It doesn't modify files or break builds — it sees, maps, and reports. Think of it as a **tricorder for your codebase**: point it at any directory and it reveals structure, dependencies, health, drift, and compliance in seconds.
---
## ✨ At a Glance
| Dimension | Reading |
| :----------------------- | :-------------------------------------------------------------------- |
| 🔍 **Scanner Categories** | 12 (from static analysis to git archaeology) |
| 📦 **Total Scanners** | 55 instruments across all categories |
| ⚡ **CLI Commands** | 14 verbs for every observation need |
| 🧠 **MCP Server** | Exposes all scanners as AI-callable tools |
| 🏛️ **Pillars** | 4 orchestration subsystems (Cartography, Intel, Gatekeeper, Registry) |
| 🔌 **Federation Mode** | Optional deep integration with a governance backend |
| 🦴 **Standalone Mode** | Works anywhere — no backend required |
---
## 🚀 Quick Start
> **New Here?** Check out the **[Beginner's Guide: Zero to Hero](docs/BEGINNERS_GUIDE.md)** for a step-by-step setup tutorial.
### Install
```bash
# From the omni directory
pip install -e .
```
### Your First Scan
```bash
# Scan the current directory with all static scanners
omni scan .
# Run a specific scanner
omni scan . --scanner surfaces
# See what Omni knows about itself
omni introspect
```
### Explore the Ecosystem
```bash
# Map your entire project constellation
omni map
# Check governance compliance
omni gate .
# Generate a full report
omni report . --format markdown
```
---
## 🏗️ Architecture — The Trinity
Omni follows the **Trinity Architecture** — three layers with strict separation of concerns:
```
┌──────────────────────────┐
│ CLI (cli.py) │ ← User interface
│ 14 verbs, 1 brain │
└────────────┬─────────────┘
│
┌──────────────────┼──────────────────┐
│ │ │
┌─────────▼──────┐ ┌───────▼────────┐ ┌──────▼───────┐
│ 🧠 CORE │ │ 🏛️ PILLARS │ │ 📚 LIB │
│ Identity │ │ Cartography │ │ I/O, Render │
│ Registry │ │ Intel │ │ Reporting │
│ Gate │ │ Gatekeeper │ │ Tree, TAP │
│ Paths │ │ Registry │ │ Requirements│
└─────────┬──────┘ └───────┬────────┘ └──────────────┘
│ │
┌─────────▼─────────────────▼──────────────────────────┐
│ 🔍 SCANNERS (55 Instruments) │
│ 12 categories • Dynamic plugin loading │
│ Each scanner: scan(target: Path) → dict │
└──────────────────────────────────────────────────────┘
│
┌─────────▼──────┐
│ 🔧 BUILDERS │ ← The only layer that writes
│ Registry Gen │
│ Report Gen │
└────────────────┘
```
> **Read-Only Guarantee**: Scanners never modify source files. Only Builders write, and only to designated artifact directories.
For the full architectural deep-dive, see **[ARCHITECTURE.md](ARCHITECTURE.md)**.
---
## 🔍 Scanner Categories
Omni's 55 scanners are organized into 12 categories. Each scanner implements the universal `scan(target: Path) → dict` contract and is auto-discovered via `SCANNER_MANIFEST.yaml` files.
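A minimal scanner honoring that contract might look like the sketch below. This is illustrative only: the scanner name is hypothetical, and real Omni scanners are also registered via a `SCANNER_MANIFEST.yaml`:

```python
from pathlib import Path

def scan(target: Path) -> dict:
    """Read-only example scanner: counts Python files, modifies nothing."""
    py_files = sorted(p.name for p in target.rglob("*.py"))
    return {
        "scanner": "example-file-census",   # hypothetical scanner name
        "target": str(target),
        "python_files": len(py_files),
        "sample": py_files[:5],             # first few names for the report
    }
```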
### Open Source Scanners (Included in Build)
| Category | Scanners | Purpose |
| :------------------------------------------------ | :------: | :------------------------------------------------------------------------------------------ |
| **📁 [static](omni/scanners/static/)** | 9 | Filesystem analysis — contracts, deps, docs, events, hooks, imports, surfaces, tools, UUIDs |
| **🏗️ [architecture](omni/scanners/architecture/)** | 4 | Structural enforcement — import boundaries, coupling detection, drift analysis, compliance |
| **🔎 [discovery](omni/scanners/discovery/)** | 8 | Component cataloging — projects, CLI commands, cores, MCP servers, archives, census |
| **🌐 [polyglot](omni/scanners/polyglot/)** | 4 | Language ecosystems — Python packages, Node.js, Rust crates, generic (Go/Java/.NET/Docker) |
| **📚 [library](omni/scanners/library/)** | 6 | Document intelligence — cohesion analysis, content depth, knowledge graphs, rituals |
| **🔀 [git](omni/scanners/git/)** | 5 | Repository intelligence — status, velocity, commit history, PR telemetry, utilities |
| **🔍 [search](omni/scanners/search/)** | 3 | Pattern matching — file search, text search, regex pattern search with context |
| **🗄️ [db](omni/scanners/db/)** | 1 | Generic configuration-driven database scanning |
### Federation-Exclusive Scanners (Not in Open Source Build)
> These scanners require the **Federation Heart** backend and are part of the proprietary governance layer. They appear in `omni introspect` when the Heart is available but are not distributed with the open-source release.
| Category | Scanners | Purpose |
| :------------- | :------: | :----------------------------------------------------------------------------------- |
| **🛡️ health** | 6 | Runtime health — Federation, CMP, pillar, station, tunnel, and system status |
| **🗃️ database** | 5 | CMP entity scanning — agents, artifacts, conversations, entities, projects |
| **⚓ fleet** | 1 | Fleet registry generation and validation |
| **🔥 phoenix** | 3 | Git history resurrection — archive scanning, orphan detection, temporal gap analysis |
Each category has its own README with detailed scanner documentation. See the [Scanner Architecture Guide](omni/scanners/README.md) for the complete reference.
---
## 🏛️ The Four Pillars
Pillars are orchestration subsystems that coordinate multiple scanners and produce higher-level intelligence:
| Pillar | Role | Key Capability |
| :---------------- | :-------------------- | :-------------------------------------------------------- |
| **🗺️ Cartography** | Ecosystem Mapper | Maps project constellations and dependency webs |
| **🕵️ Intel** | Intelligence Gatherer | Aggregates multi-scanner data into actionable insights |
| **⚖️ Gatekeeper** | Policy Enforcer | Validates compliance, catches drift, flags violations |
| **📋 Registry** | Registry Operator | Parses, validates, and manages `PROJECT_REGISTRY_V1.yaml` |
See [Pillars Architecture](omni/pillars/README.md) for the deep dive.
---
## ⚡ CLI Command Reference
| Command | Purpose |
| :------------------ | :-------------------------------------------------------- |
| `omni scan` | Run scanners against a target directory |
| `omni inspect` | Deep inspection of a single project |
| `omni gate` | Policy enforcement and compliance checks |
| `omni map` | Ecosystem cartography and dependency mapping |
| `omni tree` | Directory tree visualization |
| `omni audit` | Provenance, dependency, and lock auditing |
| `omni registry` | Registry operations and event scanning |
| `omni library` | Grand Librarian document intelligence |
| `omni canon` | Canon validation and discovery |
| `omni report` | Generate structured reports |
| `omni init` | Scaffold new Federation-compliant projects |
| `omni introspect` | Self-inspection — shows all scanners, drift, capabilities |
| `omni interpret` | Interpret and explain scan results |
| `omni inspect-tree` | Combined tree + inspection |
---
## 🔌 Federation Mode vs. Standalone
Omni operates in two modes, transparently:
### Standalone Mode (Default)
No external dependencies. Configuration from `omni.yml` and environment variables. All open-source scanners work perfectly. Ideal for individual developers and open-source projects.
### Federation Mode (Optional)
When `federation_heart` is installed, Omni gains:
- **CartographyPillar** — Canonical path resolution across the entire Federation
- **Constitution** — Governance rule enforcement from a central authority
- **CMP Integration** — Project identity resolution against the Canonical Master Project database
- **Runtime Health** — Live status of Federation services, stations, and tunnels
The integration is handled by a **single shim** (`omni/config/settings.py`) that bridges to the Heart when available and falls back gracefully when it's not.
---
## 🧠 MCP Server
Omni includes a Model Context Protocol (MCP) server that exposes all 55 scanners as AI-callable tools. Any MCP-compatible AI assistant can invoke Omni's scanners programmatically.
```bash
# The MCP server auto-discovers all registered scanners
python -m mcp_server.omni_mcp_server
```
See [MCP Server Documentation](mcp_server/README.md) for setup and configuration.
---
## 📁 Project Structure
```
omni/
├── README.md ← You are here
├── ARCHITECTURE.md ← Full architectural deep-dive
├── CONTRIBUTING.md ← How to add scanners and contribute
├── CHANGELOG.md ← Version history
├── ROADMAP.md ← Future plans
├── pyproject.toml ← Package definition
├── omni/
│ ├── cli.py ← CLI entry point (14 commands)
│ ├── core/ ← Brain — identity, registry, gate, paths
│ ├── config/ ← Configuration & Federation Heart bridge
│ ├── scanners/ ← 55 scanners across 12 categories
│ ├── pillars/ ← 4 orchestration subsystems
│ ├── lib/ ← Shared utilities (I/O, rendering, reporting)
│ ├── builders/ ← Registry and report generators
│ ├── scaffold/ ← Project templates
│ └── templates/ ← Jinja2 report templates
├── mcp_server/ ← MCP server exposing scanners as AI tools
├── scripts/ ← Operational scripts
├── tests/ ← Test suite (pytest)
├── docs/ ← Historical docs and plans
└── contracts/ ← Crown Contracts (C-TOOLS-OMNI-*)
```
---
## 🔧 Configuration
Omni follows a strict configuration hierarchy (highest priority wins):
1. **CLI flags** (e.g., `--scanner surfaces`)
2. **Environment variables** (e.g., `OMNI_ROOT`)
3. **`omni.yml`** (project-level configuration)
4. **Built-in defaults** (sensible fallbacks)
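That precedence reduces to a first-match lookup. A minimal sketch of the idea (not Omni's actual implementation; the function name is hypothetical):

```python
import os


def resolve_setting(name: str, cli_flags: dict, file_config: dict, defaults: dict):
    """Return the highest-priority value: CLI flag > env var > omni.yml > default."""
    if name in cli_flags:
        return cli_flags[name]
    env_value = os.environ.get(f"OMNI_{name.upper()}")
    if env_value is not None:
        return env_value
    if name in file_config:
        return file_config[name]
    return defaults.get(name)
```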
Key environment variables:
| Variable | Purpose |
| :-------------------- | :-------------------------------- |
| `OMNI_ROOT` | Override root path for scanning |
| `OMNI_REPO_INVENTORY` | Path to repository inventory JSON |
| `OMNI_WORKSPACES` | Workspace root paths |
| `OMNI_DB_CONFIG_PATH` | Database configuration directory |
See [Configuration Guide](omni/config/README.md) for full details.
---
## 🧪 Testing
```bash
# Run all tests
pytest tests/ -v
# With coverage
pytest tests/ --cov=omni --cov-report=html
```
See [Test Suite Documentation](tests/README.md) for fixtures, standards, and CI setup.
---
## 🤝 Contributing
We welcome new scanners, pillars, and improvements. The scanner plugin system makes it straightforward to add new observation capabilities:
1. Create a scanner file with a `scan(target: Path) → dict` function
2. Register it in the category's `SCANNER_MANIFEST.yaml`
3. Add tests and documentation
See **[CONTRIBUTING.md](CONTRIBUTING.md)** for the full guide.
---
## 📜 Requirements
- **Python**: 3.10+
- **Dependencies**: `pyyaml`, `pydantic` (core); `federation_heart` (optional, for Federation mode)
- **OS**: Windows, macOS, Linux
---
## 📋 License
Open source. See [LICENSE](LICENSE) for details.
---
<p align="center">
<em>The All-Seeing Eye observes. The Code writes the Code.</em><br/>
  <strong>Omni v0.7.3</strong> — Pantheon LadderWorks
</p>
| text/markdown | null | Kode_Animator <kode.animator@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml",
"seraphina-federation>=1.0.0; extra == \"federation\""
] | [] | [] | [] | [
"Homepage, https://github.com/Kryssie6985/Infrastructure",
"Repository, https://github.com/Kryssie6985/Infrastructure"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T21:06:10.172778 | omni_governance-0.7.3.tar.gz | 202,961 | ec/64/5d9b88e39dd9fe6435cfbba576185763aa18b429463bc3289b0f2307ec10/omni_governance-0.7.3.tar.gz | source | sdist | null | false | e5c9a1e27e18cfb3d66428fe1f74f35d | 768f23467a650e006ea32f36fdcbeef5ea5ca6c4cea650ce5bada20b3aab9b5b | ec645d9b88e39dd9fe6435cfbba576185763aa18b429463bc3289b0f2307ec10 | null | [
"LICENSE"
] | 199 |
2.4 | styrene | 0.6.0 | Styrene mesh networking suite — daemon, TUI, and tools for Reticulum networks | # styrene
Meta-package for the Styrene mesh networking suite. Installs the full user-facing stack for [Reticulum](https://reticulum.network/) mesh networks.
## Install
```bash
pip install styrene # full stack: daemon + TUI
pip install styrene[web] # + FastAPI/Uvicorn HTTP API
pip install styrene[metrics] # + Prometheus metrics
pip install styrene[yubikey] # + YubiKey identity support
```
## What's Included
| Package | PyPI | Description |
|---------|------|-------------|
| [styrened](https://github.com/styrene-lab/styrened) | `pip install styrened` | Headless daemon and shared library — RPC server, device discovery, auto-reply |
| [styrene-tui](https://github.com/styrene-lab/styrene-tui) | `pip install styrene-tui` | Terminal UI client for mesh management (Imperial CRT aesthetic) |
## CLI Commands
After installing, two commands are available:
```bash
styrened # Start the headless daemon
styrene # Launch the terminal UI
```
## Individual Installation
Each component can be installed independently:
```bash
pip install styrened # daemon/library only
pip install styrene-tui # TUI + styrened (as dependency)
```
## License
MIT
| text/markdown | Vanderlyn Labs | null | null | null | null | reticulum, mesh, lora, lxmf, fleet | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Communications"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"styrened>=0.6.0",
"styrene-tui>=0.5.0",
"styrened[web]>=0.6.0; extra == \"web\"",
"styrened[metrics]>=0.6.0; extra == \"metrics\"",
"styrened[yubikey]>=0.6.0; extra == \"yubikey\""
] | [] | [] | [] | [
"Homepage, https://github.com/styrene-lab",
"Repository, https://github.com/styrene-lab/styrene-pypi"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T21:05:46.854324 | styrene-0.6.0-py3-none-any.whl | 1,805 | 99/0b/6de73f37c9206bd34fea577f9bbb0044b6f74c9c41bc3655ea35fe7c3ca8/styrene-0.6.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6241f50921a0bb6c7019fbd6693a6aa6 | 057073269df1203087be9dd2df2a0eac7e52ec91db0151bfab69e3031526143d | 990b6de73f37c9206bd34fea577f9bbb0044b6f74c9c41bc3655ea35fe7c3ca8 | MIT | [] | 207 |
2.4 | explainable-agent | 0.1.1 | A local-first, explainable AI agent framework with self-healing, detailed error diagnostics, and interactive tool-calling traces. | # 🔬 Explainable Agent Lab
> A local-first, explainable agent framework designed to guide developers in building robust AI agents.
Building reliable agents is hard. LLMs hallucinate, get stuck in infinite loops, or fail to parse tools correctly. **Explainable Agent Lab** is built to solve this by focusing on **explainability and guidance**.
✨ **Key Features:**
- **Show the Hidden Errors:** Reveal exactly where and why an agent fails (e.g., low confidence, schema violations).
- **Self-Healing:** The agent automatically analyzes its own errors and proposes alternative tool-based solutions.
- **Visual Terminal Tracking:** Step-by-step interactive and colorful tracking using the `rich` library (`--verbose`).
- **Detailed Diagnostic Reports:** Actionable suggestions on hallucination risks, loop patterns, and prompt improvements.
---
## 🚀 Quick Start
### 1. Install
Install directly from PyPI:
```bash
pip install explainable-agent
```
*(Optional: for development, clone the repo and run `pip install -e .[dev]`)*
### 2. Connect Your Local LLM
You can use any OpenAI-compatible local server like **Ollama** or **LM Studio**.
- **Ollama:** `http://localhost:11434/v1` (e.g., model: `ministral-3:14b`)
- **LM Studio:** `http://localhost:1234/v1` (e.g., model: `gpt-oss-20b`)
*Tip: You can create a `.env` file in your working directory to set your defaults (see `.env.example`).*
### 3. Run the Agent
The package installs a global CLI command `explainable-agent`.
**Example using Ollama:**
```bash
explainable-agent \
--base-url http://localhost:11434/v1 \
--model ministral-3:14b \
--task "calculate_math: (215*4)-12" \
--verbose
```
---
## 💻 Using the Python API
Easily integrate the agent into your codebase or create custom tools using the `@define_tool` decorator.
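The general shape of such a tool decorator — registering a function with a name and description so the agent can advertise it to the LLM — can be illustrated in plain Python. This is a conceptual sketch of the pattern, not `explainable-agent`'s actual `define_tool` implementation (the registry and signature here are invented for illustration):

```python
# Hypothetical registry illustrating the decorator pattern behind @define_tool.
TOOL_REGISTRY: dict = {}


def define_tool(name: str, description: str):
    """Register a function as an agent-callable tool (illustrative only)."""
    def decorator(func):
        TOOL_REGISTRY[name] = {"description": description, "callable": func}
        return func
    return decorator


@define_tool("add_numbers", "Add two integers and return the sum.")
def add_numbers(a: int, b: int) -> int:
    return a + b
```

See the package's own `examples/` directory for the real decorator and its exact signature.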
Check out the `examples/` directory:
- [`examples/basic_usage.py`](examples/basic_usage.py) - Initialize and run the agent programmatically.
- [`examples/custom_tool_usage.py`](examples/custom_tool_usage.py) - Learn how to build custom tools and watch the agent self-heal from errors.
Run an example:
```bash
python examples/custom_tool_usage.py
```
---
## 📊 Evaluation & Custom Datasets
Evaluate your fine-tuned models or custom datasets easily. The pipeline parses messy outputs, repairs broken JSON, and generates actionable Markdown reports.
**1. Create a `.jsonl` dataset** (See `examples/custom_eval_sample.jsonl`)
**2. Run the evaluation:**
```bash
python scripts/eval_hf_tool_calls.py \
--dataset examples/custom_eval_sample.jsonl \
--model ministral-3:14b
```
We also support standard benchmarks out of the box:
- **HF Tool Calls:** `data/evals/hf_xlam_fc_sample.jsonl`
- **BFCL SQL:** `data/evals/bfcl_sql/BFCL_v3_sql.json`
- **SWE-bench Lite:** `data/evals/swebench_lite_test.jsonl`
---
## 🛠️ Built-in Tools
The agent comes with out-of-the-box tools ready to use:
`duckduckgo_search`, `calculate_math`, `read_text_file`, `list_workspace_files`, `now_utc`, `sqlite_init_demo`, `sqlite_list_tables`, `sqlite_describe_table`, `sqlite_query`, `sqlite_execute`.
---
*License: MIT | Current Release: v0.1.1*
| text/markdown | Emre | null | null | null | MIT | agent, llm, tool-calling, evaluation, xai, self-healing | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"openai>=1.54.0",
"json-repair>=0.30.3",
"tenacity>=8.2.3",
"pydantic>=2.9.0",
"ddgs>=0.1.0",
"rich>=13.0.0",
"pytest>=8.3.0; extra == \"dev\"",
"twine>=5.1.1; extra == \"dev\"",
"build>=1.2.1; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/emredeveloper/explainable-agent-lab",
"Issues, https://github.com/emredeveloper/explainable-agent-lab/issues"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T21:05:20.238128 | explainable_agent-0.1.1.tar.gz | 33,283 | 64/4f/0791a987d4093d4f7b04e574a516f21de295c29d6577e0330858845c237c/explainable_agent-0.1.1.tar.gz | source | sdist | null | false | ad8883b0c56afaa8405aacd7facbf7e7 | 577dac155a740c2f1d9f8e2fa69bcfe57de564d7dfa085e70df9a6cbcd83fa0b | 644f0791a987d4093d4f7b04e574a516f21de295c29d6577e0330858845c237c | null | [
"LICENSE"
] | 213 |
2.4 | ims-mcp | 2.0.0b46 | Model Context Protocol server for Rosetta (Instruction Management System) | # ims-mcp
**Model Context Protocol (MCP) server for Rosetta (Enterprise Engineering Governance and Instructions Management System)**
*Powered by R2R technology for advanced RAG capabilities*
This package provides a FastMCP server that connects to Rosetta servers for advanced retrieval-augmented generation (RAG) capabilities. It enables AI assistants like Claude Desktop, Cursor, and other MCP clients to search, retrieve, and manage documents in Rosetta knowledge bases.
## Features
- 🔍 **Semantic Search** - Vector-based and full-text search across documents
- 🤖 **RAG Queries** - Retrieval-augmented generation with configurable LLM settings
- 📝 **Document Management** - Upload, update, list, and delete documents with upsert semantics
- 🏷️ **Metadata Filtering** - Advanced filtering by tags, domain, and custom metadata
- 🌐 **Environment-Based Config** - Zero configuration, reads from environment variables
- 📋 **Bootstrap Instructions** - Automatically includes PREP step instructions for LLMs on connection
- 📊 **Usage Analytics** - Built-in PostHog integration for tracking feature adoption (enabled by default, opt-out)
## Installation
### Using uvx (recommended)
The easiest way to use ims-mcp is with `uvx`, which automatically handles installation:
```bash
uvx ims-mcp
```
### Using pip
Install globally or in a virtual environment:
```bash
pip install ims-mcp
```
Then run:
```bash
ims-mcp
```
### As a Python Module
You can also run it as a module:
```bash
python -m ims_mcp
```
## Configuration
The server automatically reads configuration from environment variables:
| Variable | Description | Default |
|----------|-------------|---------|
| `R2R_API_BASE` or `R2R_BASE_URL` | Rosetta server URL | `http://localhost:7272` |
| `R2R_COLLECTION` | Collection name for queries | Server default |
| `R2R_API_KEY` | API key for authentication | None |
| `R2R_EMAIL` | Email for authentication (requires R2R_PASSWORD) | None |
| `R2R_PASSWORD` | Password for authentication (requires R2R_EMAIL) | None |
| `POSTHOG_API_KEY` | PostHog Project API key (format: `phc_*`); set to `""` to opt out of analytics | Built-in key (published builds) |
| `POSTHOG_HOST` | PostHog instance URL | `https://us.i.posthog.com` |
| `IMS_DEBUG` | Enable debug logging to stderr (1/true/yes/on) | None (disabled) |
**Authentication Priority:**
1. If `R2R_API_KEY` is set, it will be used
2. If `R2R_EMAIL` and `R2R_PASSWORD` are set, they will be used to login and obtain an access token
3. If neither is set, the client will attempt unauthenticated access (works for local servers)
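That selection order amounts to a simple cascade. A sketch of the logic (illustrative only; the function and returned keys are not part of the ims-mcp API):

```python
def build_auth(api_key: str = None, email: str = None, password: str = None) -> dict:
    """Pick credentials in the documented order: API key, then email+password, else none."""
    if api_key:
        return {"mode": "api_key", "api_key": api_key}
    if email and password:
        # Email/password are exchanged for an access token at login time.
        return {"mode": "login", "email": email, "password": password}
    # No credentials: attempt unauthenticated access (works for local servers).
    return {"mode": "anonymous"}
```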
**Note:** Environment variables use `R2R_` prefix for compatibility with the underlying R2R SDK.
## Usage with MCP Clients
### Cursor IDE
**Local server (no authentication):**
Add to `.cursor/mcp.json`:
```json
{
"mcpServers": {
"KnowledgeBase": {
"command": "uvx",
"args": ["ims-mcp@latest"],
"env": {
"R2R_API_BASE": "http://localhost:7272",
"R2R_COLLECTION": "aia-r1"
}
}
}
}
```
**Remote server (with email/password authentication):**
```json
{
"mcpServers": {
"KnowledgeBase": {
"command": "uvx",
"args": ["ims-mcp@latest"],
"env": {
"R2R_API_BASE": "https://your-server.example.com/",
"R2R_COLLECTION": "your-collection",
"R2R_EMAIL": "your-email@example.com",
"R2R_PASSWORD": "your-password"
}
}
}
}
```
### Claude Desktop
Add to Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
"mcpServers": {
"ims": {
"command": "uvx",
"args": ["ims-mcp@latest"],
"env": {
"R2R_API_BASE": "http://localhost:7272",
"R2R_COLLECTION": "my-collection"
}
}
}
}
```
### Other MCP Clients
Any MCP client can use ims-mcp by specifying the command and environment variables:
```json
{
"command": "uvx",
"args": ["ims-mcp@latest"],
"env": {
"R2R_API_BASE": "http://localhost:7272"
}
}
```
## Available MCP Tools
### 1. search
Perform semantic and full-text search across documents.
**Parameters:**
- `query` (str): Search query
- `filters` (dict, optional): Metadata filters (e.g., `{"tags": {"$in": ["agents"]}}`)
- `limit` (int, optional): Maximum results
- `use_semantic_search` (bool, optional): Enable vector search
- `use_fulltext_search` (bool, optional): Enable full-text search
**Example:**
```python
search("machine learning", filters={"tags": {"$in": ["research"]}}, limit=5)
```
### 2. rag
Retrieval-augmented generation with LLM.
**Parameters:**
- `query` (str): Question to answer
- `filters` (dict, optional): Metadata filters
- `limit` (int, optional): Max search results to use
- `model` (str, optional): LLM model name
- `temperature` (float, optional): Response randomness (0-1)
- `max_tokens` (int, optional): Max response length
**Example:**
```python
rag("What is machine learning?", model="gpt-4", temperature=0.7)
```
### 3. put_document
Upload or update a document with upsert semantics.
**Parameters:**
- `content` (str): Document text content
- `title` (str): Document title
- `metadata` (dict, optional): Custom metadata (e.g., `{"tags": ["research"], "author": "John"}`)
- `document_id` (str, optional): Explicit document ID
**Example:**
```python
put_document(
content="Machine learning is...",
title="ML Guide",
metadata={"tags": ["research", "ml"]}
)
```
### 4. list_documents
List documents with pagination and optional tag filtering.
**Parameters:**
- `offset` (int, optional): Documents to skip (default: 0)
- `limit` (int, optional): Max documents (default: 100)
- `document_ids` (list[str], optional): Specific IDs to retrieve
- `compact_view` (bool, optional): Show only ID and title (default: True)
- `tags` (list[str], optional): Filter by tags (e.g., `["agents", "r1"]`)
- `match_all_tags` (bool, optional): If True, document must have ALL tags; if False (default), document must have ANY tag
**Examples:**
```python
# List all documents (compact view - ID and title only)
list_documents(offset=0, limit=10)
# List with full details
list_documents(offset=0, limit=10, compact_view=False)
# Filter by tags (ANY mode - documents with "research" OR "ml")
list_documents(tags=["research", "ml"])
# Filter by tags (ALL mode - documents with both "research" AND "ml")
list_documents(tags=["research", "ml"], match_all_tags=True)
```
**Note:** Tag filtering is performed client-side after fetching results. For large collections with complex filtering needs, consider using the `search()` tool with metadata filters instead.
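The ANY/ALL semantics of `match_all_tags` reduce to a set comparison. A sketch of the client-side filtering step described above (illustrative, not the package's actual code):

```python
def filter_by_tags(documents: list, tags: list, match_all_tags: bool = False) -> list:
    """Keep documents whose metadata tags match ANY (default) or ALL of the given tags."""
    wanted = set(tags)
    result = []
    for doc in documents:
        doc_tags = set(doc.get("metadata", {}).get("tags", []))
        # ALL mode: every requested tag must be present; ANY mode: at least one overlap.
        keep = wanted.issubset(doc_tags) if match_all_tags else bool(wanted & doc_tags)
        if keep:
            result.append(doc)
    return result
```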
### 5. get_document
Retrieve a specific document by ID or title.
**Parameters:**
- `document_id` (str, optional): Document ID
- `title` (str, optional): Document title
**Example:**
```python
get_document(title="ML Guide")
```
### 6. delete_document
Delete a document by ID.
**Parameters:**
- `document_id` (str, required): The unique identifier of the document to delete
**Example:**
```python
delete_document(document_id="550e8400-e29b-41d4-a716-446655440000")
```
**Returns:**
- Success message with document ID on successful deletion
- Error message if document not found or permission denied
## Metadata Filtering
All filter operators supported:
- `$eq`: Equal
- `$neq`: Not equal
- `$gt`, `$gte`: Greater than (or equal)
- `$lt`, `$lte`: Less than (or equal)
- `$in`: In array
- `$nin`: Not in array
- `$like`, `$ilike`: Pattern matching (case-sensitive/insensitive)
**Examples:**
```python
# Filter by tags
filters={"tags": {"$in": ["research", "ml"]}}
# Filter by domain
filters={"domain": {"$eq": "instructions"}}
# Combined filters
filters={"tags": {"$in": ["research"]}, "created_at": {"$gte": "2024-01-01"}}
```
## Development
### Local Installation
Install directly from PyPI:
```bash
pip install ims-mcp
```
Or for the latest development version, install from source if you have the code locally:
```bash
pip install -e .
```
### Running Tests
```bash
pip install -e ".[dev]"
pytest
```
### Building for Distribution
```bash
python -m build
```
## Usage Analytics
Rosetta MCP includes built-in usage analytics via PostHog to help understand feature adoption and usage patterns.
### Default Behavior
**Published packages** (from PyPI via CI/CD): Analytics are **ENABLED BY DEFAULT** with a built-in Project API Key (write-only, safe for client-side use). No configuration required.
**Local development builds**: Analytics are **DISABLED** (placeholder key remains in source code).
### Disable Analytics
To **disable** analytics, set `POSTHOG_API_KEY` to an empty string in your MCP configuration:
```json
{
"mcpServers": {
"KnowledgeBase": {
"command": "uvx",
"args": ["ims-mcp@latest"],
"env": {
"R2R_API_BASE": "https://your-server.com/",
"R2R_COLLECTION": "aia-r1",
"POSTHOG_API_KEY": ""
}
}
}
}
```
### Use Custom PostHog Project
To track analytics in your own PostHog project, provide your Project API Key:
```json
{
"mcpServers": {
"KnowledgeBase": {
"env": {
"POSTHOG_API_KEY": "phc_YOUR_CUSTOM_PROJECT_API_KEY",
"POSTHOG_HOST": "https://us.i.posthog.com"
}
}
}
}
```
**Where to Find Your Project API Key:**
1. Log into PostHog dashboard
2. Navigate to: **Project Settings** → **Project API Key**
3. Copy the key (starts with `phc_`)
**Important**: Use **Project API Key** (write-only, for event ingestion), not Personal API Key.
### What's Tracked
**User Context:**
- Username (from `USER`/`USERNAME`/`LOGNAME` environment variables + `whoami` fallback)
- Repository names (from MCP `roots/list` protocol request, comma-separated if multiple; fallback to `client_id` parsing; 5-min cache)
- MCP server identifier (`mcp_server: "Rosetta"`) and version (`mcp_server_version: "1.0.30"`)
- GeoIP enabled via `disable_geoip=False` in client initialization (MCP runs locally on user's machine, IP is user's actual location)
**Business Parameters** (usage patterns):
- `query` - Search queries
- `filters`, `tags` - Filter/tag usage patterns
- `title` - Document title searches
- `document_id`, `document_ids` - Document access patterns (kept for tracking)
- `use_semantic_search`, `use_fulltext_search` - Search method preferences
- `match_all_tags` - Tag matching logic
**Excluded** (technical parameters):
- `limit`, `offset`, `page` - Pagination
- `compact_view` - View settings
- `model`, `temperature`, `max_tokens` - RAG tuning parameters
### Privacy & Control
- **Opt-out**: Analytics enabled by default with built-in key, easy to disable
- **Write-only**: Project API key can only send events, cannot read analytics data
- **Non-blocking**: Analytics never delays or breaks MCP tool responses
- **User control**: Set `POSTHOG_API_KEY=""` to disable tracking anytime
- **Custom tracking**: Use your own PostHog project by setting custom API key
## Requirements
- Python >= 3.10
- Rosetta server running and accessible (powered by R2R Light)
- r2r Python SDK >= 3.6.0
- mcp >= 1.0.0
- posthog >= 7.0.0 (for built-in analytics)
## License
MIT License - see LICENSE file for details
This package is built on R2R (RAG to Riches) technology by SciPhi AI, which is licensed under the MIT License. We gratefully acknowledge the R2R project and its contributors.
## Links
- **R2R Technology**: https://github.com/SciPhi-AI/R2R
- **Model Context Protocol**: https://modelcontextprotocol.io/
- **FastMCP**: https://github.com/jlowin/fastmcp
## Support
For issues and questions, visit the package page: https://pypi.org/project/ims-mcp/
| text/markdown | Igor Solomatov | null | null | null | null | mcp, ims, retrieval, rag, ai, llm, model-context-protocol, knowledge-base | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ragflow-sdk<1.0.0,>=0.24.0",
"mcp<2.0.0,>=1.26.0",
"fastmcp<3.0.0,>=2.14.5",
"posthog<8.0.0,>=7.0.0",
"uuid7-standard<2.0.0,>=1.0.0",
"build>=1.0.0; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://pypi.org/project/ims-mcp/"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T21:04:42.494450 | ims_mcp-2.0.0b46.tar.gz | 57,461 | e8/6b/7dcfc5190cde62f16bfba3116313e4768f9d345b4cf52448a6640e276af0/ims_mcp-2.0.0b46.tar.gz | source | sdist | null | false | e90176b00be673828144aef7fb96470f | 097364ff3fb585b0d8c2c73e2e7c894e952104be271a60b2c152abe0a4e49c54 | e86b7dcfc5190cde62f16bfba3116313e4768f9d345b4cf52448a6640e276af0 | MIT | [
"LICENSE"
] | 182 |
2.4 | data-designer | 0.5.1 | General framework for synthetic data generation | # 🎨 NeMo Data Designer
[](https://github.com/NVIDIA-NeMo/DataDesigner/actions/workflows/ci.yml)
[](https://opensource.org/licenses/Apache-2.0)
[](https://www.python.org/downloads/) [](https://docs.nvidia.com/nemo/microservices/latest/index.html) [](https://nvidia-nemo.github.io/DataDesigner/) 
**Generate high-quality synthetic datasets from scratch or using your own seed data.**
---
## Welcome!
Data Designer helps you create synthetic datasets that go beyond simple LLM prompting. Whether you need diverse statistical distributions, meaningful correlations between fields, or validated high-quality outputs, Data Designer provides a flexible framework for building production-grade synthetic data.
## What can you do with Data Designer?
- **Generate diverse data** using statistical samplers, LLMs, or existing seed datasets
- **Control relationships** between fields with dependency-aware generation
- **Validate quality** with built-in Python, SQL, and custom local and remote validators
- **Score outputs** using LLM-as-a-judge for quality assessment
- **Iterate quickly** with preview mode before full-scale generation
---
## Quick Start
### 1. Install
```bash
pip install data-designer
```
Or install from source:
```bash
git clone https://github.com/NVIDIA-NeMo/DataDesigner.git
cd DataDesigner
make install
```
### 2. Set your API key
Start with one of our default model providers:
- [NVIDIA Build API](https://build.nvidia.com)
- [OpenAI](https://platform.openai.com/api-keys)
- [OpenRouter](https://openrouter.ai)
Grab your API key(s) using the above links and set one or more of the following environment variables:
```bash
export NVIDIA_API_KEY="your-api-key-here"
export OPENAI_API_KEY="your-openai-api-key-here"
export OPENROUTER_API_KEY="your-openrouter-api-key-here"
```
### 3. Start generating data!
```python
import data_designer.config as dd
from data_designer.interface import DataDesigner
# Initialize with default settings
data_designer = DataDesigner()
config_builder = dd.DataDesignerConfigBuilder()
# Add a product category
config_builder.add_column(
dd.SamplerColumnConfig(
name="product_category",
sampler_type=dd.SamplerType.CATEGORY,
params=dd.CategorySamplerParams(
values=["Electronics", "Clothing", "Home & Kitchen", "Books"],
),
)
)
# Generate personalized customer reviews
config_builder.add_column(
dd.LLMTextColumnConfig(
name="review",
model_alias="nvidia-text",
prompt="Write a brief product review for a {{ product_category }} item you recently purchased.",
)
)
# Preview your dataset
preview = data_designer.preview(config_builder=config_builder)
preview.display_sample_record()
```
---
## What's next?
### 📚 Learn more
- **[Quick Start Guide](https://nvidia-nemo.github.io/DataDesigner/latest/quick-start/)** – Detailed walkthrough with more examples
- **[Tutorial Notebooks](https://nvidia-nemo.github.io/DataDesigner/latest/notebooks/)** – Step-by-step interactive tutorials
- **[Column Types](https://nvidia-nemo.github.io/DataDesigner/latest/concepts/columns/)** – Explore samplers, LLM columns, validators, and more
- **[Validators](https://nvidia-nemo.github.io/DataDesigner/latest/concepts/validators/)** – Learn how to validate generated data with Python, SQL, and remote validators
- **[Model Configuration](https://nvidia-nemo.github.io/DataDesigner/latest/concepts/models/model-configs/)** – Configure custom models and providers
- **[Person Sampling](https://nvidia-nemo.github.io/DataDesigner/latest/concepts/person_sampling/)** – Learn how to sample realistic person data with demographic attributes
### 🔧 Configure models via CLI
```bash
data-designer config providers # Configure model providers
data-designer config models # Set up your model configurations
data-designer config list # View current settings
```
### 🤝 Get involved
- **[Contributing Guide](https://nvidia-nemo.github.io/DataDesigner/latest/CONTRIBUTING)** – Help improve Data Designer
- **[GitHub Issues](https://github.com/NVIDIA-NeMo/DataDesigner/issues)** – Report bugs or make a feature request
---
## Telemetry
Data Designer collects telemetry to help us improve the library for developers. We collect:
* The names of models used
* The count of input tokens
* The count of output tokens
**No user or device information is collected.** This data is not used to track any individual user's behavior; it is used only in aggregate, to understand which models are the most popular for synthetic data generation (SDG). We will share this usage data with the community.
Specifically, what is collected is the model name defined in a `ModelConfig` object. In the example config below:
```python
ModelConfig(
alias="nv-reasoning",
model="openai/gpt-oss-20b",
provider="nvidia",
inference_parameters=ChatCompletionInferenceParams(
temperature=0.3,
top_p=0.9,
max_tokens=4096,
),
)
```
The value `openai/gpt-oss-20b` would be collected.
To disable telemetry capture, set `NEMO_TELEMETRY_ENABLED=false`.
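Telemetry can also be disabled programmatically by setting the variable before the library is initialized (a sketch; setting it in the shell with `export NEMO_TELEMETRY_ENABLED=false` is equivalent):

```python
import os

# Opt out of telemetry; must be set before Data Designer starts.
os.environ["NEMO_TELEMETRY_ENABLED"] = "false"

print(os.environ["NEMO_TELEMETRY_ENABLED"])  # → false
```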
### Top Models
This chart represents the breakdown of models used for Data Designer across all synthetic data generation jobs from 1/5/2026 to 2/5/2026.

_Last updated on 2/05/2026_
---
## License
Apache License 2.0 – see [LICENSE](LICENSE) for details.
---
## Citation
If you use NeMo Data Designer in your research, please cite it using the following BibTeX entry:
```bibtex
@misc{nemo-data-designer,
author = {The NeMo Data Designer Team, NVIDIA},
title = {NeMo Data Designer: A framework for generating synthetic data from scratch or based on your own seed data},
howpublished = {\url{https://github.com/NVIDIA-NeMo/DataDesigner}},
year = {2025},
note = {GitHub Repository},
}
```
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"data-designer-config==0.5.1",
"data-designer-engine==0.5.1",
"prompt-toolkit<4,>=3.0.0",
"typer<1,>=0.12.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T21:04:41.802661 | data_designer-0.5.1.tar.gz | 103,063 | d5/e3/235e3d7b33db7cd7d885ccaf9d0c0b8a0f7eb36c144da9a087141e0f510d/data_designer-0.5.1.tar.gz | source | sdist | null | false | 93a85a73097a3ffcc70e582e0cfa3e88 | 0397f6067b27da4717e7c2fb228093f20150ce560007ba705b20dbd7fbe4c345 | d5e3235e3d7b33db7cd7d885ccaf9d0c0b8a0f7eb36c144da9a087141e0f510d | Apache-2.0 | [] | 229 |
2.4 | data-designer-engine | 0.5.1 | Generation engine for DataDesigner synthetic data generation | # data-designer-engine
Generation engine for NeMo Data Designer synthetic data generation framework.
This package contains the execution engine that powers Data Designer. It depends on `data-designer-config` and includes heavy dependencies like pandas, numpy, and LLM integration via litellm.
## Installation
```bash
pip install data-designer-engine
```
This automatically installs `data-designer-config` as a dependency.
See main [README.md](https://github.com/NVIDIA-NeMo/DataDesigner/blob/main/README.md) for more information.
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anyascii<1,>=0.3.3",
"data-designer-config==0.5.1",
"duckdb<2,>=1.1.3",
"faker<21,>=20.1.0",
"httpx-retries<1,>=0.4.2",
"httpx<1,>=0.27.2",
"huggingface-hub<2,>=1.0.1",
"json-repair<1,>=0.48.0",
"jsonpath-rust-bindings<2,>=1.0",
"jsonschema<5,>=4.0.0",
"litellm<1.80.12,>=1.73.6",
"lxml<7,>=6.0.2",
"marko<3,>=2.1.2",
"mcp<2,>=1.26.0",
"networkx<4,>=3.0",
"ruff<1,>=0.14.10",
"scipy<2,>=1.11.0",
"sqlfluff<4,>=3.2.0",
"tiktoken<1,>=0.8.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T21:04:38.872568 | data_designer_engine-0.5.1.tar.gz | 664,057 | 2d/34/5301e05db2b915eeca9cf7219c745fdff7fc2e1e65dc6e1535586c5881a6/data_designer_engine-0.5.1.tar.gz | source | sdist | null | false | b0dfd67eed33afc3a417e567417fee8a | f440f8257d3c8d353d5acab820ef5a522b7e927b6991383c4b76d5216a63e096 | 2d345301e05db2b915eeca9cf7219c745fdff7fc2e1e65dc6e1535586c5881a6 | Apache-2.0 | [] | 230 |
2.4 | data-designer-config | 0.5.1 | Configuration layer for DataDesigner synthetic data generation | # data-designer-config
Configuration layer for NeMo Data Designer synthetic data generation framework.
This package provides the configuration API for defining synthetic data generation pipelines. It's a lightweight dependency that can be used standalone for configuration management.
## Installation
```bash
pip install data-designer-config
```
## Usage
```python
import data_designer.config as dd
# Initialize config builder with model config(s)
config_builder = dd.DataDesignerConfigBuilder(
model_configs=[
dd.ModelConfig(
alias="my-model",
model="nvidia/nemotron-3-nano-30b-a3b",
inference_parameters=dd.ChatCompletionInferenceParams(temperature=0.7),
),
]
)
# Add columns
config_builder.add_column(
dd.SamplerColumnConfig(
name="user_id",
sampler_type=dd.SamplerType.UUID,
params=dd.UUIDSamplerParams(prefix="user-"),
)
)
config_builder.add_column(
dd.LLMTextColumnConfig(
name="description",
prompt="Write a product description",
model_alias="my-model",
)
)
# Build configuration
config = config_builder.build()
```
See main [README.md](https://github.com/NVIDIA-NeMo/DataDesigner/blob/main/README.md) for more information.
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jinja2<4,>=3.1.6",
"numpy<3,>=1.23.5",
"pandas<3,>=2.3.3",
"pillow<13,>=12.0.0",
"pyarrow<20,>=19.0.1",
"pydantic[email]<3,>=2.9.2",
"pygments<3,>=2.19.2",
"python-json-logger<4,>=3",
"pyyaml<7,>=6.0.1",
"requests<3,>=2.32",
"rich<15,>=13.7.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T21:04:35.221116 | data_designer_config-0.5.1.tar.gz | 111,873 | 7f/55/11ea7ae3b54d070581e301d3ef44c3beeb739d55cb4fc33d3ce61ad6dd1d/data_designer_config-0.5.1.tar.gz | source | sdist | null | false | 69c02fa295914857d7600866d8a5efaa | 55c1fff2b0d8fb873797425c313c1dbb7deb901740a0f1b5b90458a268284b81 | 7f5511ea7ae3b54d070581e301d3ef44c3beeb739d55cb4fc33d3ce61ad6dd1d | Apache-2.0 | [] | 234 |
2.4 | styrene-tui | 0.5.0 | Terminal UI for Styrene mesh network management | # Styrene
Terminal UI for Reticulum mesh network management.
A production-ready terminal interface for LXMF messaging, device discovery, and remote management over Reticulum mesh networks. Built on **styrene-core** for headless library functionality with an Imperial CRT terminal interface.
**For headless deployments**, see **[styrened](https://github.com/styrene-lab/styrened)** - a lightweight daemon optimized for edge devices and NixOS.
## Architecture
Styrene is part of a three-package ecosystem:
```
┌──────────────────┐  ┌──────────────────┐
│  styrene (TUI)   │  │     styrened     │
│  (this package)  │  │     (daemon)     │
├──────────────────┤  ├──────────────────┤
│              styrene-core              │
│           (headless library)           │
├────────────────────────────────────────┤
│        Reticulum Network Stack         │
└────────────────────────────────────────┘
```
**Package Roles**:
- **styrene-core** - Headless library for RNS/LXMF applications
- **styrene** (this) - Interactive terminal UI
- **styrened** - Lightweight daemon for edge devices (no UI deps)
This separation enables:
- **Interactive management** with full TUI (this package)
- **Headless services** with minimal footprint (styrened)
- **Custom applications** building on styrene-core
## Quick Start
```bash
# Install styrene (includes styrene-core)
pip install styrene
# Run the TUI
styrene
# Or for development:
git clone https://github.com/styrene-lab/styrene.git
cd styrene
pip install -e ".[dev]"
make run
```
**For headless deployments** (edge devices, servers):
```bash
pip install styrened
styrened
# See https://github.com/styrene-lab/styrened
```
## Features
### Text Messaging
- Send/receive messages over LXMF mesh networks
- Conversation history with persistent SQLite storage
- Unread message tracking and notifications
- Imperial CRT theme - Classic green phosphor terminal aesthetics
### Remote Device Management
- **Device status monitoring** - CPU, memory, disk, network, services
- **Remote command execution** - Execute shell commands securely
- **Device rebooting** - Immediate or scheduled with delay
- **Configuration management** - Update YAML configs remotely
- **SSH-like console** - Familiar command-line interface over mesh
### Security and Authorization
- **Identity-based access control** - Per-user, per-command permissions
- **Command whitelisting** - Only safe commands executable
- **Audit logging** - All operations logged with timestamps
- **Systemd hardening** - Resource limits, sandboxing, privilege restrictions
## Usage Guide
### Text Messaging
```python
from styrene_core.protocols.chat import ChatProtocol
from styrene_core.services.lxmf_service import get_lxmf_service
# Initialize
lxmf = get_lxmf_service()
if lxmf.is_initialized:
chat = ChatProtocol(
router=lxmf.router,
identity=lxmf._identity,
db_engine=None # Or provide SQLAlchemy engine
)
# Send message
chat.send_message(
destination_hash="abc123def456...",
content="Hello over LXMF!"
)
```
### Device Management
#### RPC Client Usage
```python
from styrene.services.lxmf_service import LXMFService
from styrene.services.rpc_client import RPCClient
# Initialize the transport and the RPC client on top of it
lxmf = LXMFService()
rpc = RPCClient(lxmf)
device_hash = "remote_device_identity_hash"
# Query device status
status = await rpc.call_status(device_hash)
print(f"IP: {status.ip}, Uptime: {status.uptime}s")
# Execute command
result = await rpc.call_exec(device_hash, "systemctl", ["status", "reticulum"])
print(f"Exit code: {result.exit_code}\n{result.stdout}")
# Reboot device (with 5-minute delay)
reboot = await rpc.call_reboot(device_hash, delay=300)
print(f"Reboot scheduled: {reboot.message}")
# Update configuration
config = await rpc.call_update_config(
device_hash,
{"log_level": "DEBUG", "max_retries": "5"}
)
print(f"Updated: {config.updated_keys}")
```
#### Device Console UI
SSH-like interface accessible via TUI:
```
$ status
IP Address: 192.168.1.100
Uptime: 1d 5h 23m (106380s)
Services: reticulum, lxmf, sshd
Disk Usage: 65% (32GB/50GB)
$ exec systemctl status reticulum
● reticulum.service - Reticulum Network
Active: active (running) since...
$ reboot 300
Success: Reboot scheduled in 5 minutes
```
**Available commands:**
- `status` - Display device information
- `exec <command> [args...]` - Execute shell command
- `reboot [delay]` - Reboot device (delay in seconds)
- `update-config <key> <value>` - Update configuration
**Keyboard shortcuts:**
- `Enter` - Execute command
- `Escape` - Return to previous screen
- `Ctrl+L` - Clear history
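The `status` output above reports uptime both human-readable and in raw seconds. A minimal sketch of that formatting (a hypothetical helper, not part of the Styrene API):

```python
def format_uptime(seconds: int) -> str:
    # Split raw uptime seconds into days / hours / minutes,
    # matching the "1d 5h 23m (106380s)" style shown in the console.
    days, rem = divmod(seconds, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes = rem // 60
    return f"{days}d {hours}h {minutes}m ({seconds}s)"

print(format_uptime(90_061))  # → 1d 1h 1m (90061s)
```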
## Hub Deployment
Styrene can be deployed as a public mesh hub for community infrastructure. When run in hub mode:
- **RPC relay mode**: Routes RPC messages between devices without executing commands
- **Device discovery**: Tracks mesh topology and announces hub presence
- **Message propagation**: LXMF store-and-forward via lxmd
- **Security**: Command execution is disabled on public hubs
For detailed hub deployment:
- **Kubernetes**: See [reticulum/k8s](../reticulum/k8s) for manifests
- **Docker**: Use `reticulum-hub` container image
- **Configuration**: See [Hub Config Guide](../reticulum/docs/HUB-CONFIG.md)
Quick start:
```bash
# Run as hub (relay mode, no command execution)
styrene --headless --mode hub
```
## RPC Server Setup
### 1. Installation (NixOS Devices)
```bash
# Install server package
pip install -e packages/styrene-bond-rpc/
# Create config directory
mkdir -p ~/.config/styrene-bond-rpc
```
### 2. Configure Authorization
Create `~/.config/styrene-bond-rpc/auth.yaml`:
```yaml
identities:
# Admin - full access
- hash: "abc123def456..." # Client identity hash
name: "Admin User"
permissions:
- status
- exec
- reboot
- update_config
# Monitor - read-only
- hash: "789ghi012jkl..."
name: "Monitor User"
permissions:
- status
# Operator - limited control
- hash: "345mno678pqr..."
name: "Operator"
permissions:
- status
- exec
- reboot
```
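The permission model above amounts to a lookup from identity hash to allowed commands. A sketch of the check (illustrative only; the real logic lives in styrene-bond-rpc's auth module, and the hashes below are placeholders):

```python
# Placeholder identities mirroring the auth.yaml above.
identities = [
    {"hash": "abc123def456", "permissions": ["status", "exec", "reboot", "update_config"]},
    {"hash": "789ghi012jkl", "permissions": ["status"]},
]

def is_authorized(identity_hash: str, command: str) -> bool:
    # Unknown identities and unlisted commands are denied by default.
    for ident in identities:
        if ident["hash"] == identity_hash:
            return command in ident["permissions"]
    return False

print(is_authorized("789ghi012jkl", "status"))  # → True  (read-only user may query)
print(is_authorized("789ghi012jkl", "reboot"))  # → False (...but not reboot)
```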
**Getting your identity hash:**
```bash
# Using Reticulum CLI
rnstatus
# Look for: Identity: <abc123def456...>
```
Or programmatically:
```python
from styrene.services.lxmf_service import LXMFService
lxmf = LXMFService()
print(f"My hash: {lxmf.identity.hexhash}")
```
### 3. Deploy Systemd Service
```bash
# Install service file
sudo cp packages/styrene-bond-rpc/systemd/styrene-bond-rpc.service /etc/systemd/system/
# Start service
sudo systemctl daemon-reload
sudo systemctl enable --now styrene-bond-rpc
# Verify
sudo systemctl status styrene-bond-rpc
journalctl -u styrene-bond-rpc -f
```
### 4. Verify from Client
```python
from styrene.services.rpc_client import RPCClient
rpc = RPCClient(lxmf)
status = await rpc.call_status("server_device_hash")
print(f"Server alive! IP: {status.ip}")
```
## API Reference
### Chat Protocol
```python
from styrene.protocols.chat import ChatProtocol
chat = ChatProtocol(router=lxmf, identity=identity)
# Send message
await chat.send_message(destination="...", content="Hello!")
# Protocol ID
assert chat.protocol_id == "chat"
```
### RPC Client
```python
from styrene.services.rpc_client import RPCClient
rpc = RPCClient(lxmf_service)
# Set default timeout
rpc.default_timeout = 60.0
# Query status
status: StatusResponse = await rpc.call_status(dest, timeout=30.0)
# Execute command
result: ExecResult = await rpc.call_exec(dest, "cmd", ["args"])
# Reboot device
reboot: RebootResult = await rpc.call_reboot(dest, delay=300)
# Update config
config: UpdateConfigResult = await rpc.call_update_config(dest, {...})
```
### Response Types
```python
from styrene.models.rpc_messages import (
StatusResponse,
ExecResult,
RebootResult,
UpdateConfigResult
)
# StatusResponse
status.uptime: int # Uptime in seconds
status.ip: str # IP address
status.services: list[str] # Running services
status.disk_used: int # Disk used (bytes)
status.disk_total: int # Total disk (bytes)
# ExecResult
result.exit_code: int # Command exit code
result.stdout: str # Standard output
result.stderr: str # Standard error
# RebootResult
reboot.success: bool # Reboot scheduled?
reboot.message: str # Human-readable message
reboot.scheduled_time: float | None  # Unix timestamp (if delayed)
# UpdateConfigResult
config.success: bool # Update succeeded?
config.message: str # Human-readable message
config.updated_keys: list[str] # Updated keys
```
## Architecture
```
┌─────────────── Styrene TUI Application ───────────────┐
│                                                       │
│  UI Layer (Textual)                                   │
│  ┌──────────┐  ┌────────────┐  ┌──────────────┐       │
│  │  Inbox   │  │Conversation│  │Device Console│       │
│  └────┬─────┘  └─────┬──────┘  └──────┬───────┘       │
│       │              │                │               │
│  ┌────▼──────────────▼────────────────▼─────┐         │
│  │        ProtocolRegistry (Router)         │         │
│  │   ┌──────────┐       ┌──────────┐        │         │
│  │   │   Chat   │       │   RPC    │        │         │
│  │   │ Protocol │       │  Client  │        │         │
│  │   └─────┬────┘       └─────┬────┘        │         │
│  └─────────┼──────────────────┼─────────────┘         │
│            │                  │                       │
│  ┌─────────▼──────────────────▼─────────┐             │
│  │       LXMFService (Transport)        │             │
│  └─────────┬────────────────────────┬───┘             │
└────────────┼────────────────────────┼─────────────────┘
             │                        │
     ┌───────▼────────┐       ┌───────▼─────────┐
     │   SQLite DB    │       │    Reticulum    │
     │   (Messages)   │       │  Mesh Network   │
     └────────────────┘       └─────────────────┘
                                      │
                              ┌───────▼─────────┐
                              │  Remote Device  │
                              │   RPC Server    │
                              └─────────────────┘
```
**Key Components:**
- **ProtocolRegistry** - Routes messages to correct protocol handler
- **ChatProtocol** - Text messaging with persistence
- **RPCClient** - Device management commands
- **LXMFService** - LXMF/Reticulum integration
- **Message Model** - SQLAlchemy ORM for persistence
## Security
### Threat Mitigation
- **Unauthorized commands** - Identity-based authorization
- **Arbitrary code execution** - Command whitelisting
- **Privilege escalation** - Systemd hardening
- **Resource exhaustion** - CPU/memory limits
### Best Practices
**1. Principle of Least Privilege**
Only grant necessary permissions:
```yaml
# Give minimal access
identities:
- hash: "operator_hash"
permissions:
- status
- exec
# NO reboot or update_config
```
**2. Command Whitelisting**
Review allowed commands in `handlers.py`:
```python
allowed_commands = {
"systemctl", "journalctl", "cat", "ls",
# NEVER: "rm", "dd", "mkfs", "curl"
}
```
**3. Identity Rotation**
```bash
# Generate new identity periodically
rnid -g -n my-device-v2
# Update auth.yaml with new hash
# Remove old identity
```
**4. Audit Logs**
```bash
# Check for denied access
journalctl -u styrene-bond-rpc | grep "Authorization denied"
# Monitor failed commands
journalctl -u styrene-bond-rpc | grep "exit_code.*[^0]"
```
## Troubleshooting
### "RPC timeout - no response"
**Check device is reachable:**
```bash
rnprobe <device_hash>
```
**Verify RPC server is running:**
```bash
ssh device-ip
sudo systemctl status styrene-bond-rpc
```
### "Authorization denied"
**Get your identity hash:**
```bash
rnstatus # Look for: Identity: <abc123...>
```
**Add to server's auth.yaml:**
```yaml
identities:
- hash: "abc123..."
permissions: ["status", "exec"]
```
**Reload server:**
```bash
sudo systemctl restart styrene-bond-rpc
```
### Debug Logging
```python
import logging
logging.basicConfig(level=logging.DEBUG)
# Or specific modules
logging.getLogger('styrene.protocols').setLevel(logging.DEBUG)
logging.getLogger('styrene.services.rpc_client').setLevel(logging.DEBUG)
```
## Development
### Running Tests
```bash
# All tests
python -m pytest tests/ packages/styrene-bond-rpc/tests/ -v
# Specific test file
python -m pytest tests/protocols/test_chat.py -v
# With coverage
python -m pytest --cov=src --cov-report=html
```
### Code Quality
```bash
# Linter
ruff check src/ tests/
ruff check --fix src/ tests/
# Type checker
mypy src/
# Full validation
make validate
```
### Project Structure
```
src/styrene/
├── protocols/ # Protocol implementations
│ ├── base.py # Protocol ABC
│ ├── registry.py # Protocol router
│ └── chat.py # Chat protocol
├── services/ # Business logic
│ ├── lxmf_service.py # LXMF transport
│ └── rpc_client.py # RPC client
├── screens/ # TUI screens
│ ├── inbox.py
│ ├── conversation.py
│ └── device_console.py
├── models/ # Data models
│ ├── messages.py
│ └── rpc_messages.py
└── widgets/ # Custom widgets
packages/styrene-bond-rpc/ # RPC server package
├── src/
│ ├── server.py
│ ├── handlers.py
│ └── auth.py
├── systemd/
└── tests/
tests/ # Client tests
├── protocols/
├── services/
├── screens/
└── integration/
```
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Acknowledgments
- [Reticulum](https://github.com/markqvist/Reticulum)
- [LXMF](https://github.com/markqvist/LXMF)
- [Textual](https://github.com/Textualize/textual)
- [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy)
| text/markdown | styrene-lab | null | null | null | MIT | fleet, provisioning, reticulum, terminal, tui | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Systems Administration"
] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"psutil>=5.9",
"styrened>=0.6.0",
"textual>=0.47.0",
"mypy>=1.8; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-forked>=1.6; extra == \"dev\"",
"pytest-textual-snapshot>=1.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"textual-dev>=1.0; extra == \"dev\"",
"types-pyyaml; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/styrene-lab/styrene-tui"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T21:04:21.439416 | styrene_tui-0.5.0.tar.gz | 1,016,698 | 31/bd/7524e1525b597afc98ec15cc8514d3fd931d182571167a40d9a8da5c3bf7/styrene_tui-0.5.0.tar.gz | source | sdist | null | false | 0cf088ebc10c1337f88ab849e9ff4ff0 | 9e7dcc17caa79e888d9f05fb5f38d890e33dd7ead1d672832548758df46515c3 | 31bd7524e1525b597afc98ec15cc8514d3fd931d182571167a40d9a8da5c3bf7 | null | [] | 212 |
2.4 | hyperstack-py | 1.4.0 | Cloud memory for AI agents. 3 lines to integrate. No SDK dependencies. | # 🃏 HyperStack Python SDK
**Cloud memory for AI agents. Zero dependencies. 3 lines to integrate.**
## Install
```bash
pip install hyperstack-py
```
## Quick Start
```python
from hyperstack import HyperStack
hs = HyperStack("hs_your_key")
# Store a memory
hs.store("project-api", "API", "FastAPI 3.12 on AWS", stack="projects", keywords=["fastapi", "python"])
# Search memories
results = hs.search("python")
# List all cards
cards = hs.list()
# Delete a card
hs.delete("project-api")
# Get usage stats
stats = hs.stats()
print(f"Saving {stats['savings_pct']}% on tokens!")
# Auto-extract from conversation text
hs.ingest("Alice is a senior engineer. We decided to use FastAPI over Django.")
```
## Why HyperStack?
- **Zero dependencies** — just Python stdlib
- **No LLM costs** — memory ops are free
- **94% token savings** — ~350 tokens vs ~6,000 per message
- **30-second setup** — get key at [cascadeai.dev](https://cascadeai.dev)
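The savings figure above is simple arithmetic over the README's own per-message token estimates:

```python
full_context_tokens = 6_000  # ~tokens when resending full context each message
hyperstack_tokens = 350      # ~tokens when sending HyperStack cards instead

savings_pct = round((1 - hyperstack_tokens / full_context_tokens) * 100)
print(f"{savings_pct}%")  # → 94%
```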
## API Reference
| Method | Description |
|--------|-------------|
| `store(slug, title, body, stack, keywords)` | Create/update a card |
| `search(query)` | Search cards |
| `list(stack=None)` | List all cards |
| `get(slug)` | Get one card |
| `delete(slug)` | Delete a card |
| `stats()` | Usage summary |
| `ingest(text)` | Auto-extract memories |
## Get a free key
→ [cascadeai.dev](https://cascadeai.dev) — 50 cards free, no credit card.
## License
MIT © [CascadeAI](https://cascadeai.dev)
| text/markdown | CascadeAI | deeq.yaqub1@gmail.com | null | null | MIT | ai memory agent llm hyperstack mcp claude cursor | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | https://cascadeai.dev | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://cascadeai.dev",
"Source, https://github.com/deeqyaqub1-cmd/hyperstack-py",
"Discord, https://discord.gg/tdnXaV6e"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T21:03:45.988739 | hyperstack_py-1.4.0.tar.gz | 6,906 | c6/4d/3752edeb15e9953305ebc9e1f074f00858656ef642d20f8ae1053cdd922a/hyperstack_py-1.4.0.tar.gz | source | sdist | null | false | e543122049fc4cc0a38b2089ca1f9cd5 | f5b261653e9243760b5d11054b8c7b6513a5e240c74d49d7aa5885a706f8fbbb | c64d3752edeb15e9953305ebc9e1f074f00858656ef642d20f8ae1053cdd922a | null | [] | 210 |
2.1 | simba-uw-tf-dev | 4.9.4 | Toolkit for computer classification and analysis of behaviors in experimental animals | # SimBA (Simple Behavioral Analysis)

SimBA (Simple Behavioral Analysis) is a platform for analyzing behaviors of experimental animals within video recordings.
### More Information
See below for raison d'être, detailed API, tutorials, data, documentation, support, and walkthroughs:
- GitHub: [https://github.com/sgoldenlab/simba](https://github.com/sgoldenlab/simba)
- Documentation readthedocs: [https://simba-uw-tf-dev.readthedocs.io/en/latest/](https://simba-uw-tf-dev.readthedocs.io/en/latest/)
- API: [https://simba-uw-tf-dev.readthedocs.io/en/latest/api.html](https://simba-uw-tf-dev.readthedocs.io/en/latest/api.html)
- Gitter chat support: [https://app.gitter.im/#/room/#SimBA-Resource_community:gitter.im](https://app.gitter.im/#/room/#SimBA-Resource_community:gitter.im)
- bioRxiv preprint: [https://www.biorxiv.org/content/10.1101/2020.04.19.049452v2](https://www.biorxiv.org/content/10.1101/2020.04.19.049452v2)
- Nature Neuroscience paper: [https://www.nature.com/articles/s41593-024-01649-9](https://www.nature.com/articles/s41593-024-01649-9)
- Open Science Framework (OSF) data buckets: [https://osf.io/tmu6y/](https://osf.io/tmu6y/)
- Video examples: [http://youtube.com/playlist?list=PLi5Vwf0hhy1R6NDQJ3U28MOUJPfl2YWYl](http://youtube.com/playlist?list=PLi5Vwf0hhy1R6NDQJ3U28MOUJPfl2YWYl)
- Slide examples: [https://simba-uw-tf-dev.readthedocs.io/en/latest/docs/workflow.html](https://simba-uw-tf-dev.readthedocs.io/en/latest/docs/workflow.html)
### Installation
To install SimBA, use the following command:
```bash
pip install simba-uw-tf-dev
```
### Citation
If you use the code, please cite:
```bibtex
@article{Nilsson2020.04.19.049452,
author = {Nilsson, Simon RO and Goodwin, Nastacia L. and Choong, Jia Jie and Hwang, Sophia and Wright, Hayden R and Norville, Zane C and Tong, Xiaoyu and Lin, Dayu and Bentzley, Brandon S. and Eshel, Neir and McLaughlin, Ryan J and Golden, Sam A.},
title = {Simple Behavioral Analysis (SimBA) – an open source toolkit for computer classification of complex social behaviors in experimental animals},
elocation-id = {2020.04.19.049452},
year = {2020},
doi = {10.1101/2020.04.19.049452},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2020/04/21/2020.04.19.049452},
eprint = {https://www.biorxiv.org/content/early/2020/04/21/2020.04.19.049452.full.pdf},
journal = {bioRxiv}
}
```
### Licence
SimBA is licensed under GNU Lesser General Public License v3.0.
### Contributors
See the list of contributors on GitHub: https://github.com/sgoldenlab/simba#contributors
### Contact
* [Simon N](https://github.com/sronilsson), [sronilsson@gmail.com](mailto:sronilsson@gmail.com)
| text/markdown | Simon Nilsson, Jia Jie Choong, Sophia Hwang | sronilsson@gmail.com | null | null | Modified BSD 3-Clause License | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/sgoldenlab/simba | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/3.8.0 pkginfo/1.10.0 readme-renderer/34.0 requests/2.27.1 requests-toolbelt/1.0.0 urllib3/1.26.20 tqdm/4.30.0 importlib-metadata/4.8.3 keyring/23.4.1 rfc3986/1.5.0 colorama/0.4.4 CPython/3.6.13 | 2026-02-20T21:03:32.017322 | simba_uw_tf_dev-4.9.4.tar.gz | 6,792,131 | 98/9c/9714e376e30500861d9033f2e515a54efddb7234509e4819d222f557a823/simba_uw_tf_dev-4.9.4.tar.gz | source | sdist | null | false | c7326d63221977fd5aaaf17697b59d64 | fec96373aeb198b7c66dddf840c62633169d03708dc53cdf5ec6172162ba8fe2 | 989c9714e376e30500861d9033f2e515a54efddb7234509e4819d222f557a823 | null | [] | 236 |
2.4 | nexus-dev | 5.0.0 | MCP server for persistent AI coding assistant memory with local RAG | # Nexus-Dev
[](https://github.com/mmornati/nexus-dev/actions/workflows/ci.yml)
[](https://codecov.io/gh/mmornati/nexus-dev)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/astral-sh/ruff)
**Persistent Memory for AI Coding Agents**
Nexus-Dev is an open-source MCP (Model Context Protocol) server that provides a local RAG (Retrieval-Augmented Generation) system for AI coding assistants like GitHub Copilot, Cursor, and Windsurf. It learns from your codebase and mistakes, enabling cross-project knowledge sharing.
## Features
- 🧠 **Persistent Memory**: Index your code and documentation for semantic search
- 📚 **Lesson Learning**: Record problems and solutions that the AI can recall later
- 🐙 **GitHub Integration**: Import Issues and Pull Requests into your knowledge base (see [docs/github-import.md](docs/github-import.md))
- 🌐 **Multi-Language Support**: Python, JavaScript/TypeScript, Java (extensible via tree-sitter)
- 📖 **Documentation Indexing**: Parse and index Markdown/RST documentation
- 🔄 **Cross-Project Learning**: Share knowledge across all your projects
- 🏠 **Local-First**: All data stays on your machine with LanceDB
## 📖 Full Documentation
For comprehensive documentation, visit [mmornati.github.io/nexus-dev](https://mmornati.github.io/nexus-dev/).
## Installation
### Isolated Global Installation (Recommended)
To avoid conflicts with project-specific virtual environments, install Nexus-Dev globally using `pipx` or `uv tool`.
```bash
# Using pipx
pipx install nexus-dev
# Using uv
uv tool install nexus-dev
```
### Development Installation
If you are contributing to Nexus-Dev, you can install it in editable mode:
```bash
# Using pip
pip install -e .
# Using uv
uv pip install -e .
```
## Quick Start
### 1. Initialize a Project
```bash
cd your-project
nexus-init --project-name "my-project" --embedding-provider openai
```
This creates:
- `nexus_config.json` - Project configuration
- `.nexus/lessons/` - Directory for learned lessons
### 2. Set Your API Key (OpenAI only)
The CLI commands require the API key in your environment:
```bash
export OPENAI_API_KEY="sk-..."
```
> **Tip**: Add this to your shell profile (`~/.zshrc`, `~/.bashrc`) so it's always available.
>
> If using **Ollama**, no API key is needed—just ensure Ollama is running locally.
### 3. Index Your Code
```bash
# Index directories recursively (recommended)
nexus-index src/ -r
# Index multiple directories
nexus-index src/ docs/ -r
# Index specific files (no -r needed)
nexus-index main.py utils.py
```
> **Note**: The `-r` flag is required to recursively index subdirectories. Without it, only files directly inside the given folder are indexed.
### 4. Configure Your AI Agent
Add to your MCP client configuration (e.g., Claude Desktop):
```json
{
"mcpServers": {
"nexus-dev": {
"command": "nexus-dev",
"args": []
}
}
}
```
### 5. Verify Your Setup
**Check indexed content** via CLI:
```bash
nexus-status
```
**Test in your AI agent** — copy and paste this prompt:
```
Search the Nexus-Dev knowledge base for functions related to "embeddings"
and show me the project statistics.
```
If the AI uses the `search_code` or `get_project_context` tools and returns results, your setup is complete! 🎉
## MCP Tools
Nexus-Dev exposes 7 tools to AI agents:
### Search Tools
| Tool | Description |
|------|-------------|
| `search_knowledge` | Search all content (code, docs, lessons) with optional `content_type` filter |
| `search_code` | Search specifically in indexed code (functions, classes, methods) |
| `search_docs` | Search specifically in documentation (Markdown, RST, text) |
| `search_lessons` | Search in recorded lessons (problems & solutions) |
### Indexing Tools
| Tool | Description |
|------|-------------|
| `index_file` | Index a file into the knowledge base |
| `record_lesson` | Store a problem/solution pair for future reference |
| `get_project_context` | Get project statistics and recent lessons |
## MCP Gateway Mode
Nexus-Dev can act as a gateway to other MCP servers, reducing tool count for AI agents.
### Setup
1. Initialize MCP configuration:
```bash
nexus-mcp init --from-global
```
2. Index tools from configured servers:
```bash
nexus-index-mcp --all
```
### Usage
Instead of configuring 10 MCP servers (50+ tools), configure only Nexus-Dev:
```json
{
"mcpServers": {
"nexus-dev": {
"command": "nexus-dev"
}
}
}
```
AI uses these Nexus-Dev tools to access other servers:
| Tool | Description |
|------|-------------|
| `search_tools` | Find the right tool for a task |
| `invoke_tool` | Execute a tool on any configured server |
| `list_servers` | Show available MCP servers |
### Workflow
1. AI searches: `search_tools("create GitHub issue")`
2. Nexus-Dev returns: `github.create_issue` with schema
3. AI invokes: `invoke_tool("github", "create_issue", {...})`
4. Nexus-Dev proxies to GitHub MCP
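The four steps above can be sketched as a tiny in-process dispatcher. This is purely illustrative — the real gateway proxies MCP calls to downstream servers over stdio/SSE rather than calling Python functions, and the names below are stand-ins:

```python
# Hypothetical sketch of gateway-style tool routing: a registry maps
# "server.tool" names to callables, search_tools finds candidates by
# keyword, and invoke_tool routes the call to the right entry.
registry = {
    "github.create_issue": lambda args: {"status": "created", **args},
}

def search_tools(query):
    # Naive keyword match over registered "server.tool" names.
    words = query.lower().split()
    return [name for name in registry if any(w in name for w in words)]

def invoke_tool(server, tool, args):
    # Steps 3-4: look up the named server's tool and execute it.
    return registry[f"{server}.{tool}"](args)

print(search_tools("create GitHub issue"))   # ['github.create_issue']
print(invoke_tool("github", "create_issue", {"title": "Crash on startup"}))
```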
### Server Configuration
You can configure downstream MCP servers in `.nexus/mcp_config.json` using either **Stdio** (local process) or **SSE** (HTTP remote) transports.
**Local Server (Stdio):**
```json
{
"servers": {
"github-local": {
"transport": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "..."
}
}
}
}
```
**Remote Server (SSE):**
```json
{
"servers": {
"github-remote": {
"transport": "sse",
"url": "https://api.githubcopilot.com/mcp/",
"headers": {
"Authorization": "Bearer ..."
}
}
}
}
```
## Configuration
`nexus_config.json` example:
```json
{
"project_id": "550e8400-e29b-41d4-a716-446655440000",
"project_name": "my-project",
"embedding_provider": "openai",
"embedding_model": "text-embedding-3-small",
"docs_folders": ["docs/", "README.md"],
"include_patterns": ["**/*.py", "**/*.js", "**/*.java"],
"exclude_patterns": ["**/node_modules/**", "**/__pycache__/**"]
}
```
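The `include_patterns`/`exclude_patterns` lists act as glob filters: a file is indexed if it matches at least one include pattern and no exclude pattern. A rough stdlib sketch of that decision (an assumption about the implementation — note that `fnmatch` treats `**` like `*`, while real path globbing is stricter):

```python
from fnmatch import fnmatch

def should_index(path, include, exclude):
    # Naive filter: match any include pattern and no exclude pattern.
    return any(fnmatch(path, p) for p in include) and \
           not any(fnmatch(path, p) for p in exclude)

include = ["**/*.py", "**/*.js"]
exclude = ["**/node_modules/**", "**/__pycache__/**"]
print(should_index("src/app/main.py", include, exclude))                # True
print(should_index("src/node_modules/lib/index.js", include, exclude))  # False
```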
### Project Context & Startup
Nexus-Dev needs to know *which* project to load on startup. It determines this in two ways:
1. **Automatic Detection (Recommended)**: If the MCP server process is started with your project root as its **current working directory (cwd)**, it automatically loads `nexus_config.json` and `.nexus/mcp_config.json`.
2. **Environment Variable**: Setting `NEXUS_PROJECT_ROOT=/path/to/project` explicitly tells the server where to look.
**When to use `refresh_agents`:**
If the server starts in a generic location (like a global Docker container or default system path) without a project context, it starts "empty". You must then use the `refresh_agents` tool. This tool asks your IDE for the active workspace path and re-initializes the server with that context.
> **Pro Tip**: Configure your MCP client (Cursor, Claude Desktop) to set `cwd` or `NEXUS_PROJECT_ROOT` to your project path. This matches the server's lifecycle to your open project and avoids the need for manual refreshing.
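For example, many MCP clients accept a working directory and per-server environment variables. A hypothetical entry (exact keys vary by client) might look like:

```json
{
  "mcpServers": {
    "nexus-dev": {
      "command": "nexus-dev",
      "cwd": "/path/to/project",
      "env": {
        "NEXUS_PROJECT_ROOT": "/path/to/project"
      }
    }
  }
}
```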
📖 See [docs/adding-mcp-servers.md](docs/adding-mcp-servers.md) for a guide on adding custom MCP servers.
### Supported Embedding Providers
Nexus-Dev supports multiple embedding providers (OpenAI, Ollama, Vertex AI, AWS Bedrock, Voyage AI, Cohere).
For detailed information and configuration settings for each, see [Supported Embedding Providers](docs/embedding-providers.md).
## Optional: Pre-commit Hook
Install automatic indexing on commits:
```bash
nexus-init --project-name "my-project" --install-hook
```
Or manually add to `.git/hooks/pre-commit` (remember to make the hook executable with `chmod +x .git/hooks/pre-commit`):
```bash
#!/bin/bash
# Re-index any staged source files so the knowledge base stays current
MODIFIED=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(py|js|ts|java)$')
if [ -n "$MODIFIED" ]; then
    # Word splitting is intentional: each filename becomes a separate argument
    nexus-index $MODIFIED
fi
```
## Multi-Repository Projects
Nexus-Dev supports multi-repository setups where a parent folder contains the nexus configuration and multiple sub-folders contain independent git repositories.
### Quick Setup
```bash
# Initialize parent project
cd /path/to/parent-project
nexus-init --project-name "My Multi-Repo Project"
# Install hooks in all sub-repositories
nexus-init --discover-repos
```
Or install hooks manually in each repository:
```bash
cd sub-repo-1
nexus-init --link-hook
cd ../sub-repo-2
nexus-init --link-hook
```
All repositories:
- Share a single project ID and knowledge base
- Index to the parent project's database
- Store lessons centrally in parent `.nexus/lessons/`
📖 See [Multi-Repository Projects](docs/advanced/multi-repo-projects.md) for a detailed guide.
## Configuring AI Agents
To maximize Nexus-Dev's value, configure your AI coding assistant to use its tools automatically.
### Add AGENTS.md to Your Project
Copy our template to your project:
```bash
cp path/to/nexus-dev/docs/AGENTS_TEMPLATE.md your-project/AGENTS.md
```
This instructs AI agents to:
- **Search first** before implementing features
- **Record lessons** after solving bugs
- Use `get_project_context()` at session start
### Add Workflow Files (Optional)
```bash
cp -r path/to/nexus-dev/.agent/workflows your-project/.agent/
```
This adds slash commands: `/start-session`, `/search-first`, `/record-lesson`, `/index-code`
📖 See [docs/configuring-agents.md](docs/configuring-agents.md) for detailed setup instructions.
## Architecture
For a detailed overview of the Nexus-Dev component architecture and data flow, please refer to the [Architecture Documentation](docs/architecture.md).
## Development Setup
Since Nexus-Dev is not yet published to PyPI/Docker Hub, developers must build from source.
Detailed development setup instructions are available in [CONTRIBUTING.md](CONTRIBUTING.md).
### Quick Development Start
The easiest way to get started is by using our robust `Makefile`:
```bash
# Clone repository
git clone https://github.com/mmornati/nexus-dev.git
cd nexus-dev
# Setup full development environment (pyenv + venv + deps)
make setup
# Run tests
make test
```
For Docker testing and multi-project development, please read the detailed setup guide in [CONTRIBUTING.md](CONTRIBUTING.md).
## Adding Language Support
See [CONTRIBUTING.md](CONTRIBUTING.md) for instructions on adding new language chunkers.
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Marco Mornati <marco@mornati.net> | null | null | MIT | ai, assistant, coding, lancedb, mcp, rag, vector-database | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"click>=8.1.0",
"falkordblite>=0.7.0",
"httpx>=0.28.0",
"jsonschema>=4.18.0",
"lancedb>=0.26.0",
"mcp>=1.25.0",
"openai>=1.60.0",
"pandas>=2.0.0",
"pyarrow>=18.0.0",
"pydantic>=2.10.0",
"pyyaml>=6.0.0",
"redis<5.3.0,>=4.5.0",
"tree-sitter-language-pack>=0.7.0",
"tree-sitter>=0.24.0",
"boto3>=1.34.0; extra == \"all\"",
"cohere>=4.37.0; extra == \"all\"",
"google-cloud-aiplatform>=1.38.0; extra == \"all\"",
"voyageai>=0.1.0; extra == \"all\"",
"boto3>=1.34.0; extra == \"aws\"",
"cohere>=4.37.0; extra == \"cohere\"",
"mypy>=1.14.0; extra == \"dev\"",
"pytest-asyncio>=0.25.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.9.0; extra == \"dev\"",
"types-pyyaml>=6.0.0; extra == \"dev\"",
"google-cloud-aiplatform>=1.38.0; extra == \"google\"",
"voyageai>=0.1.0; extra == \"voyage\""
] | [] | [] | [] | [
"Homepage, https://github.com/mmornati/nexus-dev",
"Repository, https://github.com/mmornati/nexus-dev",
"Issues, https://github.com/mmornati/nexus-dev/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:03:26.588796 | nexus_dev-5.0.0.tar.gz | 379,222 | 31/b0/a45bbec5fc7d79568a01ed578027b1ce3df8fbe279a384981fa745bdee75/nexus_dev-5.0.0.tar.gz | source | sdist | null | false | 440cf30d4c6220c951ef6d93a34f10e5 | 22579c7bf2bda277c41e0201c6acc4415445b9d378060efd4dbe7e6cee1fbdce | 31b0a45bbec5fc7d79568a01ed578027b1ce3df8fbe279a384981fa745bdee75 | null | [
"LICENSE"
] | 186 |
2.4 | nextmv | 1.2.0.dev0 | The all-purpose Python SDK for Nextmv | # Nextmv Python SDK
<!-- markdownlint-disable MD033 MD013 -->
<p align="center">
<a href="https://nextmv.io"><img src="https://cdn.prod.website-files.com/60dee0fad10d14c8ab66dd74/674628a824bc14307c1727aa_blog-prototype-p-2000.png" alt="Nextmv" width="45%"></a>
</p>
<p align="center">
<em>Nextmv: The home for all your optimization work</em>
</p>
<p align="center">
<a href="https://pypi.org/project/nextmv" target="_blank">
<img src="https://img.shields.io/pypi/pyversions/nextmv.svg?color=%2334D058" alt="Supported Python versions">
</a>
<a href="https://pypi.org/project/nextmv" target="_blank">
<img src="https://img.shields.io/pypi/v/nextmv?color=%2334D058&label=nextmv" alt="Package version">
</a>
</p>
<!-- markdownlint-enable MD033 MD013 -->
Welcome to `nextmv`, the general Python SDK for the Nextmv Platform.
📖 To learn more about `nextmv`, visit the [docs][docs].
## Installation
Requires Python `>=3.10`. Install using the Python package manager of your
choice:
- `pip`
```bash
pip install nextmv
```
- `pipx`
```bash
pipx install nextmv
```
- `uv`
```bash
uv tool install nextmv
```
Install all optional dependencies (recommended) by specifying `"nextmv[all]"`
instead of just `"nextmv"`.
## CLI
The Nextmv CLI is installed automatically with the SDK. To verify installation,
run:
```bash
nextmv --help
```
If you are contributing to the CLI, please make sure you read the [CLI
Contributing Guide][cli-contributing].
[docs]: https://nextmv-py.docs.nextmv.io/en/latest/nextmv/
[cli-contributing]: nextmv/cli/CONTRIBUTING.md
| text/markdown | null | Nextmv <tech@nextmv.io> | null | Nextmv <tech@nextmv.io> | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022-2023 nextmv.io inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | decision engineering, decision science, decisions, nextmv, operations research, optimization, shift scheduling, solver, vehicle routing problem | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"folium>=0.20.0",
"pip>=26.0",
"plotly>=6.0.1",
"pydantic>=2.5.2",
"pyyaml>=6.0.1",
"questionary>=2.1.1",
"requests>=2.31.0",
"typer>=0.20.1",
"urllib3>=2.1.0",
"mlflow>=3.9.0; extra == \"notebook\""
] | [] | [] | [] | [
"Homepage, https://www.nextmv.io",
"Documentation, https://nextmv-py.docs.nextmv.io/en/latest/nextmv/",
"Repository, https://github.com/nextmv-io/nextmv-py"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:03:11.947197 | nextmv-1.2.0.dev0.tar.gz | 480,056 | aa/9e/dba23f02390fff7b73aad8955cc7c2ec412e0481b207ca0699352def9268/nextmv-1.2.0.dev0.tar.gz | source | sdist | null | false | ea55bd917bc56d31036d6a5315e042ad | 36bc9b9c438efbabaf3c712580b95859ec2f7a0284b447b300a0cd94973a3fcc | aa9edba23f02390fff7b73aad8955cc7c2ec412e0481b207ca0699352def9268 | null | [
"LICENSE"
] | 177 |
2.4 | homeassistant | 2026.2.3 | Open-source home automation platform running on Python 3. | Home Assistant |Chat Status|
=================================================================================
Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts. Perfect to run on a Raspberry Pi or a local server.
Check out `home-assistant.io <https://home-assistant.io>`__ for `a
demo <https://demo.home-assistant.io>`__, `installation instructions <https://home-assistant.io/getting-started/>`__,
`tutorials <https://home-assistant.io/getting-started/automation/>`__ and `documentation <https://home-assistant.io/docs/>`__.
|screenshot-states|
Featured integrations
---------------------
|screenshot-integrations|
The system is built using a modular approach so support for other devices or actions can be implemented easily. See also the `section on architecture <https://developers.home-assistant.io/docs/architecture_index/>`__ and the `section on creating your own
components <https://developers.home-assistant.io/docs/creating_component_index/>`__.
If you run into issues while using Home Assistant or during development
of a component, check the `Home Assistant help section <https://home-assistant.io/help/>`__ of our website for further help and information.
|ohf-logo|
.. |Chat Status| image:: https://img.shields.io/discord/330944238910963714.svg
:target: https://www.home-assistant.io/join-chat/
.. |screenshot-states| image:: https://raw.githubusercontent.com/home-assistant/core/dev/.github/assets/screenshot-states.png
:target: https://demo.home-assistant.io
.. |screenshot-integrations| image:: https://raw.githubusercontent.com/home-assistant/core/dev/.github/assets/screenshot-integrations.png
:target: https://home-assistant.io/integrations/
.. |ohf-logo| image:: https://www.openhomefoundation.org/badges/home-assistant.png
:alt: Home Assistant - A project from the Open Home Foundation
:target: https://www.openhomefoundation.org/
| text/x-rst | null | The Home Assistant Authors <hello@home-assistant.io> | null | null | null | home, automation | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Home Automation"
] | [] | null | null | >=3.13.2 | [] | [] | [] | [
"aiodns==4.0.0",
"aiohasupervisor==0.3.3",
"aiohttp==3.13.3",
"aiohttp_cors==0.8.1",
"aiohttp-fast-zlib==0.3.0",
"aiohttp-asyncmdnsresolver==0.1.1",
"aiozoneinfo==0.2.3",
"annotatedyaml==1.0.2",
"astral==2.2",
"async-interrupt==1.2.2",
"attrs==25.4.0",
"atomicwrites-homeassistant==1.4.1",
"audioop-lts==0.2.1",
"awesomeversion==25.8.0",
"bcrypt==5.0.0",
"certifi>=2021.5.30",
"ciso8601==2.3.3",
"cronsim==2.7",
"fnv-hash-fast==1.6.0",
"hass-nabucasa==1.12.0",
"httpx==0.28.1",
"home-assistant-bluetooth==1.13.1",
"ifaddr==0.2.0",
"Jinja2==3.1.6",
"lru-dict==1.3.0",
"PyJWT==2.10.1",
"cryptography==46.0.5",
"Pillow==12.0.0",
"propcache==0.4.1",
"pyOpenSSL==25.3.0",
"orjson==3.11.5",
"packaging>=23.1",
"psutil-home-assistant==0.0.1",
"python-slugify==8.0.4",
"PyYAML==6.0.3",
"requests==2.32.5",
"securetar==2025.2.1",
"SQLAlchemy==2.0.41",
"standard-aifc==3.13.0",
"standard-telnetlib==3.13.0",
"typing-extensions<5.0,>=4.15.0",
"ulid-transform==1.5.2",
"urllib3>=2.0",
"uv==0.9.26",
"voluptuous==0.15.2",
"voluptuous-serialize==2.7.0",
"voluptuous-openapi==0.2.0",
"yarl==1.22.0",
"webrtc-models==0.3.0",
"zeroconf==0.148.0"
] | [] | [] | [] | [
"Homepage, https://www.home-assistant.io/",
"Source Code, https://github.com/home-assistant/core",
"Bug Reports, https://github.com/home-assistant/core/issues",
"Docs: Dev, https://developers.home-assistant.io/",
"Discord, https://www.home-assistant.io/join-chat/",
"Forum, https://community.home-assistant.io/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:03:11.603460 | homeassistant-2026.2.3.tar.gz | 30,909,057 | de/0f/e5dec842a1374f9f04a3131ed96723f2e57bbd9ac58f1536f4b4cdc166c4/homeassistant-2026.2.3.tar.gz | source | sdist | null | false | 5f0962d14c005c495995629bdeea27b3 | 524231671dc853421987c81b8d9f714641194bded45bce454f98ac3723f28874 | de0fe5dec842a1374f9f04a3131ed96723f2e57bbd9ac58f1536f4b4cdc166c4 | Apache-2.0 | [
"LICENSE.md",
"homeassistant/backports/LICENSE.Python"
] | 4,565 |
2.4 | greenlet | 3.3.2 | Lightweight in-process concurrent programming | .. This file is included into docs/history.rst
Greenlets are lightweight coroutines for in-process concurrent
programming.
The "greenlet" package is a spin-off of `Stackless`_, a version of
CPython that supports micro-threads called "tasklets". Tasklets run
pseudo-concurrently (typically in a single or a few OS-level threads)
and are synchronized with data exchanges on "channels".
A "greenlet", on the other hand, is a still more primitive notion of
micro-thread with no implicit scheduling; coroutines, in other words.
This is useful when you want to control exactly when your code runs.
You can build custom scheduled micro-threads on top of greenlet;
however, it seems that greenlets are useful on their own as a way to
make advanced control flow structures. For example, we can recreate
generators; the difference with Python's own generators is that our
generators can call nested functions and the nested functions can
yield values too. (Additionally, you don't need a "yield" keyword. See
the example in `test_generator.py
<https://github.com/python-greenlet/greenlet/blob/adca19bf1f287b3395896a8f41f3f4fd1797fdc7/src/greenlet/tests/test_generator.py#L1>`_).
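The generator pattern mentioned above boils down to two greenlets
passing values through ``switch``. Here is a minimal sketch using only
the public API (variable names are illustrative)::

    from greenlet import greenlet

    def producer():
        for i in range(3):
            consumer.switch(i)   # "yield" a value to the consumer
        consumer.switch(None)    # signal exhaustion

    consumer = greenlet.getcurrent()
    producer_gr = greenlet(producer)
    items = []
    while True:
        value = producer_gr.switch()   # start or resume the producer
        if value is None:
            break
        items.append(value)
    # items is now [0, 1, 2]

Each ``consumer.switch(i)`` suspends the producer greenlet and makes
the pending ``producer_gr.switch()`` call return ``i`` — no ``yield``
keyword required.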
Greenlets are provided as a C extension module for the regular unmodified
interpreter.
.. _`Stackless`: http://www.stackless.com
Who is using Greenlet?
======================
There are several libraries that use Greenlet as a more flexible
alternative to Python's built in coroutine support:
- `Concurrence`_
- `Eventlet`_
- `Gevent`_
.. _Concurrence: http://opensource.hyves.org/concurrence/
.. _Eventlet: http://eventlet.net/
.. _Gevent: http://www.gevent.org/
Getting Greenlet
================
The easiest way to get Greenlet is to install it with pip::
pip install greenlet
Source code archives and binary distributions are available on the
python package index at https://pypi.org/project/greenlet
The source code repository is hosted on github:
https://github.com/python-greenlet/greenlet
Documentation is available on readthedocs.org:
https://greenlet.readthedocs.io
| text/x-rst | null | Alexey Borzenkov <snaury@gmail.com> | null | Jason Madden <jason@seecoresoftware.com> | null | greenlet, coroutine, concurrency, threads, cooperative | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: C",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"Sphinx; extra == \"docs\"",
"furo; extra == \"docs\"",
"objgraph; extra == \"test\"",
"psutil; extra == \"test\"",
"setuptools; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://greenlet.readthedocs.io",
"Documentation, https://greenlet.readthedocs.io",
"Repository, https://github.com/python-greenlet/greenlet",
"Issues, https://github.com/python-greenlet/greenlet/issues",
"Changelog, https://greenlet.readthedocs.io/en/latest/changes.html"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:02:51.526204 | greenlet-3.3.2-cp314-cp314t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl | 666,581 | d1/67/8197b7e7e602150938049d8e7f30de1660cfb87e4c8ee349b42b67bdb2e1/greenlet-3.3.2-cp314-cp314t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl | cp314 | bdist_wheel | null | false | 80f3b63dcbe4d8ba22253edc9616a860 | 59b3e2c40f6706b05a9cd299c836c6aa2378cabe25d021acd80f13abf81181cf | d1678197b7e7e602150938049d8e7f30de1660cfb87e4c8ee349b42b67bdb2e1 | MIT AND PSF-2.0 | [
"LICENSE",
"LICENSE.PSF"
] | 5,609,162 |
2.4 | nanohub-padre | 0.0.8 | Python library for creating and running PADRE semiconductor device simulations | # nanohub-padre
A Python library for creating and running PADRE semiconductor device simulations.
## Overview
nanohub-padre provides a Pythonic interface to generate PADRE input decks, making it easier to set up complex device simulations programmatically. PADRE (Physics-based Accurate Device Resolution and Evaluation) is a 2D/3D device simulator that solves the drift-diffusion equations for semiconductor devices.
## Features
- **Pythonic Interface**: Define meshes, regions, doping profiles, and solver settings using Python objects
- **Device Factory Functions**: Pre-built functions to create common devices (PN diode, MOSFET, BJT, solar cell, etc.)
- **Complete PADRE Support**: Covers mesh generation, material properties, physical models, and solve commands
- **Validation**: Built-in parameter validation and helpful error messages
- **Examples**: Ready-to-run examples for common device structures
## Installation
```bash
pip install nanohub-padre
```
Or install from source:
```bash
git clone https://github.com/nanohub/nanohub-padre.git
cd nanohub-padre
pip install -e .
```
## Quick Start
### Using Device Factory Functions (Recommended)
```python
from nanohubpadre import create_mosfet, Solve, Log
# Create an NMOS transistor with one line
sim = create_mosfet(
channel_length=0.05,
device_type="nmos",
temperature=300
)
# Add solve commands
sim.add_solve(Solve(initial=True))
sim.add_log(Log(ivfile="idvg"))
sim.add_solve(Solve(v3=0, vstep=0.1, nsteps=15, electrode=3))
# Generate the input deck
print(sim.generate_deck())
```
### Building from Scratch
```python
from nanohubpadre import (
Simulation, Mesh, Region, Electrode, Doping,
Contact, Models, System, Solve
)
# Create simulation
sim = Simulation(title="Simple PN Diode")
# Define mesh
sim.mesh = Mesh(nx=100, ny=3)
sim.mesh.add_x_mesh(1, 0)
sim.mesh.add_x_mesh(100, 1.0)
sim.mesh.add_y_mesh(1, 0)
sim.mesh.add_y_mesh(3, 1)
# Define silicon region
sim.add_region(Region(1, ix_low=1, ix_high=100, iy_low=1, iy_high=3, silicon=True))
# Define electrodes
sim.add_electrode(Electrode(1, ix_low=1, ix_high=1, iy_low=1, iy_high=3))
sim.add_electrode(Electrode(2, ix_low=100, ix_high=100, iy_low=1, iy_high=3))
# Define doping
sim.add_doping(Doping(p_type=True, concentration=1e17, uniform=True, x_right=0.5))
sim.add_doping(Doping(n_type=True, concentration=1e17, uniform=True, x_left=0.5))
# Set contacts
sim.add_contact(Contact(all_contacts=True, neutral=True))
# Configure models
sim.models = Models(temperature=300, srh=True, conmob=True, fldmob=True)
sim.system = System(electrons=True, holes=True, newton=True)
# Solve
sim.add_solve(Solve(initial=True))
# Generate and print the input deck
print(sim.generate_deck())
```
## Device Factory Functions
The library includes factory functions for common devices:
| Function | Description |
|----------|-------------|
| `create_pn_diode` | PN junction diode |
| `create_mos_capacitor` | MOS capacitor for C-V analysis |
| `create_mosfet` | NMOS/PMOS transistor |
| `create_mesfet` | Metal-semiconductor FET |
| `create_bjt` | NPN/PNP bipolar transistor |
| `create_schottky_diode` | Schottky barrier diode |
| `create_solar_cell` | PN junction solar cell |
## Examples
The `examples/` directory contains Python equivalents of common PADRE simulations:
- **pndiode.py**: PN junction diode I-V characterization
- **moscap.py**: MOS capacitor C-V analysis
- **mosfet_equivalent.py**: NMOS transistor transfer and output characteristics
- **mesfet.py**: Metal-Semiconductor FET simulation
- **single_mosgap.py**: Simple oxide-silicon structure
Device factory examples in `examples/devices/`:
- **pn_diode_example.py**: PN diode using factory function
- **mosfet_example.py**: NMOS using factory function
- **bjt_example.py**: NPN BJT using factory function
- **solar_cell_example.py**: Solar cell using factory function
Run an example:
```bash
PYTHONPATH=. python3 examples/pndiode.py > pndiode.inp
padre < pndiode.inp > pndiode.out
```
## Supported Commands
nanohub-padre supports all major PADRE commands:
| Category | Commands |
|----------|----------|
| Mesh | MESH, X.MESH, Y.MESH, Z.MESH |
| Structure | REGION, ELECTRODE |
| Doping | DOPING (uniform, Gaussian, ERFC, file) |
| Boundaries | CONTACT, INTERFACE, SURFACE |
| Materials | MATERIAL, ALLOY |
| Models | MODELS |
| Solver | SYSTEM, METHOD, LINALG, SOLVE |
| Output | LOG, PLOT.1D, PLOT.2D, PLOT.3D, CONTOUR, VECTOR |
| Control | OPTIONS, LOAD, REGRID, ADAPT |
## Documentation
Full documentation is available at [https://nanohub-padre.readthedocs.io/](https://nanohub-padre.readthedocs.io/)
## Testing
Run the test suite:
```bash
pytest tests/ -v
```
## License
MIT License - see LICENSE file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | nanohub-padre Contributors | null | null | null | MIT | semiconductor, device simulation, PADRE, TCAD | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)",
"Topic :: Scientific/Engineering :: Physics"
] | [] | https://github.com/nanohub/nanohub-padre | null | >=3.7 | [] | [] | [] | [
"pytest>=6.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"sphinx>=4.0; extra == \"docs\"",
"sphinx-rtd-theme>=1.0; extra == \"docs\""
] | [] | [] | [] | [
"Documentation, https://nanohub-padre.readthedocs.io/",
"Repository, https://github.com/nanohub/padre",
"Issues, https://github.com/nanohub/padre/issues"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-20T21:02:50.977232 | nanohub_padre-0.0.8.tar.gz | 123,879 | c7/8b/f37f80e12c8b0277268a5d505aa630039684764a76a3cbb4c9b75260f7b5/nanohub_padre-0.0.8.tar.gz | source | sdist | null | false | 7bfe3eb54639a36d5796b5974a526dc3 | 22791ad244b963c841a1b8edd8a86e57c215b90aa2c5b67e695b26e795c587b3 | c78bf37f80e12c8b0277268a5d505aa630039684764a76a3cbb4c9b75260f7b5 | null | [] | 199 |
2.4 | petpal | 0.6.2 | PET-PAL (Positron Emission Tomography Processing and Analysis Library) | # Positron Emission Tomography Processing and Analysis Library (PETPAL)
<figure>
<img src="docs/PETPAL_Logo.png" alt="PETPAL Logo" width="50%">
<figcaption>A comprehensive 4D-PET/MR analysis software suite.</figcaption>
</figure>
## Installation
### Using Pip
The simplest way to install PETPAL is using pip. First, ensure you are using Python version >=3.12. Then, run the following:
```shell
pip install petpal
```
### Build from source
Clone the repository using your preferred method, then navigate to the top-level directory (where `pyproject.toml` lives) and run the following command in the terminal:
```shell
pip install . # Installs the package
```
If you plan to actively develop and modify the package source code, install it in editable mode instead:
```shell
pip install -e . # Installs the package as symlinks to the source code
```
## Documentation
The official docs are hosted on [read the docs](https://petpal.readthedocs.io/en/latest/), which contain helpful tutorials to get started with using PETPAL, and the API reference.
### Building Documentation Locally
To build the HTML documentation with Sphinx, first navigate to the `$src/docs/` directory, then run the following commands:
```shell
make clean
make html
```
Then, open `$src/docs/build/html/index.html` using any browser or your IDE.
| text/markdown | null | Noah Goldman <noahg@wustl.edu>, Bradley Judge <bjudge@wustl.edu>, Furqan Dar <dar@wustl.edu>, Kenan Oestreich <kenan.oestreich@wustl.edu> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Image Processing"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"antspyx>=0.5",
"bids-validator",
"docker",
"fslpy",
"lmfit",
"matplotlib",
"networkx",
"nibabel",
"numba",
"numpy",
"pandas",
"pydata-sphinx-theme",
"scikit-learn",
"scipy",
"seaborn",
"simpleitk",
"sphinx",
"sphinx-autoapi",
"sphinx-design"
] | [] | [] | [] | [
"Repository, https://github.com/PETPAL-WUSM/PETPAL.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:02:45.081041 | petpal-0.6.2.tar.gz | 4,454,191 | a0/47/0381d151a5a0dd464e8e1688960f4b7efa0c64d598268d623cf4cd9f744d/petpal-0.6.2.tar.gz | source | sdist | null | false | f0287e750ba05a9ff7d33e3e74680713 | 4354387571c4f13d497b66c9b6e3076865063fc2371f317ade3c44355c0111aa | a0470381d151a5a0dd464e8e1688960f4b7efa0c64d598268d623cf4cd9f744d | null | [
"LICENSE"
] | 191 |
2.1 | pyhausbus | 1.0.48 | Python based library for accessing haus-bus.de modules. Intended to be used in a Home Assistant integration. | # pyhausbus
Python based library for accessing haus-bus.de modules. Intended to be used in a Home Assistant integration.
| text/markdown | Hermann Hoeschen | info@haus-bus.de | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://www.haus-bus.de/ | null | null | [] | [] | [] | [] | [] | [] | [] | [
"Bug Tracker, https://github.com/hausbus/homeassistant/issues",
"Repository, https://github.com/hausbus/homeassistant"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T21:02:37.874680 | pyhausbus-1.0.48.tar.gz | 107,716 | 68/35/ff60126dff9c9cce871844ff83252000cae070be6414b4325c1d11b70e06/pyhausbus-1.0.48.tar.gz | source | sdist | null | false | af96218da7be64c08f6b597fbdcb8d11 | 7c66cab803e5631561f750077f2e8b370847626b9c88434abe4e25cd5409cd3c | 6835ff60126dff9c9cce871844ff83252000cae070be6414b4325c1d11b70e06 | null | [] | 196 |
2.4 | supervertaler | 1.9.294 | Professional AI-enhanced translation workbench with multi-LLM support, glossary system, TM, spellcheck, voice commands, and PyQt6 interface. Batteries included (core). | # Supervertaler
[](https://pypi.org/project/Supervertaler/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
**Professional AI-enhanced translation workbench** with multi-LLM support (GPT-4, Claude, Gemini, Ollama), translation memory, glossary management, and seamless CAT tool integration (memoQ, Trados, CafeTran, Phrase, Déjà Vu).
**Latest release:** v1.9.285 - Import folder of SDLXLIFF files (issue #80).
---
## Installation
```bash
pip install supervertaler
supervertaler
```
Or run from source:
```bash
git clone https://github.com/michaelbeijer/Supervertaler.git
cd Supervertaler
pip install -r requirements.txt
python Supervertaler.py
```
### macOS Standalone App (.dmg)
macOS will block the app on first launch because it is not signed with an Apple Developer certificate. To allow it:
1. **Double-click** `Supervertaler.app` — macOS will show a warning
2. Open **System Settings > Privacy & Security**
3. Scroll down and click **"Open Anyway"** next to the Supervertaler message
4. Confirm when prompted — the app will launch normally from now on
---
## Key Features
- **Multi-LLM AI Translation** - OpenAI GPT-4/5, Anthropic Claude, Google Gemini, Local Ollama
- **Translation Memory** - Fuzzy matching TM with TMX import/export
- **Glossary System** - Priority-based term highlighting with forbidden term marking
- **Superlookup** - Unified concordance search across TM, glossaries, MT, and web resources
- **CAT Tool Integration** - memoQ XLIFF, Trados SDLPPX/SDLRPX, CafeTran, Phrase, Déjà Vu X3
- **Voice Commands** - Hands-free translation with OpenAI Whisper
- **Document Support** - DOCX, bilingual DOCX/RTF, PDF, Markdown, plain text
---
## Documentation
| Resource | Description |
|----------|-------------|
| [Online Manual](https://supervertaler.gitbook.io/superdocs/) | Quick start, guides, and troubleshooting |
| [Changelog](CHANGELOG.md) | Complete version history |
| [Keyboard Shortcuts](docs/guides/KEYBOARD_SHORTCUTS.md) | Shortcut reference |
| [FAQ](FAQ.md) | Common questions |
| [Similar Apps](docs/SIMILAR_APPS.md) | CotranslatorAI, TransAIde, TWAS Suite, and other translation tools |
| [Website](https://supervertaler.com) | Project homepage |
---
## Requirements
- Python 3.10+
- PyQt6
- Windows, macOS, or Linux
---
## Contributing
- [Report bugs](https://github.com/michaelbeijer/Supervertaler/issues)
- [Request features](https://github.com/michaelbeijer/Supervertaler/discussions)
- [Contributing guide](CONTRIBUTING.md)
---
## About
**Supervertaler** is maintained by [Michael Beijer](https://michaelbeijer.co.uk), a professional translator with 30 years of experience in technical and patent translation.
- [Stargazers](https://github.com/michaelbeijer/Supervertaler/stargazers) - See who's starred the project
- [Gitstalk](https://gitstalk.netlify.app/michaelbeijer) - See what I'm up to on GitHub
**License:** MIT - Free for personal and commercial use.
---
**Current Version:** See [CHANGELOG.md](CHANGELOG.md) for the latest release notes.
| text/markdown | Michael Beijer | Michael Beijer <info@michaelbeijer.co.uk> | null | Michael Beijer <info@michaelbeijer.co.uk> | null | translation, CAT, CAT-tool, AI, LLM, GPT, Claude, Gemini, Ollama, glossary, termbase, translation-memory, TM, PyQt6, localization, memoQ, Trados, SDLPPX, XLIFF, voice-commands, spellcheck | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Intended Audience :: End Users/Desktop",
"Topic :: Office/Business",
"Topic :: Text Processing :: Linguistic",
"Environment :: X11 Applications :: Qt"
] | [] | https://supervertaler.com | null | >=3.10 | [] | [] | [] | [
"PyQt6>=6.5.0",
"PyQt6-WebEngine>=6.5.0",
"python-docx>=0.8.11",
"openpyxl>=3.1.0",
"Pillow>=10.0.0",
"lxml>=4.9.0",
"openai>=1.0.0",
"anthropic>=0.7.0",
"google-generativeai>=0.3.0",
"requests>=2.28.0",
"markitdown>=0.0.1",
"sacrebleu>=2.3.1",
"pyperclip>=1.8.2",
"chardet>=5.0.0",
"pyyaml>=6.0.0",
"markdown>=3.4.0",
"pyspellchecker>=0.7.0",
"sounddevice>=0.4.6",
"numpy>=1.24.0",
"PyMuPDF>=1.23.0",
"boto3>=1.28.0",
"deepl>=1.15.0",
"spylls>=0.1.7",
"pynput>=1.7.6",
"keyboard>=0.13.5; platform_system == \"Windows\"",
"ahk>=1.0.0; platform_system == \"Windows\"",
"pyautogui>=0.9.54; platform_system == \"Windows\"",
"psutil>=5.9.0",
"openai-whisper>=20230314; extra == \"local-whisper\""
] | [] | [] | [] | [
"Homepage, https://supervertaler.com",
"Repository, https://github.com/michaelbeijer/Supervertaler.git",
"Bug Tracker, https://github.com/michaelbeijer/Supervertaler/issues",
"Changelog, https://github.com/michaelbeijer/Supervertaler/blob/main/CHANGELOG.md",
"Documentation, https://github.com/michaelbeijer/Supervertaler/blob/main/AGENTS.md",
"Author Website, https://michaelbeijer.co.uk"
] | twine/6.2.0 CPython/3.12.6 | 2026-02-20T21:01:37.572995 | supervertaler-1.9.294.tar.gz | 14,701,747 | 97/ff/5f5b03e1ea2639650ccde3c8b82bdb6e26e9c4122819f6a4bd83ebc21bc7/supervertaler-1.9.294.tar.gz | source | sdist | null | false | 1d212e4215f7611d4254b6c55c214c90 | 0b982d70abbb1d12c66271f3631f2c0b8d8804643a130f65acf4167cd7f7b6bd | 97ff5f5b03e1ea2639650ccde3c8b82bdb6e26e9c4122819f6a4bd83ebc21bc7 | MIT | [
"LICENSE"
] | 210 |
2.4 | tcex | 5.0.0.dev1 | ThreatConnect Exchange App Framework | # tcex - ThreatConnect Exchange App Framework
The ThreatConnect™ TcEx App Framework provides functionality for writing ThreatConnect Exchange Apps.
## Requirements
* arrow (https://pypi.python.org/pypi/arrow/)
* black (https://pypi.org/project/black/)
* inflection (https://pypi.org/project/inflection/)
* isort (https://pypi.org/project/isort/)
* jmespath (https://pypi.org/project/jmespath/)
* paho-mqtt (https://pypi.org/project/paho-mqtt/)
* pyaes (https://pypi.org/project/pyaes/)
* pydantic (https://pypi.org/project/pydantic/)
* python-dateutil (https://pypi.python.org/pypi/python-dateutil/)
* pyyaml (https://pypi.python.org/pypi/pyyaml/)
* redis (https://pypi.python.org/pypi/redis/)
* requests (https://pypi.python.org/pypi/requests/)
* rich (https://pypi.python.org/pypi/rich/)
* semantic_version (https://pypi.org/project/semantic-version/)
* wrapt (https://pypi.org/project/wrapt/)
### Development Requirements
* bandit (https://pypi.org/project/bandit/)
* pre-commit (https://pypi.org/project/pre-commit/)
* pyright (https://pypi.org/project/pyright/)
* pyupgrade (https://pypi.org/project/pyupgrade/)
* ruff (https://pypi.org/project/ruff/)
* typer (https://pypi.python.org/pypi/typer/)
### Test Requirements
* deepdiff (https://pypi.org/project/deepdiff/)
* fakeredis (https://pypi.org/project/fakeredis/)
* pytest (https://pypi.org/project/pytest/)
* pytest-cov (https://pypi.org/project/pytest-cov/)
* pytest-html (https://pypi.org/project/pytest-html/)
* pytest-ordering (https://pypi.org/project/pytest-ordering/)
* pytest-xdist (https://pypi.org/project/pytest-xdist/)
## Installation
```bash
pip install tcex
```
### Development / Testing
* uv (https://pypi.python.org/pypi/uv/)
```bash
uv sync
```
## Documentation
https://threatconnect.readme.io/docs/overview
## Release Notes
https://threatconnect.readme.io/docs/release-notes
## Contact
If you have any questions, bugs, or requests please contact support@threatconnect.com
| text/markdown | null | ThreatConnect <support@threatconnect.com> | null | null | null | exchange, tcex, threatconnect | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Security"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"arrow>=1.3.0",
"black>=25.1.0",
"inflection>=0.5.1",
"isort>=6.0.0",
"jmespath>=1.0.1",
"paho-mqtt<3.0.0",
"pyaes>=1.6.1",
"pydantic-core<3.0.0,>=2.10.0",
"pydantic<3.0.0,>=2.10.0",
"python-dateutil>=2.9.0.post0",
"pyyaml>=6.0.2",
"redis<5.0.0",
"requests>=2.32.3",
"rich>=13.9.4",
"semantic-version>=2.10.0",
"wrapt>=1.17.2"
] | [] | [] | [] | [
"Documentation, https://threatconnect.readme.io/docs/overview",
"Release Notes, https://threatconnect.readme.io/docs/release-notes",
"Source, https://github.com/ThreatConnect-Inc/tcex"
] | uv/0.9.22 {"installer":{"name":"uv","version":"0.9.22","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T21:01:20.781935 | tcex-5.0.0.dev1.tar.gz | 379,574 | a6/5d/65acd82551824aeccaf54a8adb2fadd81756657fd307b6316ed00095cba6/tcex-5.0.0.dev1.tar.gz | source | sdist | null | false | fae43e4b5fb72c207b408904066ed840 | 1cefd6dedf1787affa0fb46990beaabcc7e8a9fe56fbd6b1fb95d3d8d7093aa1 | a65d65acd82551824aeccaf54a8adb2fadd81756657fd307b6316ed00095cba6 | Apache-2.0 | [
"LICENSE"
] | 164 |
2.4 | nightshift-sdk | 0.3.1 | Nightshift — autonomous agent orchestrator with Firecracker VMs | # Nightshift
Autonomous agent orchestrator with Firecracker VM isolation.
Nightshift runs AI agents in isolated [Firecracker](https://firecracker-microvm.github.io/) microVMs on bare-metal infrastructure. Each agent gets its own VM with a dedicated filesystem, network, and resource limits — so agents can execute code, edit files, and make network calls without affecting the host or each other.
## Installation
```bash
uv add nightshift-sdk
```
or
```bash
pip install nightshift-sdk
```
## Quick Start
Define an agent with `NightshiftApp` and `AgentConfig`:
```python
from nightshift import NightshiftApp, AgentConfig
from claude_agent_sdk import query, ClaudeAgentOptions
app = NightshiftApp()
@app.agent(
    AgentConfig(
        workspace="./my-project",
        vcpu_count=2,
        mem_size_mib=2048,
        timeout_seconds=1800,
    )
)
async def code_reviewer(prompt: str):
    async for message in query(
        prompt=prompt,
        options=ClaudeAgentOptions(
            cwd="/workspace",
            allowed_tools=["Read", "Glob", "Grep"],
            model="claude-sonnet-4-6",
        ),
    ):
        yield message
```
Deploy to a Nightshift platform:
```bash
nightshift login --url https://api.nightshift.sh
nightshift deploy agent.py
nightshift run code_reviewer --prompt "Review the auth module for security issues"
```
Or run locally on a machine with Firecracker:
```bash
nightshift run code_reviewer agent.py --prompt "Review the auth module"
```
## AgentConfig
| Parameter | Default | Description |
|-----------|---------|-------------|
| `workspace` | `""` | Host directory to mount into the VM at `/workspace` |
| `vcpu_count` | `2` | Number of vCPUs allocated to the VM |
| `mem_size_mib` | `2048` | Memory in MiB allocated to the VM |
| `timeout_seconds` | `1800` | Maximum agent execution time (30 min default) |
| `forward_env` | `[]` | Environment variables to forward from host |
| `env` | `{}` | Static environment variables set in the VM |
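To make the defaults above concrete, the table can be mirrored as a plain dataclass (an illustrative sketch of the documented fields only, not the SDK's actual `AgentConfig` definition):

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfigSketch:
    """Illustrative mirror of the documented AgentConfig fields and defaults."""
    workspace: str = ""            # host directory mounted at /workspace
    vcpu_count: int = 2            # vCPUs allocated to the VM
    mem_size_mib: int = 2048       # memory in MiB
    timeout_seconds: int = 1800    # max execution time (30 min default)
    forward_env: list = field(default_factory=list)   # env vars forwarded from host
    env: dict = field(default_factory=dict)           # static env vars set in the VM

cfg = AgentConfigSketch(workspace="./my-project", forward_env=["ANTHROPIC_API_KEY"])
print(cfg.vcpu_count, cfg.timeout_seconds)  # 2 1800
```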
## Documentation
Full documentation at [docs.nightshift.sh](https://docs.nightshift.sh).
## License
Apache 2.0 — see [LICENSE](LICENSE).
| text/markdown | Nightshift Contributors | null | null | null | Apache-2.0 | agents, ai, claude, firecracker, microvm, orchestrator | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: System :: Distributed Computing"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.22.1",
"anthropic",
"claude-agent-sdk",
"click",
"fastapi",
"httpx",
"httpx-sse",
"sse-starlette",
"tomli-w",
"uvicorn"
] | [] | [] | [] | [
"Homepage, https://nightshift.sh",
"Documentation, https://docs.nightshift.sh",
"Repository, https://github.com/tensor-ninja/nightshift",
"Issues, https://github.com/tensor-ninja/nightshift/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T21:01:13.037000 | nightshift_sdk-0.3.1.tar.gz | 131,200 | 8e/9d/036b746578e1c73eb47a3380e0160331889060128004c0f2e2c86c77647a/nightshift_sdk-0.3.1.tar.gz | source | sdist | null | false | 4944f635253f9c55ac2d4252f3142f3f | 0a65b0a43e271a384f5455ac45b1243febf604770de1c03fbc45ff05bad98fd0 | 8e9d036b746578e1c73eb47a3380e0160331889060128004c0f2e2c86c77647a | null | [
"LICENSE"
] | 193 |
2.3 | blackgeorge | 1.1.8 | Agentic framework (runtime) with desk/worker/workforce primitives | # Blackgeorge: Python Agent Framework for LLM Tool-Calling and Multi-Agent Orchestration
[](https://pypi.org/project/blackgeorge/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://zread.ai/jolovicdev/blackgeorge)
A code-first Python framework for building AI agents, tool-calling workflows, and multi-agent systems with explicit APIs, structured outputs, safe tool execution, and pause/resume flows.
## What you can build with this Python AI agent framework
- tool-calling AI agents with validated inputs
- multi-agent teams that coordinate work
- agentic workflows with parallel and sequential steps
- LLM services with durable run state, events, and resume
## Core primitives for agent orchestration
- **Desk**: orchestrates runs, events, and persistence
- **Worker**: single-agent execution with tools and memory
- **Workforce**: multi-worker coordination and management modes
- **Workflow**: step-based flows with parallel execution
## Feature highlights for tool-calling and multi-agent workflows
- tool execution with confirmation, user input, timeouts, retries, and cancellation
- structured output support with Pydantic models
- event streaming and run store persistence
- collaboration primitives: channel messaging and blackboard state
- memory stores including vector memory with configurable chunking
- LiteLLM adapter for OpenAI-compatible model providers
- MCP tool integration for external tool providers
## Why Blackgeorge
If you want a LangChain alternative that stays close to the metal, Blackgeorge emphasizes small, explicit primitives and clear execution flow. Compared to CrewAI or AutoGen, it keeps orchestration and tool calling predictable while still supporting multi-agent systems, workflows, and OpenAI-compatible function calling through LiteLLM.
## Use cases and examples
- coding agents that edit files with confirmation and audit trails
- research and summarization agents with structured outputs
- support triage and routing across multiple workers
- operational workflows that pause for approvals and resume safely
See `examples/coding_agent` for a full end-to-end example.
## Install
```
uv add blackgeorge
```
For development setup, see `docs/development.md`.
## Quick Start: build your first AI agent
```python
from blackgeorge import Desk, Worker, Job
desk = Desk(model="openai/gpt-5-nano")
worker = Worker(name="Researcher")
job = Job(input="Summarize this topic", expected_output="A short summary")
report = desk.run(worker, job)
print(report.content)
```
## Documentation
See `docs/README.md` for the full documentation set.
Preview locally with `uv run mkdocs serve`.
## Job input
`Job.input` is the payload sent to the worker as the user message. If it is not a string, it is serialized to JSON. Use a string for simple requests, or a structured dict when you want explicit fields.
```python
job = Job(
    input={
        "task": "Fix calculator behavior and update tests.",
        "context": "Use tools to inspect the project files.",
        "requirements": [
            "Confirm divide-by-zero behavior with the user.",
            "Confirm empty-average behavior with the user.",
            "Apply changes using tools.",
        ],
    },
    expected_output="Updated project files with consistent behavior.",
)
```
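The string-versus-JSON rule described above can be sketched with the standard library (a conceptual illustration of the documented behavior, not Blackgeorge's internals; `serialize_job_input` is a hypothetical helper name):

```python
import json

def serialize_job_input(payload):
    """Strings pass through unchanged; any other payload is serialized to JSON."""
    if isinstance(payload, str):
        return payload
    return json.dumps(payload)

print(serialize_job_input("Summarize this topic"))  # sent as-is
print(serialize_job_input({"task": "Fix bug"}))     # sent as '{"task": "Fix bug"}'
```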
## Workforce
```python
from blackgeorge import Desk, Worker, Workforce, Job
desk = Desk(model="openai/gpt-5-nano")
w1 = Worker(name="Researcher")
w2 = Worker(name="Writer")
workforce = Workforce([w1, w2], mode="managed")
job = Job(input="Create a market report")
report = desk.run(workforce, job)
```
## Workflow
```python
from blackgeorge import Desk, Worker, Job
from blackgeorge.workflow import Step, Parallel
desk = Desk(model="openai/gpt-5-nano")
analyst = Worker(name="Analyst")
writer = Worker(name="Writer")
flow = desk.flow([
    Step(analyst),
    Parallel(Step(writer), Step(analyst)),
])
job = Job(input="Analyze product feedback")
report = flow.run(job)
```
## Streaming
```python
report = desk.run(worker, job, stream=True)
```
## Pause and resume
```python
from blackgeorge import Desk, Worker, Job
from blackgeorge.tools import tool
@tool(requires_confirmation=True)
def risky_action(action: str) -> str:
    return f"ran:{action}"

desk = Desk(model="openai/gpt-5-nano")
worker = Worker(name="Ops", tools=[risky_action])
job = Job(input="run risky")
report = desk.run(worker, job)
if report.status == "paused":
    report = desk.resume(report, True)
## Session: multi-turn conversations
```python
from blackgeorge import Desk, Worker
desk = Desk(model="openai/gpt-5-nano")
worker = Worker(name="ChatBot")
session = desk.session(worker)
session.run("My name is Alice")
session.run("What's my name?")
session_id = session.session_id
later_session = desk.session(worker, session_id=session_id)
later_session.run("Where do I live?")
```
| text/markdown | Dušan Jolović | Dušan Jolović <jolovic@pm.me> | null | null | MIT License Copyright (c) 2026 Dušan Jolović Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"chromadb>=0.4.0",
"instructor[litellm]>=1.11.0",
"litellm>=1.81.13",
"mcp>=1.0.0",
"pydantic>=2.8.0"
] | [] | [] | [] | [
"Repository, https://github.com/jolovicdev/blackgeorge"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T21:00:22.114145 | blackgeorge-1.1.8.tar.gz | 54,363 | 34/51/77c01d7305e5774deadcff0db11f4e754536697823b320e57577ba08890f/blackgeorge-1.1.8.tar.gz | source | sdist | null | false | fa453f2ab39616cfb866b8cf0e5d02f9 | 7eac46bb88747772cea6c9d103cc46923df45ffc98446ec2e9c7c3fff8d476a6 | 345177c01d7305e5774deadcff0db11f4e754536697823b320e57577ba08890f | null | [] | 196 |
2.4 | lintro | 0.52.1 | A unified CLI tool for code formatting, linting, and quality assurance | # Lintro
<!-- markdownlint-disable MD033 MD013 -->
<p align="center">
<img src="https://raw.githubusercontent.com/lgtm-hq/py-lintro/main/assets/images/lintro.png" alt="Lintro Logo" style="width:100%;max-width:800px;height:auto;">
</p>
<p align="center">
A comprehensive CLI tool that unifies various code formatting, linting, and quality
assurance tools under a single command-line interface.
</p>
<!-- Badges: Build & Quality -->
<p align="center">
<a href="https://github.com/lgtm-hq/py-lintro/actions/workflows/test-and-coverage.yml?query=branch%3Amain"><img src="https://img.shields.io/github/actions/workflow/status/lgtm-hq/py-lintro/test-and-coverage.yml?label=tests&branch=main&logo=githubactions&logoColor=white" alt="Tests"></a>
<a href="https://github.com/lgtm-hq/py-lintro/actions/workflows/ci-pipeline.yml?query=branch%3Amain"><img src="https://img.shields.io/github/actions/workflow/status/lgtm-hq/py-lintro/ci-pipeline.yml?label=ci&branch=main&logo=githubactions&logoColor=white" alt="CI"></a>
<a href="https://github.com/lgtm-hq/py-lintro/actions/workflows/docker-build-publish.yml?query=branch%3Amain"><img src="https://img.shields.io/github/actions/workflow/status/lgtm-hq/py-lintro/docker-build-publish.yml?label=docker&logo=docker&branch=main" alt="Docker"></a>
<a href="https://codecov.io/gh/lgtm-hq/py-lintro"><img src="https://codecov.io/gh/lgtm-hq/py-lintro/branch/main/graph/badge.svg" alt="Coverage"></a>
</p>
<!-- Badges: Releases -->
<p align="center">
<a href="https://github.com/lgtm-hq/py-lintro/releases/latest"><img src="https://img.shields.io/github/v/release/lgtm-hq/py-lintro?label=release" alt="Release"></a>
<a href="https://pypi.org/project/lintro/"><img src="https://img.shields.io/pypi/v/lintro?label=pypi" alt="PyPI"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.11+-blue" alt="Python"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License"></a>
</p>
<!-- Badges: Security & Supply Chain -->
<p align="center">
<a href="https://github.com/lgtm-hq/py-lintro/actions/workflows/codeql.yml?query=branch%3Amain"><img src="https://github.com/lgtm-hq/py-lintro/actions/workflows/codeql.yml/badge.svg?branch=main" alt="CodeQL"></a>
<a href="https://scorecard.dev/viewer/?uri=github.com/lgtm-hq/py-lintro"><img src="https://api.securityscorecards.dev/projects/github.com/lgtm-hq/py-lintro/badge" alt="OpenSSF Scorecard"></a>
<a href="https://www.bestpractices.dev/projects/11142"><img src="https://www.bestpractices.dev/projects/11142/badge" alt="OpenSSF Best Practices"></a>
<a href="docs/security/assurance.md"><img src="https://img.shields.io/badge/SBOM-CycloneDX-brightgreen" alt="SBOM"></a>
<a href="https://github.com/lgtm-hq/py-lintro/actions/workflows/sbom-on-main.yml?query=branch%3Amain"><img src="https://img.shields.io/github/actions/workflow/status/lgtm-hq/py-lintro/sbom-on-main.yml?label=sbom&branch=main" alt="SBOM Status"></a>
</p>
<!-- markdownlint-enable MD033 MD013 -->
## 🚀 Quick Start
```bash
uv pip install lintro # Install (or: pip install lintro)
lintro check . # Find issues (alias: chk)
lintro format . # Fix issues (alias: fmt)
lintro check --output-format grid # Beautiful output
```
<!-- TODO: Add screenshot of grid output -->
## ✨ Why Lintro?
- **🎯 Unified Interface** - One command for all your linting and formatting tools
- **📊 Consistent Output** - Beautiful, standardized output formats across all tools
- **🔧 Auto-fixing** - Automatically fix issues where possible
- **🐳 Docker Ready** - Run in isolated containers for consistent environments
- **📈 Rich Reporting** - Multiple formats: grid, JSON, HTML, CSV, Markdown
- **⚡ Fast** - Optimized parallel execution
## 🔌 Works With Your Existing Configs
Lintro respects your native tool configurations. If you have a `.prettierrc`,
`pyproject.toml [tool.ruff]`, or `.yamllint`, Lintro uses them automatically - no
migration required.
- **Native configs are detected** - Your existing `.prettierrc`, `.oxlintrc.json`, etc.
work as-is
- **Enforce settings override consistently** - Set `line_length: 88` once, applied
everywhere
- **Fallback defaults when needed** - Tools without native configs use sensible defaults
See the [Configuration Guide](docs/configuration.md) for details on the 4-tier config
system.
## 🛠️ Supported Tools
<!-- markdownlint-disable MD013 MD033 MD060 -->
<table>
<thead>
<tr><th>Tool</th><th>Language</th><th>Auto-fix</th><th>Install</th></tr>
</thead>
<tbody>
<tr><th colspan="4">Linters</th></tr>
<tr>
<td><a href="https://github.com/rhysd/actionlint"><img src="https://img.shields.io/badge/Actionlint-24292e?logo=github&logoColor=white" alt="Actionlint"></a></td>
<td>⚙️ GitHub Actions</td>
<td>-</td>
<td><a href="https://github.com/rhysd/actionlint/releases">GitHub Releases</a></td>
</tr>
<tr>
<td><a href="https://github.com/rust-lang/rust-clippy"><img src="https://img.shields.io/badge/Clippy-000000?logo=rust&logoColor=white" alt="Clippy"></a></td>
<td>🦀 Rust</td>
<td>✅</td>
<td><code>rustup component add clippy</code></td>
</tr>
<tr>
<td><a href="https://github.com/hadolint/hadolint"><img src="https://img.shields.io/badge/Hadolint-2496ED?logo=docker&logoColor=white" alt="Hadolint"></a></td>
<td>🐳 Dockerfile</td>
<td>-</td>
<td><a href="https://github.com/hadolint/hadolint/releases">GitHub Releases</a></td>
</tr>
<tr>
<td><a href="https://github.com/DavidAnson/markdownlint-cli2"><img src="https://img.shields.io/badge/Markdownlint--cli2-000000?logo=markdown&logoColor=white" alt="Markdownlint"></a></td>
<td>📝 Markdown</td>
<td>-</td>
<td><code>bun add -g markdownlint-cli2</code><br><code>npm install -g markdownlint-cli2</code></td>
</tr>
<tr>
<td><a href="https://oxc.rs/"><img src="https://img.shields.io/badge/Oxlint-e05d44?logo=javascript&logoColor=white" alt="Oxlint"></a></td>
<td>🟨 JS/TS</td>
<td>✅</td>
<td><code>bun add -g oxlint</code><br><code>npm install -g oxlint</code></td>
</tr>
<tr>
<td><a href="https://github.com/jsh9/pydoclint"><img src="https://img.shields.io/badge/pydoclint-3776AB?logo=python&logoColor=white" alt="pydoclint"></a></td>
<td>🐍 Python</td>
<td>-</td>
<td>📦</td>
</tr>
<tr>
<td><a href="https://www.shellcheck.net/"><img src="https://img.shields.io/badge/ShellCheck-4EAA25?logo=gnubash&logoColor=white" alt="ShellCheck"></a></td>
<td>🐚 Shell Scripts</td>
<td>-</td>
<td><code>brew install shellcheck</code><br><a href="https://github.com/koalaman/shellcheck/releases">GitHub Releases</a></td>
</tr>
<tr>
<td><a href="https://github.com/adrienverge/yamllint"><img src="https://img.shields.io/badge/Yamllint-cb171e?logo=yaml&logoColor=white" alt="Yamllint"></a></td>
<td>🧾 YAML</td>
<td>-</td>
<td>📦</td>
</tr>
<tr><th colspan="4">Formatters</th></tr>
<tr>
<td><a href="https://github.com/psf/black"><img src="https://img.shields.io/badge/Black-000000?logo=python&logoColor=white" alt="Black"></a></td>
<td>🐍 Python</td>
<td>✅</td>
<td>📦</td>
</tr>
<tr>
<td><a href="https://oxc.rs/"><img src="https://img.shields.io/badge/Oxfmt-e05d44?logo=javascript&logoColor=white" alt="Oxfmt"></a></td>
<td>🟨 JS/TS</td>
<td>✅</td>
<td><code>bun add -g oxfmt</code><br><code>npm install -g oxfmt</code></td>
</tr>
<tr>
<td><a href="https://prettier.io/"><img src="https://img.shields.io/badge/Prettier-1a2b34?logo=prettier&logoColor=white" alt="Prettier"></a></td>
<td>🟨 JS/TS · 🧾 JSON</td>
<td>✅</td>
<td><code>bun add -g prettier</code><br><code>npm install -g prettier</code></td>
</tr>
<tr>
<td><a href="https://github.com/mvdan/sh"><img src="https://img.shields.io/badge/shfmt-4EAA25?logo=gnubash&logoColor=white" alt="shfmt"></a></td>
<td>🐚 Shell Scripts</td>
<td>✅</td>
<td><code>brew install shfmt</code><br><a href="https://github.com/mvdan/sh/releases">GitHub Releases</a></td>
</tr>
<tr>
<td><a href="https://github.com/rust-lang/rustfmt"><img src="https://img.shields.io/badge/rustfmt-000000?logo=rust&logoColor=white" alt="rustfmt"></a></td>
<td>🦀 Rust</td>
<td>✅</td>
<td><code>rustup component add rustfmt</code></td>
</tr>
<tr><th colspan="4">Lint + Format</th></tr>
<tr>
<td><a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/badge/Ruff-000?logo=ruff&logoColor=white" alt="Ruff"></a></td>
<td>🐍 Python</td>
<td>✅</td>
<td>📦</td>
</tr>
<tr>
<td><a href="https://sqlfluff.com/"><img src="https://img.shields.io/badge/SQLFluff-4b5563?logo=database&logoColor=white" alt="SQLFluff"></a></td>
<td>🗃️ SQL</td>
<td>✅</td>
<td><code>pipx install sqlfluff</code></td>
</tr>
<tr>
<td><a href="https://taplo.tamasfe.dev/"><img src="https://img.shields.io/badge/Taplo-9b4dca?logo=toml&logoColor=white" alt="Taplo"></a></td>
<td>🧾 TOML</td>
<td>✅</td>
<td><code>brew install taplo</code><br><a href="https://github.com/tamasfe/taplo/releases">GitHub Releases</a></td>
</tr>
<tr><th colspan="4">Type Checkers</th></tr>
<tr>
<td><a href="https://astro.build/"><img src="https://img.shields.io/badge/Astro-ff5d01?logo=astro&logoColor=white" alt="Astro"></a></td>
<td>🚀 Astro</td>
<td>-</td>
<td><code>bun add astro</code><br><code>npm install astro</code></td>
</tr>
<tr>
<td><a href="https://mypy-lang.org/"><img src="https://img.shields.io/badge/Mypy-2d50a5?logo=python&logoColor=white" alt="Mypy"></a></td>
<td>🐍 Python</td>
<td>-</td>
<td>📦</td>
</tr>
<tr>
<td><a href="https://svelte.dev/"><img src="https://img.shields.io/badge/svelte--check-ff3e00?logo=svelte&logoColor=white" alt="svelte-check"></a></td>
<td>🔥 Svelte</td>
<td>-</td>
<td><code>bun add -D svelte-check</code><br><code>npm install -D svelte-check</code></td>
</tr>
<tr>
<td><a href="https://www.typescriptlang.org/"><img src="https://img.shields.io/badge/TypeScript-3178c6?logo=typescript&logoColor=white" alt="TypeScript"></a></td>
<td>🟨 JS/TS</td>
<td>-</td>
<td><code>bun add -g typescript</code><br><code>npm install -g typescript</code><br><code>brew install typescript</code></td>
</tr>
<tr>
<td><a href="https://github.com/vuejs/language-tools"><img src="https://img.shields.io/badge/vue--tsc-42b883?logo=vuedotjs&logoColor=white" alt="vue-tsc"></a></td>
<td>💚 Vue</td>
<td>-</td>
<td><code>bun add -D vue-tsc</code><br><code>npm install -D vue-tsc</code></td>
</tr>
<tr><th colspan="4">Security</th></tr>
<tr>
<td><a href="https://github.com/PyCQA/bandit"><img src="https://img.shields.io/badge/Bandit-yellow?logo=python&logoColor=white" alt="Bandit"></a></td>
<td>🐍 Python</td>
<td>-</td>
<td>📦</td>
</tr>
<tr>
<td><a href="https://gitleaks.io/"><img src="https://img.shields.io/badge/Gitleaks-dc2626?logo=git&logoColor=white" alt="Gitleaks"></a></td>
<td>🔐 Secret Detection</td>
<td>-</td>
<td><code>brew install gitleaks</code><br><a href="https://github.com/gitleaks/gitleaks/releases">GitHub Releases</a></td>
</tr>
<tr>
<td><a href="https://github.com/rustsec/rustsec/tree/main/cargo-audit"><img src="https://img.shields.io/badge/cargo--audit-000000?logo=rust&logoColor=white" alt="cargo-audit"></a></td>
<td>🦀 Rust</td>
<td>-</td>
<td><code>cargo install cargo-audit</code></td>
</tr>
<tr>
<td><a href="https://github.com/EmbarkStudios/cargo-deny"><img src="https://img.shields.io/badge/cargo--deny-000000?logo=rust&logoColor=white" alt="cargo-deny"></a></td>
<td>🦀 Rust</td>
<td>-</td>
<td><code>cargo install cargo-deny</code></td>
</tr>
<tr>
<td><a href="https://semgrep.dev/"><img src="https://img.shields.io/badge/Semgrep-5b21b6?logo=semgrep&logoColor=white" alt="Semgrep"></a></td>
<td>🔒 Multi-language</td>
<td>-</td>
<td><code>pipx install semgrep</code><br><code>pip install semgrep</code><br><code>brew install semgrep</code></td>
</tr>
</tbody>
</table>
> 📦 = bundled with lintro — no separate install needed\
> ⚡ Node.js tools support `--auto-install` to install dependencies automatically
<!-- markdownlint-enable MD013 MD033 MD060 -->
## 📦 Installation
**Python 3.11+** is required. Check tool versions with `lintro list-tools`.
```bash
# PyPI (recommended)
uv pip install lintro # or: pip install lintro
# Homebrew (macOS binary)
brew tap lgtm-hq/tap && brew install lintro-bin
# Docker (tools image - includes all external tools)
docker run --rm -v $(pwd):/code ghcr.io/lgtm-hq/py-lintro:latest check
# Docker (base image - minimal, no external tools)
docker run --rm -v $(pwd):/code ghcr.io/lgtm-hq/py-lintro:base check
```
See [Getting Started](docs/getting-started.md) for detailed installation options.
## 💻 Usage
```bash
# Check all files (alias: chk)
lintro check .
# Auto-fix issues (alias: fmt)
lintro format .
# Grid output with grouping
lintro check --output-format grid --group-by file
# Run specific tools
lintro check --tools ruff,prettier,mypy
# Auto-install Node.js dependencies
lintro check --tools tsc --auto-install
# Exclude directories
lintro check --exclude "node_modules,dist,venv"
# List available tools
lintro list-tools
```
### 🐳 Docker
```bash
# Run from GHCR (tools image - recommended)
docker run --rm -v $(pwd):/code ghcr.io/lgtm-hq/py-lintro:latest check
# With formatting
docker run --rm -v $(pwd):/code ghcr.io/lgtm-hq/py-lintro:latest check --output-format grid
# Base image (minimal, no external tools)
docker run --rm -v $(pwd):/code ghcr.io/lgtm-hq/py-lintro:base check
```
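For CI, the same commands drop into a workflow. A minimal sketch (the job layout and action versions here are illustrative; the [GitHub Integration](docs/github-integration.md) guide documents the supported setup):

```yaml
name: lint
on: [push, pull_request]

jobs:
  lintro:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install lintro
      - run: lintro check . --output-format grid
```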
## 📚 Documentation
| Guide | Description |
| ------------------------------------------------ | --------------------------------------- |
| [Getting Started](docs/getting-started.md) | Installation, first steps, requirements |
| [Configuration](docs/configuration.md) | Tool configuration, options, presets |
| [Docker Usage](docs/docker.md) | Containerized development |
| [GitHub Integration](docs/github-integration.md) | CI/CD setup, workflows |
| [Contributing](docs/contributing.md) | Development guide, adding tools |
| [Troubleshooting](docs/troubleshooting.md) | Common issues and solutions |
**Advanced:** [Tool Analysis](docs/tool-analysis/) · [Architecture](docs/architecture/)
· [Security](docs/security/)
## 🔨 Development
```bash
# Clone and install
git clone https://github.com/lgtm-hq/py-lintro.git
cd py-lintro
uv sync --dev
# Run tests
./scripts/local/run-tests.sh
# Run lintro on itself
./scripts/local/local-lintro.sh check --output-format grid
```
## 🤝 Community
- 🐛
[Bug Reports](https://github.com/lgtm-hq/py-lintro/issues/new?template=bug_report.md)
- 💡
[Feature Requests](https://github.com/lgtm-hq/py-lintro/issues/new?template=feature_request.md)
- ❓ [Questions](https://github.com/lgtm-hq/py-lintro/issues/new?template=question.md)
- 📖 [Contributing Guide](docs/contributing.md)
## 📄 License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | lgtm-hq <turbocoder13@gmail.com> | null | null | null | linting, formatting, code-quality, cli, python, javascript, yaml, docker | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1.8",
"coverage-badge>=1.1.2",
"loguru>=0.7.3",
"packaging>=25.0",
"pathspec>=0.12.1",
"pydantic>=2.12.5",
"rich>=14.2.0",
"tabulate>=0.9.0",
"yamllint>=1.37.1",
"httpx>=0.28.1",
"defusedxml>=0.7.1",
"ruff>=0.14.10",
"black>=26.1.0",
"bandit>=1.9.2",
"mypy>=1.19.1",
"pydoclint>=0.8.3",
"pytest>=9.0.2; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-mock>=3.15.1; extra == \"dev\"",
"pytest-xdist>=3.8.0; extra == \"dev\"",
"pytest-sugar>=1.1.1; extra == \"dev\"",
"tox>=4.34.1; extra == \"dev\"",
"allure-pytest>=2.15.3; extra == \"dev\"",
"ruff>=0.14.10; extra == \"dev\"",
"mypy>=1.19.1; extra == \"dev\"",
"coverage-badge>=1.1.2; extra == \"dev\"",
"python-semantic-release>=10.5.3; extra == \"dev\"",
"assertpy>=1.1; extra == \"dev\"",
"httpx>=0.28.1; extra == \"dev\"",
"pytest>=9.0.2; extra == \"test\"",
"pytest-cov>=7.0.0; extra == \"test\"",
"pytest-mock>=3.15.1; extra == \"test\"",
"pytest-xdist>=3.8.0; extra == \"test\"",
"assertpy>=1.1; extra == \"test\"",
"types-setuptools>=80.9.0.20251223; extra == \"typing\"",
"types-tabulate>=0.9.0.20241207; extra == \"typing\"",
"semgrep>=1.151.0; extra == \"tools\"",
"sqlfluff>=4.0.0; extra == \"tools\""
] | [] | [] | [] | [
"Homepage, https://github.com/lgtm-hq/py-lintro",
"Documentation, https://github.com/lgtm-hq/py-lintro/docs",
"Source, https://github.com/lgtm-hq/py-lintro"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T20:59:40.509196 | lintro-0.52.1.tar.gz | 1,847,565 | 35/1c/d741b1715d6df42f6ba20cba36d78b5a48a6ce9cc25c3064e32fb1e646b3/lintro-0.52.1.tar.gz | source | sdist | null | false | 1fae42529e274af1a45f60e0758f6275 | fc15041606c8a455f1cab3fe6d2a8674a0f3b3ec1b4f189738990a8afcda1157 | 351cd741b1715d6df42f6ba20cba36d78b5a48a6ce9cc25c3064e32fb1e646b3 | MIT | [
"LICENSE"
] | 189 |
2.4 | snaffler-ng | 1.1.1 | Snaffler Impacket port - find credentials and sensitive data on SMB shares | # snaffler-ng
Impacket port of [Snaffler](https://github.com/SnaffCon/Snaffler).
**snaffler-ng** is a post-exploitation / red teaming tool designed to **discover readable SMB shares**, **walk directory trees**, and **identify credentials and sensitive data** on Windows systems.
## Features
- SMB share discovery via SRVSVC (NetShareEnum)
- DFS namespace discovery via LDAP (v1 + v2), merged and deduplicated with share enumeration
- Recursive directory tree walking
- Regex-based file and content classification
- NTLM authentication (password or pass-the-hash)
- Kerberos authentication
- Multithreaded scanning (share / tree / file stages)
- Optional file download (“snaffling”)
- Resume support via SQLite state database
- Compatible with original and custom TOML rule sets
- Deterministic, ingestion-friendly logging (plain / JSON / TSV)
- Pipe-friendly: accepts NetExec (nxc) `--shares` output via `--stdin`
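Conceptually, the regex-based classification above works by applying named rules to file paths and contents. The rule names and patterns below are purely illustrative, not the shipped TOML rule set:

```python
import re

# Illustrative classification rules (NOT the shipped rule set): each maps
# a rule name to a pattern applied to the file name and, optionally, content.
RULES = {
    "keepass-database": re.compile(r"\.kdbx?$", re.IGNORECASE),
    "private-key": re.compile(r"-----BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "aws-access-key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def classify(name: str, content: str = "") -> list[str]:
    """Return the names of all rules matching the file name or its content."""
    hits = []
    for rule, pattern in RULES.items():
        if pattern.search(name) or (content and pattern.search(content)):
            hits.append(rule)
    return hits

print(classify("backup/passwords.kdbx"))                      # ['keepass-database']
print(classify("id_rsa", "-----BEGIN RSA PRIVATE KEY-----"))  # ['private-key']
```

Custom rule sets supply patterns like these via TOML, compatible with the original Snaffler's rules.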
## Installation
```bash
pip install snaffler-ng
```
## Quick Start
### Full Domain Discovery
Providing only a domain triggers full domain discovery:
```bash
snaffler \
-u USERNAME \
-p PASSWORD \
-d DOMAIN.LOCAL
```
This will automatically:
- Query Active Directory for computer objects
- Discover DFS namespace targets via LDAP (v1 `fTDfs` + v2 `msDFS-Linkv2`)
- Enumerate SMB shares on discovered hosts
- Merge and deduplicate DFS and SMB share paths
- Scan all readable shares
When using Kerberos, set `KRB5CCNAME` to a valid ticket cache and use hostnames/FQDNs:
```bash
snaffler \
-k \
--use-kcache \
-d DOMAIN.LOCAL \
--dc-host CORP-DC02
```
---
### Targeted Scans
Scan a specific UNC path (no discovery):
```bash
snaffler \
-u USERNAME \
-p PASSWORD \
--unc //192.168.1.10/Share
```

Scan multiple computers (share discovery enabled):
```bash
snaffler \
-u USERNAME \
-p PASSWORD \
--computer 192.168.1.10 \
--computer 192.168.1.11
```
Load target computers from file:
```bash
snaffler \
-u USERNAME \
-p PASSWORD \
--computer-file targets.txt
```
### Pipe from NetExec (nxc)
Pipe `nxc smb --shares` output directly into snaffler-ng with `--stdin`:
```bash
nxc smb 10.8.50.20 -u user -p pass --shares | snaffler -u user -p pass --stdin
```
This parses NXC's share output, extracts UNC paths, and feeds them into the file scanner. Snaffler's existing share/directory rules handle filtering.
## Logging & Output Formats
snaffler-ng supports three output formats, each with a distinct purpose:
- `Plain` (default, human-readable)
- `JSON` (structured, SIEM-friendly)
- `TSV` (flat, ingestion-friendly)
## Resume Support
Scanning large environments takes time, so interrupted scans can be resumed with the `--resume` argument:
```bash
snaffler \
-u USERNAME \
-p PASSWORD \
--computer-file targets.txt \
--resume
```
State tracks processed shares, directories, and files to avoid re-scanning.
## Authentication Options
- NTLM username/password
- NTLM pass-the-hash (`--hash`)
- Kerberos (`-k`)
- Kerberos via existing ccache (`--use-kcache`)
| text/markdown | null | totekuh <totekuh@protonmail.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to the Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by the Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding any notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2024 totekuh
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"impacket>=0.11.0",
"typer>=0.12.0",
"rich>=13.0.0",
"tomlkit>=0.12.0",
"pytest>=8.0; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T20:59:24.323519 | snaffler_ng-1.1.1.tar.gz | 71,202 | a2/d9/d3939b90d26a432f1f57dc1b6bad599dbeb08858d4937c1eb9bd4344708e/snaffler_ng-1.1.1.tar.gz | source | sdist | null | false | 0437ff459dbbf039b663c81720dba880 | 49401bbea3d17fef84313bfe5106cf2aa22961bc045e0c0d4c7b13ec726af36d | a2d9d3939b90d26a432f1f57dc1b6bad599dbeb08858d4937c1eb9bd4344708e | null | [
"LICENSE"
] | 202 |
2.4 | fastapi-crud-engine | 0.1.4 | FastAPI CRUD router and repository toolkit. | # fastapi-crud-engine
Async CRUD engine for FastAPI + SQLAlchemy with built-in filtering, pagination, soft delete, audit logs, cache, rate limiting, import/export, and webhooks.
`fastapi-crud-engine` helps you ship consistent CRUD APIs faster with less boilerplate and more production-ready defaults.
[](https://pypi.org/project/fastapi-crud-engine/)
[](https://pypi.org/project/fastapi-crud-engine/)
[](https://github.com/Lakeserl/fastapi_crud_engine/LICENSE)
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [Core Usage](#core-usage)
- [Feature-by-Feature Usage](#feature-by-feature-usage)
- [Configuration](#configuration)
- [Testing and Development](#testing-and-development)
- [Release and Contributing](#release-and-contributing)
- [License](#license)
- [Acknowledgements](#acknowledgements)
- [Security](#security)
## Features
- Automatic CRUD router:
- `GET /`, `GET /{pk}`, `POST /`, `PUT /{pk}`, `PATCH /{pk}`, `DELETE /{pk}`
- Soft delete and restore endpoints
- Advanced `FilterSet` support:
- exact, search, icontains, ordering, range, in, isnull
- Pagination with consistent response schema
- Bulk create endpoint (`/bulk`)
- CSV/XLSX import and export (`/import`, `/export`)
- Audit trail for create/update/delete/restore
- Cache backend (in-memory or Redis)
- Rate limiting (in-memory or Redis)
- Webhook delivery (`http` or `celery`)
- Lifecycle hooks (`before_*`, `after_*`)
- Field-level permissions by role
- Built-in exception mapping for FastAPI
## Installation
Requirements: Python `>=3.11`
### Recommended: install in a virtual environment
```bash
python -m venv .venv
source .venv/bin/activate
python -m pip install -U pip
```
### Install from PyPI
```bash
python -m pip install fastapi-crud-engine
```
### Install optional extras
```bash
python -m pip install "fastapi-crud-engine[excel,redis,celery]"
```
- `excel`: enables XLSX import/export via `openpyxl`
- `redis`: enables Redis cache and Redis rate limiter
- `celery`: enables async webhook delivery through Celery workers
### Install from source (local development)
```bash
git clone https://github.com/Lakeserl/auto-crud.git
cd auto-crud
python -m pip install -e ".[dev]"
```
## Quickstart
```python
from contextlib import asynccontextmanager
from typing import AsyncGenerator
from fastapi import FastAPI
from pydantic import BaseModel
from sqlalchemy import String
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from fastapi_crud_engine.core.handlers import register_exception_handlers
from fastapi_crud_engine.core.mixins import SoftDeleteMixin
from fastapi_crud_engine.router import CRUDRouter
engine = create_async_engine("sqlite+aiosqlite:///./app.db")
SessionLocal = async_sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
class Base(DeclarativeBase):
pass
class User(SoftDeleteMixin, Base):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True)
email: Mapped[str] = mapped_column(String(255), unique=True, index=True)
class UserSchema(BaseModel):
id: int | None = None
email: str
model_config = {"from_attributes": True}
async def get_db() -> AsyncGenerator[AsyncSession, None]:
async with SessionLocal() as session:
yield session
@asynccontextmanager
async def lifespan(app: FastAPI):
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
yield
app = FastAPI(lifespan=lifespan)
register_exception_handlers(app)
app.include_router(
CRUDRouter(
model=User,
schema=UserSchema,
db=get_db,
prefix="/users",
soft_delete=True,
)
)
```
Run:
```bash
uvicorn main:app --reload
```
## Core Usage
### Endpoints generated by `CRUDRouter`
For `prefix="/users"`:
- `GET /users`
- `GET /users/{pk}`
- `POST /users`
- `PUT /users/{pk}`
- `PATCH /users/{pk}`
- `DELETE /users/{pk}`
- `POST /users/bulk`
- `GET /users/export?fmt=csv|xlsx`
- `POST /users/import`
- `GET /users/deleted` (when `soft_delete=True`)
- `POST /users/{pk}/restore` (when `soft_delete=True`)
## Feature-by-Feature Usage
### 1. Filtering and pagination
```python
from fastapi_crud_engine.core.filters import FilterSet
router = CRUDRouter(
...,
filterset=FilterSet(
fields=["role", "status"],
search_fields=["email", "name"],
ordering_fields=["id", "created_at", "email"],
range_fields=["created_at"],
in_fields=["role"],
nullable_fields=["deleted_at"],
default_ordering="-id",
),
)
```
Example queries:
```http
GET /users?page=1&size=20
GET /users?role=admin
GET /users?search=john
GET /users?ordering=-created_at,email
GET /users?created_at__gte=2026-01-01&created_at__lte=2026-12-31
GET /users?role__in=admin,editor
GET /users?deleted_at__isnull=true
```
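Conceptually, the Django-style double-underscore suffixes above map to comparison operators. The sketch below illustrates the idea against plain dicts — it is not the library's internal implementation:

```python
# Illustrative mapping from Django-style filter suffixes (role__in,
# created_at__gte, deleted_at__isnull) to comparisons. Conceptual sketch
# only -- not fastapi-crud-engine internals.

def split_param(param: str) -> tuple[str, str]:
    """Split 'created_at__gte' into ('created_at', 'gte')."""
    field, _, suffix = param.partition("__")
    return field, suffix or "exact"

def matches(row: dict, param: str, raw: str) -> bool:
    field, suffix = split_param(param)
    value = row.get(field)
    if suffix == "exact":
        return value == raw
    if suffix == "gte":  # ISO date strings compare lexicographically
        return value is not None and value >= raw
    if suffix == "lte":
        return value is not None and value <= raw
    if suffix == "in":
        return value in raw.split(",")
    if suffix == "isnull":
        return (value is None) == (raw == "true")
    raise ValueError(f"unsupported suffix: {suffix}")

row = {"role": "admin", "created_at": "2026-03-01", "deleted_at": None}
print(matches(row, "role__in", "admin,editor"))       # True
print(matches(row, "created_at__gte", "2026-01-01"))  # True
print(matches(row, "deleted_at__isnull", "true"))     # True
```

In the real library, `FilterSet` translates these parameters into SQLAlchemy filter clauses rather than in-memory comparisons.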
### 2. Soft delete and restore
Your model must inherit `SoftDeleteMixin`.
```python
from fastapi_crud_engine.core.mixins import SoftDeleteMixin
class User(SoftDeleteMixin, Base):
...
router = CRUDRouter(..., soft_delete=True)
```
When enabled:
- `DELETE` performs soft delete (`deleted_at` is set)
- `GET /{prefix}/deleted` lists soft-deleted records
- `POST /{prefix}/{pk}/restore` restores a record
### 3. Audit trail
```python
router = CRUDRouter(..., audit_trail=True)
```
This logs create/update/delete/restore operations to the audit model generated by `build_audit_log_model`.
### 4. Cache
```python
from fastapi_crud_engine.features.cache import Cache
cache = Cache(ttl=60, backend="memory")
# or Cache(ttl=60, backend="redis", redis_url="redis://localhost:6379/0")
router = CRUDRouter(
...,
cache=cache,
cache_endpoints=["list", "get"],
)
```
Write operations automatically invalidate model cache keys.
### 5. Rate limiting
```python
from fastapi_crud_engine.features.rate_limiter import RateLimiter
router = CRUDRouter(
...,
rate_limit=RateLimiter(requests=100, window=60),
)
```
The default key strategy is the client IP. If the limit is exceeded, the API returns `429` with a `Retry-After` header.
### 6. Field-level permissions
```python
from fastapi_crud_engine.core.permissions import FieldPermissions
permissions = FieldPermissions(
hidden_by_default=["password_hash"],
read={"admin": "__all__", "user": ["id", "email", "role"]},
write={"admin": "__all__", "user": ["email"]},
)
router = CRUDRouter(..., field_permissions=permissions)
```
Role is read from `request.state.role` (fallback: `"user"`).
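Conceptually, read-side filtering works like the following sketch (illustrative only, not the library's code):
```python
# Sketch of role-based field filtering mirroring the FieldPermissions
# semantics above: "__all__" or an explicit field list per role, plus
# fields hidden by default. Not fastapi-crud-engine's actual code.
def visible_fields(record: dict, role: str, read: dict, hidden: list) -> dict:
    # unknown roles fall back to the "user" role, per the docs
    allowed = read.get(role, read.get("user", []))
    return {
        k: v for k, v in record.items()
        if k not in hidden and (allowed == "__all__" or k in allowed)
    }

record = {"id": 1, "email": "a@b.com", "role": "admin", "password_hash": "x"}
read = {"admin": "__all__", "user": ["id", "email", "role"]}
print(visible_fields(record, "user", read, hidden=["password_hash"]))
```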
### 7. Lifecycle hooks
```python
from fastapi_crud_engine.router import CRUDHooks
async def before_create(db, payload):
...
async def after_create(db, obj):
...
router = CRUDRouter(
...,
hooks=CRUDHooks(
before_create=before_create,
after_create=after_create,
),
)
```
Available hooks:
- `before_create`, `after_create`
- `before_update`, `after_update`
- `before_delete`, `after_delete`
- `before_restore`, `after_restore`
### 8. Webhooks
```python
from fastapi_crud_engine.features.webhooks import WebhookConfig, WebhookEndpoint
webhooks = WebhookConfig(
delivery="http", # or "celery"
max_retries=3,
timeout=10,
endpoints=[
WebhookEndpoint(
url="https://example.com/webhook",
events=["user.created", "user.updated"],
secret="super-secret",
headers={"X-App": "my-service"},
)
],
)
router = CRUDRouter(..., webhooks=webhooks)
```
Event names look like: `modelname.created`, `modelname.updated`, `modelname.deleted`, `modelname.restored`.
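On the receiving side you will typically want to verify the payload against the configured `secret`. The exact scheme below (HMAC-SHA256 over the raw body, hex digest) is an assumption for illustration; check the library's webhook documentation or source for the real signing contract:
```python
# ASSUMPTION: signature = hex HMAC-SHA256 of the raw request body.
# Verify against the library's actual webhook signing scheme before use.
import hmac, hashlib

def sign(body: bytes, secret: str) -> str:
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

def verify(body: bytes, secret: str, received_sig: str) -> bool:
    # compare_digest avoids leaking information via timing
    return hmac.compare_digest(sign(body, secret), received_sig)

body = b'{"event": "user.created", "data": {"id": 1}}'
sig = sign(body, "super-secret")
print(verify(body, "super-secret", sig))   # True
print(verify(body, "wrong-secret", sig))   # False
```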
### 9. Import and export
- Export: `GET /{prefix}/export?fmt=csv|xlsx`
- Import: `POST /{prefix}/import` with a CSV/XLSX file
Disable them if you do not need them:
```python
router = CRUDRouter(..., disable=["import", "export"])
```
### 10. Bulk create
- Endpoint: `POST /{prefix}/bulk`
- Payload: list of create schema objects
Disable if not needed:
```python
router = CRUDRouter(..., disable=["bulk"])
```
### 11. Global exception handling
```python
from fastapi_crud_engine.core.handlers import register_exception_handlers
register_exception_handlers(app)
```
This handles library exceptions consistently (not found, permission denied, lock conflict, rate limit, bulk errors).
### 12. Using repository directly
```python
from fastapi_crud_engine.repository import CRUDRepository
repo = CRUDRepository(User, soft_delete=True)
# Inside your service/endpoint:
# obj = await repo.create(db, {"email": "a@b.com"})
# page = await repo.list(db, params=PageParams(page=1, size=20), filter_params=request.query_params)
```
## Configuration
### Common router options
- `soft_delete=True`
- `audit_trail=True`
- `filterset=FilterSet(...)`
- `cache=Cache(...)`
- `rate_limit=RateLimiter(...)`
- `webhooks=WebhookConfig(...)`
- `hooks=CRUDHooks(...)`
- `field_permissions=FieldPermissions(...)`
- `disable=["import", "bulk", "export", "deleted", "restore"]`
### Environment variables
- `REDIS_URL`
- Used by `Cache(backend="auto")` and `RateLimiter(redis_url=None)`
- `CELERY_BROKER_URL`
- Used when `WebhookConfig(delivery="celery")`
### Contributing
- Read `CONTRIBUTING.md`
- Create a branch from `main`
- Add tests for any behavior change
- Open a pull request with clear scope and rationale
## License
MIT License. See `LICENSE`.
## Acknowledgements
- Inspired by `fastapi-crudrouter`: https://github.com/awtkns/fastapi-crudrouter
- Built on top of FastAPI, SQLAlchemy, Pydantic, HTTPX, and the open-source ecosystem
| text/markdown | null | null | null | null | MIT | null | [
"Framework :: FastAPI",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Typing :: Typed",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastapi<1.0.0,>=0.110.0",
"pydantic<3.0.0,>=2.0.0",
"SQLAlchemy<3.0.0,>=2.0.0",
"httpx<1.0.0,>=0.24.0",
"python-dateutil<3.0.0,>=2.8.2",
"openpyxl>=3.1.0; extra == \"excel\"",
"redis>=5.0.0; extra == \"redis\"",
"celery>=5.3.0; extra == \"celery\"",
"aiosqlite>=0.20.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"openpyxl>=3.1.0; extra == \"all\"",
"redis>=5.0.0; extra == \"all\"",
"celery>=5.3.0; extra == \"all\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T20:59:12.808140 | fastapi_crud_engine-0.1.4.tar.gz | 38,812 | a5/33/17393d45bde5352d69802507325c037d21471750252469b1a1c79293082e/fastapi_crud_engine-0.1.4.tar.gz | source | sdist | null | false | 2031e6134d0da317bbf4f52bfa4e14ff | 6f90a7167d68abcb94375884496ed729690dceb00c64b78ebefd5b8f1472930c | a53317393d45bde5352d69802507325c037d21471750252469b1a1c79293082e | null | [
"LICENSE"
] | 210 |
2.4 | blogmore | 0.8.0 | A blog-oriented static site generation engine | # Blogmore
A blog-oriented static site generation engine built in Python.
> [!IMPORTANT]
> This project is built almost 100% using GitHub Copilot. Every other Python
> project you will find in my repository is good old human-built code. This
> project is the complete opposite: as much as possible I'm trying to write
> no code at all as an experiment in getting to know how this process works,
> how to recognise such code, and to understand those who use this process
> every day to, in future, better guide them.
>
> If "AI written" is a huge red flag for you I suggest you avoid this
> project; you'll find [plenty of other pure-davep-built projects via my
> profile](https://github.com/davep).
## Features
- Write everything in Markdown
- All metadata comes from frontmatter
- Uses Jinja2 for templating
- Simple and clean design
- Automatic tag pages and archive generation
- **Built-in search** - Optional client-side full-text search across post titles and content, enabled with `--with-search`; no external services required
- **Automatic icon generation** - Generate favicons and platform-specific icons from a single source image
- iOS (Apple Touch Icons)
- Android/Chrome (with PWA manifest)
- Windows/Edge (with tile configuration)
## Installation
### Using uv (recommended)
```bash
uv tool install blogmore
```
### Using pipx
```bash
pipx install blogmore
```
### From source
```bash
git clone https://github.com/davep/blogmore.git
cd blogmore
uv sync
```
## Usage
### Basic Usage
Create a directory with your markdown posts:
```bash
mkdir posts
```
Create a markdown file with frontmatter:
```markdown
---
title: My First Post
date: 2024-01-15
tags: [python, blog]
---
This is my first blog post!
```
Generate your site:
```bash
blogmore build posts/
```
This will generate your site in the `output/` directory.
### Serve the Site Locally
To serve an existing site:
```bash
blogmore serve -o output/
```
Or generate and serve with auto-reload on changes:
```bash
blogmore serve posts/ -o output/
```
This starts a local HTTP server on port 8000 and watches for changes. Open http://localhost:8000/ in your browser.
Options:
- `-o, --output` - Output directory to serve (default: `output/`)
- `-p, --port` - Port to serve on (default: 8000)
- `--no-watch` - Disable watching for changes
Example:
```bash
blogmore serve posts/ --port 3000 --output my-site/
```
### Custom Options
```bash
blogmore build posts/ \
--templates my-templates/ \
--output my-site/ \
--site-title "My Awesome Blog" \
--site-subtitle "Thoughts on code and technology" \
--site-url "https://example.com"
```
### Configuration File
Blogmore supports configuration files to avoid repetitive command-line arguments. Create a `blogmore.yaml` or `blogmore.yml` file in your project directory:
```yaml
# blogmore.yaml
content_dir: posts
output: my-site
templates: my-templates
site_title: "My Awesome Blog"
site_subtitle: "Thoughts on code and technology"
site_url: "https://example.com"
include_drafts: false
clean_first: false
posts_per_feed: 30
default_author: "Your Name"
extra_stylesheets:
- https://example.com/custom.css
- /assets/extra.css
# Serve-specific options
port: 3000
no_watch: false
# Publish-specific options
branch: gh-pages
remote: origin
```
#### Using Configuration Files
**Automatic Discovery:**
Blogmore automatically searches for `blogmore.yaml` or `blogmore.yml` in the current directory (`.yaml` takes precedence):
```bash
blogmore build # Uses blogmore.yaml if found
```
**Specify a Config File:**
Use the `-c` or `--config` flag to specify a custom config file:
```bash
blogmore build --config my-config.yaml
```
**Override Config with CLI:**
Command-line arguments always take precedence over configuration file values:
```bash
# Uses blogmore.yaml but overrides site_title
blogmore build --site-title "Different Title"
```
#### Configuration Options
All command-line options can be configured in the YAML file:
- `content_dir` - Directory containing markdown posts
- `templates` - Custom templates directory
- `output` - Output directory (default: `output/`)
- `site_title` - Site title (default: "My Blog")
- `site_subtitle` - Site subtitle (optional)
- `site_url` - Base URL of the site
- `include_drafts` - Include posts marked as drafts (default: `false`)
- `clean_first` - Remove output directory before generating (default: `false`)
- `posts_per_feed` - Maximum posts in feeds (default: `20`)
- `default_author` - Default author name for posts without author in frontmatter
- `extra_stylesheets` - List of additional stylesheet URLs
- `port` - Port for serve command (default: `8000`)
- `no_watch` - Disable file watching in serve mode (default: `false`)
- `branch` - Git branch for publish command (default: `gh-pages`)
- `remote` - Git remote for publish command (default: `origin`)
**Note:** The `--config` option itself cannot be set in a configuration file.
### Commands
**Build** (`build`, `generate`, `gen`)
Generate the static site from markdown posts:
```bash
blogmore build posts/ [options]
```
**Serve** (`serve`, `test`)
Serve the site locally with optional generation and auto-reload:
```bash
blogmore serve [posts/] [options]
```
**Publish** (`publish`)
Build and publish the site to a git branch (e.g., for GitHub Pages):
```bash
blogmore publish posts/ [options]
```
This command:
1. Builds your site to the output directory
2. Checks that you're in a git repository
3. Creates or updates a git branch (default: `gh-pages`)
4. Copies the built site to that branch
5. Commits and pushes the changes
Example for GitHub Pages:
```bash
blogmore publish posts/ --branch gh-pages --remote origin
```
**Note:** The publish command requires git to be installed and available in your PATH.
### Common Options
Available for both `build` and `serve` commands:
- `content_dir` - Directory containing markdown posts (required for `build`, optional for `serve`)
- `-c, --config` - Path to configuration file (default: searches for `blogmore.yaml` or `blogmore.yml`)
- `-t, --templates` - Custom templates directory (default: uses bundled templates)
- `-o, --output` - Output directory (default: `output/`)
- `--site-title` - Site title (default: "My Blog")
- `--site-subtitle` - Site subtitle (optional)
- `--site-url` - Base URL of the site
- `--include-drafts` - Include posts marked as drafts
- `--clean-first` - Remove output directory before generating
- `--posts-per-feed` - Maximum posts in feeds (default: 20)
- `--default-author` - Default author name for posts without author in frontmatter
- `--extra-stylesheet` - Additional stylesheet URL (can be used multiple times)
### Serve-Specific Options
- `-p, --port` - Port to serve on (default: 8000)
- `--no-watch` - Disable watching for changes
### Publish-Specific Options
- `--branch` - Git branch to publish to (default: `gh-pages`)
- `--remote` - Git remote to push to (default: `origin`)
## Frontmatter Fields
Required:
- `title` - Post title
Optional:
- `date` - Publication date (YYYY-MM-DD format)
- `category` - Post category (e.g., "python", "webdev")
- `tags` - List of tags or comma-separated string
- `draft` - Set to `true` to mark as draft
- `author` - Author name (uses default_author if not specified)
Example:
```yaml
---
title: My Blog Post
date: 2024-01-15
category: python
tags: [python, webdev, tutorial]
author: Jane Smith
draft: false
---
```
### Categories vs Tags
**Categories** allow you to organize posts into distinct sections or "sub-blogs" within your site. Each post can have one category, and visitors can view all posts in a category at `/category/{category-name}.html`.
**Tags** are for cross-categorization and can be applied multiple times per post. They're useful for topics that span multiple categories.
For example, a blog might use categories like "python", "javascript", "devops" to separate major topics, while using tags like "tutorial", "advanced", "beginner" to indicate post type.
## Icon Generation
Blogmore can automatically generate favicons and platform-specific icons from a single source image. Place a high-resolution square image (ideally 1024×1024 or larger) in your `extras/` directory, and Blogmore will generate all necessary icon formats.
### Generated Icons
From a single source image, Blogmore generates 18 icon files for all major platforms:
- **Favicon files**: Multi-resolution `.ico` and PNG sizes (16×16, 32×32, 96×96)
- **Apple Touch Icons**: Optimized for iOS devices (120×120, 152×152, 167×167, 180×180)
- **Android/Chrome icons**: PWA-ready with web manifest (192×192, 512×512)
- **Windows tiles**: Microsoft Edge and Windows 10+ tiles (70×70, 144×144, 150×150, 310×310, 310×150)
All icons are generated to the `/icons` subdirectory to avoid conflicts with other files.
### Configuration
**Auto-detection** (no configuration needed):
Place one of these files in your `extras/` directory:
- `icon.png` (recommended)
- `icon.jpg` or `icon.jpeg`
- `source-icon.png`
- `app-icon.png`
**Custom filename** via CLI:
```bash
blogmore build content/ --icon-source my-logo.png
```
**Custom filename** via config file:
```yaml
# Icon generation
icon_source: "my-logo.png"
```
### Requirements
- Source image should be square
- Recommended size: 1024×1024 or larger
- Supported formats: PNG, JPEG
- Transparent backgrounds (PNG) work best
## Templates
Blogmore uses Jinja2 templates. The default templates are included, but you can customize them:
- `base.html` - Base template
- `index.html` - Homepage listing (shows full post content)
- `post.html` - Individual post page
- `archive.html` - Archive page
- `tag.html` - Tag page
- `category.html` - Category page
- `search.html` - Search page
- `static/style.css` - Stylesheet
## Search
Search is disabled by default. To enable it, pass `--with-search` on the
command line or set `with_search: true` in the configuration file.
```bash
blogmore build posts/ --with-search
```
```yaml
# blogmore.yaml
with_search: true
```
### How it works
When the site is built with search enabled, two files are added to the output
directory:
- **`search_index.json`** — A JSON array containing the title, URL, date, and
plain-text body of every published post.
- **`search.html`** — A search page with a text input that loads
`search_index.json` and performs an in-browser search as the user types.
A **Search** link is added to the top navigation bar on every page.
No external services or server-side processing are required — everything runs
entirely in the reader's browser.
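The data flow can be sketched as follows; the index shape (title, URL, date, plain-text body) follows the description above, while the matching here is a plain Python stand-in for the JavaScript that actually runs in the browser:
```python
# Sketch of the search flow: build a JSON index of posts, then filter it
# with a plain substring match. Blogmore does the matching client-side in
# JavaScript; this Python version is only an illustration of the idea.
import json

posts = [
    {"title": "Hello Python", "url": "/posts/hello.html",
     "date": "2024-01-15", "body": "My first post about Python."},
    {"title": "On Gardening", "url": "/posts/garden.html",
     "date": "2024-02-01", "body": "Nothing to do with code."},
]
index_json = json.dumps(posts)  # what search_index.json would contain

def search(index_json: str, query: str):
    q = query.lower()
    return [p["url"] for p in json.loads(index_json)
            if q in p["title"].lower() or q in p["body"].lower()]

print(search(index_json, "python"))  # ['/posts/hello.html']
```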
### Performance
The search index is only fetched when the reader opens the search page; it
does not affect the load time of any other page. The search itself uses
built-in JavaScript string operations — no extra libraries are downloaded.
### Linking to a pre-filled search
Append a `?q=` query string to the search URL to pre-fill the search input
and immediately show results. For example:
```
https://example.com/search.html?q=python
```
## Markdown Features
Blogmore supports all standard Markdown features plus:
- **Fenced code blocks** with syntax highlighting
- **Tables**
- **Table of contents** generation
- **Footnotes** - Use `[^1]` in text and `[^1]: Footnote text` at the bottom
- **GitHub-style admonitions** - Alert boxes for notes, tips, warnings, etc.
### Admonitions (Alerts)
Blogmore supports GitHub-style admonitions (also known as alerts) to highlight important information. These use the same syntax as GitHub Markdown:
```markdown
> [!NOTE]
> Useful information that users should know, even when skimming content.
> [!TIP]
> Helpful advice for doing things better or more easily.
> [!IMPORTANT]
> Key information users need to know to achieve their goal.
> [!WARNING]
> Urgent info that needs immediate user attention to avoid problems.
> [!CAUTION]
> Advises about risks or negative outcomes of certain actions.
```
Each admonition type has its own color scheme and icon:
- **Note** - Blue with ℹ️ icon
- **Tip** - Green with 💡 icon
- **Important** - Purple with ❗ icon
- **Warning** - Orange with ⚠️ icon
- **Caution** - Red with 🚨 icon
Admonitions support all standard Markdown formatting within them, including **bold**, *italic*, `code`, [links](url), and multiple paragraphs.
Example with formatting:
```markdown
> [!TIP]
> You can use **bold**, *italic*, and `code` formatting.
>
> Multiple paragraphs work too!
```
### Footnotes Example
Example with footnote:
```markdown
---
title: My Post
---
This is a post with a footnote[^1].
[^1]: This is the footnote content.
```
## Development
### Setup
```bash
make setup
```
### Run Checks
```bash
make checkall # Run all checks (lint, format, typecheck, spell, tests)
make lint # Check linting
make typecheck # Type checking with mypy
make test # Run test suite
```
### Format Code
```bash
make tidy # Fix formatting and linting issues
```
## Testing
Blogmore has a comprehensive test suite with 143 tests achieving 84% code coverage.
```bash
make test # Run all tests
make test-verbose # Run with verbose output
make test-coverage # Run with detailed coverage report
```
For more information, see the [tests README](tests/README.md).
## License
GPL-3.0-or-later
| text/markdown | Dave Pearson | Dave Pearson <davep@davep.org> | null | null | null | blog, blogging, static site generator, markdown, jinja2, html, web | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.15",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"jinja2>=3.1.4",
"markdown>=3.7",
"python-frontmatter>=1.1.0",
"python-dateutil>=2.8.2",
"watchdog>=6.0.0",
"feedgen>=1.0.0",
"pygments>=2.18.0",
"pillow>=10.0.0"
] | [] | [] | [] | [
"Homepage, https://blogmore.davep.dev/",
"Repository, https://github.com/davep/blogmore",
"Documentation, https://blogmore.davep.dev/",
"Source, https://github.com/davep/blogmore",
"Issues, https://github.com/davep/blogmore/issues",
"Discussions, https://github.com/davep/blogmore/discussions"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T20:59:06.915969 | blogmore-0.8.0-py3-none-any.whl | 64,373 | f5/c5/f51d18a5d3d37ff77ae38550efef0f0ee87aa20785db81915019266373b2/blogmore-0.8.0-py3-none-any.whl | py3 | bdist_wheel | null | false | e17557e5112fc205d7c58ec5d5242c43 | b7ad9a84afbbe65493573e103a397ccb33cf6bae647d2a29f450190c6ff8d74f | f5c5f51d18a5d3d37ff77ae38550efef0f0ee87aa20785db81915019266373b2 | GPL-3.0-or-later | [] | 193 |
2.4 | pythinfer | 0.5.1 | CLI to easily merge multiple RDF files and perform inference (OWL or SPARQL) on the result. | # pythinfer - Python Logical Inference
[](https://github.com/robertmuil/pythinfer/actions)
[](https://codecov.io/github/robertmuil/pythinfer)
*Pronounced 'python fur'.*
CLI to easily merge multiple RDF files, perform inference (OWL or SPARQL), and query the result.
Point this at a selection of RDF files and it will merge them, run inference over them, export the results, and execute a query on them. The results are the original statements together with the *useful* set of inferences (see below under `Inference` for what 'useful' means here).
A distinction is made between 'reference' and 'focus' files. See below.
## Quick Start
### Using `uv`
(in the below, replace `~/git` and `~/git/pythinfer/example_projects/eg0-basic` with folder paths on your system, of course)
1. Install `pythinfer` as a tool:
```bash
uv tool install pythinfer
```
1. Clone the repository [OPTIONAL - this is just to get the example]:
```bash
cd ~/git
git clone https://github.com/robertmuil/pythinfer.git
```
1. Execute it as a tool in your project (or the example project):
```bash
cd ~/git/pythinfer/example_projects/eg0-basic # or your own project folder
uvx pythinfer query "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
uvx pythinfer query select_who_knows_whom.rq
```
This will create a `pythinfer.yaml` project file in the project folder, merge all RDF files it finds, perform inference, and then execute the SPARQL query against the inferred graph.
1. To use a specific project file, use the `--project` option before the command:
```bash
uvx pythinfer --project pythinfer_celebrity.yaml query select_who_knows_whom.rq
```
1. Edit the `pythinfer.yaml` file to specify which files to include, try again. Have fun.

## Command Line Interface
### Global Options
- `--project` / `-p`: Specify the path to a project configuration file. If not provided, pythinfer will search for `pythinfer.yaml` in the current directory and parent directories, or create a new project if none is found.
- `--verbose` / `-v`: Enable verbose (DEBUG) logging output.
### Common Options
- `--extra-export`: allows specifying extra export formats beyond the default trig. Can be used to 'strip' quads of their named graph down to triples when exporting (by exporting to ttl or nt)
- NB: `trig` is always included as an export because it is used for caching
- ...
### `pythinfer create`
Create a new project specification file in the current folder by scanning for RDF files.
Invoked automatically if another command is used and no project file exists already.
### `pythinfer merge`
Largely a helper command, not likely to need direct invocation.
### `pythinfer infer`
Perform merging and inference as per the project specification, and export the resulting graphs to the output folder.
### `pythinfer query`
A simple helper command that allows easily specifying a query, or queries, to be executed against the latest full inferred graph.
In principle, the tool could also take care of dependency management so that any change in an input file is automatically re-merged and inferred before a query...
## Python API
In addition to the CLI, the library can be used directly from Python code.
The primary entry-point is an instance of `Project`. Once initialised, the project can be used to perform inference and access the full inferred graph, as well as the source data.
No state is stored in the `Project` instance; it is just a convenient interface. The data is loaded and created as needed, either from source files or from the exports of inference, exactly as the CLI operates. In all cases, the data is loaded from disk.
This means that a client should keep the resultant dataset or graph itself in memory, rather than making multiple calls to the merge or infer methods of the `Project` instance, to avoid repeated loading from disk.
### Quick-start: querying full inferred data
```python
from pythinfer import Project
# Load and infer in one step from the first project discovered in current folder
ds = Project.discover().infer()
# Then you can do what you want with the Dataset
results = ds.query("SELECT ?g ?s ?p ?o WHERE { GRAPH ?g { ?s ?p ?o } } LIMIT 10")
for row in results:
print(row)
# Strip to a single Graph if named graphs not needed
from pythinfer.utils import strip
g = strip(ds)
results = g.query("SELECT * WHERE { ?s a ?type }")
```
### Initialising a Project
A project can be initialised from a project specification file, or directly specified.
```python
from pythinfer import Project
# Load from a specific file
project = Project.from_yaml('path/to/pythinfer.yaml')
# Load from a discovered file (searches current and parent folders)
project = Project.discover()
# Specify directly in code
project = Project(
name='Project From Python',
focus=['data/file1.ttl'],
reference=['vocabs/ref_vocab1.ttl'],
)
```
All of these return a `Project` instance. The `from_yaml()` and `discover()` methods will raise a `FileNotFoundError` if no project file is found.
### Merging and Inference
Access to the data is through the merge or infer methods, which return the merged and inferred datasets respectively. The inferred data will be loaded directly from disk if the exports are up-to-date, otherwise inference will be performed.
```python
# Load the source files, returning the merged dataset.
ds_combined = project.merge()
# Load the source files and perform inference, returning the full resultant dataset.
ds_full = project.infer()
```
`merge()` and `infer()` return an `rdflib.Dataset` containing the merged and inferred data, including named graphs for provenance.
A helper function, `strip()`, is also provided; it returns an `rdflib.Graph` by stripping quads down to triples (i.e. merging all named graphs), which is commonly done to simplify downstream processing.
```python
from pythinfer.utils import strip
# Strip named graphs to triples
g_full = strip(ds_full)
```
## Project Specification
A 'Project' is the specification of which RDF files to process and configuration of how to process them, along with some metadata like a name.
Because we will likely have several files and they will be of different types, it is easiest to specify these in a configuration file (YAML or similar) instead of requiring everything on the command line.
The main function or CLI can then be pointed at the project file to easily switch between projects. This also allows the same sets and subsets of inputs to be combined in different ways with configuration.
### Project Specification Components
```yaml
name: (optional)
focus:
- <pattern>: <a pattern specifying a specific or set of files>
- ...
reference:
- <pattern>: <a pattern specifying a specific or set of 'reference' files>
- ...
output:
folder: <a path to the folder in which to put the output> (defaults to `<base_folder>/derived`)
```
#### Reference vs. Focus Data (was External vs. Internal)
Reference data is treated as ephemeral information used for inference and then discarded. Most commonly it is the vocabulary and data that is not maintained by the user, but whose axioms are assumed to hold true for the application. They are used to augment inference, but are not part of the data being analysed, and so they are not generally needed in the output.
Examples are OWL, RDFS, SKOS, and other standard vocabularies.
Synonyms for 'reference' here could be 'transient' or 'catalyst' or (as was the case) 'external'.
### Path Resolution
Paths in the project configuration file can be either **relative or absolute**.
**Relative paths** are resolved relative to the directory containing the project configuration file (`pythinfer.yaml`). This allows project configurations to remain portable - you can move the project folder around or share it with others, and relative paths will continue to work.
This means that the current working directory from which you execute pythinfer is irrelevant - as long as you point to the right project file, the paths will be resolved correctly.
**Absolute paths** are used as-is without modification.
#### Examples
If your project structure is:
```ascii
my_project/
├── pythinfer.yaml
├── data/
│ ├── file1.ttl
│ └── file2.ttl
└── vocabs/
└── schema.ttl
```
Your `pythinfer.yaml` can use relative paths:
```yaml
name: My Project
focus:
- data/file1.ttl
- data/file2.ttl
reference:
- vocabs/schema.ttl
```
These paths will be resolved relative to the directory containing `pythinfer.yaml`, so the configuration is portable.
You can also use absolute paths if needed:
```yaml
focus:
- /home/user/my_project/data/file1.ttl
```
### Project Selection
The project selection process is:
1. **User provided**: path to project file provided directly by user on command line, and if this file is not found, exit
    1. if no user-provided file, proceed to next step
1. **Discovery**: search in current folder and parent folders for project file, returning first found
    1. if no project file discovered, proceed to next step
1. **Creation**: generate a new project specification by searching in current folder for RDF files
    1. if no RDF files found, fail
    1. otherwise, create new project file and use immediately
### Project Discovery
If a project file is not explicitly specified, `pythinfer` should operate like `git` or `uv` - it should search for a `pythinfer.yaml` file in the current directory, and then in parent directories up to a limit.
The limit on ancestors should be:
1. don't traverse above `$HOME` (i.e. stop there) if it is in the ancestral line
1. don't go beyond 10 folders
1. don't traverse across file systems
### Project Creation
If a project is neither provided by the user nor discovered from the folder structure, a new project specification will be created automatically by scanning the current folder for RDF files. If RDF files are found, subsidiary files such as SPARQL queries for inference are also sought, and a new project specification is created. This new spec is saved to the current folder.
The user can also specifically request the creation of a new project file with the `create` command.
## Merging
Merging of multiple graphs should preserve the source, ideally using the named graph of a quad.
Merging should distinguish 2 different types of input:
1. *Reference* data: things like OWL, SKOS, RDFS, which are introduced for inference purposes, but are not maintained by the person using the library, and the axioms of which can generally be assumed to exist for any application.
- the term reference is meant from the perspective of the user / application, not to invoke the notion of 'master' vs. 'reference' data.
2. *Focus* data: ontologies being developed, vocabularies that are part of the current focus, and the data itself - all of this should always be preserved in the output, and is the 'focus' of the analysis.
## Inference
By default an efficient OWL rule subset is used, such as OWL-RL.
### Invalid inferences
Some inferences, at least in `owlrl`, may be invalid in RDF - for instance, a triple with a literal as subject. These should be removed during the inference process.
### Unwanted inferences
In addition to the actually invalid inferences, many inferences are banal. For instance, every resource is trivially `owl:sameAs` itself. This is semantically valid but useless to express as an explicit triple.
Several classes of these unwanted inferences can be removed by this package. Some can be removed per-triple during inference, others need to be removed by considering the whole graph.
#### Per-triple unwanted inferences
These are unwanted inferences that can be identified by looking at each triple in isolation. Examples:
1. triples with an empty string as object
2. redundant reflexives, such as `ex:thing owl:sameAs ex:thing`
3. many declarations relating to `owl:Thing`, e.g. `ex:thing rdf:type owl:Thing`
4. declarations that `owl:Nothing` is a subclass of another class (NB: the inverse is *not* unwanted as it indicates a contradiction)
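The per-triple checks above can be sketched as a simple predicate over triples represented as plain strings (illustrative only; the real implementation operates on rdflib terms):
```python
# Sketch of the per-triple 'unwanted inference' checks listed above,
# over (subject, predicate, object) strings. Illustrative only.
OWL = "http://www.w3.org/2002/07/owl#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
RDFS_SUBCLASS = "http://www.w3.org/2000/01/rdf-schema#subClassOf"

def is_unwanted(s: str, p: str, o: str) -> bool:
    if o == "":                                # empty string as object
        return True
    if s == o and p == OWL + "sameAs":         # redundant reflexive
        return True
    if p == RDF_TYPE and o == OWL + "Thing":   # x rdf:type owl:Thing
        return True
    # owl:Nothing as a subclass of X is banal; X subClassOf owl:Nothing
    # is deliberately NOT flagged, since it signals a contradiction.
    if s == OWL + "Nothing" and p == RDFS_SUBCLASS:
        return True
    return False

print(is_unwanted("ex:a", OWL + "sameAs", "ex:a"))  # True
print(is_unwanted("ex:a", "ex:knows", "ex:b"))      # False
```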
#### Whole-graph unwanted inferences
These are unwanted inferences that can only be identified by considering the whole graph. Examples:
1. Undeclared blank nodes
- blank nodes are often used for complex subClass or range or domain expressions
- where this occurs but the declaration of the blank node is not included in the final output, the blank node is useless and we are better off removing any triples that refer to it
- a good example of this is `skos:member` which uses blank nodes to express that the domain and range are the *union* of `skos:Concept` and `skos:Collection`
- for now, blank node 'declaration' is defined as any triple where the blank node is the subject
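Under that definition of 'declaration', removing triples that reference undeclared blank nodes is a whole-graph pass. A sketch over plain tuples (blank nodes modelled as strings starting with `_:`; hypothetical helper, not the package's actual code):

```python
def drop_undeclared_bnodes(triples):
    """Remove triples whose object is a blank node that never appears
    as the subject of any triple, i.e. is never 'declared' (sketch)."""
    def is_bnode(term):
        return isinstance(term, str) and term.startswith("_:")
    declared = {s for s, _, _ in triples if is_bnode(s)}
    return {t for t in triples if not is_bnode(t[2]) or t[2] in declared}
```

In the `skos:member` example, the union-expression blank node survives as long as its `owl:unionOf` declaration is present, while a dangling reference to an undeclared blank node is dropped.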
### Inference Process
Steps:
1. **Load and merge** all input data into a triplestore
- Maintain provenance of data by named graph
- Maintain list of which named graphs are 'reference'
- output: `merged`
- consequence: `current = merged`
2. **Generate reference inferences** by running RDFS/OWL-RL engine over 'reference' input data[^1]
- output: `inferences_reference_owl`
3. **Generate full inferences** by running RDFS/OWL-RL inference over all data so far[^1]
- output: `inferences_full_owl`
- consequence: `current += inferences_full_owl`
4. **Run heuristics**[^2] over all data
- output: `inferences_sparql` + `inferences_python`
- consequence: `current += inferences_sparql` + `inferences_python`
5. **Repeat steps 3 through 4** until no new triples are generated, or limit reached
- consequence: `combined_full = current`
6. **Subtract reference data and inferences** from the current graph[^4]
- consequence: `current -= (reference_data + inferences_reference_owl)`
- consequence: `combined_focus = current`
7. Subtract all 'unwanted' inferences from result[^3]
- consequence: `combined_wanted = current - inferences_unwanted`
[^1]: inference is backend dependent, and will include the removal of *invalid* triples that may result, e.g. from `owlrl`
[^2]: See below for heuristics.
[^3]: unwanted inferences are those that are semantically valid but not useful, see below
[^4]: this step logically applies, but in the `owlrl` implementation we can simply avoid including the reference_owl_inferences graph in the output, since `owlrl` will not generate inferences that already exist.
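The control flow of these steps can be sketched as a fixpoint loop over plain sets of `(s, p, o)` tuples. This is an illustration of the process, not the package's implementation; `owl_rules` and `heuristics` are hypothetical callables standing in for the OWL-RL engine and the heuristic rules:

```python
def infer_to_fixpoint(focus, reference, owl_rules, heuristics, max_rounds=10):
    """Sketch of the inference process above. `owl_rules` / `heuristics`
    map a triple-set to the set of triples they would infer from it."""
    inferences_reference = owl_rules(reference)         # step 2
    current = set(focus) | set(reference)               # step 1
    for _ in range(max_rounds):                         # step 5 (limit)
        new = owl_rules(current) | heuristics(current)  # steps 3-4
        if new <= current:                              # fixpoint reached
            break
        current |= new
    # step 6: subtract reference data and its own inferences
    return current - set(reference) - inferences_reference
```

With a toy subclass-propagation rule, the focus instance picks up its superclass type while the reference axiom itself stays out of the result.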
### Backends
#### `rdflib` and `owlrl`
With `rdflib`, the `owlrl` package should be used.
This package has some foibles: it generates a slew of unnecessary triples, even when run over an empty graph. The easiest way to remove these is to first run inference over the reference vocabularies alone, then combine them with the user-provided vocabularies and data, run inference again, and finally subtract the original reference-vocabulary inferences from the result. Depending on the application, the reference vocabularies themselves can also be removed.
#### `pyoxigraph`
No experience with this yet.
#### Jena (`riot` etc.)
Because Jena provides a reference implementation, it might be useful to be able to call out to the Jena suite of command line utilities (like `riot`) for manipulation of the graphs (including inference).
#### Heuristics (SPARQL, Python, etc.)
Some inferences are difficult or impossible to express in OWL-RL. This will especially be the case for very project-specific inferences which are trivial to express procedurally but complicated in a logical declaration.
Therefore we want to support specification of 'heuristics' in other formalisms, like SPARQL CONSTRUCT queries and Python functions.
The order of application of these heuristics may matter - for instance, a SPARQL CONSTRUCT may create triples that are then used by a Python heuristic, or the former may require the full type hierarchy to be explicit from OWL-RL inference.
Thus, we apply heuristics and OWL-RL inference in alternating steps until no new triples are generated.
## Data Structures
### DatasetView
Intended to give a restricted (filtered) view on a Dataset by only providing access to explicitly selected graphs, enabling easy handling of a subset of graphs without copying data to new graphs.
Specifications:
1. A DatasetView may be read/write or readonly.
1. Graphs MUST be explicitly included to be visible, otherwise they are excluded (and invisible).
1. Attempted access to excluded graphs MUST raise a PermissionError.
1. Any mechanism to retrieve triples (e.g.: iterating the view itself, or using `triples()` or using `quads()`) that does not explicitly specify a named graph (e.g. `triples()` called without a `context` argument) MUST return triples from all included graphs, not just the default graph.
1. Default graph MUST therefore be excluded if the underlying Dataset has `default_union` set (because otherwise this would counterintuitively render triples from excluded graphs visible to the view).
1. A DatasetView SHOULD otherwise operate in exactly the same way as the underlying Dataset.
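The access rules can be illustrated with a toy analogue over a plain `dict` of named graphs. This is not the real rdflib-backed DatasetView, just a sketch of specifications 2-4 (class and method names are hypothetical):

```python
class ViewSketch:
    """Toy DatasetView analogue over {graph_name: set_of_triples}."""

    def __init__(self, dataset, include):
        self._ds = dataset
        self._include = frozenset(include)

    def graph(self, name):
        # spec 3: excluded graphs are invisible and raise on access
        if name not in self._include:
            raise PermissionError(f"graph {name!r} is not included in this view")
        return self._ds.setdefault(name, set())

    def triples(self):
        # spec 4: no graph specified means the union of *included* graphs,
        # never the whole dataset and never just the default graph
        for name in self._include:
            yield from self._ds.get(name, set())
```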
#### Inclusion and Exclusion of Graphs
`rdflib`'s handling of access, addition, and deletion of named graphs has some unintuitive nuance. See [this issue](https://github.com/robertmuil/rdflib/issues/18) for the most relevant example.
For the View, we want to deviate as little as possible from the underlying APIs and expectations, which unfortunately means taking on these unintuitive behaviours.
So, there are *no* methods for including or excluding a graph once a view is created, because the behaviour of such methods would be very difficult to define. If the set of included graphs needs to change, simply create a new DatasetView; this is lightweight because no copying is involved.
#### Adding and removing content
Adding a new graph through the View is only possible if its identifier was in the list of included graphs at construction, because the View only allows access to included graphs. An identifier may appear in the included list without having any corresponding triples in the underlying triplestore; this is allowed, and subsequently adding a triple against that graph identifier is de facto the 'addition' of a graph to the store.
Removing a graph likewise behaves exactly as it would on the underlying Dataset, unless the graph's identifier is not in the inclusion list, in which case a `PermissionError` is raised. In either case, the graph remains in the inclusion list.
Adding and removing triples is possible (unless the View is set to read-only, which may not be implemented) as long as the triples are added to a graph in the inclusion list.
Adding or removing a triple without specifying a graph targets the default graph, and the same check applies: if the default graph is in the inclusion list this is allowed; otherwise a `PermissionError` is raised.
This is all following the principle of altering the API of `Dataset` as little as possible.
## Real-World Usage
The `example_projects` folder contains contrived examples, but this has also been run over real data:
1. [foafPub](https://ebiquity.umbc.edu/resource/html/id/82/foafPub-dataset)
1. takes a while, but successfully completes
2. only infers 7 new useful triples, all deriving from an `owl:sameAs` link to an otherwise completely unconnected local id (treated as a blank node)
1. [starwars](https://platform.ontotext.com/semantic-objects/_downloads/2043955fe25b183f32a7f6b6ba61d5c2/SWAPI-WD-data.ttl)
1. successfully completes, reasonable time
2. infers 175 new triples from the basic starwars.ttl file, mainly that characters are of type `voc:Mammal` and `voc:Sentient` or `voc:Artificial`, etc.
1. also funnily generates `xsd:decimal owl:disjointWith xsd:string`
3. including `summary.ttl` doesn't change the inferences, which I think is correct.
## Next Steps
1. implement pattern support for input files
1. check this handles non-turtle input files ok
1. allow Python-coded inference rules (e.g. for path-traversal or network analytics)
- also use of text / linguistic analysis would be a good motivation (e.g. infer that two projects are related if they share similar topics based on text analysis of abstracts)
1. implement base_folder support - perhaps more generally support for specification of any folder variables...
1. consider using a proper config language like Dhall instead of yaml
1. check and raise error or at least warning if default_union is set in underlying Dataset of DatasetView
1. document and/or fix serialisation: canonical longturtle output is not great in how it orders things, so we might unfortunately need to call out to riot.
1. add better output support for ASK query
1. add option to remove project name from named graphs, for easier specification:
1. e.g. `<urn:pythinfer:inferences:owl>` which is easy to remember and specify on command-line.
| text/markdown | null | Robert Muil <robertmuil@gmail.com> | null | null | null | Inference, Linked Data, OWL, RDF, SPARQL, Semantic Web | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"owlrl>=7.1.4",
"pydantic>=2.12.4",
"pyyaml>=6.0.3",
"rdflib>=7.4.0",
"typer>=0.20.0"
] | [] | [] | [] | [] | uv/0.6.3 | 2026-02-20T20:58:55.499210 | pythinfer-0.5.1.tar.gz | 1,046,846 | 60/a0/a7a03e5dd24c5d6adf67080261e9b6b4aca6a3ed6923a3c6cacec998a76d/pythinfer-0.5.1.tar.gz | source | sdist | null | false | fddd564e0151ae8bd430fc99399aa567 | a0f868c0a0515762b6b774efda813bda063fb7ca41b2e79abf4bff4b7b180d97 | 60a0a7a03e5dd24c5d6adf67080261e9b6b4aca6a3ed6923a3c6cacec998a76d | Apache-2.0 | [
"LICENSE.txt"
] | 188 |
2.4 | labwatch | 0.6.10 | Homelab monitoring CLI — system resources, Docker, systemd, HTTP, DNS, VPNs, and more with push notifications via ntfy | ```
██╗ █████╗ ██████╗ ██╗ ██╗ █████╗ ████████╗ ██████╗██╗ ██╗
██║ ██╔══██╗██╔══██╗██║ ██║██╔══██╗╚══██╔══╝██╔════╝██║ ██║
██║ ███████║██████╔╝██║ █╗ ██║███████║ ██║ ██║ ███████║
██║ ██╔══██║██╔══██╗██║███╗██║██╔══██║ ██║ ██║ ██╔══██║
███████╗██║ ██║██████╔╝╚███╔███╔╝██║ ██║ ██║ ╚██████╗██║ ██║
╚══════╝╚═╝ ╚═╝╚═════╝ ╚══╝╚══╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝╚═╝ ╚═╝
```
# labwatch
A CLI tool for monitoring your homelab. Tracks system resources, Docker containers, systemd services, VPNs, Nginx, DNS, network interfaces, and more. Schedules checks with cron, sends push notifications on failures via [ntfy](https://ntfy.sh), and can automate Docker Compose image updates and system package upgrades.
## Why labwatch?
Homelabs tend to grow into a sprawl of containers, services, and network configs. Uptime dashboards are great, but they're another thing to host and maintain. labwatch takes a different approach: a single CLI that lives on your server, runs from cron, and pushes alerts to your phone when something breaks.
- No web UI to host. It writes to stdout and pushes to ntfy.
- Cron-native. Schedule checks, Docker image updates, and system package upgrades with built-in cron management.
- Config-driven. One YAML file defines everything to monitor.
- Guided setup. The `labwatch init` wizard walks you through every option with detailed explanations, auto-detects Docker containers and systemd services, tests your notifications, and installs your cron schedule.
- Smart notifications. Deduplicates repeated alerts and sends recovery notices when things come back.
- Hardened for unattended use. File lock prevents overlapping runs, rotating log file provides forensic history, dead man's switch pings an external service so you know labwatch itself is still running.
- Extensible. Plugin architecture for checks and notification backends.
## What It Monitors
| Module | What it checks |
|--------|---------------|
| **system** | Disk usage per partition, RAM usage, CPU load. Alerts at configurable warning/critical thresholds. |
| **docker** | Pings the Docker daemon, reports every container's status. Running = OK, paused/restarting = warning, exited/dead = critical. |
| **http** | Makes HTTP requests to your URLs. 2xx/3xx = OK, 4xx/5xx/timeout/refused = critical. |
| **nginx** | Verifies Nginx is running (systemd/pgrep or Docker), validates config with `nginx -t`, checks endpoint URLs. |
| **systemd** | Runs `systemctl is-active` per unit. Only "active" is healthy — inactive, failed, activating, etc. all trigger alerts. Auto-discovers running services during setup. |
| **dns** | DNS lookups via `getaddrinfo`. Alerts if resolution fails. |
| **certs** | TLS certificate expiry monitoring. Connects to port 443, checks the certificate expiry date, and alerts at configurable warning/critical day thresholds. Catches silent certbot/ACME renewal failures. |
| **ping** | Single ICMP ping per host with round-trip time. Alerts if unreachable. |
| **network** | Per-interface: link state (UP/DOWN), IPv4 address assigned, TX bytes transmitted. Good for VPN tunnels and WireGuard. |
| **process** | `pgrep -x` (or tasklist on Windows) to verify processes are running by exact name. |
| **home_assistant** | HA `/api/` health, optional external URL check, optional Google Home cloud API, authenticated checks with long-lived token. |
| **updates** | Counts pending package updates (apt/dnf/yum). Warn at N+ pending, critical at M+. |
| **smart** | S.M.A.R.T. disk health for HDDs, SSDs, and NVMe via smartctl. Raspberry Pi SD/eMMC wear via sysfs. Alerts on failing health, high temps, excessive wear, reallocated sectors. |
| **command** | Run any shell command. Exit 0 = OK, non-zero = failure. Optional output string matching. |
## Install
Requires **Python 3.8+**.
### Recommended: pipx (isolated CLI install)
[pipx](https://pipx.pypa.io/) installs CLI tools in their own virtual environment so they don't pollute your system Python. It's the cleanest way to install labwatch.
```bash
# Debian 12+ / DietPi / Raspberry Pi OS (Bookworm)
sudo apt install pipx python3-dev gcc
pipx ensurepath # adds ~/.local/bin to your PATH
source ~/.bashrc # or open a new shell
# Install labwatch
pipx install labwatch
```
> **Why `python3-dev gcc`?** labwatch uses [psutil](https://github.com/giampaolo/psutil) for system monitoring (disk, memory, CPU), which has a C extension that needs to be compiled on ARM. These packages are only needed at install time.
> **Older systems** (Debian 11, Ubuntu 22.04 and earlier) where `apt install pipx` isn't available:
> ```bash
> pip install pipx
> pipx ensurepath
> ```
### Alternative: pip with virtual environment
Modern Debian-based systems (Bookworm+) block `pip install` outside a venv ([PEP 668](https://peps.python.org/pep-0668/)). If you prefer pip over pipx:
```bash
python3 -m venv ~/.local/share/labwatch-venv
~/.local/share/labwatch-venv/bin/pip install labwatch
# Symlink into PATH so you can just type "labwatch"
ln -s ~/.local/share/labwatch-venv/bin/labwatch ~/.local/bin/labwatch
```
> **PATH note (DietPi / Raspberry Pi / Debian):** `~/.local/bin` may not be in your PATH by default. If `labwatch` is "command not found" after install, add this line to `~/.bashrc` and open a new shell:
> ```bash
> export PATH="$HOME/.local/bin:$PATH"
> ```
> This is not needed with pipx (which runs `ensurepath` for you).
### Updating
```bash
# Self-update from anywhere
labwatch update
# Or manually via pipx / pip
pipx upgrade labwatch
# or (if installed in a venv)
~/.local/share/labwatch-venv/bin/pip install --upgrade labwatch
```
### Development install
```bash
git clone https://github.com/rbretschneider/labwatch_cli.git
cd labwatch_cli/cli
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[test]"
```
## Quick Start
```bash
# 1. Interactive setup
# Walks you through every check module with detailed descriptions,
# auto-detects Docker containers and systemd services, tests your
# notifications, and sets up your cron schedule — all in one command.
labwatch init
# 2. Run all enabled checks (happens automatically once cron is set up)
labwatch check
# 3. Run specific check modules
labwatch check --only network,dns
```
That's it. The wizard handles config generation, notification testing, and cron scheduling. You don't need to manually edit crontab or config files unless you want to.
## The Setup Wizard
`labwatch init` is the primary way to configure labwatch. It walks through every section with beginner-friendly explanations — no assumptions about what you already know:
1. **Module selection** — the fun part first. A checkbox menu of all 16 modules (14 monitoring + Docker auto-updates + system updates) with short descriptions. Pick what matches your setup; skip the rest. You can always come back.
2. **Hostname** — a friendly name for this machine (shows up in alerts so you know which server is talking)
3. **Notifications (ntfy)** — explains what ntfy is, why you want push alerts, and walks through server/topic setup
4. **Module details** — for each module you selected, configures thresholds, endpoints, devices, etc. Systemd monitoring auto-discovers running services and highlights 70+ known homelab services (Pi-hole, WireGuard, CUPS, Tailscale, Plex, etc.) so you can pick from a list instead of typing unit names from memory.
5. **Docker auto-updates** — auto-detects Compose projects from running containers via Docker labels, or scans a base directory for compose files
6. **System updates** — configures automated `apt-get upgrade` or `dist-upgrade` for Debian/DietPi, with optional autoremove and auto-reboot
7. **Summary** — shows what you enabled/disabled, your notification target, and auto-update directories
8. **Notification test** — sends a test alert to verify your ntfy setup works before you rely on it
9. **Scheduling** — explains what cron is, shows a recommended schedule grouped by frequency, and offers three options:
- **Accept** the recommended schedule (installs cron entries immediately)
- **Customize** per check group (choose from sensible frequency options like every 5 min, hourly, daily, weekly)
- **None** (skip scheduling, print the manual commands for later)
Re-run `labwatch init` to edit your config — existing values become defaults. Use `labwatch init --only http` to edit a single section.
Use `--config /tmp/test.yaml` to try it without overwriting your real config.
## Commands
| Command | Description |
|---------|-------------|
| `labwatch init` | Interactive wizard — config, notifications, scheduling |
| `labwatch init --only docker,http` | Re-run wizard for specific sections only |
| `labwatch check` | Run all enabled checks, notify on failures |
| `labwatch check --only system,docker` | Run specific check modules |
| `labwatch check --json` | JSON output for scripting |
| `labwatch check --no-notify` | Run checks without sending notifications |
| `labwatch discover` | List Docker containers, suggest HTTP endpoints |
| `labwatch discover --systemd` | List systemd services, highlight known homelab services |
| `labwatch docker-update` | Pull latest Docker images and restart changed services |
| `labwatch docker-update --dry-run` | Show what would be updated without pulling |
| `labwatch docker-update --force` | Update even version-pinned tags |
| `labwatch system-update` | Run apt-get upgrade on Debian/DietPi systems |
| `labwatch system-update --dry-run` | Show upgradable packages without installing |
| `labwatch notify "Title" "Message"` | Send a one-off push notification |
| `labwatch summarize` | Show config summary as a Rich tree |
| `labwatch validate` | Validate config file |
| `labwatch edit` | Open config in your default editor |
| `labwatch modules` | List all modules with descriptions and on/off status |
| `labwatch enable docker` | Enable a check module |
| `labwatch disable docker` | Disable a check module |
| `labwatch doctor` | Check installation health and connectivity |
| `labwatch schedule check --every 5m` | Schedule all checks to cron |
| `labwatch schedule check --only network --every 1m` | Schedule specific modules at their own interval |
| `labwatch schedule docker-update --every 1d` | Add Docker update schedule to cron |
| `labwatch schedule system-update --every 1w` | Add system update schedule to cron |
| `labwatch schedule list` | Show all labwatch cron entries |
| `labwatch schedule remove` | Remove all labwatch cron entries |
| `labwatch schedule remove --only check` | Remove only check entries |
| `labwatch motd` | Plain-text login summary for SSH MOTD |
| `labwatch motd --only updates` | MOTD for specific modules only |
| `labwatch completion bash` | Print shell completion script (bash/zsh/fish) |
| `labwatch update` | Update labwatch to the latest PyPI release |
| `labwatch version` | Show version |
**Global options:** `--config PATH`, `--no-color`, `--verbose`, `--quiet`
### Exit Codes
`labwatch check` returns meaningful exit codes for scripting:
| Code | Meaning |
|------|---------|
| 0 | All checks passed (OK) |
| 1 | At least one WARNING |
| 2 | At least one CRITICAL |
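The mapping is simply "worst status wins". A sketch of that logic (an illustration, not labwatch's actual source):

```python
def exit_code(statuses):
    """Map per-check statuses to the documented exit codes:
    0 = all OK, 1 = worst is WARNING, 2 = any CRITICAL (sketch)."""
    order = {"ok": 0, "warning": 1, "critical": 2}
    return max((order[s] for s in statuses), default=0)
```

This is what makes `labwatch check && some-follow-up` usable in scripts.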
## Configuration
Config is a single YAML file. `labwatch init` creates it for you, and the wizard shows the full path at the start and end of setup. You can edit it with any text editor or re-run the wizard.
**Where is it?**
| OS | Path |
|----|------|
| Linux | `/home/yourusername/.config/labwatch/config.yaml` |
| macOS | `/Users/yourusername/.config/labwatch/config.yaml` |
| Windows | `C:\Users\yourusername\AppData\Roaming\labwatch\config.yaml` |
> **Note:** On Linux/macOS, `.config` is a hidden directory (the dot prefix hides it from `ls` by default). Use `ls -a` to see it, or just open the file directly: `nano ~/.config/labwatch/config.yaml`
Run `labwatch summarize` at any time to see the resolved path and a tree view of what's configured:
```
my-server
├── Notifications enabled
│ ├── ntfy: https://ntfy.sh/homelab_alerts
│ └── min severity: warning
├── Monitoring (8 modules)
│ ├── System
│ │ ├── disk: warn 80% / crit 90%
│ │ ├── memory: warn 80% / crit 90%
│ │ └── cpu: warn 80% / crit 95%
│ ├── Docker
│ │ ├── watching: all containers
│ │ └── alert on stopped containers
│ ├── HTTP Endpoints
│ │ ├── Grafana: http://localhost:3000 (timeout 10s)
│ │ └── Plex: http://localhost:32400/identity (timeout 5s)
│ ├── DNS Resolution
│ │ ├── google.com
│ │ └── github.com
│ ├── TLS Certificates
│ │ ├── mydomain.com
│ │ └── warn at 14 days / crit at 7 days
│ ├── Ping
│ │ ├── 8.8.8.8
│ │ ├── 1.1.1.1
│ │ └── timeout: 5s
│ ├── Systemd Units
│ │ ├── docker (critical)
│ │ └── wg-quick@wg0 (critical)
│ └── Package Updates
│ ├── warn at 1+ pending
│ └── critical at 50+ pending
├── Disabled: Nginx, S.M.A.R.T., Network Interfaces, Home Assistant, Processes, Custom Commands
├── Docker auto-updates (2 directories)
│ ├── /home/docker/plex
│ └── /home/docker/grafana
└── System updates (apt-get upgrade)
├── mode: safe
└── autoremove: yes
```
Run `labwatch init` to regenerate it interactively, or edit by hand:
```yaml
hostname: "my-server"
notifications:
min_severity: "warning" # only notify on warning or critical
heartbeat_url: "" # dead man's switch — see "Heartbeat" section below
ntfy:
enabled: true
server: "https://ntfy.sh"
topic: "homelab_alerts" # or use ${NTFY_TOPIC} for env var
checks:
system:
enabled: true
thresholds:
disk_warning: 80
disk_critical: 90
memory_warning: 80
memory_critical: 90
cpu_warning: 80
cpu_critical: 95
docker:
enabled: true
watch_stopped: true
containers: [] # empty = monitor all
http:
enabled: true
endpoints:
- name: "Grafana"
url: "http://localhost:3000"
timeout: 10
- name: "Plex"
url: "http://localhost:32400/identity"
nginx:
enabled: true
container: "" # empty = host-mode (systemd/apt)
endpoints:
- "https://mydomain.com"
systemd:
enabled: true
units:
- "docker"
- name: "wg-quick@wg0"
severity: "critical"
network:
enabled: true
interfaces:
- name: "tun0"
severity: "critical"
- name: "wg0"
severity: "warning"
dns:
enabled: true
domains:
- "google.com"
- "github.com"
certs:
enabled: true
domains:
- "mydomain.com"
- "nextcloud.example.org"
warn_days: 14 # warning when cert expires within 14 days
critical_days: 7 # critical when cert expires within 7 days
ping:
enabled: true
hosts:
- "8.8.8.8"
- "1.1.1.1"
timeout: 5
home_assistant:
enabled: false
url: "http://localhost:8123"
external_url: ""
token: "${HA_TOKEN}" # env var — keeps secrets out of YAML
google_home: true
process:
enabled: false
names:
- "redis-server"
updates:
enabled: true
warning_threshold: 1 # warn if any updates pending
critical_threshold: 50 # critical if 50+ pending
smart:
enabled: true
temp_warning: 50
temp_critical: 60
wear_warning: 80
wear_critical: 90
devices: [] # empty = auto-detect all drives
command:
enabled: false
commands:
- name: "custom health check"
command: "/usr/local/bin/my-check.sh"
expect_exit: 0
severity: "warning"
update:
compose_dirs:
- "/home/docker/plex"
- "/home/docker/grafana"
system:
enabled: true
mode: "safe" # "safe" = apt-get upgrade, "full" = apt-get dist-upgrade
autoremove: true
auto_reboot: false # set true to auto-reboot after kernel updates
```
### Environment Variables in Config
Config values can reference environment variables with `${VAR}` syntax. This keeps secrets out of the YAML file:
```yaml
home_assistant:
token: "${HA_TOKEN}"
notifications:
ntfy:
topic: "${NTFY_TOPIC}"
```
Unset variables are left as-is (not expanded). Use `labwatch doctor` to check for unexpanded variables.
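The expansion behaviour described above (substitute set variables, leave unset ones untouched) can be sketched in a few lines of stdlib Python. This is an illustration of the documented semantics, not labwatch's actual code:

```python
import os
import re

_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env(value):
    """Expand ${VAR} references from the environment.
    Unset variables are left as-is, not replaced with an empty string."""
    return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)
```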
### Quick Enable/Disable
Toggle check modules without editing YAML or re-running the wizard:
```bash
labwatch enable dns
labwatch disable docker
```
## Scheduling with Cron
labwatch is not a daemon — it runs once and exits. To monitor continuously, it needs a cron job. The `labwatch init` wizard can set this up for you, or you can manage it manually with `labwatch schedule`.
labwatch manages its own cron entries so you don't have to edit crontab by hand. All labwatch entries are grouped inside a clearly marked block so you can tell them apart from your own cron jobs:
```
# your existing cron jobs stay untouched up here
0 * * * * /usr/bin/backup.sh
# ── LABWATCH ENTRIES (generated by labwatch init) ──
*/1 * * * * /usr/bin/labwatch check --only network # labwatch:check:network
*/5 * * * * /usr/bin/labwatch check --only dns,http,nginx,ping # labwatch:check:dns,http,nginx,ping
*/30 * * * * /usr/bin/labwatch check --only docker,system # labwatch:check:docker,system
# ── END LABWATCH ENTRIES ──
```
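Managing such a marked block is plain text manipulation: replace everything between the markers, leave every other line alone. A sketch of the idea (not labwatch's actual implementation; function name is hypothetical):

```python
BEGIN = "# ── LABWATCH ENTRIES (generated by labwatch init) ──"
END = "# ── END LABWATCH ENTRIES ──"

def replace_block(crontab, entries):
    """Rewrite the managed block inside a crontab string, keeping all
    user-owned lines untouched (sketch)."""
    lines = crontab.splitlines()
    if BEGIN in lines and END in lines:
        i, j = lines.index(BEGIN), lines.index(END)
        lines = lines[:i] + lines[j + 1:]  # strip the old managed block
    return "\n".join(lines + [BEGIN, *entries, END]) + "\n"
```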
Use `--only` to run different check modules at different frequencies. Each `--only` combination gets its own cron entry, so they all coexist:
```bash
# Network interface checks every minute (VPN tunnels, WireGuard)
labwatch schedule check --only network --every 1m
# Service reachability every 5 minutes
labwatch schedule check --only http,dns,ping,nginx --every 5m
# System resources and Docker every 30 minutes
labwatch schedule check --only docker,system --every 30m
# Package updates daily
labwatch schedule check --only updates --every 1d
# Docker Compose image updates weekly
labwatch schedule docker-update --every 1w
# System package upgrades weekly
labwatch schedule system-update --every 1w
# See what's scheduled
labwatch schedule list
# Remove all labwatch cron entries
labwatch schedule remove
# Remove only check entries (keep update schedule)
labwatch schedule remove --only check
```
Supported intervals: `1m`–`59m`, `1h`–`23h`, `1d`, `1w`.
The `--quiet` flag suppresses output when all checks pass, following the cron convention where silence means success:
```bash
# In cron: only produces output (and cron email) when something fails
labwatch -q check
```
> **Windows:** Cron is not available. The wizard will print the equivalent commands for you to set up in Task Scheduler.
## Smart Notifications
labwatch sends alerts via [ntfy](https://ntfy.sh) when checks fail. ntfy is a simple push notification service — install the ntfy app on your phone, subscribe to your topic, and you'll get alerts when something breaks.
### Deduplication and Recovery
labwatch tracks the state of each check between runs. This means:
- **No repeated alerts** — if the same check fails the same way on consecutive runs, you only get notified once
- **Escalation alerts** — if a check goes from WARNING to CRITICAL, you get a new alert
- **Recovery alerts** — when a previously failing check returns to OK, you get a `[hostname] RECOVERED` notification
State is stored in `state.json` next to the config file.
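The decision rules above amount to comparing the previous run's severities with the current ones. A sketch of that comparison (an illustration of the documented behaviour, not labwatch's source; de-escalations are assumed to stay silent until recovery):

```python
def decide_notifications(previous, current):
    """Given {check: severity} maps for the last run and this run,
    return the alerts to send (sketch of dedup/escalation/recovery)."""
    order = {"ok": 0, "warning": 1, "critical": 2}
    alerts = []
    for check, sev in current.items():
        prev = previous.get(check, "ok")
        if sev != "ok" and order[sev] > order[prev]:
            alerts.append((check, sev))          # new failure or escalation
        elif sev == "ok" and prev != "ok":
            alerts.append((check, "recovered"))  # recovery notice
    return alerts
```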
### Severity and Priority
Severity maps to ntfy priority:
| Severity | ntfy Priority |
|----------|--------------|
| CRITICAL | Urgent |
| WARNING | High |
| OK | Low |
Set the minimum severity threshold to filter out noise:
```yaml
notifications:
min_severity: "warning" # ignore OK results
ntfy:
enabled: true
server: "https://ntfy.sh" # or your self-hosted instance
topic: "homelab_alerts"
```
Test your notifications at any time:
```bash
labwatch notify "Test" "Hello from labwatch"
```
## Heartbeat (Dead Man's Switch)
labwatch can ping an external monitoring service after every check run. If the pings stop arriving, the external service alerts you — catching the case where labwatch itself breaks (cron deleted, Python env corrupted, permissions changed, etc.).
This works with [Healthchecks.io](https://healthchecks.io) (free tier), [Uptime Kuma](https://github.com/louislam/uptime-kuma), or any service that accepts HTTP GET pings.
```yaml
notifications:
heartbeat_url: "https://hc-ping.com/your-uuid-here"
```
- Pinged after every `labwatch check` run
- Appends `/fail` to the URL when checks have failures (Healthchecks.io convention)
- 10-second timeout; never crashes monitoring if the ping fails
- `labwatch doctor` verifies the URL is reachable
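The mechanism is a plain HTTP GET. A stdlib sketch of the documented behaviour (illustrative only, not labwatch's code; the function name is hypothetical):

```python
import urllib.request

def ping_heartbeat(base_url, failed):
    """Ping a dead man's switch after a check run (sketch).
    Appends /fail on failures per the Healthchecks.io convention and
    never lets a ping error break the monitoring run."""
    url = base_url.rstrip("/") + ("/fail" if failed else "")
    try:
        urllib.request.urlopen(url, timeout=10)
    except OSError:
        pass  # network errors are swallowed: monitoring must keep working
    return url
```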
## Unattended Cron Hardening
When running from cron, labwatch includes three safety features that require no configuration:
**File lock** — prevents overlapping runs. If a previous `labwatch check` is still running when cron fires again, the new instance exits silently. The lock auto-releases on crash. Lock file: `~/.config/labwatch/labwatch.lock`.
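The "auto-releases on crash" property comes for free with OS-level advisory locks, which the kernel drops when the process exits. A stdlib sketch of that pattern (Unix-only `fcntl.flock`; an illustration, not labwatch's code):

```python
import fcntl

def acquire_lock(path):
    """Take an exclusive, non-blocking flock (sketch of the overlap guard).
    Returns the open file on success, None if another run holds the lock.
    The OS releases the lock automatically when the process dies."""
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()  # someone else is running: exit silently
        return None
```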
**Rotating log** — every run logs to `~/.config/labwatch/labwatch.log`. Max 512KB per file with 1 backup = 1MB total on disk. Safe for Raspberry Pi SD cards.
```
2026-02-19 14:30:00 INFO check started
2026-02-19 14:30:02 INFO check complete: 8 ok, 1 failed, worst=warning
2026-02-19 14:30:02 INFO notifications sent for 1 failure(s)
2026-02-19 14:30:03 INFO heartbeat pinged
```
**Dead man's switch** — see the Heartbeat section above.
These features are active for `labwatch check`, `labwatch docker-update`, and `labwatch system-update`.
## Service Discovery
### Docker
`labwatch discover` scans your running Docker containers and suggests HTTP endpoints for 23+ known services (Plex, Grafana, Home Assistant, Portainer, Jellyfin, Sonarr, Radarr, Pi-hole, and more). The `labwatch init` wizard uses this automatically when configuring HTTP checks.
```bash
labwatch discover
```
### Systemd
`labwatch discover --systemd` lists all running systemd services and highlights 70+ recognized homelab services — Pi-hole, WireGuard, CUPS, Tailscale, Samba, Plex, Docker, Grafana, and many more. The `labwatch init` wizard uses this to present a pick-list instead of requiring you to type unit names from memory.
```bash
labwatch discover --systemd
```
## Health Check
`labwatch doctor` verifies your installation is working correctly:
```bash
labwatch doctor
```
It checks:
- Config file exists and is valid
- File permissions on the config (warns if too open)
- Unexpanded `${VAR}` references (env vars not set)
- ntfy server reachability
- Heartbeat URL reachability (if configured)
- Docker daemon accessibility
- Required system tools (`systemctl`, `pgrep`, `ping`, `ip`) for enabled checks
- Log directory is writable
- Cron entries installed
- Cron daemon is running
- labwatch binary path in each cron entry still exists on disk
- `sudo` NOPASSWD is configured for privileged cron entries (e.g. system-update)
## Shell Completion
Enable tab completion for bash, zsh, or fish:
```bash
# Bash
labwatch completion bash >> ~/.bashrc
# Zsh
labwatch completion zsh >> ~/.zshrc
# Fish
labwatch completion fish > ~/.config/fish/completions/labwatch.fish
```
## Login MOTD
`labwatch motd` prints a plain-text status summary meant for SSH login. Drop a script into `/etc/profile.d/` and you'll see pending updates, failed services, or disk warnings every time you log in.
```bash
# /etc/profile.d/labwatch.sh
labwatch motd 2>/dev/null
```
Or use `--only` to keep it focused:
```bash
# Just show pending updates and VPN status on login
labwatch motd --only updates,network 2>/dev/null
```
Example output:
```
--- labwatch | homelab ---
[+] disk:/: 45.2% used (112.3GB free of 234.5GB)
[!] updates: 12 pending updates
[+] network:wg0:link: UP
[X] network:tun0:link: DOWN
```
The output is plain text with no colors or Rich formatting, so it works in any terminal and won't break non-interactive shells.
## System Package Updates
`labwatch system-update` runs `apt-get update && apt-get upgrade -y` (or `dist-upgrade`) on Debian-based systems. It's designed to keep your servers fully patched without manual SSH sessions.
```bash
# Preview what would be upgraded
labwatch system-update --dry-run
# Run the upgrade (requires root)
sudo labwatch system-update
# Schedule weekly upgrades via cron
labwatch schedule system-update --every 1w
```
**Modes:**
- `safe` (default) — runs `apt-get upgrade`, which never removes packages or installs new dependencies
- `full` — runs `apt-get dist-upgrade`, which may remove or install packages as needed for major upgrades
**Options:**
- `autoremove` — automatically clean up unused packages after upgrade (default: on)
- `auto_reboot` — schedule `shutdown -r +1` if a kernel update requires a reboot (default: off). The 1-minute delay lets the notification send first.
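The modes and options above presumably map onto the YAML config that `labwatch init` writes. A hypothetical sketch of that section — the key names here are assumptions based on the options described, not verified against the actual schema:

```yaml
# Hypothetical system-update config section; key names are illustrative.
system_update:
  mode: safe        # "safe" = apt-get upgrade, "full" = apt-get dist-upgrade
  autoremove: true  # clean up unused packages after the upgrade
  auto_reboot: false  # if true, schedule `shutdown -r +1` after kernel updates
```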
**Root privileges:** System updates require root to run `apt-get`. If you're not running as root, the `labwatch init` wizard detects this and shows you exactly how to set up passwordless sudo — a single sudoers line that grants the minimum permission needed. The wizard also automatically adds `sudo` to the cron entry so scheduled updates run correctly.
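For orientation, a sudoers line of the kind the wizard describes might look like the following — the binary path and command are assumptions for illustration; use exactly what the wizard prints for your system:

```
# Illustrative only: allow passwordless sudo for the system-update subcommand.
youruser ALL=(root) NOPASSWD: /usr/local/bin/labwatch system-update
```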
Notifications are sent via ntfy on completion, with package counts, error status, and reboot status.
## Project Goals
- Simple to install and run. `pipx install labwatch` and `labwatch init`, nothing else required.
- Guided setup. The wizard explains everything and handles config, notification testing, and scheduling in one pass.
- Cron-first scheduling. Manage monitoring schedules without external tools.
- Cover the common homelab stack: system resources, Docker, systemd, VPNs, Nginx, DNS, HTTP endpoints.
- Granular scheduling. Different check modules can run at different intervals (VPN every minute, Docker every 30 minutes, etc.).
- Separate concerns. System package upgrades, Docker image updates, and monitoring checks all run on independent schedules.
- Automate Docker Compose image updates with auto-detection of Compose projects.
- Automate system package upgrades with configurable mode, autoremove, and auto-reboot.
- Smart notifications via ntfy — deduplicated, with recovery alerts.
- Extensible. Add custom checks via the command module or write new check plugins.
- Scriptable. JSON output and meaningful exit codes for integration with other tools.
## Contributing
Contributions are welcome. The check and notification systems use a plugin registry, so adding a new module is pretty simple:
1. Create a module in `src/labwatch/checks/` or `src/labwatch/notifications/`
2. Implement the base class
3. Use the `@register("name")` decorator
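A minimal, self-contained sketch of the registry pattern those three steps describe — the names `BaseCheck` and `CHECKS` are illustrative assumptions, not labwatch's actual internals; consult `src/labwatch/checks/` for the real base class and decorator signature:

```python
# Illustrative sketch of a registry-based check plugin.
# `BaseCheck` and `CHECKS` are hypothetical stand-ins for labwatch's internals.
CHECKS: dict[str, type] = {}

def register(name: str):
    """Decorator that adds a check class to the plugin registry under `name`."""
    def wrap(cls):
        CHECKS[name] = cls
        return cls
    return wrap

class BaseCheck:
    def run(self) -> dict:
        raise NotImplementedError

@register("uptime")
class UptimeCheck(BaseCheck):
    def run(self) -> dict:
        # A real check would inspect the system; this one just reports OK.
        return {"check": "uptime", "status": "ok"}
```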
```bash
# Run tests
cd cli
pip install -e ".[test]"
pytest
```
## License
GPL v3. See [LICENSE](LICENSE) for details.

| text/markdown | Ryan Bretschneider | null | null | null | null | homelab, monitoring, cli, docker, systemd, ntfy, devops | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: System Administrators",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Monitoring",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.0",
"pyyaml>=6.0",
"rich>=12.0",
"docker>=6.0",
"requests>=2.28",
"psutil>=5.9",
"questionary>=2.0",
"pytest>=7.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/rbretschneider/labwatch_cli",
"Repository, https://github.com/rbretschneider/labwatch_cli",
"Issues, https://github.com/rbretschneider/labwatch_cli/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T20:58:00.433187 | labwatch-0.6.10.tar.gz | 110,065 | 94/4b/5818dd75f88603cc48b2e24c74fd92542bc0c738a2cc04b4a2b2866872f7/labwatch-0.6.10.tar.gz | source | sdist | null | false | e327aaf5c06fa970d98e84a138e34d13 | 3dd8bf8898565b3c2494c04e88ed4d293f07c9a06b56c6a9836cdcfbeb5b91d1 | 944b5818dd75f88603cc48b2e24c74fd92542bc0c738a2cc04b4a2b2866872f7 | GPL-3.0-or-later | [] | 199 |
2.4 | solpolpy | 0.5.2 | Solar polarization resolver for any instrument | # solpolpy
[](https://codecov.io/gh/punch-mission/solpolpy)
[](https://github.com/punch-mission/solpolpy/actions/workflows/CI.yml)
[](https://badge.fury.io/py/solpolpy)
[](https://zenodo.org/doi/10.5281/zenodo.10076326)
`solpolpy` is a solar polarization resolver based on [Deforest et al. 2022](https://doi.org/10.3847/1538-4357/ac43b6).
It converts between various polarization formats, e.g. from the native polarizer triplet produced by observations
(also known as the MZP convention) to polarization brightness (pB) and total brightness (B), Stokes I, Q and U, etc.
An example of transforming the polarization basis of LASCO/C2 images is
shown in the image below. The images at polarizing angles of -60°, 0° and +60° are shown in the top panel as
Bm, Bz and Bp respectively. The bottom panel shows the output of `solpolpy` converting the initial basis
to Stokes I, Q and U.

## Quickstart
`pip install solpolpy`
We recommend following along the examples in [the documentation](https://solpolpy.readthedocs.io/en/latest/quickstart.html)!
## Getting Help
Please open a discussion or issue for help.
## Contributing
We encourage all contributions.
If you have a problem with the code or would like to see a new feature, please open an issue.
Or you can submit a pull request.
If you're contributing code, please see [this package's development guide](https://solpolpy.readthedocs.io/en/latest/development.html).
## Code of Conduct
[Access here](CODE_OF_CONDUCT.md)
## Citing
To cite the software please cite the version you used with [the Zenodo citation](https://zenodo.org/records/10289143).
## Origin of the Name
`solpolpy` is just a combination of `sol` for solar, `pol` for polarization, and `py` for Python.
| text/markdown | null | "J. Marcus Hughes" <mhughes@boulder.swri.edu>, "Matthew J. West" <mwest@boulder.swri.edu>, Ritesh Patel <ritesh.patel@swri.org>, "Bryce M. Walbridge" <bmw39@calvin.edu>, Chris Lowder <chris.lowder@swri.org> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"sunpy",
"astropy",
"numpy",
"matplotlib",
"networkx",
"ndcube",
"sunkit_image>=0.6",
"pytest; extra == \"test\"",
"coverage; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"ruff; extra == \"test\"",
"sphinx; extra == \"docs\"",
"pydata-sphinx-theme; extra == \"docs\"",
"sphinx-autoapi; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"ipython; extra == \"docs\"",
"pandoc; extra == \"docs\"",
"packaging; extra == \"docs\"",
"solpolpy[docs,test]; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T20:57:35.196086 | solpolpy-0.5.2.tar.gz | 22,738,714 | 49/bb/1a5c9130f6ad4b8c936e2bb283bcf93c5ff22878b67e7496517f82e0ab81/solpolpy-0.5.2.tar.gz | source | sdist | null | false | 7f0cc048feddae45c0c4fc0e03469ad9 | 9abb1c53cb2f3b04d5bb465bee51fc0b4494f31befa6b23930821ccc98231a2b | 49bb1a5c9130f6ad4b8c936e2bb283bcf93c5ff22878b67e7496517f82e0ab81 | null | [
"LICENSE.txt"
] | 148 |
2.4 | openai-chatkit | 1.6.2 | A ChatKit backend SDK. | ## License
This project is licensed under the [Apache License 2.0](LICENSE).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic",
"uvicorn",
"openai",
"openai-agents>=0.3.2",
"jinja2<4,>=3.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T20:57:20.228943 | openai_chatkit-1.6.2.tar.gz | 61,562 | 40/87/87826ce30c34a9d3c71eecdd96f7add26a57cba2ec0e6fbf933e321f2254/openai_chatkit-1.6.2.tar.gz | source | sdist | null | false | 0fd8f3a5adfdb048e67971ea7fbc08bb | fd91e8bf0e14244dc86f20c5f93f8386beff3aa1afbcd6f1fec7c1f52de856c6 | 408787826ce30c34a9d3c71eecdd96f7add26a57cba2ec0e6fbf933e321f2254 | null | [
"LICENSE",
"NOTICE"
] | 4,502 |
2.1 | linkarchivetools | 0.1.30 | Link Archive Tools | # Link Database Tools
This package provides tools for filtering databases produced by https://github.com/rumca-js/Django-link-archive.
It can also filter or analyze entries from https://github.com/rumca-js/Internet-Places-Database.
# Tools
- DbAnalyzer - provides analysis of the DB contents
- Db2Feeds - converts the database to a DB of feeds
- Db2JSON - converts the database to JSON
- DbFilter - filters the database (only bookmarks? only votes?)
- DbMerge - merges the database with another database
- JSON2Db - converts JSON into a database
- Backup - makes a backup of Postgres tables
# Utils
Alchemy provides search capabilities.
Reflected tools provide access to table definitions.
# Installation
pip install linkarchivetools
| text/markdown | Iwan Grozny | renegat@renegat0x0.ddns.net | null | null | GPL3 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"python-dateutil<3.0.0,>=2.8.2",
"sqlalchemy",
"webtoolkit<0.0.207,>=0.0.206",
"sympy<2.0.0,>=1.13.2",
"psycopg2-binary",
"requests<3.0.0,>=2.32.5"
] | [] | [] | [] | [] | poetry/1.8.2 CPython/3.12.3 Linux/6.8.0-100-generic | 2026-02-20T20:56:39.485909 | linkarchivetools-0.1.30.tar.gz | 44,591 | 49/cd/37f9a1b3434ff5162e51e7a8b2929085a91cd399651dce1d5df31979ad85/linkarchivetools-0.1.30.tar.gz | source | sdist | null | false | e4b155154332ec028ae558a79698eceb | 5ef5605163ba5ff8bb88132e2c045f4df9560d50bade7ec9a49da2125a116b9f | 49cd37f9a1b3434ff5162e51e7a8b2929085a91cd399651dce1d5df31979ad85 | null | [] | 205 |
2.4 | pollux-ai | 1.1.0 | Multimodal orchestration for LLM analysis | # Pollux
Multimodal orchestration for LLM APIs.
> You describe what to analyze. Pollux handles source patterns, context caching, and multimodal complexity—so you don't.
[Documentation](https://polluxlib.dev/) ·
[Quickstart](https://polluxlib.dev/quickstart/) ·
[Cookbook](https://polluxlib.dev/cookbook/)
[](https://pypi.org/project/pollux-ai/)
[](https://github.com/seanbrar/pollux/actions/workflows/ci.yml)
[](https://codecov.io/gh/seanbrar/pollux)
[](https://github.com/seanbrar/minimal-tests-maximum-trust)


## Quick Start
```python
import asyncio
from pollux import Config, Source, run
result = asyncio.run(
run(
"What are the key findings?",
source=Source.from_text(
"Pollux supports fan-out, fan-in, and broadcast source patterns. "
"It also supports context caching for repeated prompts."
),
config=Config(provider="gemini", model="gemini-2.5-flash-lite"),
)
)
print(result["answers"][0])
# "The key findings are: (1) three source patterns (fan-out, fan-in,
# broadcast) and (2) context caching for token and cost savings."
```
`run()` returns a `ResultEnvelope` dict — `answers` is a list with one entry per prompt.
To use OpenAI instead: `Config(provider="openai", model="gpt-5-nano")`.
For a full 2-minute walkthrough (install, key setup, success checks), see the
[Quickstart](https://polluxlib.dev/quickstart/).
## Why Pollux?
- **Multimodal-first**: PDFs, images, video, YouTube URLs, and arXiv papers—same API
- **Source patterns**: Fan-out (one source, many prompts), fan-in (many sources, one prompt), and broadcast (many-to-many)
- **Context caching**: Upload once, reuse across prompts—save tokens and money
- **Structured output**: Get typed responses via `Options(response_schema=YourModel)`
- **Built for reliability**: Async execution, automatic retries, concurrency control, and clear error messages with actionable hints
## Installation
```bash
pip install pollux-ai
```
### API Keys
Get a key from [Google AI Studio](https://ai.dev/) or [OpenAI Platform](https://platform.openai.com/api-keys), then:
```bash
# Gemini (recommended starting point — supports context caching)
export GEMINI_API_KEY="your-key-here"
# OpenAI
export OPENAI_API_KEY="your-key-here"
```
## Usage
### Multi-Source Analysis
```python
import asyncio
from pollux import Config, Source, run_many
async def main() -> None:
config = Config(provider="gemini", model="gemini-2.5-flash-lite")
sources = [
Source.from_file("paper1.pdf"),
Source.from_file("paper2.pdf"),
]
prompts = ["Summarize the main argument.", "List key findings."]
envelope = await run_many(prompts, sources=sources, config=config)
for answer in envelope["answers"]:
print(answer)
asyncio.run(main())
```
### YouTube and arXiv Sources
```python
from pollux import Source
lecture = Source.from_youtube("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
paper = Source.from_arxiv("2301.07041")
```
Pass these to `run()` or `run_many()` like any other source — Pollux handles the rest.
### Structured Output
```python
import asyncio
from pydantic import BaseModel
from pollux import Config, Options, Source, run
class Summary(BaseModel):
title: str
key_points: list[str]
sentiment: str
result = asyncio.run(
run(
"Summarize this document.",
source=Source.from_file("report.pdf"),
config=Config(provider="gemini", model="gemini-2.5-flash-lite"),
options=Options(response_schema=Summary),
)
)
parsed = result["structured"] # Summary instance
print(parsed.key_points)
```
### Configuration
```python
from pollux import Config
config = Config(
provider="gemini",
model="gemini-2.5-flash-lite",
enable_caching=True, # Gemini-only in v1.0
)
```
See the [Configuration Guide](https://polluxlib.dev/configuration/) for details.
### Provider Differences
Pollux does not force strict feature parity across providers in v1.0.
See the capability matrix: [Provider Capabilities](https://polluxlib.dev/reference/provider-capabilities/).
## Documentation
- [Quickstart](https://polluxlib.dev/quickstart/) — First result in 2 minutes
- [Concepts](https://polluxlib.dev/concepts/) — Mental model for source patterns and caching
- [Sources and Patterns](https://polluxlib.dev/sources-and-patterns/) — Source constructors, run/run_many, ResultEnvelope
- [Configuration](https://polluxlib.dev/configuration/) — Providers, models, retries, caching
- [Caching and Efficiency](https://polluxlib.dev/caching-and-efficiency/) — TTL management, cache warming, cost savings
- [Troubleshooting](https://polluxlib.dev/troubleshooting/) — Common issues and solutions
- [API Reference](https://polluxlib.dev/reference/api/) — Entry points and types
- [Cookbook](https://polluxlib.dev/cookbook/) — Scenario-driven, ready-to-run recipes
## Contributing
See [CONTRIBUTING](https://polluxlib.dev/contributing/) and [TESTING.md](./TESTING.md) for guidelines.
Built during [Google Summer of Code 2025](https://summerofcode.withgoogle.com/) with Google DeepMind. [Learn more](https://polluxlib.dev/#about)
## License
[MIT](LICENSE)
| text/markdown | null | Sean Brar <hello@seanbrar.com> | null | null | MIT | null | [] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"google-genai>=1.61",
"httpx>=0.24",
"openai>=2.16.0",
"pydantic>=2.1",
"python-dotenv>=0.19"
] | [] | [] | [] | [
"Homepage, https://polluxlib.dev",
"Documentation, https://polluxlib.dev",
"Repository, https://github.com/seanbrar/pollux",
"Changelog, https://github.com/seanbrar/pollux/blob/main/CHANGELOG.md",
"Issues, https://github.com/seanbrar/pollux/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T20:55:27.613081 | pollux_ai-1.1.0.tar.gz | 498,536 | d6/0d/2fcaf390414a82d3d1a8f858fc66ee29e0036b2322e5aeb78145a5047e38/pollux_ai-1.1.0.tar.gz | source | sdist | null | false | 5cef242130c1b9cc41a134c359d8dc5d | b49194e8c3f09c380551be7c3c9792bf76e3c6327d68d5e290f2331ba11eddcf | d60d2fcaf390414a82d3d1a8f858fc66ee29e0036b2322e5aeb78145a5047e38 | null | [
"LICENSE"
] | 204 |
2.4 | regularizepsf | 1.1.2 | Point spread function modeling and regularization | # regularizepsf
[](https://codecov.io/gh/punch-mission/regularizepsf)
[](https://zenodo.org/badge/latestdoi/555583385)
[](https://badge.fury.io/py/regularizepsf)
[](https://github.com/punch-mission/regularizepsf/actions/workflows/ci.yml)
A package for manipulating and correcting variable point spread functions.
Below is an example of correcting model data using the package. An initial image of a simplified starfield (a) is synthetically observed with a slowly
varying PSF (b), then regularized with this technique (c). The final image visually matches a direct convolution of
the initial image with the target PSF (d). The panels are gamma-corrected to highlight the periphery of the model PSFs.

## Getting started
`pip install regularizepsf` and then follow along with the [documentation](https://regularizepsf.readthedocs.io/en/latest/index.html).
## Contributing
We encourage all contributions. If you have a problem with the code or would like to see a new feature, please open an issue. Or you can submit a pull request.
If you're contributing code please see [this package's development guide](https://regularizepsf.readthedocs.io/en/latest/development.html).
## License
See [LICENSE file](LICENSE)
## Need help?
Please ask a question in our [discussions](https://github.com/punch-mission/regularizepsf/discussions)
## Citation
Please cite [the associated paper](https://iopscience.iop.org/article/10.3847/1538-3881/acc578) if you use this technique:
```
@article{Hughes_2023,
doi = {10.3847/1538-3881/acc578},
url = {https://dx.doi.org/10.3847/1538-3881/acc578},
year = {2023},
month = {apr},
publisher = {The American Astronomical Society},
volume = {165},
number = {5},
pages = {204},
author = {J. Marcus Hughes and Craig E. DeForest and Daniel B. Seaton},
title = {Coma Off It: Regularizing Variable Point-spread Functions},
journal = {The Astronomical Journal}
}
```
If you use this software, please also cite the package with the specific version used. [Zenodo always has the most up-to-date citation](https://zenodo.org/records/10066960).
| text/markdown | null | null | null | null | Copyright (c) 2024 PUNCH Science Operations Center
This software may be used, modified, and distributed under the terms
of the GNU Lesser General Public License v3 (LGPL-v3); both the
LGPL-v3 and GNU General Public License v3 (GPL-v3) are reproduced
below.
There is NO WARRANTY associated with this software.
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| null | [] | [] | null | null | >3.10 | [] | [] | [] | [
"numpy",
"h5py",
"sep",
"astropy",
"scipy",
"scikit-image",
"matplotlib",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"hypothesis; extra == \"test\"",
"coverage; extra == \"test\"",
"ruff; extra == \"test\"",
"pytest-mpl; extra == \"test\"",
"packaging; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"pydata-sphinx-theme; extra == \"docs\"",
"sphinx-autoapi; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"ipython; extra == \"docs\"",
"regularizepsf[docs,test]; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T20:55:10.414042 | regularizepsf-1.1.2.tar.gz | 9,006,743 | 6c/31/503da231148fad9e6974454cdf44d07b088b1753c26e8a80f2a0b0a26b68/regularizepsf-1.1.2.tar.gz | source | sdist | null | false | 134cd6f1775fbb12d02bda92278695b0 | b9c18b08b27a9ecbf4b06070aa3729f20abb068ebd4cb0a653b98e2b55382743 | 6c31503da231148fad9e6974454cdf44d07b088b1753c26e8a80f2a0b0a26b68 | null | [
"LICENSE"
] | 159 |
2.4 | pyrobloxbot | 2.2.1 | A python library to control the Roblox character and interact with game ui through keyboard inputs |
# pyrobloxbot
[](https://pyrobloxbot.readthedocs.io/en/latest/index.html)
[](https://pypi.python.org/pypi/pyrobloxbot)
[](https://pypi.python.org/pypi/pyrobloxbot)
**pyrobloxbot** is an open-source package for making Roblox bots that interact with the game strictly through the keyboard.
It simplifies this process by providing features like:
- Methods for most actions your character can make, like movement, chatting, resetting, etc.
- Methods to navigate game UI elements using only the keyboard, avoiding unreliable mouse input
- Methods to join games, join users, and join private servers
- Highly customizable bots, with options you can tune to fit your use case
- A global failsafe to stop your bot from going rogue
- Support for multi-account bots
## Installation guide
pyrobloxbot can be installed with pip:
```shell
pip install pyrobloxbot
```
> [!NOTE]
> For now, pyrobloxbot is Windows only. See the [issue tracker](https://github.com/Mews/pyrobloxbot/issues/93) for updates.
## Documentation
Read the documentation at https://pyrobloxbot.readthedocs.io/en/latest/index.html
There you'll find:
- API references
- Basic and advanced usage guides
- Step by step, real life examples
- Pieces of wisdom I've gathered after making tens of bots with pyrobloxbot
## Have a question?
Don't hesitate to ask!
You can check the [FAQ](https://pyrobloxbot.readthedocs.io/en/latest/faq.html), [open an issue](https://github.com/Mews/pyrobloxbot/issues/new?labels=question), or contact me on discord (mews75)!
## Got an idea?
All feature requests are welcome!
You can submit them on github by [opening an issue](https://github.com/mews/pyrobloxbot/issues/new?template=feature.yml) and using the feature template.
---
Also, feel free to share anything you make with me through my discord (mews75)!
## Usage/Examples
```python
import pyrobloxbot as bot
#Send a message in chat
bot.chat("Hello world!")
#Walk forward for 5 seconds
bot.walk_forward(5)
#Reset player character
bot.reset_player()
```
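The three calls above can be combined into a small patrol routine. The sketch below is hypothetical and not part of pyrobloxbot's API: only `bot.chat`, `bot.walk_forward`, and `bot.reset_player` come from the library, while `build_patrol_plan` and `run_patrol` are illustrative helpers. The import is guarded so the plan-building logic runs anywhere, even though the actual inputs only fire on Windows with pyrobloxbot installed and Roblox focused.

```python
# Hypothetical patrol-bot sketch; only chat/walk_forward/reset_player
# are real pyrobloxbot functions (as shown in the usage example above).
try:
    import pyrobloxbot as bot
except ImportError:  # e.g. on non-Windows machines
    bot = None

def build_patrol_plan(laps: int) -> list[tuple[str, object]]:
    """Return the (action, argument) sequence for one patrol run."""
    plan: list[tuple[str, object]] = []
    for lap in range(1, laps + 1):
        plan.append(("walk_forward", 5))            # walk forward for 5 seconds
        plan.append(("chat", f"Finished lap {lap}"))
    plan.append(("reset_player", None))             # respawn when done
    return plan

def run_patrol(laps: int) -> None:
    """Execute the plan through pyrobloxbot's keyboard inputs."""
    if bot is None:
        raise RuntimeError("pyrobloxbot is Windows-only and not available here")
    for action, arg in build_patrol_plan(laps):
        if action == "walk_forward":
            bot.walk_forward(arg)
        elif action == "chat":
            bot.chat(arg)
        else:
            bot.reset_player()
```

Separating the plan from its execution keeps the bot's logic testable without a Roblox window open; the global failsafe still applies while `run_patrol` is driving inputs.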
## [Changelog](https://github.com/Mews/pyrobloxbot/blob/main/CHANGELOG.md)
| text/markdown | null | Mews <ar754456@gmail.com> | null | null | null | bot, keyboard, roblox | [
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"opencv-python==4.13.0.90",
"pillow==12.1.1",
"pydirectinput==1.0.4",
"pynput==1.8.1",
"pyperclip==1.11.0",
"pyscreeze==1.0.1",
"pywin32==311",
"requests==2.32.5"
] | [] | [] | [] | [
"Homepage, https://github.com/Mews/pyrobloxbot",
"Issues, https://github.com/Mews/pyrobloxbot/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T20:55:08.642016 | pyrobloxbot-2.2.1.tar.gz | 20,291,903 | 9c/a6/f6a65a5b39e55008d51b2721f33f3186b561f3efb6ea73855327dc85354d/pyrobloxbot-2.2.1.tar.gz | source | sdist | null | false | c38f519c891a19f6661b178010d978f3 | b86eb7ab13a3b86702a987315710dd536902812dfa6bbf5e75bb96c2566f5972 | 9ca6f6a65a5b39e55008d51b2721f33f3186b561f3efb6ea73855327dc85354d | MIT | [
"LICENSE"
] | 208 |