metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | podstack | 1.3.21 | Official Python SDK for Podstack GPU Notebook Platform | # Podstack Python SDK
Official Python SDK for the Podstack GPU Platform. Run ML workloads on remote GPUs with simple decorators, track experiments, and manage models.
## Installation
```bash
pip install podstack
```
With optional dependencies:
```bash
pip install podstack[torch] # PyTorch support
pip install podstack[huggingface] # HuggingFace Transformers
pip install podstack[all] # All ML frameworks
```
## Quick Start
```python
import podstack

# Initialize the SDK
podstack.init(
    api_key="your-api-key",
    project_id="your-project-id"
)

# Run a function on a remote GPU with a single decorator
@podstack.gpu(type="L40S", fraction=100)
def train():
    import torch
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    return {"status": "done"}

result = train()  # Executes on remote GPU!
```
## Decorators & Annotations
Podstack provides decorators that turn any Python function into a remote GPU workload with built-in experiment tracking.
### `@podstack.gpu` - Remote GPU Execution
```python
import podstack

# Basic GPU execution
@podstack.gpu(type="L40S")
def train_model():
    import torch
    model = torch.nn.Linear(768, 10).cuda()
    return {"params": sum(p.numel() for p in model.parameters())}

result = train_model()

# Specify GPU type, count, and fraction
@podstack.gpu(type="A100-80G", count=2, fraction=100)
def train_large_model():
    import torch
    print(f"GPUs available: {torch.cuda.device_count()}")

# Install pip packages on the fly
@podstack.gpu(type="L40S", pip=["transformers", "datasets", "accelerate"])
def finetune_llm():
    from transformers import AutoModelForCausalLM, AutoTokenizer
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    ...

# Use uv for faster package installation
@podstack.gpu(type="L40S", uv=["torch", "transformers"])
def fast_setup():
    ...

# Install from requirements.txt
@podstack.gpu(type="L40S", requirements="requirements.txt", use_uv=True)
def train_with_deps():
    ...

# Use conda packages
@podstack.gpu(type="L40S", conda="cudatoolkit=11.8")
def train_with_conda():
    ...

# Use a pre-built environment
@podstack.gpu(type="L40S", env="nlp")
def nlp_task():
    ...

# Set execution timeout (default: 3600s)
@podstack.gpu(type="L40S", timeout=7200)
def long_training():
    ...

# Disable remote execution (run locally for debugging)
@podstack.gpu(type="L40S", remote=False)
def debug_locally():
    print("This runs on your local machine")

# Use as a context manager
with podstack.gpu(type="A100-80G", count=2) as cfg:
    print(f"GPU config set: {cfg.type}")
```
**Available GPU types:** `T4`, `L4`, `A10`, `L40S`, `A100-40G`, `A100-80G`, `H100`
**Available environments:** `ml`, `nlp`, `cv`, `audio`, `tabular`, `rl`, `scientific`
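The type strings above must match exactly. A small client-side guard (a hypothetical helper, not part of the SDK) can catch a typo before a workload is submitted:

```python
# Hypothetical pre-flight check against the documented GPU types.
VALID_GPU_TYPES = {"T4", "L4", "A10", "L40S", "A100-40G", "A100-80G", "H100"}

def check_gpu_type(gpu_type: str) -> str:
    """Return the GPU type unchanged, or raise with a helpful message."""
    if gpu_type not in VALID_GPU_TYPES:
        raise ValueError(
            f"Unknown GPU type {gpu_type!r}; expected one of {sorted(VALID_GPU_TYPES)}"
        )
    return gpu_type

check_gpu_type("L40S")  # OK; check_gpu_type("V100") would raise
```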
### `@podstack.experiment` - Experiment Tracking
```python
import podstack

# As a decorator
@podstack.experiment(name="transformer-experiments")
def run_experiment():
    ...

# As a context manager
with podstack.experiment(name="transformer-experiments") as exp:
    print(f"Experiment ID: {exp.id}")
```
### `@podstack.run` - Run Tracking
Automatically tracks execution time and GPU configuration.
```python
import podstack

# As a decorator
@podstack.experiment(name="my-experiment")
@podstack.run(name="training-v1", track_gpu=True)
def train():
    podstack.registry.log_params({"lr": 0.001, "batch_size": 32})
    for epoch in range(10):
        loss = 1.0 / (epoch + 1)
        podstack.registry.log_metrics({"loss": loss}, step=epoch)

# As a context manager
with podstack.run(name="training-v1") as run:
    podstack.registry.log_params({"lr": 0.001})
    podstack.registry.log_metrics({"loss": 0.5}, step=1)
    print(f"Run ID: {run.id}")

# With tags
@podstack.run(name="ablation-study", tags={"variant": "no-dropout"})
def ablation():
    ...
```
### `@podstack.model` - Model Registration
```python
import podstack

# Register model after function completes
@podstack.experiment(name="my-experiment")
@podstack.run(name="training-v1")
@podstack.model.register(name="my-classifier")
def train_and_save():
    import torch
    model = torch.nn.Linear(768, 10)
    torch.save(model.state_dict(), "model.pt")
    podstack.registry.log_artifact("model.pt", "model")

# Promote model to production after validation
@podstack.model.promote(name="my-classifier", version=1, stage="production")
def validate_and_promote():
    # Run validation checks
    accuracy = 0.95
    assert accuracy > 0.90, "Model doesn't meet threshold"
```
### Combining Decorators
Stack decorators for a complete ML workflow:
```python
import podstack

podstack.init(api_key="your-api-key", project_id="your-project-id")

@podstack.gpu(type="L40S", pip=["transformers", "datasets"])
@podstack.experiment(name="sentiment-analysis")
@podstack.run(name="bert-finetune-v1", track_gpu=True)
@podstack.model.register(name="sentiment-bert")
def full_pipeline():
    from transformers import AutoModelForSequenceClassification, Trainer
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

    # Log hyperparameters
    podstack.registry.log_params({
        "model": "bert-base-uncased",
        "learning_rate": 2e-5,
        "epochs": 3
    })

    # Train...
    podstack.registry.log_metrics({"accuracy": 0.92, "f1": 0.89})
    return {"accuracy": 0.92}

result = full_pipeline()  # Runs on remote L40S GPU with full tracking
```
## Registry - Experiment Tracking & Model Management
### Initialize
```python
from podstack import registry

registry.init(
    api_key="your-api-key",
    project_id="your-project-id"
)
```
### Track Experiments and Runs
```python
from podstack import registry

# Set experiment
registry.set_experiment("my-experiment")

# Start a tracked run
with registry.start_run(name="training-v1") as run:
    # Log hyperparameters
    registry.log_params({
        "learning_rate": 0.001,
        "batch_size": 32,
        "epochs": 10,
        "optimizer": "adam"
    })

    # Log metrics at each step
    for epoch in range(10):
        loss = train_epoch()
        accuracy = evaluate()
        registry.log_metrics({"loss": loss, "accuracy": accuracy}, step=epoch)

    # Set tags
    registry.set_tag("framework", "pytorch")

    # Upload artifacts to cloud artifact store
    registry.log_artifact("model.pt")
    registry.log_artifact("training_curves.png", artifact_path="plots/curves.png")

    # Log dataset provenance (first-class resource, deduped by content hash)
    registry.log_dataset("imdb-reviews", path="data/imdb.csv", context="training")

    # Or pass a DataFrame — schema and row/feature counts are auto-computed
    import pandas as pd
    df = pd.read_csv("data/imdb.csv")
    registry.log_dataset("imdb-reviews", df=df, context="training")
```
### Log and Load Models
```python
from podstack import registry

# Serialize and upload the model to the artifact store (auto-detects framework)
registry.log_model(model, artifact_path="model", framework="pytorch")

# Register in model registry
registry.register_model(
    name="my-classifier",
    run_id=run.id,
    description="BERT sentiment classifier"
)

# Promote to production
registry.set_model_stage("my-classifier", version=1, stage="production")

# Set aliases
registry.set_model_alias("my-classifier", alias="champion", version=1)

# Load model from any machine — files are downloaded automatically if missing locally
model = registry.load_model("my-classifier", stage="production")
```
### Compare Runs
```python
from podstack import registry

# Compare multiple runs
comparison = registry.compare_runs(
    run_ids=["run-id-1", "run-id-2", "run-id-3"],
    metric_keys=["loss", "accuracy"]
)

# Get metric history for a run
history = registry.get_metric_history("run-id-1", "loss")
for point in history:
    print(f"Step {point.step}: {point.value}")

# Search runs
runs = registry.search_runs(
    experiment_id="exp-id",
    status="completed",
    max_results=50
)
```
### Dataset Tracking & Lineage
Podstack tracks datasets as first-class resources, linking them to runs and model versions so you can always answer *"what data was this model trained on?"*
The lineage chain is:
```
Dataset(s) ──[logged to]──▶ Run ──[run_id]──▶ ModelVersion
```
#### `log_dataset()` — log a dataset to the active run
```python
dataset = registry.log_dataset(
    name="imdb-reviews",    # required — human-readable name
    path="data/imdb.csv",   # local path or URI (s3://, gcs://, https://)
    context="training",     # "training" | "validation" | "test" (default: "training")
)
```
The dataset is stored as a **project-level resource** and linked to the current run.
Subsequent calls with the same file produce the same dataset record — no duplicates.
**Auto-enrichment from a local file:**
```python
# SHA-256 digest is computed automatically for files ≤ 500 MB.
# This enables deduplication across runs — if two runs use the exact
# same file, they share one Dataset record in the registry.
dataset = registry.log_dataset("imdb-reviews", path="data/imdb.csv")
print(dataset.digest) # "a3f2c1..." — hex SHA-256
```
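The dedup mechanism above boils down to a streamed file hash. A minimal sketch of what such a digest computation might look like (the chunk size and the exact handling of files over the 500 MB cutoff are assumptions of this sketch, not SDK internals):

```python
import hashlib
from pathlib import Path

MAX_DIGEST_BYTES = 500 * 1024 * 1024  # the 500 MB cutoff described above

def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Hex SHA-256 of a file, streamed in chunks; '' if the file is too large."""
    p = Path(path)
    if p.stat().st_size > MAX_DIGEST_BYTES:
        return ""  # assumption: oversized files simply get no digest
    h = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Hashing the identical file from two different runs yields the same hex string, which is what lets the registry collapse both into one `Dataset` record.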
**Auto-enrichment from a pandas DataFrame:**
```python
import pandas as pd
df = pd.read_csv("data/imdb.csv")
dataset = registry.log_dataset(
    name="imdb-reviews",
    df=df,
    context="training",
)
# schema and profile are computed automatically:
print(dataset.schema) # {"text": "object", "label": "int64"}
print(dataset.profile) # {"num_rows": 50000, "num_features": 2}
```
**Pass both `path` and `df`** to get digest dedup *and* schema inference:
```python
dataset = registry.log_dataset("imdb-reviews", path="data/imdb.csv", df=df)
```
**All parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | `str` | required | Human-readable dataset name |
| `path` | `str` | `None` | Local file path or URI (`s3://`, `gcs://`, `https://`) |
| `df` | `DataFrame` | `None` | pandas DataFrame — schema and profile auto-computed |
| `context` | `str` | `"training"` | Role of the dataset: `"training"`, `"validation"`, or `"test"` |
| `digest` | `str` | `None` | SHA-256 hex digest. Computed from `path` if not provided |
| `source_type` | `str` | `"local"` | Storage backend: `"local"`, `"s3"`, `"gcs"`, `"url"` |
| `tags` | `dict` | `None` | Arbitrary string key-value tags |
**Returns:** `Dataset` object with fields:
| Field | Type | Description |
|-------|------|-------------|
| `id` | `str` | UUID of the dataset record |
| `name` | `str` | Dataset name |
| `digest` | `str` | SHA-256 hex digest (empty if not computed) |
| `source_type` | `str` | Storage backend |
| `source` | `str` | File path or URI |
| `schema` | `dict` | Column → dtype mapping |
| `profile` | `dict` | `num_rows`, `num_features`, and any other stats |
| `tags` | `dict` | Tags dict |
| `created_at` | `str` | ISO 8601 timestamp |
**Via the `Run` object** (equivalent to calling `registry.log_dataset()`):
```python
with registry.start_run("training-v1") as run:
    dataset = run.log_dataset("imdb-reviews", df=df, context="training")
```
#### Multiple datasets per run
Log validation and test sets alongside the training set:
```python
with registry.start_run("bert-finetune") as run:
    run.log_dataset("imdb-train", df=train_df, context="training")
    run.log_dataset("imdb-val", df=val_df, context="validation")
    run.log_dataset("imdb-test", df=test_df, context="test")
```
#### `get_run_datasets()` — retrieve datasets logged to a run
Returns every `Dataset` object linked to a run, in the order they were logged.
```python
datasets = registry.get_run_datasets(run_id)
```
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `run_id` | `str` | ID of the run to query |
**Returns:** `list[Dataset]` — same object as returned by `log_dataset()`.
**Fields on each `Dataset`:**
| Field | Type | Description |
|-------|------|-------------|
| `id` | `str` | UUID of the dataset record |
| `name` | `str` | Human-readable name |
| `digest` | `str` | SHA-256 hex digest (empty if not computed at log time) |
| `source_type` | `str` | `"local"`, `"s3"`, `"gcs"`, or `"url"` |
| `source` | `str` | File path or URI that was passed to `log_dataset()` |
| `schema` | `dict` | Column → dtype mapping (e.g. `{"text": "object", "label": "int64"}`) |
| `profile` | `dict` | Stats dict, always contains `num_rows` and `num_features` when a DataFrame was passed |
| `tags` | `dict` | Key-value tags |
| `created_at` | `str` | ISO 8601 timestamp |
**Examples:**
```python
from podstack import registry

registry.init(api_key="...", project_id="...")
datasets = registry.get_run_datasets("3a9f12c4-...")

# Inspect each dataset
for ds in datasets:
    print(ds.name)
    print(f"  source : {ds.source}")
    print(f"  digest : {ds.digest[:16]}…")
    print(f"  rows   : {ds.profile.get('num_rows', 'unknown')}")
    print(f"  schema : {ds.schema}")
```
Checking datasets on a run you have in hand:
```python
with registry.start_run("training-v1") as run:
    run.log_dataset("train", df=train_df, context="training")
    run.log_dataset("val", df=val_df, context="validation")

# After the run completes, retrieve everything that was logged
datasets = registry.get_run_datasets(run.id)
assert len(datasets) == 2
```
Verifying deduplication — the same physical file logged across two runs
returns the same dataset ID:
```python
ds1 = registry.get_run_datasets(run_a.id)[0]
ds2 = registry.get_run_datasets(run_b.id)[0]
# Same file → same digest → same Dataset record
assert ds1.id == ds2.id
assert ds1.digest == ds2.digest
```
#### `get_model_lineage()` — trace a model back to its training data
Returns the full provenance chain for every version of a registered model:
which datasets each version was trained on, via which run.
```python
lineage = registry.get_model_lineage(model_id)
```
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `model_id` | `str` | ID of the registered model |
**Returns:** `dict` with the following structure:
```
{
    "model_id": str,
    "versions": [
        {
            "version": int,         # version number (1, 2, 3 …)
            "stage": str,           # "development" | "staging" | "production" | "archived"
            "run_id": str,          # ID of the linked training run (empty if none)
            "run_name": str,        # display name of the run
            "datasets": [Dataset]   # list of Dataset dicts logged to that run
        },
        …
    ]
}
```
Each `datasets` entry has the same fields as a `Dataset` object
(`id`, `name`, `digest`, `source_type`, `source`, `schema`, `profile`, `tags`, `created_at`).
**Examples:**
Basic iteration:
```python
from podstack import registry

registry.init(api_key="...", project_id="...")
model = registry.get_model("sentiment-bert")
lineage = registry.get_model_lineage(model.id)

for version in lineage["versions"]:
    print(f"v{version['version']} · {version['stage']}")
    print(f"  Run: {version['run_name']} ({version['run_id'][:8]}…)")
    for ds in version["datasets"]:
        rows = ds["profile"].get("num_rows", "?")
        print(f"  └─ {ds['name']}  {rows} rows  sha256:{ds['digest'][:12]}…")
```
Example output:
```
v3 · production
  Run: bert-finetune-v3 (3a9f12c4…)
  └─ imdb-train  40000 rows  sha256:a3f2c1d8e9b0…
  └─ imdb-val  5000 rows  sha256:7e4b2f1a0c3d…
v2 · staging
  Run: bert-finetune-v2 (8b2e77d1…)
  └─ imdb-train  40000 rows  sha256:a3f2c1d8e9b0…
v1 · archived
  Run: bert-finetune-v1 (f1c3a0e2…)
  └─ imdb-train  40000 rows  sha256:a3f2c1d8e9b0…
```
Finding every unique dataset ever used to train any version of a model:
```python
lineage = registry.get_model_lineage(model.id)

seen = {}
for version in lineage["versions"]:
    for ds in version["datasets"]:
        seen[ds["id"]] = ds  # dedup by ID

unique_datasets = list(seen.values())
print(f"{len(unique_datasets)} unique dataset(s) across all versions")
```
Checking whether the production version was trained on an approved dataset:
```python
APPROVED_DIGEST = "a3f2c1d8e9b0..."
lineage = registry.get_model_lineage(model.id)
prod = next(v for v in lineage["versions"] if v["stage"] == "production")
approved = any(ds["digest"] == APPROVED_DIGEST for ds in prod["datasets"])
print("Production model trained on approved data:", approved)
```
#### End-to-end example
```python
import pandas as pd
from podstack import registry

registry.init(api_key="...", project_id="...")
registry.set_experiment("sentiment-analysis")

# Load data
train_df = pd.read_csv("data/train.csv")
val_df = pd.read_csv("data/val.csv")

with registry.start_run("bert-finetune-v3") as run:
    # Log datasets — digest is auto-computed, schema inferred
    run.log_dataset("imdb-train", path="data/train.csv", df=train_df, context="training")
    run.log_dataset("imdb-val", path="data/val.csv", df=val_df, context="validation")

    # Train
    run.log_params({"lr": 2e-5, "epochs": 3})
    run.log_metrics({"accuracy": 0.93, "f1": 0.92})

    # Register and promote the model
    registry.register_model("sentiment-bert", run_id=run.id)
    registry.set_model_stage("sentiment-bert", version=3, stage="production")

# Later — answer "what data trained v3?"
model = registry.get_model("sentiment-bert")
lineage = registry.get_model_lineage(model.id)
```
### Artifact Storage
Podstack stores every artifact you log — model files, plots, CSV exports, anything — in the project's cloud artifact store. Artifacts are keyed by run ID, so the same file can be retrieved from any machine, by any project member, at any time.
#### `log_artifact()` — upload a file for the active run
```python
# Upload a single file (uses the filename as the artifact path)
registry.log_artifact("model.pt")
# Upload with an explicit path inside the artifact store
registry.log_artifact("training_curves.png", artifact_path="plots/curves.png")
registry.log_artifact("feature_importance.csv", artifact_path="analysis/features.csv")
```
**Parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `local_path` | `str` | required | Path to the local file to upload |
| `artifact_path` | `str` | filename | Relative path inside the artifact store. Defaults to `os.path.basename(local_path)` |
If the artifact store is temporarily unreachable, the SDK saves the file to a local fallback cache (`~/.podstack/artifacts/<run_id>/`) so your run is never interrupted.
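To see what ended up in that fallback cache, you can walk the directory yourself. A small sketch using only the path documented above (treating cache entries as plain files under the run directory is an assumption of this sketch):

```python
from pathlib import Path

def cached_artifacts(run_id: str,
                     cache_root: Path = Path.home() / ".podstack" / "artifacts"):
    """List files sitting in the local fallback cache for a run.

    The layout (~/.podstack/artifacts/<run_id>/) comes from the docs above.
    Returns relative POSIX-style paths, sorted for stable output.
    """
    run_dir = cache_root / run_id
    if not run_dir.is_dir():
        return []
    return sorted(
        p.relative_to(run_dir).as_posix()
        for p in run_dir.rglob("*")
        if p.is_file()
    )
```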
**Via the `Run` object** — equivalent to calling `registry.log_artifact()`:
```python
with registry.start_run("training-v1") as run:
    run.log_artifact("confusion_matrix.png", artifact_path="plots/confusion_matrix.png")
    run.log_artifact("model.pkl")
```
#### `list_artifacts()` — list all artifacts for a run
```python
artifacts = registry.list_artifacts(run_id)
for a in artifacts:
    print(f"{a['path']:40s}  {a['size'] / 1e6:.1f} MB  {a['last_modified']}")
```
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `run_id` | `str` | ID of the run to query |
**Returns:** `list[dict]` — one entry per artifact:
| Key | Type | Description |
|-----|------|-------------|
| `path` | `str` | Relative artifact path (e.g. `"plots/curves.png"`) |
| `size` | `int` | File size in bytes |
| `etag` | `str` | Content hash for integrity verification |
| `last_modified` | `str` | ISO 8601 upload timestamp |
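The `size` field makes it easy to reproduce the dashboard's combined-size footer client-side. A small sketch over the documented return shape (the B/KB/MB thresholds and one-decimal rounding are assumptions, chosen to mirror the dashboard description):

```python
def total_artifact_size(artifacts: list[dict]) -> int:
    """Combined size in bytes of every artifact returned by list_artifacts()."""
    return sum(a["size"] for a in artifacts)

def human_size(num_bytes: int) -> str:
    """Format a byte count as B / KB / MB, like the dashboard footer."""
    if num_bytes < 1024:
        return f"{num_bytes} B"
    if num_bytes < 1024 ** 2:
        return f"{num_bytes / 1024:.1f} KB"
    return f"{num_bytes / 1024 ** 2:.1f} MB"

# e.g. human_size(total_artifact_size(registry.list_artifacts(run_id)))
```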
#### `download_artifact()` — retrieve an artifact
Downloads a specific artifact from the cloud store into a local directory. Falls back to the local cache when the store is unreachable.
```python
# Download a single file
dest = registry.download_artifact("run-id", "model/model.pkl", "./downloads/")
print(f"Saved to: {dest}")
# Download a whole model directory
dest = registry.download_artifact("run-id", "model", "./local_models/")
```
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `run_id` | `str` | ID of the run that logged the artifact |
| `artifact_path` | `str` | Relative artifact path as logged (e.g. `"model/model.pkl"`) |
| `local_path` | `str` | Destination directory |
**Returns:** `str` — absolute path to the downloaded file or directory.
**Raises:** `ArtifactNotFoundError` if the artifact cannot be found in the store or the local cache.
#### Models as artifacts: `log_model()` and `load_model()`
`log_model()` serializes your model to disk and uploads every resulting file to the artifact store in one call. `load_model()` resolves the registered model version, downloads any missing files from the store, then deserializes the model — so it works correctly from any machine regardless of where training happened.
```python
# ── Training machine ──────────────────────────────────────────────────────────
with registry.start_run("bert-finetune-v3") as run:
    # train...
    registry.log_model(model, artifact_path="model", framework="pytorch")
    registry.register_model("sentiment-bert", run_id=run.id)
    registry.set_model_stage("sentiment-bert", version=3, stage="production")
# ── Any machine (CI, inference server, colleague's laptop) ───────────────────
# Model files are downloaded automatically from the artifact store if not cached
model = registry.load_model("sentiment-bert", stage="production")
```
**`log_model()` parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` | any | required | Model object (PyTorch, TensorFlow, sklearn, HuggingFace, or any picklable object) |
| `artifact_path` | `str` | `"model"` | Sub-path inside the artifact store |
| `framework` | `str` | auto-detected | `"pytorch"`, `"tensorflow"`, `"sklearn"`, `"huggingface"`, or `"pickle"` |
| `metadata` | `dict` | `None` | Arbitrary key-value metadata stored as run params |
**`load_model()` parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model_name` | `str` | required | Registered model name |
| `version` | `int` | `None` | Specific version to load. Mutually exclusive with `stage` |
| `stage` | `str` | `None` | Stage to load from: `"development"`, `"staging"`, `"production"`, `"archived"` |
| `framework` | `str` | from run params | Override framework for deserialization |
#### Viewing artifacts in the dashboard
Every artifact logged with `log_artifact()` or `log_model()` appears automatically in the **Artifacts tab** of the run's detail page in the Podstack dashboard. No extra steps are needed — the tab populates from the same store the SDK writes to.
The Artifacts tab shows:
| Column | Description |
|--------|-------------|
| **Path** | The relative artifact path as logged (e.g. `model/model.pkl`, `plots/curves.png`) |
| **Type badge** | File extension, color-coded by category — model weights, data files, images, configs, etc. |
| **Size** | Formatted file size (B / KB / MB) |
| **Uploaded** | Timestamp of when the file was stored |
| **Download** | One-click download button — opens a short-lived direct download link in the browser |
A footer below the list shows the combined size of all artifacts for the run.
```python
# Everything logged here shows up in the dashboard Artifacts tab
with registry.start_run("bert-finetune-v3") as run:
    registry.log_params({"lr": 2e-5, "epochs": 3})
    registry.log_metrics({"accuracy": 0.93})

    # These all appear as separate rows in the Artifacts tab
    registry.log_artifact("confusion_matrix.png", artifact_path="plots/confusion_matrix.png")
    registry.log_artifact("feature_importance.csv", artifact_path="analysis/features.csv")
    registry.log_model(model, artifact_path="model", framework="pytorch")
    # ↳ each model file (model.pkl, config.json, etc.) appears as its own row
```
#### Access control
Artifact upload and download URLs are issued by the registry API and require a valid API key and project membership. The URLs are short-lived, ensuring that access always reflects the current state of your project — a revoked key can no longer generate new URLs. Any member of a project can upload and download artifacts for runs within that project.
### List and Browse
```python
from podstack import registry
# List experiments
experiments = registry.list_experiments()
# List models
models = registry.list_models()
# List artifacts for a specific run
artifacts = registry.list_artifacts(run_id)
# Download a specific artifact to a local directory
dest = registry.download_artifact("run-id", "model/model.pt", "./downloads/")
print(f"Saved to: {dest}")
```
## GPU Runner - Direct Code Execution
For running code strings directly on GPUs without decorators:
```python
import podstack
podstack.init(api_key="your-api-key", project_id="your-project-id")
# Run code on a remote GPU
result = podstack.run_on_gpu('''
import torch
print(f"GPU: {torch.cuda.get_device_name(0)}")
print(f"Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB")
''', gpu="L40S")
print(result.output)
print(f"Success: {result.success}")
print(f"Duration: {result.duration_seconds}s")
```
## Client API
For direct API access to notebooks and executions:
```python
from podstack import Client
client = Client(api_key="your-api-key")
# Create a notebook
notebook = client.sync_create_notebook(name="experiment", gpu_type="L40S")
print(f"JupyterLab: {notebook.jupyter_url}")
# Run code
result = client.sync_run("print('Hello GPU!')", gpu_type="L40S")
print(result.output)
```
## Error Handling
```python
from podstack import (
    PodstackError,
    AuthenticationError,
    GPUNotAvailableError,
    RateLimitError,
    ExecutionTimeoutError
)

try:
    result = train()
except AuthenticationError:
    print("Invalid API key")
except GPUNotAvailableError as e:
    print(f"GPU not available: {e}")
except RateLimitError as e:
    print(f"Rate limited, retry after {e.retry_after}s")
except ExecutionTimeoutError as e:
    print(f"Execution timed out: {e.execution_id}")
except PodstackError as e:
    print(f"Error: {e.message}")
```
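The `retry_after` attribute on `RateLimitError` lends itself to a simple retry loop. A generic sketch of that pattern (the helper and the stand-in exception class are illustrative, not part of the SDK; with the real SDK you would pass `podstack.RateLimitError`):

```python
import time

class FakeRateLimitError(Exception):
    """Stand-in for podstack.RateLimitError, which carries retry_after."""
    def __init__(self, retry_after: float):
        super().__init__(f"rate limited, retry after {retry_after}s")
        self.retry_after = retry_after

def call_with_retry(fn, max_attempts=3,
                    rate_limit_error=FakeRateLimitError, sleep=time.sleep):
    """Retry fn() on rate-limit errors, honoring the server-provided delay."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except rate_limit_error as e:
            if attempt == max_attempts - 1:
                raise  # out of attempts — propagate the last error
            sleep(e.retry_after)
```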
## Configuration
```python
import podstack
# Option 1: Initialize explicitly
podstack.init(
    api_key="your-api-key",
    project_id="your-project-id",
    api_url="https://api.podstack.ai/v1",          # optional
    registry_url="https://registry.podstack.ai"    # optional
)
# Option 2: Environment variables
# PODSTACK_API_KEY=your-api-key
# PODSTACK_PROJECT_ID=your-project-id
# PODSTACK_API_URL=https://api.podstack.ai/v1
# PODSTACK_REGISTRY_URL=https://registry.podstack.ai
# Option 3: Auto-init (set PODSTACK_AUTO_INIT=1)
# SDK auto-initializes from env vars at import time
```
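The three options can be combined. A sketch of how explicit arguments might be merged with the `PODSTACK_*` environment variables (the precedence order shown, explicit kwargs over environment over defaults, is an assumption of this sketch, not a documented guarantee):

```python
import os

def resolve_config(api_key=None, project_id=None):
    """Resolve SDK settings: explicit arguments first, then the environment."""
    return {
        "api_key": api_key or os.environ.get("PODSTACK_API_KEY"),
        "project_id": project_id or os.environ.get("PODSTACK_PROJECT_ID"),
        "api_url": os.environ.get("PODSTACK_API_URL", "https://api.podstack.ai/v1"),
        "registry_url": os.environ.get("PODSTACK_REGISTRY_URL",
                                       "https://registry.podstack.ai"),
    }
```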
## License
MIT License - see LICENSE for details.
| text/markdown | null | Podstack <support@podstack.ai> | null | null | null | gpu, notebook, machine-learning, deep-learning, cloud, jupyter | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Languag... | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.24.0",
"requests>=2.28.0",
"torch; extra == \"torch\"",
"tensorflow; extra == \"tensorflow\"",
"scikit-learn; extra == \"sklearn\"",
"transformers; extra == \"huggingface\"",
"safetensors; extra == \"huggingface\"",
"torch; extra == \"all\"",
"tensorflow; extra == \"all\"",
"scikit-learn... | [] | [] | [] | [
"Homepage, https://podstack.ai",
"Documentation, https://docs.podstack.ai",
"Repository, https://github.com/podstack/podstack-python",
"Issues, https://github.com/podstack/podstack-python/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-18T18:29:57.464182 | podstack-1.3.21.tar.gz | 89,003 | 75/0c/76f235c22e9e84d6fb66d1a17d4522225a85404640c49b15bc3b549465ca/podstack-1.3.21.tar.gz | source | sdist | null | false | 0e4afa1174eaa0cccfcc9e42eb2af97a | 7d6312f60b1f466aa06420a6052b297f2799a94a5ed1fde26e9aad85e158cab3 | 750c76f235c22e9e84d6fb66d1a17d4522225a85404640c49b15bc3b549465ca | MIT | [
"LICENSE"
] | 235 |
2.4 | pisek | 2.3.0 | Tool for developing tasks for programming competitions. | # Pisek ⏳
Tool for developing tasks for programming competitions.
## Installation
Pisek requires Python 3.12 or newer. You can install it with pip:
```bash
pip install pisek
```
## Usage
You can create a task skeleton with:
```bash
pisek init
```
Test the task in the current directory with:
```bash
pisek test
```
## Docs
See our [user documentation](https://piskoviste.github.io/pisek/) for more details.
| text/markdown | null | Václav Volhejn <vaclav.volhejn@gmail.com>, Jiří Beneš <mail@jiribenes.com>, Michal Töpfer <michal.topfer@gmail.com>, Jiri Kalvoda <jirikalvoda@kam.mff.cuni.cz>, Daniel Skýpala <skipy@kam.mff.cuni.cz>, Benjamin Swart <benjaminswart@email.cz>, Antonín Maloň <git@tonyl.eu> | null | Daniel Skýpala <skipy@kam.mff.cuni.cz> | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"argcomplete",
"colorama",
"pydantic",
"readchar",
"black; extra == \"dev\"",
"commitizen; extra == \"dev\"",
"mypy; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://piskoviste.github.io/pisek/",
"Issues, https://github.com/piskoviste/pisek/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:28:45.505944 | pisek-2.3.0.tar.gz | 143,744 | 9a/e5/58d64bc4e8ad5b5ec49786f97b22d3e9d116d0a557f9169387fd2e65a47b/pisek-2.3.0.tar.gz | source | sdist | null | false | a89e191409fffe981bffee40f8fc5025 | 1edff5ee1b760daa610b374d1f73fee8a74a167a664885fa576599fd8c32d14b | 9ae558d64bc4e8ad5b5ec49786f97b22d3e9d116d0a557f9169387fd2e65a47b | GPL-3.0-or-later | [
"LICENSE"
] | 242 |
2.4 | smcore | 1.1.2 | package providing core agent classes | # Core-Py
## Example
### Prerequisites
#### Install smcore
```bash
pip install smcore
```
#### Get the address of a blackboard to talk to
If you don't have an address already (something like `bb.myhost.com:8080` or
`bb.host.com`), you can install the core tool and run your own locally.
```bash
go install gitlab.com/hoffman-lab/core@latest
core start server
```
The default address when running locally is `localhost:8080`.
This is great for debugging and getting started. The `core` tool also has
other good stuff.
### Use an agent to post data
```python
import asyncio

from smcore import agent
from smcore import hardcore

bb_addr = "localhost:8080"
n_messages = 10

async def main():
    bb = hardcore.HTTPTransit(bb_addr)
    a = agent.Agent(bb)
    for i in range(n_messages):
        metadata = b"hello"
        data = b"world"
        tags = ["upload-test", str(i)]
        await a.post(metadata, data, tags)

if __name__ == "__main__":
    asyncio.run(main())
```
### Use an agent to listen for data
```python
import asyncio

from smcore import agent
from smcore import hardcore

bb_addr = "localhost:8080"

async def main():
    bb = hardcore.HTTPTransit(bb_addr)
    a = agent.Agent(bb)

    # Listen for messages matching the listed tags
    in_queue = a.listen_for(["important", "segmentation"])

    # Listening is an active process and must be started.
    # Although listen_for can be called after start, it is
    # best practice to make all calls in advance.
    task = a.start()

    while True:
        post = await in_queue.get()
        await a.reply([post], None, None, ["received!"])

    # The started coroutine for listening can be cancelled normally:
    # task.cancel()

if __name__ == "__main__":
    asyncio.run(main())
```
### (De)serialization
Serialization is the process of converting abstract structures into
binary-encodable formats for saving and sharing. A common serialization
format is what allows agents to communicate.
```python
import numpy as np

from smcore import serialize, deserialize

data = serialize.file("path/to/file.ext")
deserialize.file(data, "path/to/new/file.ext")

data = serialize.numpy(np.random.random((512, 512, 1)))
array = deserialize.numpy(data)
```
The `serialize/deserialize` modules give you some basics to get started.
These allow you to easily set data and metadata in your posts:
```python
from smcore import hardcore, serialize

bb = hardcore.HTTPTransit(bb_addr)
post = bb.message_post()

fp_to_upload = "data/for/sharing/file.ext"
post.set_data(serialize.file(fp_to_upload))
post.set_metadata(serialize.dictionary({"path": fp_to_upload}))
```
You can decode it in other agents using the paired `deserialize` function:
```python
post = await incoming.get()
finfo = deserialize.dictionary(post.metadata())
deserialize.file(post.data(), finfo["path"])
```
## Motivation
The main [core](https://gitlab.com/hoffman-lab/core) repo is getting a little unwieldy. CI/CD and organization could
benefit from giving the Python API for Core a little breathing room.
## Short term goals
1. Get the Python API updated to utilize the hardcore protocol
2. Debug the message stoppage issue @mattbrown7 has identified
3. Discuss, enumerate, and begin prototyping key tests to validate that the Core
API fulfills its contract
## Long term goals
- Resilient CI/CD that runs tests in real world conditions
- Shared maintenance of the Python API with other developers
| text/markdown | null | "J. Hoffman, J. Genender" <johnmarianhoffman@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohttp",
"vyper-config",
"numpy"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/hoffman-lab/core-py",
"Bug Tracker, https://gitlab.com/hoffman-lab/core-py/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T18:27:33.542652 | smcore-1.1.2.tar.gz | 17,907 | ac/b6/1a8ee8a9d8fd0ea146f1a332617f461b43c7caf131c52479c996d7aeb778/smcore-1.1.2.tar.gz | source | sdist | null | false | 378049c0cc8e36217601baef3377e509 | 10be90b866f0d5d31530e77a320af51a7f7f1821597db640f626172b7670fd93 | acb61a8ee8a9d8fd0ea146f1a332617f461b43c7caf131c52479c996d7aeb778 | null | [
"LICENSE"
] | 253 |
2.4 | protoc-gen-validate | 1.3.3 | PGV for python via just-in-time code generation | # Protoc-gen-validate (PGV)
While protocol buffers effectively guarantee the types of structured data,
they cannot enforce semantic rules for values. This package is a Python implementation
of [protoc-gen-validate][pgv-home], which allows for runtime validation of various
semantic assertions expressed as annotations on the protobuf schema. The syntax for all available annotations is
in `validate.proto`. Implemented Python annotations are listed in the [rules comparison][rules-comparison].
### Example
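For context, a `Person` message producing the errors below would carry annotations along these lines (a sketch based on the upstream protoc-gen-validate examples; field numbers and the nested `Location` message are illustrative, not this package's exact test schema):

```proto
syntax = "proto3";

import "validate/validate.proto";

message Person {
  uint64 id = 1 [(validate.rules).uint64.gt = 999];
  string email = 2 [(validate.rules).string.email = true];
  string name = 3 [(validate.rules).string.pattern = "^[A-Za-z]+( [A-Za-z]+)*$"];
  Location home = 4 [(validate.rules).message.required = true];

  message Location {
    double lat = 5 [(validate.rules).double = {gte: -90, lte: 90}];
    double lng = 6 [(validate.rules).double = {gte: -180, lte: 180}];
  }
}
```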
```python
from entities_pb2 import Person
from protoc_gen_validate.validator import validate, ValidationFailed, validate_all

p = Person(name="Foo")

try:
    validate(p)
except ValidationFailed as err:
    print(err)  # p.id is not greater than 999

try:
    validate_all(p)
except ValidationFailed as err:
    print(err)
    # p.id is not greater than 999
    # p.email is not a valid email
    # p.name pattern does not match ^[A-Za-z]+( [A-Za-z]+)*$
    # home is required.
```
[pgv-home]: https://github.com/envoyproxy/protoc-gen-validate
[rules-comparison]: https://github.com/envoyproxy/protoc-gen-validate/blob/main/rule_comparison.md
| text/markdown | Buf | dev@buf.build | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/bufbuild/protoc-gen-validate | null | >=3.10 | [] | [] | [] | [
"validate-email>=1.3",
"Jinja2>=2.11.1",
"protobuf>=5.27.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-18T18:27:21.939204 | protoc_gen_validate-1.3.3.tar.gz | 18,152 | 8e/fc/24e9345f017d3e9d7fe2a9faa89fc973db7b3ccaecd54815cb6ee46c49d5/protoc_gen_validate-1.3.3.tar.gz | source | sdist | null | false | 2d15fe627bff7240939d5ee4c5597e53 | 76c1cea6c2b1290fc2a266b7d19f44cdcc1dc496236b760db1a50e1672d6c978 | 8efc24e9345f017d3e9d7fe2a9faa89fc973db7b3ccaecd54815cb6ee46c49d5 | null | [
"LICENSE"
] | 4,554 |
2.4 | llama-index-llms-bedrock-converse | 0.12.11 | llama-index llms bedrock converse integration | # LlamaIndex Llms Integration: Bedrock Converse
### Installation
```bash
pip install llama-index-llms-bedrock-converse
pip install llama-index
```
### Usage
```py
from llama_index.llms.bedrock_converse import BedrockConverse
# Set your AWS profile name
profile_name = "Your aws profile name"
# Simple completion call
resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).complete("Paul Graham is ")
print(resp)
```
### Call chat with a list of messages
```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).chat(messages)
print(resp)
```
### Streaming
```py
# Using stream_complete endpoint
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Using stream_chat endpoint
from llama_index.core.llms import ChatMessage

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
### Configure Model
```py
from llama_index.llms.bedrock_converse import BedrockConverse
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)
resp = llm.complete("Paul Graham is ")
print(resp)
```
### Connect to Bedrock with Access Keys
```py
from llama_index.llms.bedrock_converse import BedrockConverse
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, eg. us-east-1",
)
resp = llm.complete("Paul Graham is ")
print(resp)
```
### Use an Application Inference Profile
AWS Bedrock supports Application Inference Profiles, which act as provisioned proxies to Bedrock LLMs.
Since these profile ARNs are account-specific, they must be handled specially in BedrockConverse.
When an application inference profile is created as an AWS resource, it references an existing Bedrock foundation model or a cross-region inference profile. The referenced model must be provided to the BedrockConverse initializer as the `model` argument, and the ARN of the application inference profile must be provided as the `application_inference_profile_arn` argument.
**Important:** BedrockConverse does not validate that the `model` argument in fact matches the underlying model referenced by the application inference profile provided. The caller is responsible for making sure they match. Behavior when they do not match is undefined.
```py
# Assumes the existence of a provisioned application inference profile
# that references a foundation model or cross-region inference profile.
from llama_index.llms.bedrock_converse import BedrockConverse
# Instantiate the BedrockConverse model
# with the model and application inference profile
# Make sure the model is the one that the
# application inference profile refers to in AWS
llm = BedrockConverse(
    model="us.anthropic.claude-3-5-sonnet-20240620-v1:0",  # this is the referenced model/profile
    application_inference_profile_arn="arn:aws:bedrock:us-east-1:012345678901:application-inference-profile/fake-profile-name",
)
```
### Function Calling
```py
# Claude, Command, and Mistral Large models support native function calling through AWS Bedrock Converse.
# There is seamless integration with LlamaIndex tools through the predict_and_call function on the LLM.
from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.core.tools import FunctionTool

# Define some functions
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result"""
    return a * b

def mystery(a: int, b: int) -> int:
    """Mystery function on two integers."""
    return a * b + a + b

# Create tools from functions
mystery_tool = FunctionTool.from_defaults(fn=mystery)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Instantiate the BedrockConverse model
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)

# Use function tools with the LLM
response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg="What happens if I run the mystery function on 5 and 7",
)
print(str(response))

response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg=(
        """What happens if I run the mystery function on the following pairs of numbers?
Generate a separate result for each row:
- 1 and 2
- 8 and 4
- 100 and 20
NOTE: you need to run the mystery function for all of the pairs above at the same time"""
    ),
    allow_parallel_tool_calls=True,
)
print(str(response))

for s in response.sources:
    print(f"Name: {s.tool_name}, Input: {s.raw_input}, Output: {str(s)}")
```
### Async usage
```py
from llama_index.llms.bedrock_converse import BedrockConverse
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, eg. us-east-1",
)

# Use async complete
resp = await llm.acomplete("Paul Graham is ")
print(resp)
```
### Prompt Caching System and regular messages
You can cache normal and system messages by placing cache points strategically:
```py
from llama_index.core.llms import ChatMessage
from llama_index.core.base.llms.types import (
    TextBlock,
    CacheControl,
    CachePoint,
    MessageRole,
)
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)

# Cache expensive context but keep dynamic instructions uncached
cached_context = (
    """[Large context about company policies, knowledge base, etc...]"""
)
dynamic_instructions = (
    "Today's date is 2024-01-15. Focus on recent developments."
)
document_text = "[Long document]"

messages = [
    ChatMessage(
        role=MessageRole.SYSTEM,
        blocks=[
            TextBlock(text=cached_context),
            CachePoint(cache_control=CacheControl(type="default")),
            TextBlock(text=dynamic_instructions),
        ],
    ),
    ChatMessage(
        role=MessageRole.USER,
        blocks=[
            TextBlock(
                text=document_text,
                type="text",
            ),
            CachePoint(cache_control=CacheControl(type="default")),
            TextBlock(
                text="What's our current policy on remote work?",
                type="text",
            ),
        ],
    ),
]

response = llm.chat(messages)
```
### LLM Implementation example
https://docs.llamaindex.ai/en/stable/examples/llm/bedrock_converse/
| text/markdown | null | Your Name <you@example.com> | null | null | null | null | [] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"aioboto3<16,>=15.0.0",
"boto3<2,>=1.38.27",
"llama-index-core<0.15,>=0.14.5"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:27:20.879294 | llama_index_llms_bedrock_converse-0.12.11.tar.gz | 18,642 | 4b/b8/ab38f5bc28b2671508fd75e08eefc7dbab3ead298aeb69cb2136429e29f1/llama_index_llms_bedrock_converse-0.12.11.tar.gz | source | sdist | null | false | f1b04da1a02f3051bff54fbf509fe112 | 025b1eab52c2dc697bebe1021cb32928a5913f68fa42a68e7e77209f0aa04263 | 4bb8ab38f5bc28b2671508fd75e08eefc7dbab3ead298aeb69cb2136429e29f1 | MIT | [
"LICENSE"
] | 2,511 |
2.3 | gslides-api | 0.3.5 | A Python library for working with Google Slides API using Pydantic domain objects | # gslides-api
A Python library for working with Google Slides API using Pydantic domain objects.
## Overview
This library provides a Pythonic interface to the Google Slides API with:
- **Pydantic domain objects** that match the JSON structure returned by the Google Slides API
- **Type-safe operations** with full type hints support
- **Easy-to-use methods** for creating, reading, and manipulating Google Slides presentations
- **Comprehensive coverage** of Google Slides API features
## Installation
```bash
pip install gslides-api
```
## Quick Start
### Authentication
First, set up your Google API credentials. See [CREDENTIALS.md](docs/CREDENTIALS.md) for detailed instructions.
```python
from gslides_api import initialize_credentials
# Initialize with your credentials directory
initialize_credentials("/path/to/your/credentials/")
```
### Basic Usage
```python
from gslides_api import Presentation
# Load an existing presentation
presentation = Presentation.from_id("your-presentation-id")
# Create a new blank presentation
new_presentation = Presentation.create_blank("My New Presentation")
# Access slides
for slide in presentation.slides:
print(f"Slide ID: {slide.objectId}")
# Create a new slide
new_slide = presentation.add_slide()
```
## Features
- **Domain Objects**: Complete Pydantic models for all Google Slides API objects
- **Presentations**: Create, load, copy, and manipulate presentations
- **Slides**: Add, remove, duplicate, and reorder slides
- **Elements**: Work with text boxes, shapes, images, and other slide elements
- **Layouts**: Access and use slide layouts and masters
- **Requests**: Type-safe request builders for batch operations
- **Markdown Support**: Convert between Markdown and Google Slides content
- **MCP Server**: Expose Google Slides operations as tools for AI assistants
## MCP Server
gslides-api includes an MCP (Model Context Protocol) server that exposes Google Slides operations as tools for AI assistants like Claude.
### Installation
```bash
pip install gslides-api[mcp]
```
### Quick Start
```bash
# Set credentials path
export GSLIDES_CREDENTIALS_PATH=/path/to/credentials
# Run the MCP server
python -m gslides_api.mcp.server
```
### Available Tools
| Tool | Description |
|------|-------------|
| `get_presentation` | Get full presentation by URL or ID |
| `get_slide` | Get slide by name (speaker notes) |
| `get_element` | Get element by slide and element name |
| `get_slide_thumbnail` | Get slide thumbnail image |
| `read_element_markdown` | Read text element as markdown |
| `write_element_markdown` | Write markdown to text element |
| `replace_element_image` | Replace image from URL |
| `copy_slide` | Duplicate a slide |
| `move_slide` | Reorder slide position |
| `delete_slide` | Remove a slide |
### MCP Configuration
Add to your `.mcp.json`:
```json
{
"gslides": {
"type": "stdio",
"command": "python",
"args": ["-m", "gslides_api.mcp.server"]
}
}
```
The server reads credentials from the `GSLIDES_CREDENTIALS_PATH` environment variable; use `--credential-path` to override.
See [docs/MCP_SERVER.md](docs/MCP_SERVER.md) for detailed documentation.
## API Coverage
The library covers most Google Slides API functionality including:
- Presentations and slides management
- Text elements and formatting
- Shapes and images
- Tables and charts
- Page layouts and masters
- Batch update operations
## Requirements
- Python 3.11+
- Google API credentials (OAuth2 or Service Account)
## Dependencies
- `google-api-python-client` - Google API client library
- `google-auth-oauthlib` - OAuth2 authentication
- `pydantic` - Data validation and serialization
- `marko` - Markdown processing
- `protobuf` - Protocol buffer support
## Development
### Running Tests
```bash
pip install -e ".[test]"
pytest
```
### Code Formatting
```bash
pip install -e ".[dev]"
black gslides_api/
isort gslides_api/
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Related Projects
- [md2googleslides](https://github.com/ShoggothAI/md2googleslides) - TypeScript library for creating slides from Markdown
- [gslides](https://github.com/michael-gracie/gslides) - Python library focused on charts and tables
- [gslides-maker](https://github.com/vilmacio/gslides-maker) - Generate slides from Wikipedia content
## Acknowledgments
This library is built on top of the excellent Google API Python client and leverages the power of Pydantic for type-safe data handling.
| text/markdown | motley.ai | info@motley.ai | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"pydantic<3.0.0,>=2.11.7",
"google-auth<3.0.0,>=2.0.0",
"google-api-python-client<3.0.0,>=2.169.0",
"google-auth-oauthlib<2.0.0,>=1.2.2",
"marko<3.0.0,>=2.1.4",
"protobuf<7.0.0,>=6.31.1",
"requests<3.0.0,>=2.32.4",
"typeguard<5.0.0,>=4.4.4",
"pillow<11.0.0,>=10.0.0; extra == \"image\"",
"pandas<3.... | [] | [] | [] | [] | poetry/2.1.3 CPython/3.11.11 Linux/6.17.0-14-generic | 2026-02-18T18:26:32.391302 | gslides_api-0.3.5.tar.gz | 92,813 | c5/f1/07e8ec6e6c1a795abdb779e456fb860e08bf3362eaf87ebf3a31faf66b5b/gslides_api-0.3.5.tar.gz | source | sdist | null | false | b370e249d8e1ed417ba0b60e6a2603fd | 8e8547e56e196439f4ac968032a9387fc3c343bf796273e2e427a09f98c7fb2a | c5f107e8ec6e6c1a795abdb779e456fb860e08bf3362eaf87ebf3a31faf66b5b | null | [] | 245 |
2.4 | qcelemental | 0.50.0rc2 | Core data structures for Quantum Chemistry. | # QCElemental
[](https://github.com/MolSSI/QCElemental/actions?query=workflow%3ACI)
[](https://codecov.io/gh/MolSSI/QCElemental)
[](https://molssi.github.io/QCElemental/)
[](https://join.slack.com/t/qcarchive/shared_invite/zt-3calopudd-2rtUC~XN1tj1Zn9MHkV6GQ)

**Documentation:** [GitHub Pages](https://molssi.github.io/QCElemental/)
Core data structures for Quantum Chemistry. QCElemental also contains physical constants and periodic table data from NIST and molecule handlers.
Periodic Table and Physical Constants data are pulled from NIST srd144 and srd121, respectively ([details](raw_data/README.md)) in a renewable manner (class around NIST-published JSON file).
This project also contains a generator, validator, and translator for [Molecule QCSchema](https://molssi-qc-schema.readthedocs.io/en/latest/auto_topology.html).
## ✨ Getting Started
- Installation. QCElemental supports Python 3.10+ starting with v0.50 (aka "next", aka "QCSchema v2 available").
```sh
python -m pip install qcelemental
```
- To install QCElemental with molecule visualization capabilities (useful in iPython or Jupyter notebook environments):
```sh
python -m pip install 'qcelemental[viz]'
```
- To install QCElemental with various alignment capabilities using `networkx`
```sh
python -m pip install 'qcelemental[align]'
```
- Or install both:
```sh
python -m pip install 'qcelemental[viz,align]'
```
- See [documentation](https://molssi.github.io/QCElemental/)
### Periodic Table
A variety of periodic table quantities are available using virtually any alias:
```python
>>> import qcelemental as qcel
>>> qcel.periodictable.to_E('KRYPTON')
'Kr'
>>> qcel.periodictable.to_element(36)
'Krypton'
>>> qcel.periodictable.to_Z('kr84')
36
>>> qcel.periodictable.to_A('Kr')
84
>>> qcel.periodictable.to_A('D')
2
>>> qcel.periodictable.to_mass('kr', return_decimal=True)
Decimal('83.9114977282')
>>> qcel.periodictable.to_mass('kr84')
83.9114977282
>>> qcel.periodictable.to_mass('Kr86')
85.9106106269
```
### Physical Constants
Physical constants can be acquired directly from the [NIST CODATA](https://physics.nist.gov/cuu/Constants/Table/allascii.txt):
```python
>>> import qcelemental as qcel
>>> qcel.constants.Hartree_energy_in_eV
27.21138602
>>> qcel.constants.get('hartree ENERGY in ev')
27.21138602
>>> pc = qcel.constants.get('hartree ENERGY in ev', return_tuple=True)
>>> pc.label
'Hartree energy in eV'
>>> pc.data
Decimal('27.21138602')
>>> pc.units
'eV'
>>> pc.comment
'uncertainty=0.000 000 17'
```
Alternatively, with the use of the [Pint unit conversion package](https://pint.readthedocs.io/en/latest/), arbitrary
conversion factors can be obtained:
```python
>>> qcel.constants.conversion_factor("bohr", "miles")
3.2881547429884475e-14
```
### Covalent Radii
Covalent radii are accessible for most of the periodic table from [Alvarez, Dalton Transactions (2008) doi:10.1039/b801115j](https://doi.org/10.1039/b801115j) ([details](qcelemental/data/alvarez_2008_covalent_radii.py)).
```python
>>> import qcelemental as qcel
>>> qcel.covalentradii.get('I')
2.626719314386381
>>> qcel.covalentradii.get('I', units='angstrom')
1.39
>>> qcel.covalentradii.get(116)
Traceback (most recent call last):
...
qcelemental.exceptions.DataUnavailableError: ('covalent radius', 'Lv')
>>> qcel.covalentradii.get(116, missing=4.0)
4.0
>>> qcel.covalentradii.get('iodine', return_tuple=True).dict()
{'numeric': True, 'label': 'I', 'units': 'angstrom', 'data': Decimal('1.39'), 'comment': 'e.s.d.=3 n=451', 'doi': 'DOI: 10.1039/b801115j'}
```
### van der Waals Radii
Van der Waals radii are accessible for most of the periodic table from [Mantina, J. Phys. Chem. A (2009) doi: 10.1021/jp8111556](https://pubs.acs.org/doi/10.1021/jp8111556) ([details](qcelemental/data/mantina_2009_vanderwaals_radii.py)).
```python
>>> import qcelemental as qcel
>>> qcel.vdwradii.get('I')
3.7416577284064996
>>> qcel.vdwradii.get('I', units='angstrom')
1.98
>>> qcel.vdwradii.get(116)
Traceback (most recent call last):
...
qcelemental.exceptions.DataUnavailableError: ('vanderwaals radius', 'Lv')
>>> qcel.vdwradii.get('iodine', return_tuple=True).dict()
{'numeric': True, 'label': 'I', 'units': 'angstrom', 'data': Decimal('1.98'), 'doi': 'DOI: 10.1021/jp8111556'}
```
| text/markdown | null | The QCArchive Development Team <qcarchive@molssi.org> | null | null | BSD 3-Clause License
Copyright (c) 2018-2021, The Molecular Sciences Software Institute
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Lan... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy; python_version < \"3.14\"",
"numpy>=2.3.3; python_version >= \"3.14\"",
"pint>=0.24",
"pydantic>=2.12",
"msgpack; extra == \"standard\"",
"jsonschema; extra == \"standard\"",
"nglview; extra == \"viz\"",
"setuptools>=68.0.0; python_version >= \"3.12\" and extra == \"viz\"",
"ipykernel<6.0; e... | [] | [] | [] | [
"homepage, https://github.com/MolSSI/QCElemental",
"changelog, https://github.com/MolSSI/QCElemental/blob/master/docs/changelog.rst",
"documentation, https://molssi.github.io/QCElemental/",
"issues, https://github.com/MolSSI/QCElemental/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T18:26:05.299499 | qcelemental-0.50.0rc2.tar.gz | 7,503,105 | c7/75/2222f4d59670d186866eb3687f97d94d55aeebb7243de1d40d8565028a7b/qcelemental-0.50.0rc2.tar.gz | source | sdist | null | false | a5c1ba4851eca06be611f8d8ac688eee | 39be97b2a6eaea8d27af8f7eaed75ffde2e22a9ad605da69caa07a1b48636abd | c7752222f4d59670d186866eb3687f97d94d55aeebb7243de1d40d8565028a7b | null | [
"LICENSE"
] | 248 |
2.4 | pactown | 0.1.166 | Pactown Ecosystem Orchestrator - Build and manage decentralized microservice ecosystems from Markdown READMEs using markpact sandboxes and a centralized service registry. | 
# Pactown 🏘️
**Decentralized Service Ecosystem Orchestrator** – Build interconnected microservices from Markdown using [markpact](https://github.com/wronai/markpact).
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/wronai/pactown/stargazers)
[](https://github.com/wronai/pactown/network/members)
[](https://github.com/wronai/pactown/issues)
[](https://github.com/wronai/pactown/pulls)
[](https://github.com/wronai/pactown/actions)
[](https://github.com/psf/black)
[](http://mypy-lang.org/)
[](https://github.com/wronai/pactown)
[](https://github.com/wronai/pactown)
## Overview
Pactown enables you to compose multiple independent markpact projects into a unified, decentralized service ecosystem. Each service is defined in its own `README.md`, runs in its own sandbox, and communicates with other services through well-defined interfaces.
```
┌─────────────────────────────────────────────────────────────────┐
│ Pactown Ecosystem │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Web │───▶│ API │───▶│ Database │ │ CLI │ │
│ │ :8002 │ │ :8001 │ │ :8003 │ │ shell │ │
│ │ React │ │ FastAPI │ │ Postgres │ │ Python │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ markpact sandboxes (isolated) │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Features
### Core Features
- **🔗 Service Composition** – Combine multiple markpact READMEs into one ecosystem
- **📦 Local Registry** – Store and share markpact artifacts across projects
- **🔄 Dependency Resolution** – Automatic startup order based on service dependencies
- **🏥 Health Checks** – Monitor service health with configurable endpoints
- **🌐 Multi-Language** – Mix Python, Node.js, Go, Rust in one ecosystem
- **🔒 Isolated Sandboxes** – Each service runs in its own environment
- **🔌 Dynamic Ports** – Automatic port allocation when preferred ports are busy
- **🔍 Service Discovery** – Name-based service lookup, no hardcoded URLs
- **⚡ Config Generator** – Auto-generate config from folder of READMEs
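Dynamic port allocation generally relies on the OS handing out a free port when a socket binds to port 0; a minimal stdlib sketch of the technique (illustrative only, not pactown's actual `network.py` — a real allocator must also handle the race between probing a port and the service binding it):

```python
import socket

def allocate_port(preferred: int) -> int:
    """Return `preferred` if it is free, else an OS-assigned free port."""
    for candidate in (preferred, 0):  # 0 asks the OS for any free port
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", candidate))
                return s.getsockname()[1]
            except OSError:
                continue  # preferred port is busy; fall back to any free one
    raise RuntimeError("no free port available")

port = allocate_port(8001)
```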
### New in v0.4.0
- **⚡ Fast Start** – Dependency caching for millisecond startup times ([docs](docs/FAST_START.md))
- **🛡️ Security Policy** – Rate limiting, user profiles, anomaly logging ([docs](docs/SECURITY_POLICY.md))
- **👤 User Isolation** – Linux user-based sandbox isolation for multi-tenant SaaS ([docs](docs/USER_ISOLATION.md))
- **📊 Detailed Logging** – Structured logs with error capture ([docs](docs/LOGGING.md))
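A health check like the ones configured per service boils down to polling an HTTP endpoint until it answers with a 2xx status; a stdlib sketch of the idea (URL and timeout are illustrative, not pactown's orchestrator code):

```python
import urllib.request

def check_health(url: str, timeout: float = 2.0) -> bool:
    # Any 2xx response counts as healthy; connection errors and
    # timeouts count as unhealthy.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False
```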
---
## 📚 Documentation
### Quick Navigation
| Category | Documents |
|----------|-----------|
| **Getting Started** | [Quick Start](#quick-start) · [Installation](#installation) · [Commands](#commands) |
| **Core Concepts** | [Specification](docs/SPECIFICATION.md) · [Configuration](docs/CONFIGURATION.md) · [Network](docs/NETWORK.md) |
| **Deployment** | [Deployment Guide](docs/DEPLOYMENT.md) · [Quadlet/VPS](docs/QUADLET.md) · [Generator](docs/GENERATOR.md) |
| **Security** | [Security Policy](docs/SECURITY_POLICY.md) · [Quadlet Security](docs/SECURITY.md) · [User Isolation](docs/USER_ISOLATION.md) |
| **Performance** | [Fast Start](docs/FAST_START.md) · [Logging](docs/LOGGING.md) |
| **Comparisons** | [vs Cloudflare Workers](docs/CLOUDFLARE_WORKERS_COMPARISON.md) |
### All Documentation
| Document | Description |
|----------|-------------|
| [Specification](docs/SPECIFICATION.md) | Architecture and design |
| [Configuration](docs/CONFIGURATION.md) | YAML config reference |
| [Deployment](docs/DEPLOYMENT.md) | Production deployment guide (Compose/Kubernetes/Quadlet) |
| [Network](docs/NETWORK.md) | Dynamic ports & service discovery |
| [Generator](docs/GENERATOR.md) | Auto-generate configs |
| [Quadlet](docs/QUADLET.md) | Podman Quadlet deployment for VPS production |
| [Security](docs/SECURITY.md) | Quadlet security hardening and injection test suite |
| [Security Policy](docs/SECURITY_POLICY.md) | Rate limiting, user profiles, resource monitoring |
| [Fast Start](docs/FAST_START.md) | Dependency caching for fast startup |
| [User Isolation](docs/USER_ISOLATION.md) | Linux user-based sandbox isolation |
| [Logging](docs/LOGGING.md) | Structured logging and error capture |
| [Cloudflare Workers comparison](docs/CLOUDFLARE_WORKERS_COMPARISON.md) | When to use Pactown vs Cloudflare Workers |
### Source Code Reference
| Module | Description |
|--------|-------------|
| [`config.py`](src/pactown/config.py) | Configuration models |
| [`orchestrator.py`](src/pactown/orchestrator.py) | Service lifecycle management |
| [`resolver.py`](src/pactown/resolver.py) | Dependency resolution |
| [`network.py`](src/pactown/network.py) | Port allocation & discovery |
| [`generator.py`](src/pactown/generator.py) | Config file generator |
| [`service_runner.py`](src/pactown/service_runner.py) | High-level service runner API |
| [`security.py`](src/pactown/security.py) | Security policy & rate limiting |
| [`fast_start.py`](src/pactown/fast_start.py) | Dependency caching & fast startup |
| [`user_isolation.py`](src/pactown/user_isolation.py) | Linux user isolation for multi-tenant |
| [`sandbox_manager.py`](src/pactown/sandbox_manager.py) | Sandbox lifecycle management |
| [`registry/`](src/pactown/registry/) | Local artifact registry |
| [`deploy/`](src/pactown/deploy/) | Deployment backends (Docker, Podman, K8s, Quadlet) |
---
## 🎯 Examples
| Example | What it shows |
|---------|---------------|
| [`examples/saas-platform/`](examples/saas-platform/) | Complete SaaS with Web + API + Database + Gateway |
| [`examples/quadlet-vps/`](examples/quadlet-vps/) | VPS setup and Quadlet workflow |
| [`examples/email-llm-responder/`](examples/email-llm-responder/) | Email automation with LLM integration |
| [`examples/api-gateway-webhooks/`](examples/api-gateway-webhooks/) | API gateway / webhook handler |
| [`examples/realtime-notifications/`](examples/realtime-notifications/) | WebSocket + SSE real-time notifications |
| [`examples/microservices/`](examples/microservices/) | Multi-language microservices |
| [`examples/fast-start-demo/`](examples/fast-start-demo/) | **NEW:** Fast startup with dependency caching |
| [`examples/security-policy/`](examples/security-policy/) | **NEW:** Rate limiting and user profiles |
| [`examples/user-isolation/`](examples/user-isolation/) | **NEW:** Multi-tenant user isolation |
## Installation
```bash
pip install pactown
```
Or install from source:
```bash
git clone https://github.com/wronai/pactown
cd pactown
make install
```
## Quick Start
### 1. Create ecosystem configuration
```yaml
# saas.pactown.yaml
name: my-saas
version: 0.1.0
services:
  api:
    readme: services/api/README.md
    port: 8001
    health_check: /health
  web:
    readme: services/web/README.md
    port: 8002
    depends_on:
      - name: api
        endpoint: http://localhost:8001
```
### 2. Create service READMEs
Each service is a standard markpact README:
````markdown
# API Service
REST API for the application.
---
```python markpact:deps
fastapi
uvicorn
```
```python markpact:file path=main.py
from fastapi import FastAPI
app = FastAPI()
@app.get("/health")
def health():
    return {"status": "ok"}
```
```bash markpact:run
uvicorn main:app --port ${MARKPACT_PORT:-8001}
```
````
### 3. Start the ecosystem
```bash
pactown up saas.pactown.yaml
```
```bash
INFO: 127.0.0.1:57432 - "GET /health HTTP/1.1" 200 OK
INFO: 127.0.0.1:59272 - "GET /health HTTP/1.1" 200 OK
127.0.0.1 - - [15/Jan/2026 14:15:17] "GET /health HTTP/1.1" 200 -
INFO: 127.0.0.1:59300 - "GET /health HTTP/1.1" 200 OK
Ecosystem: saas-platform
┏━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Service ┃ Port ┃ Status ┃ PID ┃ Health ┃
┡━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━┩
│ database │ 10000 │ 🟢 Running │ 534102 │ ✓ 22ms │
│ api │ 10001 │ 🟢 Running │ 534419 │ ✓ 23ms │
│ web │ 10002 │ 🟢 Running │ 534424 │ ✓ 29ms │
│ cli │ 10003 │ 🔴 Stopped │ 534734 │ Process died │
│ gateway │ 10004 │ 🟢 Running │ 535242 │ ✓ 23ms │
└──────────┴───────┴────────────┴────────┴──────────────┘
Press Ctrl+C to stop all services
127.0.0.1 - - [15/Jan/2026 14:15:29] "GET / HTTP/1.1" 200 -
INFO: 127.0.0.1:42964 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:53998 - "GET /health HTTP/1.1" 200 OK
INFO: 127.0.0.1:54008 - "GET /api/stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:36100 - "GET /records/users HTTP/1.1" 200 OK
INFO: 127.0.0.1:54012 - "GET /api/users HTTP/1.1" 200 OK
```
## Commands
| Command | Description |
|---------|-------------|
| `pactown up <config>` | Start all services |
| `pactown down <config>` | Stop all services |
| `pactown status <config>` | Show service status |
| `pactown validate <config>` | Validate configuration |
| `pactown graph <config>` | Show dependency graph |
| `pactown init` | Initialize new ecosystem |
| `pactown publish <config>` | Publish to registry |
| `pactown pull <config>` | Pull dependencies |
## Registry
Pactown includes a local registry for sharing markpact artifacts using **JSON-based storage** (no database required):
```bash
# Start registry
pactown-registry --port 8800
# Publish artifact
pactown publish saas.pactown.yaml --registry http://localhost:8800
# Pull dependencies
pactown pull saas.pactown.yaml --registry http://localhost:8800
```
### Storage Architecture
The registry uses **file-based JSON storage** instead of a database:
- **Location**: `.pactown-registry/index.json` in your project directory
- **Format**: Single JSON file containing all artifacts and versions
- **Content**: Full README content, metadata, checksums, and version history
- **Benefits**: No database setup, portable, version-controllable
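Because the index is a single JSON file, it can be inspected with nothing but the standard library. A minimal sketch — the `artifacts`/`versions` key layout here is a guess for illustration; check your pactown version for the actual `index.json` schema:

```python
import json
from pathlib import Path


def list_artifacts(registry_root: str = ".pactown-registry") -> dict:
    """Return {artifact_name: [versions]} from the registry's index.json.

    The 'artifacts' -> 'versions' key layout is hypothetical; adjust the
    keys to match the schema your pactown version actually writes.
    """
    index_path = Path(registry_root) / "index.json"
    if not index_path.exists():
        return {}
    index = json.loads(index_path.read_text())
    return {
        name: sorted(entry.get("versions", {}))
        for name, entry in index.get("artifacts", {}).items()
    }
```

Since the file is plain JSON, it also diffs cleanly under version control, which is what makes the registry portable.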
### Registry API
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/v1/artifacts` | GET | List artifacts |
| `/v1/artifacts/{ns}/{name}` | GET | Get artifact info |
| `/v1/artifacts/{ns}/{name}/{version}/readme` | GET | Get README content |
| `/v1/publish` | POST | Publish artifact |
## Configuration Reference
```yaml
name: ecosystem-name # Required: ecosystem name
version: 0.1.0 # Semantic version
description: "" # Optional description
base_port: 8000 # Starting port for auto-assignment
sandbox_root: ./.pactown-sandboxes # Sandbox directory
registry:
  url: http://localhost:8800
  namespace: default

services:
  service-name:
    readme: path/to/README.md    # Path to markpact README
    port: 8001                   # Service port
    health_check: /health        # Health check endpoint
    timeout: 60                  # Startup timeout (seconds)
    replicas: 1                  # Number of instances
    auto_restart: true           # Restart on failure
    env:                         # Environment variables
      KEY: value
    depends_on:                  # Dependencies
      - name: other-service
        endpoint: http://localhost:8000
        env_var: OTHER_SERVICE_URL
```
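When a service declares a dependency with `env_var`, the config above suggests the orchestrator exposes the dependency's endpoint under that environment variable. A minimal sketch of consuming it from service code — the helper names are illustrative, and the injection behavior is an assumption based on the config fields, not a documented pactown API:

```python
import os


def dependency_url(var: str = "OTHER_SERVICE_URL",
                   default: str = "http://localhost:8000") -> str:
    """Resolve a dependency endpoint injected via the env_var setting.

    `var` matches the `env_var` key under depends_on; `default` mirrors the
    configured endpoint as a fallback for running the service standalone.
    """
    return os.environ.get(var, default).rstrip("/")


def api_health_url() -> str:
    # Dependency health-check URL, built from the injected endpoint
    return dependency_url() + "/health"
```

This keeps the service decoupled from hard-coded ports: the same code works under the orchestrator (which assigns ports dynamically) and when run by hand.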
## Examples
See the `examples/` directory for complete ecosystem examples:
- **SaaS Platform** – Web + API + Database + CLI
- **Microservices** – Multiple language services
- **Event-Driven** – Services with message queues
## Architecture
```bash
pactown/
├── src/pactown/
│ ├── __init__.py # Package exports
│ ├── cli.py # CLI commands
│ ├── config.py # Configuration models
│ ├── orchestrator.py # Service orchestration
│ ├── resolver.py # Dependency resolution
│ ├── sandbox_manager.py # Sandbox management
│ └── registry/
│ ├── __init__.py
│ ├── server.py # Registry API server
│ ├── client.py # Registry client
│ └── models.py # Data models
├── examples/
│ ├── saas-platform/ # Complete SaaS example
│ └── microservices/ # Microservices example
├── tests/
├── Makefile
├── pyproject.toml
└── README.md
```
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
## Author
Created by **Tom Sapletta** - [tom@sapletta.com](mailto:tom@sapletta.com)
| text/markdown | null | Tom Sapletta <tom@sapletta.com> | null | null | null | decentralized, ecosystem, markdown, markpact, microservices, orchestrator, sandbox | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"fastapi>=0.100.0",
"httpx>=0.24.0",
"markpact>=0.1.18",
"nfo>=0.1.17",
"pydantic>=2.0",
"python-dotenv>=1.0",
"pyyaml>=6.0",
"rich>=13.0",
"uvicorn>=0.20.0",
"watchfiles>=0.20.0",
"lolm>=0.1.6; extra == \"all\"",
"build; extra == \"dev\"",
"bump2version>=1.0; extra == \"dev\... | [] | [] | [] | [
"Homepage, https://github.com/wronai/pactown",
"Repository, https://github.com/wronai/pactown",
"Issues, https://github.com/wronai/pactown/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T18:26:04.536169 | pactown-0.1.166.tar.gz | 439,537 | 64/58/a46528850b793e42ef58174812c4f8c61d7bb175c58e6dafd56a51be7d0e/pactown-0.1.166.tar.gz | source | sdist | null | false | ff2df75cc571d9a18a8e5cf3d11464c2 | ace7b7085d7e5da1b0845f08a7d53da0c4c174b0fcf8fad31bd9d0b14685a893 | 6458a46528850b793e42ef58174812c4f8c61d7bb175c58e6dafd56a51be7d0e | Apache-2.0 | [
"LICENSE"
] | 240 |
2.1 | vv-llm | 0.3.73 | Universal LLM interfaces for multi-provider chat and utilities | # vv-llm
Universal LLM interface layer for Python. One API, 16 backends, sync & async.
```
pip install vv-llm
```
## Supported Backends
OpenAI | Anthropic | DeepSeek | Gemini | Qwen | Groq | Mistral | Moonshot | MiniMax | Yi | ZhiPuAI | Baichuan | StepFun | xAI | Ernie | Local
Also supports Azure OpenAI, Vertex AI, and AWS Bedrock deployments.
## Quick Start
### Configure
```python
from vv_llm.settings import settings
settings.load({
    "VERSION": "2",
    "endpoints": [
        {
            "id": "openai-default",
            "api_base": "https://api.openai.com/v1",
            "api_key": "sk-...",
        }
    ],
    "backends": {
        "openai": {
            "models": {
                "gpt-4o": {
                    "id": "gpt-4o",
                    "endpoints": ["openai-default"],
                }
            }
        }
    }
})
```
### Sync
```python
from vv_llm.chat_clients import create_chat_client, BackendType
client = create_chat_client(BackendType.OpenAI, model="gpt-4o")
resp = client.create_completion([
    {"role": "user", "content": "Explain RAG in one sentence"}
])
print(resp.content)
```
### Streaming
```python
for chunk in client.create_stream([
    {"role": "user", "content": "Write a haiku"}
]):
    if chunk.content:
        print(chunk.content, end="")
```
### Async
```python
import asyncio
from vv_llm.chat_clients import create_async_chat_client, BackendType
async def main():
    client = create_async_chat_client(BackendType.OpenAI, model="gpt-4o")
    resp = await client.create_completion([
        {"role": "user", "content": "hello"}
    ])
    print(resp.content)

asyncio.run(main())
```
## Features
- **Unified interface** — same `create_completion` / `create_stream` API across all providers
- **Type-safe factory** — `create_chat_client(BackendType.X)` returns the correct client type
- **Multi-endpoint** — configure multiple endpoints per backend with random selection and failover
- **Tool calling** — normalized tool/function calling across providers
- **Multimodal** — text + image inputs where supported
- **Thinking/reasoning** — access chain-of-thought from Claude, DeepSeek Reasoner, etc.
- **Token counting** — per-model tokenizers (tiktoken, deepseek-tokenizer, qwen-tokenizer)
- **Rate limiting** — RPM/TPM controls with memory, Redis, or DiskCache backends
- **Context length control** — automatic message truncation to fit model limits
- **Prompt caching** — Anthropic prompt caching support
- **Retry with backoff** — configurable retry logic for transient failures
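The retry-with-backoff behavior listed above can be pictured with a generic sketch. This illustrates the technique only — it is not vv-llm's actual implementation, whose configurable retry logic lives in its utilities package:

```python
import random
import time


def retry_with_backoff(fn, retries=3, base_delay=0.5, retriable=(TimeoutError,)):
    """Call fn(), retrying transient failures with exponential backoff.

    Generic illustration of the pattern: delays grow as base_delay * 2**attempt
    (0.5s, 1s, 2s, ...) with up to 10% random jitter to avoid thundering herds.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except retriable:
            if attempt == retries:
                raise  # out of retries: surface the last error
            delay = base_delay * (2 ** attempt)
            time.sleep(delay * (1 + 0.1 * random.random()))
```

In practice a client library would also classify provider-specific errors (rate limits, 5xx responses) as retriable, not just timeouts.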
## Utilities
```python
from vv_llm.chat_clients import format_messages, get_token_counts, get_message_token_counts
```
| Function | Description |
|---|---|
| `format_messages` | Normalize multimodal/tool messages across formats |
| `get_token_counts` | Count tokens for a text string |
| `get_message_token_counts` | Count tokens for a message list |
## Optional Dependencies
```bash
pip install 'vv-llm[redis]' # Redis rate limiting
pip install 'vv-llm[diskcache]' # DiskCache rate limiting
pip install 'vv-llm[server]' # FastAPI token server
pip install 'vv-llm[vertex]' # Google Vertex AI
pip install 'vv-llm[bedrock]' # AWS Bedrock
```
## Project Structure
```
src/vv_llm/
chat_clients/ # Per-backend clients + factory
settings/ # Configuration management
types/ # Type definitions & enums
utilities/ # Rate limiting, retry, media processing, token counting
server/ # Optional token counting server
tests/unit/ # Unit tests
tests/live/ # Live integration tests (requires real API keys)
```
## Development
```bash
pdm install -d # Install dev dependencies
pdm run lint # Ruff linter
pdm run format-check # Ruff format check
pdm run type-check # Ty type checker
pdm run test # Unit tests
pdm run test-live # Live tests (needs real endpoints)
```
## License
MIT
| text/markdown | null | Anderson <andersonby@163.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"openai>=1.99.3",
"tiktoken>=0.7.0",
"httpx>=0.27.0",
"anthropic>=0.47.1",
"pydantic>=2.8.2",
"Pillow>=10.4.0",
"deepseek-tokenizer>=0.1.0",
"qwen-tokenizer>=0.2.0",
"boto3>=1.28.57; extra == \"bedrock\"",
"botocore>=1.31.57; extra == \"bedrock\"",
"diskcache; extra == \"diskcache\"",
"redis; ... | [] | [] | [] | [] | pdm/2.26.6 CPython/3.12.0 Windows/11 | 2026-02-18T18:25:19.872204 | vv_llm-0.3.73.tar.gz | 58,539 | da/9c/44fe5f7688dcdf5c6cb37a32e5fbbbe70d0dd222a88870a765f71badfb12/vv_llm-0.3.73.tar.gz | source | sdist | null | false | 77cd06e3a07f09e429989ba095ea41f2 | 63057fdeac46086bda9bf3098b9d26e9160dc452d629c45fcb1c21ab31b6733f | da9c44fe5f7688dcdf5c6cb37a32e5fbbbe70d0dd222a88870a765f71badfb12 | null | [] | 339 |
2.4 | civics-cdf-validator | 1.57.dev1 | Checks if an election feed follows best practices | civics_cdf_validator is a script that checks if an election data feed follows best practices and outputs errors, warnings and info messages for common issues.
| null | Google Civics | election-results-xml-validator@google.com | gVelocity Civics | election-results-xml-validator@google.com | Apache License | null | [] | [] | https://github.com/google/civics_cdf_validator | null | null | [] | [] | [] | [
"lxml>=3.3.4",
"language-tags>=0.4.2",
"requests>=2.10",
"networkx>=2.6.3",
"pycountry==22.1.10",
"frozendict>=2.4.4",
"attrs>=25.1.0",
"six",
"pytest; extra == \"test\"",
"absl-py; extra == \"test\"",
"mock==3.0.5; extra == \"test\"",
"freezegun; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:25:07.838664 | civics_cdf_validator-1.57.dev1.tar.gz | 55,780 | a3/c5/eaf28401ba8362ea8ddc9ee0c0ec1f766a66aff3fe1e7347b1e38ad97608/civics_cdf_validator-1.57.dev1.tar.gz | source | sdist | null | false | e61f9ace3a550de82d07fa0af7de708f | 26a7fd1d20875f2b27ceee31f355322a147308ffcdcc33a427987999ddbea182 | a3c5eaf28401ba8362ea8ddc9ee0c0ec1f766a66aff3fe1e7347b1e38ad97608 | null | [
"LICENSE-2.0.txt"
] | 210 |
2.4 | libretificacaotjcore | 0.1.87 | Biblioteca para centralizar conexao com filas no rabbit e banco de dados no mongodb para os servicos de retificacao da TJ | # 🛠️ LIBRETIFICACAOTJCORE
## 📝 Descrição
O Objetivo desse serviço é:
- Centralizar conexão com filas no rabbit e consumo de mensagens
- Centralizar conexão banco de dados no mongodb para os serviços de retificação da TJ
- Centralizar todas as operações de criação, leitura e atualização de arquivos
- Centralizar todas as operações de criação, leitura e atualização de protocolos
- Disponibilizar metodos para tratativas de arquivos
- Disponibilizar Dtos e Enums comuns em todos os serviços de retificações
## ⚙️ Configuração
nessesário ter o [uv astral](https://docs.astral.sh/uv/getting-started/installation/) instalado
Com o UV instalado, execute o comando abaixo para criar o arquivo de configuração:
```bash
uv sync
```
## 📺 Como publicar?
Para publicar o serviço, execute o comando abaixo:
```bash
uv build
```
e depois
```bash
twine upload dist/*
```
Obs: É necessário informa o token do pypi para que o comando funcione
| text/markdown | null | Jhonatan Azevedo <dev.azevedo@outlook.com> | null | null | null | tj, tributo justo, retificação, automação, pydantic, rabbitmq, boto3, motor | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :... | [] | null | null | >=3.13 | [] | [] | [] | [
"aio-pika>=9.5.7",
"aiofiles>=24.1.0",
"boto3>=1.39.16",
"cryptography>=45.0.6",
"httpx>=0.28.1",
"motor>=3.7.1",
"pika>=1.3.2",
"py7zr>=1.0.0",
"pydantic>=2.11.7"
] | [] | [] | [] | [
"Homepage, https://github.com/seu-usuario/libretificacaotjcore",
"Issues, https://github.com/seu-usuario/libretificacaotjcore/issues",
"Repository, https://github.com/seu-usuario/libretificacaotjcore"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-18T18:24:53.926134 | libretificacaotjcore-0.1.87.tar.gz | 12,284 | b4/ca/107af566399a45dbbe441a612ed00bea50d9732c56130cac49353e239b37/libretificacaotjcore-0.1.87.tar.gz | source | sdist | null | false | 15b4b6dcd8f924f55d828d66e2b2268d | 65be85410dfaea98525e345e483678f2a7c95a6618d820f940bb341427e6acb9 | b4ca107af566399a45dbbe441a612ed00bea50d9732c56130cac49353e239b37 | null | [] | 254 |
2.4 | airr | 1.6.1 | AIRR Community Data Representation Standard reference library for antibody and TCR sequencing data. | Installation
------------------------------------------------------------------------------
Install in the usual manner from PyPI::
> pip3 install airr --user
Or from the `downloaded <https://github.com/airr-community/airr-standards>`__
source code directory::
> python3 setup.py install --user
Quick Start
------------------------------------------------------------------------------
Deprecation Notice
^^^^^^^^^^^^^^^^^^^^
The ``load_repertoire``, ``write_repertoire``, and ``validate_repertoire`` functions
have been deprecated for the new generic ``load_airr_data``, ``write_airr_data``, and
``validate_airr_data`` functions. These new functions are backwards compatible with
the Repertoire metadata format but also support the new AIRR objects such as GermlineSet,
RepertoireGroup, GenotypeSet, Cell and Clone. This new format is defined by the DataFile
Schema, which describes a standard set of objects included in a file containing
AIRR Data Model presentations. Currently, the AIRR DataFile does not completely support
Rearrangement, so users should continue using AIRR TSV files and its specific functions.
Also, the ``repertoire_template`` function has been deprecated for the ``Schema.template``
method, which can now be called on any AIRR Schema to create a blank object.
Reading AIRR Data Files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``airr`` package contains functions to read and write AIRR Data
Model files. The file format is either YAML or JSON, and the package provides a
light wrapper over the standard parsers. The file needs a ``json``, ``yaml``, or ``yml``
file extension so that the proper parser is utilized. All of the AIRR objects
are loaded into memory at once and no streaming interface is provided::
import airr
# Load the AIRR data
data = airr.read_airr('input.airr.json')
# loop through the repertoires
for rep in data['Repertoire']:
    print(rep)
Why are the AIRR objects, such as Repertoire and GermlineSet, in a list versus in a
dictionary keyed by their identifier (e.g., ``repertoire_id``)? There are two primary reasons for
this. First, the identifier might not have been assigned yet. Some systems might allow MiAIRR
metadata to be entered but the identifier is assigned to that data later by another process. Without
the identifier, the data could not be stored in a dictionary. Secondly, the list allows the data to
have a default ordering. If you know that the data has a unique identifier then you can quickly
create a dictionary object using a comprehension. For example, with repertoires::
rep_dict = { obj['repertoire_id'] : obj for obj in data['Repertoire'] }
another example with germline sets::
germline_dict = { obj['germline_set_id'] : obj for obj in data['GermlineSet'] }
Writing AIRR Data Files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Writing an AIRR Data File is also a light wrapper over standard YAML or JSON
parsers. Multiple AIRR objects, such as Repertoire and GermlineSet, can be
written together into the same file. In this example, we use the ``airr`` library ``template``
method to create some blank Repertoire objects, and write them to a file.
As with the read function, the complete list of repertoires is written at once;
there is no streaming interface::
import airr
# Create some blank repertoire objects in a list
data = { 'Repertoire': [] }
for i in range(5):
    data['Repertoire'].append(airr.schema.RepertoireSchema.template())
# Write the AIRR Data
airr.write_airr('output.airr.json', data)
Reading AIRR Rearrangement TSV files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``airr`` package contains functions to read and write AIRR Rearrangement
TSV files as either iterables or pandas data frames. The usage is straightforward,
as the file format is a typical tab delimited file, but the package
performs some additional validation and type conversion beyond using a
standard CSV reader::
import airr
# Create an iterable that returns a dictionary for each row
reader = airr.read_rearrangement('input.tsv')
for row in reader: print(row)
# Load the entire file into a pandas data frame
df = airr.load_rearrangement('input.tsv')
Writing AIRR Rearrangement TSV files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Similar to the read operations, write functions are provided for either creating
a writer class to perform row-wise output or writing the entire contents of
a pandas data frame to a file. Again, usage is straightforward with the ``airr``
output functions simply performing some type conversion and field ordering
operations::
import airr
# Create a writer class for iterative row output
writer = airr.create_rearrangement('output.tsv')
for row in reader: writer.write(row)
# Write an entire pandas data frame to a file
airr.dump_rearrangement(df, 'file.tsv')
By default, ``create_rearrangement`` will only write the ``required`` fields
in the output file. Additional fields can be included in the output file by
providing the ``fields`` parameter with an array of additional field names::
# Specify additional fields in the output
fields = ['new_calc', 'another_field']
writer = airr.create_rearrangement('output.tsv', fields=fields)
A common operation is to read an AIRR rearrangement file, and then
write an AIRR rearrangement file with additional fields in it while
keeping all of the existing fields from the original file. The
``derive_rearrangement`` function provides this capability::
import airr
# Read rearrangement data and write new file with additional fields
reader = airr.read_rearrangement('input.tsv')
fields = ['new_calc']
writer = airr.derive_rearrangement('output.tsv', 'input.tsv', fields=fields)
for row in reader:
    row['new_calc'] = 'a value'
    writer.write(row)
Validating AIRR data files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``airr`` package can validate AIRR Data Model JSON/YAML files and Rearrangement
TSV files to ensure that they contain all required fields and that the fields types
match the AIRR Schema. This can be done using the ``airr-tools`` command
line program or the validate functions in the library can be called::
# Validate a rearrangement TSV file
airr-tools validate rearrangement -a input.tsv
# Validate an AIRR DataFile
airr-tools validate airr -a input.airr.json
Combining Repertoire metadata and Rearrangement files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``airr`` package does not currently keep track of which AIRR Data Model files
are associated with which Rearrangement TSV files, though there is ongoing work to define
a standardized manifest, so users will need to handle those
associations themselves. However, in the data, AIRR identifier fields, such as ``repertoire_id``,
form the link between objects in the AIRR Data Model.
The typical usage is that a program is going to perform some
computation on the Rearrangements, and it needs access to the Repertoire metadata
as part of the computation logic. This example code shows the basic framework
for doing that, in this case doing gender specific computation::
import airr
# Load AIRR data containing repertoires
data = airr.read_airr('input.airr.json')
# Put repertoires in dictionary keyed by repertoire_id
rep_dict = { obj['repertoire_id'] : obj for obj in data['Repertoire'] }
# Create an iterable for rearrangement data
reader = airr.read_rearrangement('input.tsv')
for row in reader:
    # get repertoire metadata with this rearrangement
    rep = rep_dict[row['repertoire_id']]

    # check the subject's sex
    if rep['subject']['sex'] == 'male':
        pass  # do male specific computation
    elif rep['subject']['sex'] == 'female':
        pass  # do female specific computation
    else:
        pass  # do other specific computation
| null | AIRR Community | null | null | null | CC BY 4.0 | AIRR, bioinformatics, sequencing, immunoglobulin, antibody, adaptive immunity, T cell, B cell, BCR, TCR | [
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | http://docs.airr-community.org | null | null | [] | [] | [] | [
"pandas>=0.24.0",
"pyyaml>=3.12",
"yamlordereddictloader>=0.4.0",
"setuptools>=2.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T18:24:25.934415 | airr-1.6.1.tar.gz | 82,825 | b3/65/1f5b39f99c04718e0bb72ad1df74753dae8afd6c2abf7ca71e3f2ac6f2d8/airr-1.6.1.tar.gz | source | sdist | null | false | a09f500cccf9b83c4f42833b9ad9a210 | f701fde0440d5848e01374ce9026f6c6cf85cbbb412a148f95ffc53c9d1bdddb | b3651f5b39f99c04718e0bb72ad1df74753dae8afd6c2abf7ca71e3f2ac6f2d8 | null | [] | 454 |
2.3 | foxops-client | 0.5.0 | Foxops API Client | # foxops-client-python
This repository contains the Python client for the [foxops](https://github.com/roche/foxops) templating tool.
## Installation
```shell
pip install foxops-client
```
## Usage
```python
from foxops_client import FoxopsClient, AsyncFoxopsClient
client = FoxopsClient("http://localhost:8080", "my-token")
incarnations = client.list_incarnations()
# or alternatively, the async version (await it from inside an async function)
client = AsyncFoxopsClient("http://localhost:8080", "my-token")
incarnations = await client.list_incarnations()
```
| text/markdown | Alexander Hungenberg | alexander.hungenberg@roche.com | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | <4.0,>=3.14 | [] | [] | [] | [
"httpx<0.29.0,>=0.28.1",
"tenacity<10.0.0,>=9.1.4",
"structlog<26.0.0,>=25.5.0"
] | [] | [] | [] | [] | poetry/2.1.1 CPython/3.14.3 Linux/6.14.0-1017-azure | 2026-02-18T18:24:02.671839 | foxops_client-0.5.0.tar.gz | 7,821 | d5/6c/e2d73cc6ded1e3357b304ca2c16a1a42190151c1835086c9e2a026a7a0fc/foxops_client-0.5.0.tar.gz | source | sdist | null | false | b6beceaab507aab913b81a3982706be4 | 383a4123982bf02dbc86345681c0917067f5125384b18ca217987085442a4598 | d56ce2d73cc6ded1e3357b304ca2c16a1a42190151c1835086c9e2a026a7a0fc | null | [] | 239 |
2.4 | langvision | 0.1.56 | Efficient LoRA Fine-Tuning for Vision LLMs with advanced CLI and model zoo | <div align="center">
<img src="https://raw.githubusercontent.com/langtrain-ai/langvision/main/static/langvision-black.png" alt="Langvision" width="400" />
<h3>Fine-tune Vision LLMs with ease</h3>
<p>
<strong>Train LLaVA, Qwen-VL, and other vision models in minutes.</strong><br>
The simplest way to create custom multimodal AI.
</p>
<p>
<a href="https://www.producthunt.com/products/langtrain-2" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=1049974&theme=light" alt="Product Hunt" width="200" /></a>
</p>
<p>
<a href="https://pypi.org/project/langvision/"><img src="https://img.shields.io/pypi/v/langvision.svg?style=for-the-badge&logo=pypi&logoColor=white" alt="PyPI" /></a>
<a href="https://pepy.tech/project/langvision"><img src="https://img.shields.io/pepy/dt/langvision?style=for-the-badge&logo=python&logoColor=white&label=downloads" alt="Downloads" /></a>
<a href="https://github.com/langtrain-ai/langvision/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue?style=for-the-badge" alt="License" /></a>
</p>
<p>
<a href="#quick-start">Quick Start</a> •
<a href="#features">Features</a> •
<a href="#supported-models">Models</a> •
<a href="https://langtrain.xyz/docs">Docs</a>
</p>
</div>
---
## ⚡ Quick Start
### 1-Click Install (Recommended)
The fastest way to get started. Installs Langvision in an isolated environment.
```bash
curl -fsSL https://raw.githubusercontent.com/langtrain-ai/langvision/main/scripts/install.sh | bash
```
### Or using pip
```bash
pip install langvision
```
Fine-tune a vision model in **3 lines**:
```python
from langvision import LoRATrainer
trainer = LoRATrainer(model_name="llava-hf/llava-1.5-7b-hf")
trainer.train_from_file("image_data.jsonl")
```
Your custom vision model is ready.
---
## ✨ Features
<table>
<tr>
<td width="50%">
### 🖼️ **Multimodal Training**
Train on images + text together. Perfect for VQA, image captioning, and visual reasoning.
### 🎯 **Smart Defaults**
Optimized configurations for each model architecture. Just point and train.
### 💾 **Efficient Memory**
LoRA + 4-bit quantization = Train 13B vision models on a single 24GB GPU.
</td>
<td width="50%">
### 🔧 **Battle-Tested**
Production-ready code used by teams building real-world vision applications.
### 🌐 **All Major Models**
LLaVA, Qwen-VL, CogVLM, InternVL, and more. Full compatibility.
### ☁️ **Deploy Anywhere**
Export to GGUF, ONNX, or deploy directly to Langtrain Cloud.
</td>
</tr>
</table>
---
## 🤖 Supported Models
| Model | Parameters | Memory Required |
|-------|-----------|-----------------|
| LLaVA 1.5 | 7B, 13B | 8GB, 16GB |
| Qwen-VL | 7B | 8GB |
| CogVLM | 17B | 24GB |
| InternVL | 6B, 26B | 8GB, 32GB |
| Phi-3 Vision | 4.2B | 6GB |
---
## 📖 Full Example
```python
from langvision import LoRATrainer
from langvision.config import TrainingConfig, LoRAConfig
# Configure training
config = TrainingConfig(
num_epochs=3,
batch_size=2,
learning_rate=2e-4,
lora=LoRAConfig(rank=16, alpha=32)
)
# Initialize trainer
trainer = LoRATrainer(
model_name="llava-hf/llava-1.5-7b-hf",
output_dir="./my-vision-model",
config=config
)
# Train on image-text data
trainer.train_from_file("training_data.jsonl")
```
---
## 📝 Data Format
```jsonl
{"image": "path/to/image1.jpg", "conversations": [{"from": "human", "value": "What's in this image?"}, {"from": "assistant", "value": "A cat sitting on a couch."}]}
```
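Files in this format can be produced with the standard library alone. A small helper for appending one image/conversation pair per line — the function name is illustrative, not part of the langvision API:

```python
import json


def write_training_record(f, image_path, question, answer):
    """Append one JSONL record in the conversation format shown above."""
    record = {
        "image": image_path,
        "conversations": [
            {"from": "human", "value": question},
            {"from": "assistant", "value": answer},
        ],
    }
    # One JSON object per line, newline-terminated (JSONL)
    f.write(json.dumps(record) + "\n")
```

Multi-turn examples follow the same shape: just append more human/assistant entries to the `conversations` list.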
---
## 🤝 Community
<p align="center">
<a href="https://discord.gg/langtrain">Discord</a> •
<a href="https://twitter.com/langtrainai">Twitter</a> •
<a href="https://langtrain.xyz">Website</a>
</p>
---
<div align="center">
**Built with ❤️ by [Langtrain AI](https://langtrain.xyz)**
*Making vision AI accessible to everyone.*
</div>
| text/markdown | null | Pritesh Raj <priteshraj10@gmail.com> | null | Pritesh Raj <priteshraj10@gmail.com> | null | vision, transformer, lora, fine-tuning, deep-learning, computer-vision | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :... | [] | null | null | >=3.8 | [] | [] | [] | [
"torch>=1.10.0",
"torchvision>=0.11.0",
"numpy>=1.21.0",
"tqdm>=4.62.0",
"pyyaml>=6.0",
"scipy>=1.7.0",
"matplotlib>=3.5.0",
"pillow>=8.3.0",
"timm>=0.6.0",
"transformers>=4.20.0",
"toml>=0.10.0",
"scikit-learn>=1.0.0",
"pandas>=1.3.0",
"opencv-python-headless>=4.5.0",
"wandb>=0.13.0",
... | [] | [] | [] | [
"Homepage, https://github.com/langtrain-ai/langvision",
"Documentation, https://github.com/langtrain-ai/langvision/tree/main/docs",
"Repository, https://github.com/langtrain-ai/langvision",
"Bug Tracker, https://github.com/langtrain-ai/langvision/issues",
"Source Code, https://github.com/langtrain-ai/langvi... | twine/6.2.0 CPython/3.11.14 | 2026-02-18T18:23:52.686723 | langvision-0.1.56.tar.gz | 122,507 | 55/c7/3132508d8ae8d6b1188623e2d52842f0ad9fb5bd3a0ebc03fe7a3e57a20c/langvision-0.1.56.tar.gz | source | sdist | null | false | 75cc49f51fa59688e4bb7e55711c2ed1 | 19270375e2df95e7479cfd2048751077f069b4bc976980857cd2fc9b0b2ba26e | 55c73132508d8ae8d6b1188623e2d52842f0ad9fb5bd3a0ebc03fe7a3e57a20c | MIT | [] | 245 |
2.4 | dj-error-panel | 0.1.0 | Monitor errors and stacktraces right from the Django admin | [![Tests](https://github.com/yassi/dj-error-panel/actions/workflows/test.yml/badge.svg)](https://github.com/yassi/dj-error-panel/actions/workflows/test.yml)
[](https://codecov.io/gh/yassi/dj-error-panel)
[](https://badge.fury.io/py/dj-error-panel)
[](https://pypi.org/project/dj-error-panel/)
[](https://opensource.org/licenses/MIT)
# Dj Error Panel
Monitor errors and stacktraces right from the Django admin.
**Compatible with [dj-control-room](https://github.com/yassi/dj-control-room).** Register this panel in the Control Room to manage it from a centralized dashboard.
- **Official site:** [djangocontrolroom.com](https://djangocontrolroom.com)
- **Project repo:** [dj-control-room](https://github.com/yassi/dj-control-room)
## Docs
[https://yassi.github.io/dj-error-panel/](https://yassi.github.io/dj-error-panel/)
## Features
- **TBD**: Add your main features here
### Project Structure
```
dj-error-panel/
├── dj_error_panel/ # Main package
│ ├── templates/ # Django templates
│ ├── views.py # Django views
│ └── urls.py # URL patterns
├── example_project/ # Example Django project
├── tests/ # Test suite
├── images/ # Screenshots for README
└── requirements.txt # Development dependencies
```
## Requirements
- Python 3.9+
- Django 4.2+
## Screenshots
### Django Admin Integration
Seamlessly integrated into your Django admin interface. A new section for dj-error-panel
will appear in the same places where your models appear.
**NOTE:** This application does not actually introduce any models or migrations.

## Installation
### 1. Install the Package
```bash
pip install dj-error-panel
```
### 2. Add to Django Settings
Add `dj_error_panel` to your `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'dj_error_panel', # Add this line
# ... your other apps
]
```
### 3. Configure Settings (Optional)
Add any custom configuration to your Django settings if needed:
```python
# Optional: Add custom settings for dj_error_panel
DJ_ERROR_PANEL_SETTINGS = {
# Add your configuration here
}
```
### 4. Include URLs
Add the Panel URLs to your main `urls.py`:
```python
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/dj-error-panel/', include('dj_error_panel.urls')), # Add this line
path('admin/', admin.site.urls),
]
```
### 5. Run Migrations and Create Superuser
```bash
python manage.py migrate
python manage.py createsuperuser # If you don't have an admin user
```
### 6. Access the Panel
1. Start your Django development server:
```bash
python manage.py runserver
```
2. Navigate to the Django admin at `http://127.0.0.1:8000/admin/`
3. Look for the "DJ ERROR PANEL" section in the admin interface
## DJ Control Room Integration
This panel is designed to work seamlessly with [DJ Control Room](https://github.com/yassi/dj-control-room), a centralized dashboard for managing Django admin panels.
### Integration
To register the panel with Control Room:
1. Add `dj_control_room` to `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
# ... other apps
'dj_control_room',
'dj_error_panel',
]
```
2. Include the Control Room URLs in your `urls.py`:
```python
urlpatterns = [
path('', include('dj_error_panel.urls')), # Panel URLs
path('admin/dj-control-room/', include('dj_control_room.urls')), # Control Room
path('admin/', admin.site.urls),
]
```
3. Visit `/admin/dj-control-room/` to see all your panels in one place!
### Panel Configuration
The panel is configured via the `panel.py` file with the following attributes:
- **ID**: `dj_error_panel`
- **Name**: Dj Error Panel
- **Description**: Monitor errors and stacktraces right from the Django admin
- **Icon**: cog
You can customize these values by editing `dj_error_panel/panel.py`.
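For reference, a `panel.py` exposing these attributes might look like the sketch below (the class shape and attribute names are illustrative assumptions, not the file's verbatim contents):

```python
# dj_error_panel/panel.py -- illustrative sketch only; check the shipped
# file for the actual class shape and attribute names.
class ErrorPanel:
    id = "dj_error_panel"
    name = "Dj Error Panel"
    description = "Monitor errors and stacktraces right from the Django admin"
    icon = "cog"  # icon identifier shown in the Control Room dashboard
```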
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
---
## Development Setup
If you want to contribute to this project or set it up for local development:
### Prerequisites
- Python 3.9 or higher
- Redis server running locally
- Git
- Autoconf
- Docker
It is recommended that you use Docker, since it automates much of the dev environment setup.
### 1. Clone the Repository
```bash
git clone https://github.com/yassi/dj-error-panel.git
cd dj-error-panel
```
### 2a. Set up dev environment using virtualenv
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e . # install dj-error-panel package locally
pip install -r requirements.txt # install all dev requirements
# Alternatively
make install # this will also do the above in one single command
```
### 2b. Set up dev environment using docker
```bash
make docker_up # bring up all services (redis, memcached) and dev environment container
make docker_shell # open up a shell in the docker container
```
### 3. Set Up Example Project
The repository includes an example Django project for development and testing:
```bash
cd example_project
python manage.py migrate
python manage.py createsuperuser
```
### 4. Populate Test Data (Optional)
Add any custom management commands for populating test data if needed.
### 5. Run the Development Server
```bash
python manage.py runserver
```
Visit `http://127.0.0.1:8000/admin/` to access the Django admin with Dj Error Panel.
### 6. Running Tests
The project includes a comprehensive test suite. You can run the tests using make or
by invoking pytest directly:
```bash
# build and install all dev dependencies and run all tests inside of docker container
make test_docker
# Run the tests without Docker on your host machine.
# Note that testing always requires redis and memcached services to be up;
# these are most easily brought up using docker.
make test_local
```
| text/markdown | null | Yasser Toruno <your.email@example.com> | null | Yasser Toruno <your.email@example.com> | MIT | django, admin, panel | [
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
... | [] | null | null | >=3.9 | [] | [] | [] | [
"Django>=4.2",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-django>=4.5.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-xdist>=3.2.0; extra == \"dev\"",
"django-redis>=5.0.0; extra == \"dev\"",
"psycopg2-binary>=2.9.0; extra == \"dev\"",
"mkdocs-material>=9.1.12; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://yassi.github.io/dj-error-panel/",
"Documentation, https://yassi.github.io/dj-error-panel/",
"Repository, https://github.com/yassi/dj-error-panel",
"Bug Tracker, https://github.com/yassi/dj-error-panel/issues"
] | twine/6.1.0 CPython/3.9.7 | 2026-02-18T18:23:44.530524 | dj_error_panel-0.1.0.tar.gz | 14,134 | a0/07/4b67aa9b52c55144af053af591c5c5bde70b314f85cc6becef5aaee2a550/dj_error_panel-0.1.0.tar.gz | source | sdist | null | false | d23c20a510b5a187c150aec9de70c977 | 2452f7faf8ee3c1469ae23b6cde700eb9015ebedc084826e505a7db3ba486228 | a0074b67aa9b52c55144af053af591c5c5bde70b314f85cc6becef5aaee2a550 | null | [
"LICENSE"
] | 268 |
2.4 | langtune | 0.1.40 | Efficient LoRA Fine-Tuning for Large Language Models - Train smarter, not harder. | <div align="center">
<img src="https://raw.githubusercontent.com/langtrain-ai/langtune/main/static/langtune-white.png" alt="Langtune" width="400" />
<h3>The fastest way to fine-tune LLMs</h3>
<p>
<strong>Production-ready LoRA fine-tuning in minutes, not days.</strong><br>
Built for ML engineers who need results, not complexity.
</p>
<p>
<a href="https://www.producthunt.com/products/langtrain-2" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=1049974&theme=light" alt="Product Hunt" width="200" /></a>
</p>
<p>
<a href="https://pypi.org/project/langtune/"><img src="https://img.shields.io/pypi/v/langtune.svg?style=for-the-badge&logo=pypi&logoColor=white" alt="PyPI" /></a>
<a href="https://pepy.tech/project/langtune"><img src="https://img.shields.io/pepy/dt/langtune?style=for-the-badge&logo=python&logoColor=white&label=downloads" alt="Downloads" /></a>
<a href="https://github.com/langtrain-ai/langtune/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue?style=for-the-badge" alt="License" /></a>
</p>
<p>
<a href="#quick-start">Quick Start</a> •
<a href="#features">Features</a> •
<a href="#why-langtune">Why Langtune</a> •
<a href="https://langtrain.xyz/docs">Docs</a>
</p>
</div>
---
## ⚡ Quick Start
### 1-Click Install (Recommended)
The fastest way to get started. Installs Langtune in an isolated environment.
```bash
curl -fsSL https://raw.githubusercontent.com/langtrain-ai/langtune/main/scripts/install.sh | bash
```
### Or using pip
```bash
pip install langtune
```
Fine-tune your first model in **3 lines of code**:
```python
from langtune import LoRATrainer
trainer = LoRATrainer(model_name="meta-llama/Llama-2-7b-hf")
trainer.train_from_file("data.jsonl")
```
That's it. Your fine-tuned model is ready.
---
## ✨ Features
<table>
<tr>
<td width="50%">
### 🚀 **Blazing Fast**
Train 7B models in under 30 minutes on a single GPU. Our optimized kernels squeeze every last FLOP.
### 🎯 **Zero Config Required**
Smart defaults that just work. No PhD required. Start training in seconds.
### 💾 **Memory Efficient**
4-bit quantization + gradient checkpointing = Train 70B models on consumer hardware.
</td>
<td width="50%">
### 🔧 **Production Ready**
Battle-tested at scale. Used by teams fine-tuning thousands of models daily.
### 🌐 **Any Model, Any Data**
Works with Llama, Mistral, Qwen, Phi, and more. JSONL, CSV, or HuggingFace datasets.
### ☁️ **Cloud Native**
One-click deployment to Langtrain Cloud. Or export to GGUF, ONNX, HuggingFace.
</td>
</tr>
</table>
---
## 🎯 Why Langtune?
| | Langtune | Others |
|---|:---:|:---:|
| **Time to first training** | 30 seconds | 2+ hours |
| **Lines of code** | 3 | 100+ |
| **Memory usage** | 8GB | 24GB+ |
| **Learning curve** | Minutes | Days |
---
## 📖 Full Example
```python
from langtune import LoRATrainer
from langtune.config import TrainingConfig, LoRAConfig
# Configure your training
config = TrainingConfig(
num_epochs=3,
batch_size=4,
learning_rate=2e-4,
lora=LoRAConfig(rank=16, alpha=32)
)
# Initialize and train
trainer = LoRATrainer(
model_name="mistralai/Mistral-7B-v0.1",
output_dir="./my-model",
config=config
)
# Train on your data
trainer.train_from_file("training_data.jsonl")
# Push to Hub (optional)
trainer.push_to_hub("my-username/my-fine-tuned-model")
```
---
## 🛠️ Advanced Usage
<details>
<summary><b>Custom Dataset Format</b></summary>
```python
# JSONL format (recommended)
{"text": "Your training example here"}
{"text": "Another example"}
# Or instruction format
{"instruction": "Summarize this:", "input": "Long text...", "output": "Summary"}
```
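Producing such a file from Python takes only the standard library (a minimal sketch; the file name `data.jsonl` matches the Quick Start example):

```python
import json

# One JSON object per line -- both plain-text and instruction records are valid.
examples = [
    {"text": "Your training example here"},
    {"instruction": "Summarize this:", "input": "Long text...", "output": "Summary"},
]

with open("data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```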
</details>
<details>
<summary><b>Distributed Training</b></summary>
```python
trainer = LoRATrainer(
model_name="meta-llama/Llama-2-70b-hf",
device_map="auto", # Automatic multi-GPU
)
```
</details>
<details>
<summary><b>Export Formats</b></summary>
```python
# Export to different formats
trainer.export("gguf") # For llama.cpp
trainer.export("onnx") # For ONNX Runtime
trainer.export("hf") # HuggingFace format
```
</details>
---
## 🤝 Community
<p align="center">
<a href="https://discord.gg/langtrain">Discord</a> •
<a href="https://twitter.com/langtrainai">Twitter</a> •
<a href="https://langtrain.xyz">Website</a>
</p>
---
<div align="center">
**Built with ❤️ by [Langtrain AI](https://langtrain.xyz)**
*Making LLM fine-tuning accessible to everyone.*
</div>
| text/markdown | null | Pritesh Raj <priteshraj41@gmail.com> | null | Langtrain AI <contact@langtrain.ai> | MIT License
Copyright (c) 2025 Pritesh Raj
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| llm, lora, fine-tuning, machine-learning, deep-learning, transformers, nlp, language-model, pytorch, rlhf, dpo, ppo | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"torch>=1.10",
"numpy",
"tqdm",
"pyyaml",
"scipy",
"wandb",
"rich>=13.0.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"flake8; extra == \"dev\"",
"mypy; extra == \"dev\"",
"isort; extra == \"dev\"",
"transformers; extra == \"all\"",
"data... | [] | [] | [] | [
"Homepage, https://github.com/langtrain-ai/langtune",
"Documentation, https://github.com/langtrain-ai/langtune/tree/main/docs",
"Repository, https://github.com/langtrain-ai/langtune",
"Changelog, https://github.com/langtrain-ai/langtune/blob/main/CHANGELOG.md",
"Bug Tracker, https://github.com/langtrain-ai/... | twine/6.2.0 CPython/3.11.14 | 2026-02-18T18:23:38.118438 | langtune-0.1.40.tar.gz | 102,639 | 18/3c/f0cb9e157b315db22350ac1df336bd7c77b1571f566940d37f1c68e644a5/langtune-0.1.40.tar.gz | source | sdist | null | false | 12825320c361f6198fea5308e71fc13e | 0095cbf07916448d438c4cefe0de8d44a4be8c665096fccf29123b245fdcb1b3 | 183cf0cb9e157b315db22350ac1df336bd7c77b1571f566940d37f1c68e644a5 | null | [
"LICENSE"
] | 241 |
2.4 | reverse-pred | 0.1.6 | Library to run Reverse Predictivity | # Reverse Predictivity
A lightweight, modular Python library for computing **bidirectional alignment** between artificial neural network (ANN) representations and primate inferior temporal (IT) cortex responses.
This package accompanies the preprint:
**Muzellec & Kar (2025). _Reverse Predictivity: Going Beyond One-Way Mapping to Compare Artificial Neural Network Models and Brains_. bioRxiv.**
<https://www.biorxiv.org/content/10.1101/2025.08.08.669382v1>
Reverse predictivity complements traditional *forward neural predictivity* by asking the reciprocal question:
> **How well do neural responses predict ANN activations?**
Together, the forward and reverse metrics provide a more complete picture of representational similarity between brains and models.
---
## 🧠 Library Overview
This library contains four core mapping modules:
| Module | Mapping Direction | Question Answered |
|--------|-------------------|-------------------|
| `model_to_monkey.py` | Model → Monkey | How well do ANN features predict neural responses? *(forward predictivity)* |
| `monkey_to_model.py` | Monkey → Model | How well do IT neurons predict ANN unit activations? *(reverse predictivity)* |
| `monkey_to_monkey.py` | Monkey A → Monkey B | How consistent are neural populations across animals? *(biological upper bound)* |
| `model_to_model.py` | Model A → Model B | How aligned are representations across models or layers? |
All functions compute **explained variance (EV)** using repeated linear mappings and save EV arrays to disk.
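For intuition, explained variance for a single predicted unit reduces to `1 - Var(residual) / Var(target)`; the NumPy sketch below illustrates the metric itself, not this library's internal cross-validated implementation:

```python
import numpy as np

def explained_variance(y_true, y_pred):
    """EV = 1 - Var(residual) / Var(target); 1.0 means perfect prediction."""
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

y = np.array([1.0, 2.0, 3.0, 4.0])
print(explained_variance(y, y))                     # perfect prediction → 1.0
print(explained_variance(y, np.full(4, y.mean())))  # mean-only prediction → 0.0
```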
---
## 🔧 Installation
We recommend using a clean environment:
```bash
conda create -n reverse_pred python=3.10 -y
conda activate reverse_pred
```
Install required Python packages:
```bash
pip install numpy scipy scikit-learn matplotlib
```
Install this library:
```bash
pip install reverse_pred
```
---
## 🚀 Usage
Each mapping function takes:
- `model_features`: `(n_images × n_units)` array
- `rates`: `(n_images × n_neurons × n_repeats)` array
- `out_dir`: output directory for saving EV results
- `reps`: number of cross-validated fits (default: 20)
- `model_type`: choice of regressor among `ridge`, `linear`, `lasso`, `elasticnet`, and `pls` (default: `ridge`)
---
### 1. Forward Predictivity
**Module:** `model_to_monkey.py`
**Function:** `compute_model_to_monkey`
```python
from reverse_predictivity.model_to_monkey import compute_model_to_monkey
import numpy as np
model_features = np.load("features/resnet50_itlayer.npy")
rates = np.load("data/it_rates.npy")
compute_model_to_monkey(
model_features=model_features,
rates=rates,
out_dir="results/model_to_monkey/resnet50",
reps=20,
out_name='forward_ev'
)
```
**Output:**
```
results/model_to_monkey/resnet50/forward_ev.npy
```
---
### 2. Reverse Predictivity
**Module:** `monkey_to_model.py`
**Function:** `compute_monkey_to_model`
```python
from reverse_predictivity.monkey_to_model import compute_monkey_to_model
import numpy as np
model_features = np.load("features/resnet50_itlayer.npy")
rates = np.load("data/it_rates.npy")
compute_monkey_to_model(
model_features=model_features,
rates=rates,
out_dir="results/monkey_to_model/resnet50",
max_n=None,
reps=20,
out_name='reverse_ev'
)
```
**Parameters:**
`max_n`: can be set to subsample `n` neurons from the neural population. Each repetition uses a different random sample.
**Output:**
```
results/monkey_to_model/resnet50/reverse_ev.npy
```
---
### 3. Neural–Neural Consistency
**Module:** `monkey_to_monkey.py`
**Function:** `compute_monkey_to_monkey`
```python
from reverse_predictivity.monkey_to_monkey import compute_monkey_to_monkey
import numpy as np
ratesA = np.load("data/monkeyA_rates.npy")
ratesB = np.load("data/monkeyB_rates.npy")
compute_monkey_to_monkey(
rates_predictor=ratesA,
rates_predicted=ratesB,
out_dir="results/monkey_to_monkey/",
reps=20,
max_n=None,
name_predicted="monkeyB",
name_predictor="monkeyA",
)
```
**Parameters:**
`max_n`: can be set to subsample `n` predictor neurons. Each repetition uses a different random sample.
**Output:**
```
results/monkey_to_monkey/monkeyA_to_monkeyB_ev.npy
```
---
### 4. Model–Model Alignment
**Module:** `model_to_model.py`
**Function:** `compute_model_to_model`
```python
from reverse_predictivity.model_to_model import compute_model_to_model
import numpy as np
modelA = np.load("features/resnet50_itlayer.npy")
modelB = np.load("features/convnext_itlayer.npy")
compute_model_to_model(
model_features_predictor=modelA,
model_features_predicted=modelB,
out_dir="results/model_to_model/resnet_to_convnext",
reps=20,
name_predicted="convnext",
name_predictor="resnet50",
)
```
**Output:**
```
results/model_to_model/resnet_to_convnext/resnet50_to_convnext_ev.npy
```
---
## 📊 Downstream Analysis
After generating EV results:
1. Load the saved `.npy` files.
2. Compare forward vs reverse predictivity.
3. Compare model–monkey EV to monkey–monkey EV.
4. Compare model–model EV across architectures.
```python
import numpy as np
import matplotlib.pyplot as plt
fwd = np.load("results/model_to_monkey/resnet50/forward_ev.npy")
rev = np.load("results/monkey_to_model/resnet50/reverse_ev.npy")
plt.hist(fwd, bins=30, alpha=0.6, label="Forward")
plt.hist(rev, bins=30, alpha=0.6, label="Reverse")
plt.legend()
plt.xlabel("Explained Variance")
plt.ylabel("Count")
plt.show()
```
---
## 📌 Citation
If you use this library, please cite:
```
@article{muzellec_kar_2025_reversepredictivity,
title = {Reverse Predictivity: Going Beyond One-Way Mapping to Compare Artificial Neural Network Models and Brains},
author = {Muzellec, Sabine and Kar, Kohitij},
journal = {bioRxiv},
year = {2025}
}
```
---
## 📜 License
MIT License — see `LICENSE`.
| text/markdown | null | Sabine Muzellec and Kohitij Kar <sabinem@yorku.ca> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/vital-kolab/reverse_pred",
"Issues, https://github.com/vital-kolab/reverse_pred/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T18:23:30.698915 | reverse_pred-0.1.6.tar.gz | 8,902 | 2e/69/48f54575d7ffd90d8d01b9252eb84b72cc33cbd5cc85f94d0af89057677e/reverse_pred-0.1.6.tar.gz | source | sdist | null | false | 7f9ef430280c55863a2462d5e4c7d745 | b0af3b4da6cb6027a72ccc3671ff7875534a23f7db6b932583dc5fd311f112a7 | 2e6948f54575d7ffd90d8d01b9252eb84b72cc33cbd5cc85f94d0af89057677e | MIT | [
"LICENSE"
] | 244 |
2.3 | rxai-sdg | 0.1.39 | Reactive AI - Synthetic Datasets Generator | # Reactive AI: Synthetic Dataset Generators (rxai-sdg)
Toolkit for generating high-quality synthetic datasets for training Reactive Transformer models. Supports Memory Reinforcement Learning (MRL), Supervised Fine-Tuning (SFT), and the new Hybrid Reasoning generators for RxT-Beta.
## Installation
```bash
pip install rxai-sdg
```
Or install from source:
```bash
git clone https://github.com/RxAI-dev/rxai-sdg.git
cd rxai-sdg
pip install -e .
```
## Overview
This library provides synthetic dataset generators for training Reactive Language Models:
| Module | Purpose | Target Training Stage |
|--------|---------|----------------------|
| `rxai_sdg.mrl` | Memory Reinforcement Learning datasets | MRL stage |
| `rxai_sdg.sft` | Supervised Fine-Tuning datasets | Interaction SFT |
| `rxai_sdg.hybrid` | Hybrid Reasoning & DMPO datasets | RxT-Beta advanced training |
## Quick Start
### API Configuration
All generators support both OpenAI-compatible APIs and Ollama for local testing:
```python
# OpenAI-compatible API (default)
generator = MrlSyntheticDatasetGenerator(
max_items=100,
model_name="gpt-4",
api_url="https://api.openai.com/v1",
api_key="your-api-key",
use_ollama=False
)
# Ollama local testing
generator = MrlSyntheticDatasetGenerator(
max_items=100,
model_name="llama3.2",
api_url="http://localhost:11434",
use_ollama=True
)
# Third-party providers (Novita.ai, Together.ai, etc.)
generator = MrlSyntheticDatasetGenerator(
max_items=100,
model_name="qwen/qwen3-4b-fp8",
api_url="https://api.novita.ai/v3/openai",
api_key="your-key"
)
```
## Memory Reinforcement Learning (MRL) Datasets
Generate multi-turn conversations testing memory retention:
```python
from rxai_sdg.mrl import (
MrlSyntheticDatasetGenerator,
MrlPromptCreator,
MrlGeneratorPostprocessor,
)
from rxai_sdg.mrl.prompts import ALL_PROMPTS_REAL, TOPICS_REAL
from rxai_sdg.mrl.examples import EXAMPLES_REAL_MICRO
# Initialize generator
generator = MrlSyntheticDatasetGenerator(
max_items=500,
model_name="gpt-4",
api_url="https://api.openai.com/v1",
api_key="your-key"
)
# Create prompt creator with topics
prompt_creator = MrlPromptCreator(
prompts=ALL_PROMPTS_REAL,
examples=EXAMPLES_REAL_MICRO,
topics=TOPICS_REAL
)
# Generate dataset
generator(
prompt_creator=prompt_creator,
steps=3, # Follow-up interactions per conversation
iterations=50, # API calls
num_examples=10, # Examples per API call
num_topics=10, # Random topics per prompt
temperature=0.7,
stream=True, # Show generation progress
max_tokens=20000
)
# Post-process and export
postprocessor = MrlGeneratorPostprocessor(
generator=generator,
dataset_id="your-org/mrl-dataset",
token="hf_token"
)
postprocessor.filter_duplicates()
postprocessor.remove_incorrect_interactions(steps=3)
postprocessor.push_to_hf_hub()
# Or get as Dataset object
dataset = generator.get_dataset()
```
### MRL Dataset Format
```python
{
'query': ["Initial question 1", "Initial question 2", ...],
'answer': ["Initial answer 1", "Initial answer 2", ...],
'interactions': [
[
{'query': "Follow-up Q1", 'answer': "Follow-up A1"},
{'query': "Follow-up Q2", 'answer': "Follow-up A2"},
...
],
...
]
}
```
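For orientation, each index in this columnar layout corresponds to one conversation; the plain-Python sketch below (independent of the library's own utilities, with invented example strings) rebuilds conversation `i` as a flat list of turns:

```python
# Hypothetical record in the MRL format above (example strings are invented).
dataset = {
    'query': ["What is MRL?"],
    'answer': ["Memory Reinforcement Learning trains memory retention."],
    'interactions': [
        [{'query': "Why is it needed?", 'answer': "To teach multi-turn recall."}],
    ],
}

def conversation(dataset, i):
    """Rebuild conversation i: the initial pair, then each follow-up turn."""
    turns = [(dataset['query'][i], dataset['answer'][i])]
    turns += [(t['query'], t['answer']) for t in dataset['interactions'][i]]
    return turns

print(conversation(dataset, 0))
```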
### Generation Modes
**Multi-Topic Mode** (default): Single topic with progressive memory testing
```python
generator(prompt_creator, steps=3, mode='multi')
```
**Long-Range Mode**: Two-topic strategy testing long-range memory
```python
generator(prompt_creator, steps=5, mode='long')
```
## RxT-Beta Hybrid Reasoning Datasets
The `rxai_sdg.hybrid` module provides generators for RxT-Beta's advanced training stages:
### 1. Reasoning Completion Generator
Add missing 'think' blocks to existing conversations:
```python
from rxai_sdg.hybrid import ReasoningCompletionGenerator
from datasets import load_dataset
# Load existing dataset with missing think blocks
dataset = load_dataset("your-dataset", split="train")
# Expected format: {'interactions': [[{'query': ..., 'think': '', 'answer': ...}, ...]]}
generator = ReasoningCompletionGenerator(
max_items=100,
model_name="gpt-4",
api_url="https://api.openai.com/v1",
api_key="your-key"
)
# Mode 1: Generate think blocks one at a time (higher quality)
generator.complete_single(
dataset=dataset,
target_tokens=512,
temperature=0.7,
stream=True
)
# Mode 2: Generate all think blocks at once (more efficient)
generator.complete_all_at_once(
dataset=dataset,
target_tokens_per_think=512,
temperature=0.7
)
# Get completed dataset
completed_dataset = generator.get_dataset()
```
### 2. Hybrid Reasoning Generator
Create new conversations with full reasoning chains from scratch:
```python
from rxai_sdg.hybrid import (
HybridReasoningGenerator,
HybridReasoningPromptCreator,
TOPICS_HYBRID_REASONING,
)
# Initialize
generator = HybridReasoningGenerator(
max_items=100,
model_name="gpt-4",
api_url="https://api.openai.com/v1",
api_key="your-key"
)
# Custom topics (or use defaults)
my_topics = [
"Quantum computing fundamentals",
"Climate change feedback loops",
"Machine learning optimization",
# ...
]
prompt_creator = HybridReasoningPromptCreator(
topics=my_topics, # or TOPICS_HYBRID_REASONING
include_examples=True
)
# Mode 1: Generate one interaction at a time (builds context progressively)
generator.generate_single(
prompt_creator=prompt_creator,
num_interactions=5, # Interactions per conversation
conversations=20, # Number of conversations
target_tokens=1024, # Tokens per interaction
thinking_ratio=0.7, # 70% use extended thinking
temperature=0.7,
stream=True
)
# Mode 2: Generate entire conversations at once
generator.generate_all_at_once(
prompt_creator=prompt_creator,
num_interactions=5,
conversations=20,
target_tokens_per_interaction=1024,
thinking_ratio=0.7,
temperature=0.7
)
dataset = generator.get_dataset()
```
### Hybrid Reasoning Dataset Format
```python
{
'interactions': [
[
{'query': "Question 1", 'think': "Reasoning...", 'answer': "Response 1"},
{'query': "Question 2", 'think': "Reasoning...", 'answer': "Response 2"},
...
],
...
],
'topics': ["Topic 1", "Topic 2", ...]
}
```
### 3. DMPO (Direct Memory and Preference Optimization) Generator
Create preference pairs for memory-aware training:
```python
from rxai_sdg.hybrid import DMPOGenerator, DMPOPromptCreator
generator = DMPOGenerator(
max_items=100,
model_name="gpt-4",
api_url="https://api.openai.com/v1",
api_key="your-key"
)
prompt_creator = DMPOPromptCreator(
topics=TOPICS_HYBRID_REASONING,
include_examples=True
)
# Mode 1: Generate pairs one at a time
generator.generate_single(
prompt_creator=prompt_creator,
num_interactions=5,
conversations=20,
target_tokens=1024,
temperature=0.7
)
# Mode 2: Generate entire preference conversations at once
generator.generate_all_at_once(
prompt_creator=prompt_creator,
num_interactions=5,
conversations=20,
target_tokens_per_interaction=1024
)
dataset = generator.get_dataset()
```
### DMPO Dataset Format
Each interaction contains accepted (good) and rejected (bad) responses:
```python
{
'interactions': [
[
{
'query': "Question requiring memory...",
'accepted': {
'think': "Good reasoning with correct memory usage...",
'answer': "Accurate, helpful response..."
},
'rejected': {
'think': "Flawed reasoning or memory errors...",
'answer': "Response with issues..."
}
},
...
],
...
],
'topics': ["Topic 1", "Topic 2", ...]
}
```
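A record like this can be unpacked into (chosen, rejected) training strings using RxT-Beta's `[Q] ... [T] ... [A] ...` interaction template; the helper below is illustrative only, not part of the library:

```python
# Hypothetical DMPO record (strings invented for illustration).
pair = {
    'query': "What did I say my budget was?",
    'accepted': {'think': "The user stated a $500 budget earlier.", 'answer': "You said $500."},
    'rejected': {'think': "No recollection of a budget.", 'answer': "I'm not sure."},
}

def to_preference_strings(pair):
    """Format the accepted/rejected sides as (chosen, rejected) strings."""
    def fmt(side):
        return f"[Q] {pair['query']} [T] {side['think']} [A] {side['answer']}"
    return fmt(pair['accepted']), fmt(pair['rejected'])

chosen, rejected = to_preference_strings(pair)
```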
### Postprocessing
```python
from rxai_sdg.hybrid import HybridGeneratorPostprocessor
postprocessor = HybridGeneratorPostprocessor(
generator=generator,
dataset_id="your-org/hybrid-dataset",
token="hf_token"
)
# Filter empty/invalid conversations
postprocessor.filter_empty_interactions()
# Filter by conversation length
postprocessor.filter_by_length(min_interactions=3, max_interactions=10)
# Convert to RxT-Beta format
rxt_format = postprocessor.convert_to_rxt_format()
# Returns: [{'formatted': '[Q] query [T] think [A] answer', ...}, ...]
# Push to HuggingFace Hub
postprocessor.push_to_hf_hub()
```
## RxT-Beta Interaction Template
The hybrid generators produce data compatible with RxT-Beta's interaction template:
| Mode | Template | Description |
|------|----------|-------------|
| Fast Answer | `[Q] query [A] answer` | Direct response without reasoning |
| Extended Thinking | `[Q] query [T] thinking [A] answer` | Response with reasoning chain |
| Tool Usage | `[U] tool_result [T] thinking [A] answer` | Processing tool results |
| Internal Instruction | `[I] instruction [Q] query [A] answer` | Per-interaction behavior control |
| Tool Call | `[Q] query [A] answer [C] tool_call` | Invoking external tools |
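A small formatter covering these templates might look like the sketch below (the tag strings come from the table; the function itself is an illustrative assumption, not a library API):

```python
def format_interaction(query=None, answer=None, think=None,
                       instruction=None, tool_result=None, tool_call=None):
    """Assemble an interaction string from optional tagged parts, in template order."""
    parts = []
    if instruction:
        parts.append(f"[I] {instruction}")
    if tool_result:
        parts.append(f"[U] {tool_result}")
    if query:
        parts.append(f"[Q] {query}")
    if think:
        parts.append(f"[T] {think}")
    if answer:
        parts.append(f"[A] {answer}")
    if tool_call:
        parts.append(f"[C] {tool_call}")
    return " ".join(parts)

print(format_interaction(query="What is 2+2?", answer="4"))
# [Q] What is 2+2? [A] 4
```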
## Convenience Functions
```python
from rxai_sdg.hybrid import (
create_reasoning_completion_generator,
create_hybrid_reasoning_generator,
create_dmpo_generator,
)
# Quick setup with defaults
completion_gen = create_reasoning_completion_generator(
max_items=100,
model_name="gpt-4",
api_key="your-key"
)
# Returns (generator, prompt_creator) tuple
reasoning_gen, reasoning_prompts = create_hybrid_reasoning_generator(
max_items=100,
model_name="gpt-4",
api_key="your-key",
topics=my_custom_topics
)
dmpo_gen, dmpo_prompts = create_dmpo_generator(
max_items=100,
model_name="gpt-4",
api_key="your-key"
)
```
## API Reference
### Base Classes
#### `BaseDatasetGenerator`
Abstract base class for all generators.
```python
class BaseDatasetGenerator(ABC):
def __init__(
self,
max_items: int = None, # Maximum items to generate
model_name: str = "...", # Model identifier
api_url: str = "...", # API endpoint
api_key: str = None, # API authentication
use_ollama: bool = False # Use Ollama instead of OpenAI API
)
def generate_items(
self,
prompt: str,
stream: bool = False,
temperature: float = 0.7,
top_p: float = 0.9,
max_tokens: int = 15000,
system_prompt: str = "",
timeout: int = 120,
additional_config: dict = None
) -> str
def get_dataset(self) -> Dataset # Return HuggingFace Dataset
```
### MRL Module
- `MrlSyntheticDatasetGenerator` - Main generator
- `MrlPromptCreator` - Prompt composition
- `MrlContextBasedPromptCreator` - Context-aware prompts
- `MrlGeneratorPostprocessor` - Post-processing and export
### Hybrid Module
- `ReasoningCompletionGenerator` - Add missing think blocks
- `HybridReasoningGenerator` - Create reasoning conversations
- `DMPOGenerator` - Create preference pairs
- `HybridReasoningPromptCreator` - Prompts for reasoning generation
- `DMPOPromptCreator` - Prompts for DMPO generation
- `HybridGeneratorPostprocessor` - Post-processing utilities
## Configuration Options
### Generation Parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| `temperature` | 0.7 | Sampling temperature (0-1) |
| `top_p` | 0.9 | Nucleus sampling threshold |
| `max_tokens` | 15000 | Maximum tokens per generation |
| `timeout` | 120 | Request timeout in seconds |
| `stream` | False | Stream responses in real-time |
### Additional Config
```python
additional_config = {
'presence_penalty': 0,
'frequency_penalty': 0,
'response_format': {"type": "text"},
'extra_body': {
"top_k": 50,
'repetition_penalty': 1,
'min_p': 0,
},
}
generator.generate_items(..., additional_config=additional_config)
```
## Examples
### Complete Workflow: MRL Dataset
```python
from rxai_sdg.mrl import *
from rxai_sdg.mrl.prompts import ALL_PROMPTS_REAL, TOPICS_REAL
# 1. Initialize
generator = MrlSyntheticDatasetGenerator(
max_items=1000,
model_name="gpt-4",
api_url="https://api.openai.com/v1",
api_key="sk-..."
)
prompt_creator = MrlPromptCreator(
prompts=ALL_PROMPTS_REAL,
topics=TOPICS_REAL
)
# 2. Generate
for steps in [2, 3, 4, 5]: # Multiple conversation lengths
generator(
prompt_creator=prompt_creator,
steps=steps,
iterations=25,
num_examples=10,
temperature=0.8,
stream=True
)
# 3. Post-process
postprocessor = MrlGeneratorPostprocessor(
generator=generator,
dataset_id="myorg/mrl-dataset",
token="hf_..."
)
postprocessor.filter_duplicates()
postprocessor.push_to_hf_hub()
```
### Complete Workflow: Hybrid Reasoning
```python
from rxai_sdg.hybrid import *
# 1. Initialize
generator = HybridReasoningGenerator(
max_items=500,
model_name="gpt-4",
api_url="https://api.openai.com/v1",
api_key="sk-..."
)
prompt_creator = HybridReasoningPromptCreator()
# 2. Generate different conversation lengths
for length in [3, 5, 7]:
generator(
prompt_creator=prompt_creator,
num_interactions=length,
conversations=50,
mode='single', # Higher quality
temperature=0.7,
stream=True,
restart=False # Accumulate
)
# 3. Post-process
postprocessor = HybridGeneratorPostprocessor(
generator=generator,
dataset_id="myorg/hybrid-reasoning",
token="hf_..."
)
postprocessor.filter_empty_interactions()
postprocessor.push_to_hf_hub()
```
### Complete Workflow: DMPO Dataset
```python
from rxai_sdg.hybrid import *

# 1. Initialize
generator = DMPOGenerator(
    max_items=300,
    model_name="gpt-4",
    api_url="https://api.openai.com/v1",
    api_key="sk-..."
)
prompt_creator = DMPOPromptCreator()

# 2. Generate
generator(
    prompt_creator=prompt_creator,
    num_interactions=5,
    conversations=60,
    mode='single',
    target_tokens=1024,
    temperature=0.7
)

# 3. Export
postprocessor = HybridGeneratorPostprocessor(
    generator=generator,
    dataset_id="myorg/dmpo-dataset",
    token="hf_..."
)
postprocessor.push_to_hf_hub()
```
## Ollama Local Testing
For local development and testing with Ollama:
```bash
# Start Ollama
ollama serve
# Pull a model
ollama pull llama3.2
```
```python
from rxai_sdg.hybrid import HybridReasoningGenerator, HybridReasoningPromptCreator

# Use Ollama
generator = HybridReasoningGenerator(
    max_items=10,
    model_name="llama3.2",
    api_url="http://localhost:11434",
    use_ollama=True
)
prompt_creator = HybridReasoningPromptCreator()

# Generate (smaller batches for local testing)
generator(
    prompt_creator=prompt_creator,
    num_interactions=3,
    conversations=5,
    mode='single',
    temperature=0.8,
    stream=True
)
```
## License
Apache-2.0
## Contributing
Contributions are welcome! Please open an issue or submit a pull request.
## Links
- [Repository](https://github.com/RxAI-dev/rxai-sdg)
- [RxT-Beta Model](https://huggingface.co/ReactiveAI/RxT-Beta)
- [Reactive Transformer Paper](https://arxiv.org/abs/2510.03561)
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| text/markdown | Adam Filipek | adamfilipek@rxai.dev | null | null | Apache-2.0 | deep-learning, ai, machine-learning | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://rxai.dev/rxai-sdg | null | >=3.10 | [] | [] | [] | [
"datasets<4.0.0,>=3.5.0",
"huggingface-hub<0.31.0,>=0.30.0",
"openai<2.0.0,>=1.82.1",
"nltk<4.0.0,>=3.9.1",
"ollama<0.6.0,>=0.5.1"
] | [] | [] | [] | [
"Homepage, https://rxai.dev/rxai-sdg",
"Repository, https://github.com/RxAI-dev/rxai-sdg"
] | poetry/2.1.3 CPython/3.13.0 Darwin/25.2.0 | 2026-02-18T18:23:13.136749 | rxai_sdg-0.1.39.tar.gz | 230,466 | 07/e4/d8d0e67737c18127d43240f2d53cb837be90de6130210829c3e0722a7ff3/rxai_sdg-0.1.39.tar.gz | source | sdist | null | false | 06bf28758c75074ddca0ff418d66a221 | 1df6d6f4abc545024e6ec83301b77c473e5a9f1858f417dd29438298454271ba | 07e4d8d0e67737c18127d43240f2d53cb837be90de6130210829c3e0722a7ff3 | null | [] | 287 |
2.4 | pytest-agentcontract | 0.1.1 | Deterministic CI tests for LLM agent trajectories — record once, replay offline, assert contracts | # pytest-agentcontract
**Deterministic CI tests for LLM agent trajectories.** Record once, replay offline, assert contracts.
[](https://pypi.org/project/pytest-agentcontract/)
[](https://github.com/mikiships/pytest-agentcontract/actions)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
---
Your agent calls `lookup_order`, then `check_eligibility`, then `process_refund`. Every time. That's the contract. Test it like any other interface.
```bash
# Record a trajectory (hits real APIs once)
pytest --ac-record
# Replay in CI forever (no network, no API keys, no cost, deterministic)
pytest --ac-replay
```
```
tests/scenarios/refund-eligible.agentrun.json
├── turn 0: user → "I want a refund for order 123"
├── turn 1: assistant → lookup_order(order_id="123")
├── turn 2: assistant → check_eligibility(order_id="123")
├── turn 3: assistant → process_refund(order_id="123", amount=49.99)
└── turn 4: assistant → "Your refund of $49.99 has been processed."
```
## Install
```bash
pip install pytest-agentcontract
```
With auto-recording interceptors:
```bash
pip install pytest-agentcontract[openai] # OpenAI SDK
pip install pytest-agentcontract[anthropic] # Anthropic SDK
pip install pytest-agentcontract[all] # Everything
```
Framework adapters (LangGraph, LlamaIndex, OpenAI Agents SDK) are included -- no extras needed.
## Quick Start
### 1. Write a test
```python
@pytest.mark.agentcontract("refund-eligible")
def test_refund_flow(ac_recorder, ac_mode, ac_replay_engine, ac_check_contract):
    if ac_mode == "record":
        # Runs your real agent, records the trajectory
        run_my_agent(ac_recorder)
    elif ac_mode == "replay":
        # Replays from cassette -- no network, no tokens
        result = ac_replay_engine.run()
    contract = ac_check_contract(ac_recorder.run)
    assert contract.passed, contract.failures()
```
### 2. Record once
```bash
pytest --ac-record -k test_refund_flow
# Creates tests/scenarios/refund-eligible.agentrun.json
```
### 3. Replay in CI
```bash
pytest --ac-replay
# Deterministic. No API keys. No flakes. Sub-second.
```
## SDK Auto-Recording
Intercept real SDK calls instead of manually building turns:
```python
import openai

from agentcontract.recorder.interceptors import patch_openai

def test_with_real_agent(ac_recorder):
    client = openai.OpenAI()
    unpatch = patch_openai(client, ac_recorder)
    # Every chat.completions.create call is recorded automatically
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Refund order 123"}],
        tools=[...],
    )
    unpatch()
```
Works with Anthropic too:
```python
from agentcontract.recorder.interceptors import patch_anthropic
unpatch = patch_anthropic(client, ac_recorder)
```
## Framework Adapters
Drop-in recording for popular agent frameworks:
```python
# LangGraph
from agentcontract.adapters import record_graph
unpatch = record_graph(compiled_graph, recorder)
result = compiled_graph.invoke({"messages": [("user", "I need a refund")]})
unpatch()
# LlamaIndex
from agentcontract.adapters import record_agent
unpatch = record_agent(agent, recorder)
response = agent.chat("What's the refund policy?")
unpatch()
# OpenAI Agents SDK
from agentcontract.adapters import record_runner
unpatch = record_runner(recorder)
result = Runner.run_sync(agent, "Help with billing")
unpatch()
```
## Configuration
`agentcontract.yml` in your project root:
```yaml
version: "1"
scenarios:
  include: ["tests/scenarios/**/*.agentrun.json"]
replay:
  stub_tools: true
defaults:
  assertions:
    - type: contains
      target: final_response
      value: "refund"
    - type: called_with
      target: "tool:process_refund"
      schema:
        order_id: "123"
policies:
  - name: allowed-tools
    type: tool_allowlist
    tools: [lookup_order, check_eligibility, process_refund]
  - name: confirm-before-refund
    type: requires_confirmation
    tools: [process_refund]
```
Generate a starter config:
```bash
agentcontract init
```
## Assertions
| Type | What It Checks |
|------|---------------|
| `exact` | Exact string match |
| `contains` | Substring present |
| `regex` | Pattern match |
| `json_schema` | JSON Schema validation on tool args/results |
| `not_called` | Tool was NOT invoked |
| `called_with` | Tool called with specific arguments |
| `called_count` | Exact invocation count |
## Policies
| Policy | What It Enforces |
|--------|-----------------|
| `tool_allowlist` | Only listed tools may be called |
| `requires_confirmation` | Protected tools may only be called after user confirmation |
## Target Syntax
- `final_response` -- last assistant message
- `turn:N` -- specific turn by index
- `full_conversation` -- all turns concatenated
- `tool_call:function_name:arguments` -- tool call arguments
- `tool_call:function_name:result` -- tool call result
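As an illustration of how these targets plug into the assertion types above, here is a hypothetical config fragment (the values are invented, and any field names beyond the `type`/`target`/`value`/`schema` keys shown in the sample config are assumptions):

```yaml
defaults:
  assertions:
    - type: regex
      target: final_response
      value: 'refund of \$\d+\.\d{2}'
    - type: json_schema
      target: "tool_call:process_refund:arguments"
      schema:
        type: object
        required: [order_id, amount]
    - type: called_count
      target: "tool:lookup_order"
      value: 1
```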
## CLI
```bash
agentcontract info cassette.agentrun.json # Cassette summary
agentcontract validate cassette.agentrun.json # Structure check
agentcontract init # Starter config
```
## Why Not VCR / pytest-recording?
VCR records **HTTP requests**. This records **agent decisions**.
- VCR: "did the HTTP request match?" -- brittle, breaks on any provider API change
- agentcontract: "did the agent call the right tools with the right args?" -- tests actual behavior
Your agent's contract is: given this input, it calls these tools in this order with these arguments. That's what you want to regression-test, not the HTTP layer underneath.
## How It Works
```
┌──────────┐     ┌──────────┐     ┌────────────────┐
│  pytest  │────▶│ Recorder │────▶│ .agentrun.json │
│ --record │     │          │     │   (cassette)   │
└──────────┘     └──────────┘     └───────┬────────┘
                                          │
┌──────────┐     ┌──────────┐             │
│  pytest  │────▶│  Replay  │◀────────────┘
│ --replay │     │  Engine  │
└──────────┘     └────┬─────┘
                      │
                ┌─────▼─────┐
                │ Assertion │──▶ pass / fail
                │  Engine   │
                └───────────┘
```
1. **Record**: Run your agent against real APIs. The recorder captures every turn, tool call, argument, and result into a `.agentrun.json` cassette.
2. **Replay**: The replay engine feeds recorded tool results back. No network. No tokens. Deterministic.
3. **Assert**: The assertion engine checks contracts -- tool sequences, argument schemas, response content, policies.
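The record/replay/assert steps above can be sketched against a cassette-shaped dict. The exact `.agentrun.json` schema below is an assumption for illustration; only the idea of asserting on recorded agent decisions comes from this document:

```python
import json

# Hypothetical cassette shape -- the real .agentrun.json schema may differ.
cassette = {
    "scenario": "refund-eligible",
    "turns": [
        {"role": "user", "content": "I want a refund for order 123"},
        {"role": "assistant", "tool_call": {"name": "lookup_order", "args": {"order_id": "123"}}},
        {"role": "assistant", "tool_call": {"name": "check_eligibility", "args": {"order_id": "123"}}},
        {"role": "assistant", "tool_call": {"name": "process_refund", "args": {"order_id": "123", "amount": 49.99}}},
        {"role": "assistant", "content": "Your refund of $49.99 has been processed."},
    ],
}

def tool_sequence(run: dict) -> list:
    """Ordered tool names from a cassette-like dict: this is the 'contract'."""
    return [t["tool_call"]["name"] for t in run["turns"] if "tool_call" in t]

# The contract: same tools, same order, every time.
assert tool_sequence(cassette) == ["lookup_order", "check_eligibility", "process_refund"]
print(json.dumps(tool_sequence(cassette)))
```

Because the check runs against recorded decisions rather than live HTTP traffic, it stays deterministic across provider API changes.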
## License
MIT
| text/markdown | Josh Pfizer | null | null | null | null | agents, anthropic, ci, deterministic, langchain, llamaindex, llm, openai, pytest, record-replay, regression, testing, vcr | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonschema>=4.0",
"pytest>=7.0",
"pyyaml>=6.0",
"anthropic>=0.30; extra == \"all\"",
"langchain-core>=0.2; extra == \"all\"",
"llama-index-core>=0.10; extra == \"all\"",
"openai>=1.0; extra == \"all\"",
"anthropic>=0.30; extra == \"anthropic\"",
"mypy>=1.10; extra == \"dev\"",
"pytest-cov>=5.0; e... | [] | [] | [] | [
"Homepage, https://github.com/mikiships/pytest-agentcontract",
"Repository, https://github.com/mikiships/pytest-agentcontract",
"Issues, https://github.com/mikiships/pytest-agentcontract/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:22:33.210885 | pytest_agentcontract-0.1.1.tar.gz | 279,493 | b1/11/362d9ccd76704c78e4a69e6b4354dbf629d4344d8376d7937778a2ab71f1/pytest_agentcontract-0.1.1.tar.gz | source | sdist | null | false | 27dfa31a32064b4bd4831ae329d00210 | 68ce86484b4c5a8b5ff49f417737cef5c58e6cfd3c7cb705f703826101c442eb | b111362d9ccd76704c78e4a69e6b4354dbf629d4344d8376d7937778a2ab71f1 | MIT | [
"LICENSE"
] | 240 |
2.4 | aws-cdk-cli | 2.1106.1 | Python wrapper for AWS CDK CLI with smart Node.js runtime management | # AWS CDK CLI - Python Wrapper
A Python package that provides a wrapper around the AWS CDK CLI tool, allowing Python developers to install and use AWS CDK via pip/uv instead of npm. This package bundles the AWS CDK code and either uses your system's Node.js installation or downloads a platform-specific Node.js runtime during installation.
## How It Works
This package follows a hybrid approach:
1. It bundles the AWS CDK JavaScript code with the package
2. For Node.js, it either:
- Uses your system's Node.js installation if available (default)
- Downloads appropriate Node.js binaries for your platform during installation
3. This approach ensures compatibility across platforms while leveraging existing Node.js installations when possible
## Why Use This Package?
If you're a Python developer working with AWS CDK, you typically need to install Node.js and npm first, then install the CDK CLI globally via npm. This wrapper simplifies the process by:
- Bundling the CDK CLI code directly into a Python package
- Using your existing Node.js installation or downloading a minimal Node.js runtime
Benefits:
- No need to install npm or manage global npm packages
- Works in environments where npm installation is restricted
- Keeps AWS CDK installations isolated in Python virtual environments
- Consistent CDK versioning tied to your Python dependencies
- Optimized package size with platform-specific binary downloads only when needed
## Installation
```bash
# Using pip
pip install aws-cdk-cli
# Using uv
uv pip install aws-cdk-cli
# Install a specific version
pip install aws-cdk-cli==2.108.0
```
Note: During installation, the package may download the appropriate Node.js binaries for your platform if no compatible system installation is found. This initial setup requires an internet connection.
## Features
- **No npm dependency**: Eliminates the need for npm while still requiring Node.js (either system or downloaded)
- **Platform support**: Downloads appropriate Node.js binaries for Windows, macOS, and Linux when needed
- **Automatic updates**: Stays in sync with official AWS CDK releases
- **Seamless integration**: Use the same CDK commands you're familiar with
- **Offline caching**: Downloaded binaries are cached for offline usage
- **License compliance**: Includes all necessary license texts
- **Optimized size**: Only downloads the binaries needed for your platform
- **Flexible runtime options**: Can use system Node.js, downloaded Node.js, or Bun runtime
- **Compatible with Windows, macOS, and Linux**
- **Supports both x86_64 and ARM64 architectures**
## Additional Features
### Node.js Access in Virtual Environments
When you install AWS CDK CLI in a Python virtual environment, the package automatically creates a `node` symlink in your virtual environment's `bin` directory. This allows you to run the `node` command directly without requiring Node.js to be installed on your system.
For example:
```bash
# Activate your virtual environment
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Now you can use the node command
node --version
```
If for some reason the symlink wasn't created, you can create it manually by running:
```bash
cdk --create-node-symlink
```
## Usage
After installation, you can use the `cdk` command just as you would with the npm version:
```bash
# Initialize a new CDK project
cdk init app --language python
# Deploy a CDK stack
cdk deploy
# List all CDK stacks
cdk list
# Show version information
cdk --version
# Additional wrapper-specific commands
cdk --verbose # Show detailed installation information
cdk --license # Show license information
cdk --update # Update to the latest AWS CDK version
cdk --offline # Run in offline mode using cached packages
```
## JavaScript Runtime Options
The wrapper supports various JavaScript runtimes in the following priority order:
### Using system Node.js (default)
By default, the wrapper first checks if you have Node.js installed on your system. It will use your system Node.js installation if it meets the minimum required version for AWS CDK (typically v14.15.0+).
> **Note:** Node.js version compatibility warnings are silenced by default. If you want to see these warnings:
> ```bash
> cdk --show-node-warnings [commands...]
> ```
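The version gate can be sketched as follows (illustrative only, not the wrapper's actual code; it parses the `vMAJOR.MINOR.PATCH` string that `node --version` prints):

```python
# Illustrative sketch only -- not the package's actual implementation.
MIN_NODE = (14, 15, 0)  # minimum version cited above

def node_is_compatible(version_output: str) -> bool:
    # `node --version` prints e.g. "v20.11.1"
    parts = version_output.strip().lstrip("v").split(".")
    return tuple(int(p) for p in parts[:3]) >= MIN_NODE

print(node_is_compatible("v20.11.1"))  # → True
print(node_is_compatible("v12.22.0"))  # → False
```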
If you want to force using your system Node.js regardless of version:
```bash
cdk --use-system-node [commands...]
```
### Using Bun (if explicitly enabled)
Bun is a fast JavaScript runtime designed for strong Node.js compatibility. Enable it with:
```bash
cdk --use-bun [commands...]
```
Requirements for using Bun:
- Bun v1.1.0 or higher must be installed on your system
- The wrapper will verify that Bun's reported Node.js version is compatible with CDK requirements
### Using downloaded Node.js (fallback)
If no compatible system Node.js is found, the wrapper will download and use the Node.js runtime for your platform during installation. This is guaranteed to be a version that's compatible with AWS CDK.
### Using downloaded Node.js explicitly
```bash
cdk --use-downloaded-node [commands...]
```
This explicitly uses the downloaded Node.js even if a compatible system Node.js is available.
## Environment Variables
The package respects the following environment variables:
- `AWS_CDK_DEBUG`: Set to "1" for verbose debug output
- `AWS_CDK_CLI_USE_SYSTEM_NODE=1`: Use system Node.js if available
- `AWS_CDK_CLI_USE_BUN=1`: Use Bun as the JavaScript runtime
- `AWS_CDK_CLI_USE_DOWNLOADED_NODE=1`: Use downloaded Node.js instead of system Node.js
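The precedence implied by these flags can be sketched as follows (illustrative only, not the package's actual implementation; the real wrapper also validates runtime versions as described above):

```python
import shutil

# Illustrative sketch only -- not the package's actual code.
# Precedence implied by the README: explicit Bun flag, then explicit
# downloaded-Node flag, then a compatible system Node.js, then the
# downloaded runtime as the final fallback.
def pick_runtime(env: dict) -> str:
    if env.get("AWS_CDK_CLI_USE_BUN") == "1":
        return "bun"
    if env.get("AWS_CDK_CLI_USE_DOWNLOADED_NODE") == "1":
        return "downloaded-node"
    if env.get("AWS_CDK_CLI_USE_SYSTEM_NODE") == "1" or shutil.which("node"):
        return "system-node"
    return "downloaded-node"

print(pick_runtime({"AWS_CDK_CLI_USE_BUN": "1"}))  # → bun
```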
## License Information
This package contains:
- AWS CDK (Apache License 2.0)
- Node.js (MIT License)
All copyright notices and license texts are included in the distribution. You can view the licenses using:
```bash
cdk --license
```
## Version Synchronization
The version of this Python package matches the version of the AWS CDK npm package it wraps. Updates are automatically published when new versions of AWS CDK are released.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
### Development Setup
1. Clone the repository
```bash
git clone https://github.com/your-org/aws-cdk.git
cd aws-cdk
```
2. Create a virtual environment
```bash
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```
3. Install in development mode
```bash
pip install -e ".[dev]"
```
4. Run tests
```bash
make test
```
### Building from Source
```bash
python -m build
```
## Acknowledgements
- [AWS CDK](https://github.com/aws/aws-cdk) - The original AWS CDK project
- [Node.js](https://nodejs.org/) - JavaScript runtime bundled with this package
| text/markdown | null | "Ruben J. Jongejan" <ruben.jongejan@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | https://github.com/rvben/aws-cdk-cli-py | null | >=3.7 | [] | [] | [] | [
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"sphinx<9,>=8.0.0; extra == \"docs\"",
"sphinx-rtd-theme>=3.0.0; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:22:32.433098 | aws_cdk_cli-2.1106.1.tar.gz | 6,977,543 | 32/25/83841b6166e4d237cc9af40769427768c24aeac1bdd91d46fa6075be6d3c/aws_cdk_cli-2.1106.1.tar.gz | source | sdist | null | false | 7d1bfd7ce6c8b57b1ca695927e01ffe9 | 7d244f303ca47e12669748b59a758248a5f6ff02a4fead395e84826bd1531c72 | 322583841b6166e4d237cc9af40769427768c24aeac1bdd91d46fa6075be6d3c | null | [
"LICENSE"
] | 404 |
2.4 | amazon-ads-mcp | 0.2.12 | Amazon Ads API MCP Server - Implementation for Amazon Advertising API | <div align="center">
# Amazon Ads API MCP SDK
**Build AI-powered advertising applications with the Model Context Protocol (MCP) SDK for Amazon Advertising API**
*Made with ❤️ + ☕ by [Openbridge](https://www.openbridge.com/)*
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
mcp-name: io.github.KuudoAI/amazon_ads_mcp
</div>
## What Are MCP Tools?
Think of MCP (Model Context Protocol) as a translator between an AI model and outside systems (like Amazon Ads). Each MCP tool is like a remote control button that tells the AI how to interact with Amazon Ads. Without MCP tools, the AI would have no idea how to “talk” to Amazon Ads.
With MCP tools:
* The AI knows the exact endpoints to call.
* The AI can request campaign reports, budgets, or targeting data safely.
* Everything is structured, so the AI doesn’t break things by making random guesses.
👉 In short: MCP tools = a safe, well-labeled toolkit that lets AI work with the [Amazon Ads API](https://advertising.amazon.com/API/docs/en-us).
## 🚀 What is Amazon Ads API MCP SDK?
The Amazon Ads API MCP SDK is an open-source implementation that provides a robust foundation for creating AI-powered advertising tools, chatbots, and automated services.
### ✨ Key Features
- **🔌 MCP Integration**: Full Model Context Protocol compliance for AI application integration
- **🌍 Multi-Region Support**: NA, EU, and FE region endpoints with automatic routing
- **📊 Comprehensive API Coverage**: Campaigns, profiles, reporting, DSP, AMC workflows, and more
- **📝 Type Safety**: Full Pydantic model support with comprehensive type hints
- **🧪 Production Ready**: Includes testing, validation, and error handling
## 🎯 Use Cases
### Claude Desktop Integration
- **Campaign Management**: Ask Claude to create, update, or analyze campaigns
- **Performance Insights**: Get AI-powered analysis of your advertising performance
- **Budget Optimization**: Let Claude suggest budget adjustments based on performance
- **Creative Testing**: Get recommendations for ad creative improvements
- **Reporting**: Generate custom reports and insights on demand
### AI Applications
- **Marketing Chatbots**: Build conversational AI that can manage Amazon Ads campaigns
- **Automated Reporting**: AI-powered insights and performance analysis
- **Smart Budget Management**: Intelligent budget optimization using AI
- **Creative Optimization**: AI-driven ad creative testing and optimization
### Enterprise Services
- **Marketing Automation Platforms**: Integrate Amazon Ads into existing marketing tools
- **Agency Management Systems**: Multi-client, multi-account advertising management
- **E-commerce Integrations**: Connect Amazon Ads with e-commerce platforms
- **Analytics Dashboards**: Real-time advertising performance monitoring
### Developer Tools
- **API Wrappers**: Create custom SDKs for specific use cases
- **Testing Frameworks**: Automated testing for Amazon Ads integrations
- **Development Tools**: Local development and debugging utilities
## 📚 What Is Included In the Amazon Ads MCP?
The MCP server offers broad coverage of the services published in the Amazon Ads API; each service maps to a collection of operations within the API. This includes services like the new [Campaign Management services in the new Amazon Ads API v1](https://advertising.amazon.com/API/docs/en-us/guides/campaign-management/overview), [Exports](https://advertising.amazon.com/API/docs/en-us/guides/exports/overview), [Amazon Marketing Cloud](https://advertising.amazon.com/API/docs/en-us/guides/amazon-marketing-cloud/overview), and many more.
Here is a representative list of the various Amazon API services in the MCP:
- Accounts
- Audiences
- Reporting
- Brand metrics
- Sponsored Products
- Sponsored Brands
- Sponsored Display
- Amazon DSP
- Amazon Attribution
- Recommendations & insights
- Creatives
- Change history
- Data provider
- Products
- Unified pre-moderation
- Moderation
- Amazon Marketing Stream
- Locations
- Exports
- Media Planning
- Amazon Ads API v1 (Beta)
### 🧪 Amazon Ads API v1 (Beta)
The Amazon Ads API v1 represents a reimagined approach to the Amazon Ads API, built from the ground up to provide a seamless experience across all Amazon advertising products through a common model. One major benefit of this common model is improved compatibility with code generation tools such as client library generators.
> **⚠️ Beta Notice**: These APIs are currently in beta at Amazon. Features and endpoints may change. Use in production with caution.
| Package Name | Description | Prefix |
|-------------|-------------|--------|
| `ads-api-v1-sp` | Sponsored Products v1 | `spv1_` |
| `ads-api-v1-sb` | Sponsored Brands v1 | `sbv1_` |
| `ads-api-v1-dsp` | Amazon DSP v1 | `dspv1_` |
| `ads-api-v1-sd` | Sponsored Display v1 | `sdv1_` |
| `ads-api-v1-st` | Sponsored Television v1 | `stv1_` |
To activate Ads API v1 packages, add them to your `AMAZON_AD_API_PACKAGES` environment variable:
```bash
# Example: Enable Sponsored Products v1 and DSP v1
AMAZON_AD_API_PACKAGES="profiles,ads-api-v1-sp,ads-api-v1-dsp"
```
For more information, see Amazon's [Campaign Management Overview](https://advertising.amazon.com/API/docs/en-us/guides/campaign-management/overview).
## Installation
We recommend installing the Amazon Ads API MCP with 🐳 [Docker](https://docs.docker.com/):
```bash
docker pull openbridge/amazon-ads-mcp
```
Copy the environment template
```bash
cp .env.example .env
```
Edit `.env` with your settings.
Start the server with Docker Compose:
```bash
docker-compose up -d
```
The server will be available at http://localhost:9080
Check logs
```bash
docker-compose logs -f
```
Stop the server
```bash
docker-compose down
```
For full installation instructions, including verification, upgrading, and developer setup, see the [**Installation Guide**](INSTALL.md).
## Configuration
Amazon Ads requires that all calls to the API are authorized. If you are not sure what this means, you should read the Amazon docs:
* Amazon Ads API onboarding overview: https://advertising.amazon.com/API/docs/en-us/guides/onboarding/overview
* Getting started with the Amazon Ads API: https://advertising.amazon.com/API/docs/en-us/guides/get-started/overview
There are two paths for connecting to the API:
1. Bring Your Own App (BYOA)
2. Leverage Partner Apps
## Bring Your Own Amazon Ads API App
If you have your own Amazon Ads API app, or want to create one, the process is detailed below.
### 1. Register Your Application with Amazon
1. Go to the [Amazon Developer Console](https://developer.amazon.com/)
2. Create or select your Login with Amazon application
3. Note your `Client ID` and `Client Secret`
4. Add your callback URL to the "Allowed Return URLs" list. This is where you are running this server:
- For production: `https://your-server.com/auth/callback`
- For local development: `http://localhost:8000/auth/callback`
Once you have your app secured and approved by Amazon, you will need the client ID and secret:
```bash
# Amazon Ads API Credentials (required)
AMAZON_AD_API_CLIENT_ID="your-client-id"
AMAZON_AD_API_CLIENT_SECRET="your-client-secret"
```
Make sure these are in your `.env` file. Also, make sure you set your authorization method to `direct` in the same `.env`:
```bash
AUTH_METHOD=direct
```
### Complete OAuth Flow
To authorize your connection to Amazon, you need to complete an OAuth workflow as an end user. First, set your region: authorization occurs at the region level, and not setting your region may cause a failure. The server defaults to the `na` region. You can set the region manually with the `set_active_region` tool.
* Tool: `set_active_region`
* Parameters: `na` | `eu` | `fe`
Example prompt: *"Set my current region to `eu`"*
### Step 1: Start OAuth
To connect to the Amazon Ads API, use an MCP tool to start your OAuth flow:
* Tool: `start_oauth_flow`
* Example prompt: *"Start my OAuth flow"*
<img src="images/step1.png" alt="Step 1" style="max-width: 600px;">
### Step 2: Redirect to Amazon Ads
In this example, you are prompted to click the link that will open a browser window and request approval at Amazon.
<img src="images/step2.png" alt="Step 2" style="max-width: 600px;">
### Step 3: Approve Request
In the browser window, Amazon will prompt you to approve the connection request.
<img src="images/step3.png" alt="Step 3" style="max-width: 600px;">
### Step 4: Success
If all goes well, you will see a success response. You can close the browser window and return to your client. If you see something else, attempt the process again and confirm all your configuration elements are correct.
<img src="images/step4.png" alt="Step 4" style="max-width: 600px;">
### Step 5: Confirmation
To confirm that your MCP server is connected to the Amazon Ads API, check your OAuth status
* Tool: `check_oauth_status`
* Example prompt: *"Check my OAuth status"*
<img src="images/step5.png" alt="Step 5" style="max-width: 600px;">
You are ready to start interacting with the Amazon Ads API system!
### Partner Applications: Token Authentication
You can configure your client, like Claude, to use authentication by supplying a valid access token. This is most appropriate for service accounts, long-lived API keys, CI/CD, applications where authentication is managed separately, or other non-interactive authentication methods.
#### Openbridge Partner App
As an Ads API Partner application provider, Openbridge offers a ready-to-go gateway to the Amazon Ads API. You log into your Openbridge account, provision a token, then set your token in your client config (see below).
First, set Openbridge as the auth method:
```bash
AUTH_METHOD=openbridge
```
That is it for the server config. To access the server, you need to configure the client, such as Claude Desktop, to pass the token directly (see [Example MCP Client: Connect Claude Desktop](#example-mcp-client-connect-claude-desktop)).
##### Authorized Amazon Accounts
Your Amazon authorizations reside in Openbridge. Your first step in your client is to request your current identities: `"List my remote identities"`. Next, you would tell the MCP server to use one of these identities: `"Set my remote identity to <>"`. You can then ask the MCP to `List all of my Amazon Ad profiles` linked to that account. If you do not see an advertiser listed, set a different identity.
### Set Your Amazon Ads MCP Packages
To activate packages, set a comma-separated list of packages to load. For example, to activate `profiles` and `amc-workflow`, set your package environment like this:
- `AMAZON_AD_API_PACKAGES="profiles,amc-workflow"`
Here is the list of tool packages available in the server:
- `profiles`
- `campaign-manage`
- `accounts-manager-accounts`
- `accounts-ads-accounts`
- `accounts-portfolios`
- `accounts-billing`
- `accounts-account-budgets`
- `audiences-discovery`
- `reporting-version-3`
- `brand-benchmarks`
- `brand-metrics`
- `stores-analytics`
- `sponsored-products`
- `sp-suggested-keywords`
- `sponsored-brands-v4`
- `sponsored-brands-v3`
- `sponsored-display`
- `dsp-measurement`
- `dsp-advertisers`
- `dsp-audiences`
- `dsp-conversions`
- `dsp-target-kpi-recommendations`
- `amazon-attribution`
- `audience-insights`
- `forecasts`
- `brand-store-manangement`
- `partner-opportunities`
- `tactical-recommendations`
- `persona-builder`
- `creative-assets`
- `change-history`
- `data-provider-data`
- `data-provider-hashed`
- `products-metadata`
- `products-eligibility`
- `unified-pre-moderation-results`
- `moderation-results`
- `amazon-marketing-stream`
- `locations`
- `exports-snapshots`
- `marketing-mix-modeling`
- `reach-forecasting`
- `amc-administration`
- `amc-workflow`
- `amc-rule-audience`
- `amc-ad-audience`
- `ads-api-v1-sp` *(Beta)*
- `ads-api-v1-sb` *(Beta)*
- `ads-api-v1-dsp` *(Beta)*
- `ads-api-v1-sd` *(Beta)*
- `ads-api-v1-st` *(Beta)*
You will note that some services are broken up into smaller groupings. For example, Amazon Marketing Cloud has bundles: `amc-ad-audience`, `amc-administration`, `amc-rule-audience`, and `amc-workflow`. This keeps tool payloads lean, reducing context consumption in many AI clients.
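To make the activation rule concrete, here is a small hypothetical sketch (only the comma-separated format comes from the docs above; the helper and the subset of names are illustrative) of parsing an `AMAZON_AD_API_PACKAGES` value:

```python
# A subset of the package names listed above, for illustration only
KNOWN_PACKAGES = {
    "profiles", "campaign-manage", "amc-workflow", "amc-administration",
    "amc-rule-audience", "amc-ad-audience", "ads-api-v1-sp", "ads-api-v1-dsp",
}

def parse_packages(value: str) -> list[str]:
    """Split a comma-separated package string and reject unknown names."""
    names = [part.strip() for part in value.split(",") if part.strip()]
    unknown = [name for name in names if name not in KNOWN_PACKAGES]
    if unknown:
        raise ValueError(f"Unknown packages: {unknown}")
    return names

packages = parse_packages("profiles,amc-workflow")
```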
## Understanding Amazon Ads MCP Tools
Amazon Ads MCP tools use prefixes (like `cp_` for Campaign Management or `amc_` for Amazon Marketing Cloud) to indicate which Ads API service group an operation belongs to.
Example prefixes:
- `cp_` → campaign/advertising APIs
- `amc_` → AMC-related APIs
- `dsp_` → DSP APIs
- `sd_` → Sponsored Display
- `ams_` → Amazon Marketing Stream
- `spv1_` → Sponsored Products v1 *(Beta)*
- `sbv1_` → Sponsored Brands v1 *(Beta)*
- `dspv1_` → Amazon DSP v1 *(Beta)*
- `sdv1_` → Sponsored Display v1 *(Beta)*
- `stv1_` → Sponsored Television v1 *(Beta)*
This will translate into collections of tools that align with the API operations that are available:
**Campaign Management (`cp_`)**
- `cp_listCampaigns` — List all campaigns
- `cp_getCampaign` — Get specific campaign
- `cp_createCampaign` — Create new campaign
- `cp_updateCampaign` — Update campaign
- `cp_archiveCampaign` — Archive campaign
**Sponsored Products (`sp_`)**
- `sp_listProductAds` — List product ads
- `sp_createKeywords` — Create keywords
- `sp_updateBids` — Update keyword bids
- `sp_getNegativeKeywords` — Get negative keywords
**AMC Workflows (`amc_`)**
- `amc_listWorkflows` — List AMC workflows
- `amc_executeWorkflow` — Run workflow
- `amc_getWorkflowStatus` — Check workflow status
In practice, user prompts map to tools like this:
- **"List my Amazon Ads campaigns"**
→ Claude uses: `cp_listCampaigns`
- **"Create an AMC workflow"**
→ Claude uses: `amc_createWorkflow`
- **"Export my sponsored products ads data"**
→ Claude uses: `export_createAdExport`
## 📥 Downloading Reports & Exports
When you request a report or export, the data is downloaded server-side and stored in profile-scoped directories. You can then retrieve files via HTTP.
### Download Workflow
```
1. Request Report 2. List Downloads 3. Get Download URL 4. Download File
──────────────── ─────────────── ───────────────── ─────────────────
"Generate a campaign "List my downloaded "Get URL for the Open URL in browser
performance report" files" campaign report" or use curl
│ │ │ │
▼ ▼ ▼ ▼
request_and_download list_downloads() get_download_url() GET /downloads/...
_report() │ │
│ │ │
▼ ▼ ▼
data/profiles/ Returns file list Returns HTTP URL
{profile_id}/ with metadata like:
reports/... http://localhost:9080/
downloads/reports/...
```
### Example Prompts
| Task | Example Prompt |
|------|----------------|
| Download a report | *"Generate a Sponsored Products report for January 2026"* |
| List available files | *"Show me my downloaded files"* |
| Get download link | *"Get the download URL for the report we just created"* |
| Filter by type | *"List my downloaded campaign exports"* |
### HTTP Download API
Once you have a download URL, you can retrieve files directly:
```bash
# List available downloads
curl http://localhost:9080/downloads
# Download a specific file
curl -O http://localhost:9080/downloads/reports/async/report_123.json.gz
# With authentication (if enabled)
curl -H "Authorization: Bearer your-token" \
-O http://localhost:9080/downloads/exports/campaigns/export.json
```
### Profile Isolation
Files are stored per-profile to ensure data isolation:
- Each profile's files are in `data/profiles/{profile_id}/`
- You can only access files for your active profile
- Set your profile first: *"Set my active profile to 123456789"*
HTTP download endpoints and download tools serve profile-scoped files only. Move legacy files into a profile directory for access.
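Tying the pieces together, a client could assemble a download URL itself from the server's base address and a file's relative path (the base URL is the Docker default shown earlier; the helper is a hypothetical sketch, not part of the server's API):

```python
from urllib.parse import urljoin

def download_url(base: str, relative_path: str) -> str:
    """Build an HTTP download URL for a profile-scoped file."""
    return urljoin(base.rstrip("/") + "/", "downloads/" + relative_path.lstrip("/"))

url = download_url("http://localhost:9080", "reports/async/report_123.json.gz")
```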
## Advertiser Profiles & Regions
### Setting Your Advertiser Profile
Per Amazon: *Profiles play a crucial role in the Amazon Ads API by determining the management scope for a given call. A profile ID is a required credential to access an advertiser's data and services in a specific marketplace.*
You may not know what profile(s) authorization grants you access to. You can list all advertising profiles accessible by your authorization:
* Tool: `ac_listProfiles`
* Example prompt: *"List my advertiser profile ids"*
**Warning:** Large accounts can return very large profile lists that may exceed client context limits. Prefer these bounded tools for discovery:
* Tool: `summarize_profiles` — *"Summarize my advertiser profiles"*
* Tool: `search_profiles` — *"Find profiles with Acme in the name in US"*
* Tool: `page_profiles` — *"Show the first 20 UK profiles"*
* Tool: `refresh_profiles_cache` — *"Refresh my profile list cache"*
Response includes profile details:
- profileId, countryCode, currencyCode
- dailyBudget, timezone
- accountInfo (type: seller/vendor/agency)
Let's assume your list included profile ID `1043817530956285`. You can retrieve the profile's details to confirm this is the one you want to use.
* Tool: `ac_getProfile`
* Example prompt: *"Get the details for my profile_id: `1043817530956285`"*
Assuming this is the profile you want to use, you need to **set** the profile Amazon requires for API calls:
* Tool: `set_active_profile`
* Example prompt: *"Set my active profile id to `1043817530956285`"*
When you set the profile, it determines:
- Which account's data you access
- Currency and timezone for reports
- Available campaigns/ads/keywords
The profile ID will be set in the background for the duration of your session. Repeat the process if you want to switch to a new profile.
Most calls to the Amazon Ads API require a Region. Each [advertiser profile ID](https://advertising.amazon.com/API/docs/en-us/guides/account-management/authorization/profiles) is associated with an advertising account in a specific region/marketplace.
The region is part of an advertiser profile. When you set an advertiser profile with `set_active_profile`, it will set the region that is associated with the profile automatically.
* Tool: `set_active_profile`
Example prompt: *"Set my active advertiser profile to `111111111111`"*
Since profile ID `111111111111` is based in `na`, the region will be set based on the profile region.
### Set Active Region
The Amazon Ads MCP server includes tools for managing API regions as defaults and dynamically, allowing you to switch between North America (`na`), Europe (`eu`), and Far East (`fe`) regions without restarting the server.
| Region Code | Name | API Endpoint |
|-------------|------|--------------|
| `na` | North America | https://advertising-api.amazon.com |
| `eu` | Europe | https://advertising-api-eu.amazon.com |
| `fe` | Far East | https://advertising-api-fe.amazon.com |
When you set a region, the system automatically:
1. **Updates API endpoints** - Routes API calls to the correct regional endpoint
2. **Updates OAuth endpoints** - Uses the correct token refresh endpoint for the region
3. **Clears cached tokens** - Ensures fresh authentication for the new region
4. **Preserves other settings** - Keeps profile ID and identity settings intact
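The endpoint table above can be captured in a small lookup (the endpoint URLs are Amazon's documented regional endpoints; the helper itself is just an illustrative sketch):

```python
# Regional API endpoints, from the table above
REGION_ENDPOINTS = {
    "na": "https://advertising-api.amazon.com",
    "eu": "https://advertising-api-eu.amazon.com",
    "fe": "https://advertising-api-fe.amazon.com",
}

def endpoint_for(region: str) -> str:
    """Return the Ads API base URL for a region code ('na', 'eu', or 'fe')."""
    if region not in REGION_ENDPOINTS:
        raise ValueError(f"Unknown region {region!r}; expected one of {sorted(REGION_ENDPOINTS)}")
    return REGION_ENDPOINTS[region]

api_base = endpoint_for("eu")
```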
**IMPORTANT: Avoid Region Mismatch**: *If you attempt to set a region that is not associated with your advertiser profile, the Ads API will reject your requests. For example, if a profile ID is attached to `na` and you manually set the region to `eu`, you have created a mismatch which will cause API request failures.*
### Get Active Region
If you are not sure which region is set, you can check the current region:
* Tool: `get_active_region`
* Returns: Current region, endpoints, and configuration source
Example prompt: *"What is my current active region?"*
## Example MCP Client: Connect Claude Desktop
Navigate to Connector Settings
Open Claude in your browser and navigate to the settings page. You can access this by clicking on your profile icon and selecting “Settings” from the dropdown menu. Once in settings, locate and click on the “Connectors” section in the sidebar. This will display your currently configured connectors and provide options to add new ones.
Edit your Claude Desktop configuration file:
**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
**Linux**: `~/.config/Claude/claude_desktop_config.json`
In this example, we show how to use the bearer token using the Openbridge API key. Add this configuration to your `mcpServers` section:
```json
{
  "mcpServers": {
    "amazon_ads_mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "http://${HOSTNAME}:${PORT}/mcp/",
        "--allow-http",
        "--header",
        "Authorization:Bearer ${OPENBRIDGE_API_KEY}",
        "--header",
        "Accept:application/json,text/event-stream",
        "--debug"
      ],
      "env": {
        "HOSTNAME": "your_hostname",
        "PORT": "your_server_port",
        "MCP_TIMEOUT": "120000",
        "MCP_REQUEST_TIMEOUT": "60000",
        "MCP_CONNECTION_TIMEOUT": "10000",
        "MCP_SERVER_REQUEST_TIMEOUT": "60000",
        "MCP_TOOL_TIMEOUT": "120000",
        "MCP_REQUEST_WARNING_THRESHOLD": "10000",
        "OPENBRIDGE_API_KEY": "your_openbridge_token_here"
      }
    }
  }
}
```
**Note**: Replace `your_hostname`, `your_server_port`, and `your_openbridge_token_here` with your actual hostname, port, and Openbridge token.
**IMPORTANT**: Cursor and Claude Desktop (Windows) have a bug where spaces inside args aren't escaped when it invokes npx, which ends up mangling these values. You can work around it using: [mcp-remote custom headers documentation](https://github.com/geelen/mcp-remote?tab=readme-ov-file#custom-headers).
The config would look something like this:
```json
{
  "mcpServers": {
    "amazon_ads_mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "http://${HOSTNAME}:${PORT}/mcp/",
        "--allow-http",
        "--header",
        "Authorization:${AUTH_HEADER}",
        "--header",
        "Accept: application/json, text/event-stream"
      ],
      "env": {
        "HOSTNAME": "your_hostname",
        "PORT": "your_server_port",
        "MCP_TIMEOUT": "120000",
        "MCP_REQUEST_TIMEOUT": "60000",
        "MCP_CONNECTION_TIMEOUT": "10000",
        "MCP_SERVER_REQUEST_TIMEOUT": "60000",
        "MCP_TOOL_TIMEOUT": "120000",
        "MCP_REQUEST_WARNING_THRESHOLD": "10000",
        "AUTH_HEADER": "Bearer <your_openbridge_token_here>"
      }
    }
  }
}
```
Here is another example, which can be used with the OAuth flow, since `OPENBRIDGE_API_KEY` is not needed:
```json
{
  "mcpServers": {
    "amazon_ads_mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "http://localhost:9080/mcp/",
        "--allow-http"
      ],
      "env": {
        "MCP_TIMEOUT": "120000",
        "MCP_REQUEST_TIMEOUT": "60000",
        "MCP_CONNECTION_TIMEOUT": "10000",
        "MCP_SERVER_REQUEST_TIMEOUT": "60000",
        "MCP_TOOL_TIMEOUT": "120000",
        "MCP_REQUEST_WARNING_THRESHOLD": "10000"
      }
    }
  }
}
```
*Note: For various Claude configurations similar to what was shown above, see the [MCP Remote docs](https://github.com/geelen/mcp-remote) for the latest settings/options.*
### Restart Claude Desktop
After saving the configuration file, restart Claude Desktop to load the new MCP server.
## ⚠️ Context Limits and Active MCP Server Tools
MCP tool registration and use can impact your AI system's usage limits. Usage limits control how much you can interact with an AI system, like Claude, over a specific time period. As Anthropic puts it, think of the amount of information/data used as drawing down on a "conversation budget". That budget determines how many messages you can send to your AI client, or how long you can work, before needing to wait for your limit to reset.
MCP Server tools contribute metadata like titles, descriptions, hints, and schemas to the model's context. This metadata is loaded into the LLM’s context window, which acts as its short-term working memory.
Each client, like Claude, has a fixed-size context window. This defines the maximum amount of information it can process in a single interaction—including user prompts, system instructions, tool metadata, and any prior messages.
The more tools you activate, the more of that limited space is consumed up front. With many tools active, their combined schema and config payloads can take up a significant share of the context, and you may quickly hit the context ceiling. This is when you'll start seeing errors or warnings about exceeding the chat length limit.
**The Amazon Ads MCP provides coverage across the entire API. As a result, there can be 100s of tools!**
* More tools = less room for user interaction: Activating unnecessary tools reduces available space for your actual prompt or data.
* Start small: Activate only what you need for the current task. You can always add more later.
If you're encountering unexpected length issues, review which tools are active. Trimming unused ones can help minimize context use.
## Troubleshooting
**Server not connecting?**
- Ensure the Docker container is running: `docker-compose ps`
- Check server logs: `docker-compose logs -f`
- Verify the port is correct (9080 by default)
**Authentication errors?**
- Check your OpenBridge token is valid
- Ensure the token is properly set in the environment
- Verify your Amazon Ads API access
**Claude not recognizing the server?**
- Restart Claude Desktop after configuration changes
- "Reload Page" in Claude Desktop if the MCP is not active
- Check the JSON syntax is valid
- Ensure the server name matches exactly
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | Amazon Ads API MCP SDK | null | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"PyYAML>=6.0.2",
"cryptography>=41.0.0",
"fastmcp<3.0,>=2.14.4",
"httpx>=0.28.1",
"openai>=1.109.1",
"pydantic>=2.12.5",
"pydantic-settings>=2.12.0",
"pyjwt>=2.10.1",
"python-dotenv>=1.2.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:21:58.138132 | amazon_ads_mcp-0.2.12.tar.gz | 204,739 | 9e/db/ac5a05c18b825400b85e13daba349b69c20e3aa4adbe44c1ac06feaf0e1a/amazon_ads_mcp-0.2.12.tar.gz | source | sdist | null | false | 8de2f651fcebfeff5a7a842c2e8a6a80 | e18830b28dd45407abc9898bc8243c44e4f1cb5609473fd605b78f741be3f1b5 | 9edbac5a05c18b825400b85e13daba349b69c20e3aa4adbe44c1ac06feaf0e1a | null | [
"LICENSE"
] | 260 |
2.4 | sutro | 0.1.55 | Sutro Python SDK | 


Sutro makes it easy to analyze and generate unstructured data using LLMs, from quick experiments to billion token jobs.
Whether you're generating synthetic data, running model evals, structuring unstructured data, classifying data, or generating embeddings - *batch inference is faster, cheaper, and easier* with Sutro.
Visit [sutro.sh](https://sutro.sh) to learn more and request access to the cloud beta.
## 🚀 Quickstart
Install:
```bash
[uv] pip install sutro
```
Authenticate:
```bash
sutro login
```
### Run your first job:
```python
import sutro as so
import polars as pl
from pydantic import BaseModel

# Load your data
df = pl.DataFrame({
    "review": [
        "The battery life is terrible.",
        "Great camera and build quality!",
        "Too expensive for what it offers."
    ]
})

# Add a system prompt (optional)
system_prompt = "Classify the sentiment of the review as positive, neutral, or negative."

# Define an output schema (optional)
class Sentiment(BaseModel):
    sentiment: str

# Run a prototyping (p0) job
df = so.infer(
    df,
    column="review",
    model="qwen-3-32b",
    output_schema=Sentiment
)

print(df)
```
Will produce a result like:

### Scaling up:
```python
# Load a larger dataset
df = pl.read_parquet('hf://datasets/sutro/synthetic-product-reviews-20k/results.parquet')

# Run a production (p1) job
job_id = so.infer(
    df,
    column="review_text",
    model="qwen-3-32b",
    output_schema=Sentiment,
    job_priority=1  # <-- one line of code for near-limitless scale
)
```
You can track live progress of your job, view results, and share with your team from the Sutro web app:

## What is Sutro?
Sutro is a **serverless, high-throughput batch inference service for LLM workloads**. With just a few lines of Python, you can quickly run batch inference jobs using open-source foundation models—at scale, with strong cost/time guarantees, and without worrying about infrastructure.
Think of Sutro as **online analytical processing (OLAP) for AI**: you submit queries over unstructured data (documents, emails, product reviews, etc.), and Sutro handles the heavy lifting of job execution - from intelligent batching to cloud orchestration to inference framework and hardware optimizations. You just bring your data, and Sutro handles the rest.
## 📚 Documentation & Examples
- [Documentation](https://docs.sutro.sh/)
- Example Guides:
- [Synthetic Data Zero to Hero](https://docs.sutro.sh/examples/synthetic-data-zero-to-hero)
- [Synthetic Data for Privacy Preservation](https://docs.sutro.sh/examples/synthetic-data-privacy)
- [Large Scale Embedding Generation with Qwen3 0.6B](https://docs.sutro.sh/examples/large-scale-embeddings)
- More coming soon...
## ✨ Features
- **⚡ Run experiments faster**
Small scale jobs complete in minutes, large scale jobs run within 1 hour - more than 20x faster than competing cloud services.
- **📈 Seamless scaling**
Use the same interface to run jobs with a few tokens, or billions at a time.
- **💰 Decreased Costs and Transparent Pricing**
Up to 10x cheaper than alternative inference services. Use dry run mode to estimate costs before running large jobs.
- **🐍 Pythonic DataFrame and file integrations**
Submit and receive results directly as Pandas/Polars DataFrames, or upload CSV/Parquet files.
- **🏗️ Zero infrastructure setup**
No need to manage GPUs, tune inference frameworks, or orchestrate parallelization. Just data in, results out.
- **📊 Real-time observability dashboard**
Use the Sutro web app to monitor your jobs in real-time and see results as they are generated, tag jobs for easier tracking, and share results with your team.
- **🔒 Built with security in mind**
Custom data retention options, and bring-your-own s3-compatible storage options available.
## 🧑💻 Typical Use Cases
- **Synthetic data generation**: Create millions of product reviews, conversations, or paraphrases for pre-training or distillation.
- **Model evals**: Easily run LLM benchmarks on a scheduled basis to detect model regressions or performance degradation.
- **Unstructured data analytics**: Run analytical workloads over unstructured data (e.g. customer reviews, product descriptions, emails, etc.).
- **Semantic tagging**: Add boolean/numeric/closed-set tags to messy data (e.g. LinkedIn bios, company descriptions).
- **Structured Extraction**: Pull structured fields out of unstructured documents at scale.
- **Classification**: Apply consistent labels across large datasets (spam, sentiment, topic, compliance risk).
- **Embedding generation**: Generate and store embeddings for downstream search/analytics.
## 🔌 Integrations
- **DataFrames**: Pandas, Polars
- **Files**: CSV, Parquet
- **Storage**: S3-Compatible Object Stores (e.g. R2, S3, GCS, etc.)
## 📦 Hosting Options
- **Cloud**: Run Sutro on our secure, multi-tenant cloud.
- **Isolated Deployments**: Bring your own storage, models, or cloud resources.
- **Local and Self-Hosted**: Coming soon!
See our [pricing page](https://sutro.sh/pricing) for more details.
## 🤝 Contributing
We welcome contributions! Please reach out to us at [team@sutro.sh](mailto:team@sutro.sh) to get involved.
## 📄 License
Apache 2.0 | text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy<3.0.0,>=2.1.1",
"requests<3.0.0,>=2.32.3",
"pandas<3.0.0,>=2.2.3",
"polars<=1.34.0,>=1.33.0",
"click<9.0.0,>=8.1.7",
"colorama<1.0.0,>=0.4.4",
"yaspin<4.0.0,>=3.2.0",
"tqdm<5.0.0,>=4.67.1",
"pydantic<3.0.0,>=2.11.4",
"pyarrow<22.0.0,>=21.0.0",
"tabulate<1.0.0,>=0.9.0",
"langsmith>=0.5.1... | [] | [] | [] | [
"Documentation, https://docs.sutro.sh",
"Homepage, https://sutro.sh"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T18:21:13.999714 | sutro-0.1.55.tar.gz | 34,566 | 3d/ce/186886b34a0729a4ca134423e240b407fb8e5679ccac369e420614fd372e/sutro-0.1.55.tar.gz | source | sdist | null | false | 7baefe0dbcf5e3c15a85f6379147c70e | 512f2eb0abd8f2fd1b8c3b1416294016b4de568c6aa388ba0d979c9cc20a5bea | 3dce186886b34a0729a4ca134423e240b407fb8e5679ccac369e420614fd372e | Apache-2.0 | [] | 309 |
2.4 | nnsight | 0.6.0a1 | Package for interpreting and manipulating the internals of deep learning models. | <p align="center">
<img src="./nnsight_logo.svg" alt="nnsight" width="300">
</p>
<h3 align="center">
Interpret and manipulate the internals of deep learning models
</h3>
<p align="center">
<a href="https://www.nnsight.net"><b>Documentation</b></a> | <a href="https://github.com/ndif-team/nnsight"><b>GitHub</b></a> | <a href="https://discord.gg/6uFJmCSwW7"><b>Discord</b></a> | <a href="https://discuss.ndif.us/"><b>Forum</b></a> | <a href="https://x.com/ndif_team"><b>Twitter</b></a> | <a href="https://arxiv.org/abs/2407.14561"><b>Paper</b></a>
</p>
<p align="center">
<a href="https://colab.research.google.com/github/ndif-team/nnsight/blob/main/NNsight_Walkthrough.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg"></img></a>
<a href="https://deepwiki.com/ndif-team/nnsight"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></img></a>
</p>
---
## About
**nnsight** is a Python library that enables interpreting and intervening on the internals of deep learning models. It provides a clean, Pythonic interface for:
- **Accessing activations** at any layer during forward passes
- **Modifying activations** to study causal effects
- **Computing gradients** with respect to intermediate values
- **Batching interventions** across multiple inputs efficiently
Originally developed in the [NDIF team](https://ndif.us/) at Northeastern University, nnsight supports local execution on any PyTorch model and remote execution on large models via the NDIF infrastructure.
> 📖 For a deeper technical understanding of nnsight's internals (tracing, interleaving, the Envoy system, etc.), see **[NNsight.md](./NNsight.md)**.
---
## Installation
```bash
pip install nnsight
```
---
## Agents
Inform LLM agents how to use nnsight using one of these methods:
### Skills Repository
**Claude Code**
```bash
# Open Claude Code terminal
claude
# Add the marketplace (one time)
/plugin marketplace add https://github.com/ndif-team/skills.git
# Install all skills
/plugin install nnsight@skills
```
**OpenAI Codex**
```bash
# Open OpenAI Codex terminal
codex
# Install skills
skill-installer install https://github.com/ndif-team/skills.git
```
### Context7 MCP
Alternatively, use [Context7](https://github.com/upstash/context7) to provide up-to-date nnsight documentation directly to your LLM. Add `use context7` to your prompts or configure it in your MCP client:
```json
{
  "mcpServers": {
    "context7": {
      "url": "https://mcp.context7.com/mcp"
    }
  }
}
```
See the [Context7 README](https://github.com/upstash/context7/blob/master/README.md) for full installation instructions across different IDEs.
### Documentation Files
You can also add our documentation files directly to your agent's context:
- **[CLAUDE.md](./CLAUDE.md)** — Comprehensive guide for AI agents working with nnsight
- **[NNsight.md](./NNsight.md)** — Deep technical documentation on nnsight's internals
---
## Quick Start
```python
from nnsight import LanguageModel

model = LanguageModel('openai-community/gpt2', device_map='auto', dispatch=True)

with model.trace('The Eiffel Tower is in the city of'):
    # Intervene on activations (must access in execution order!)
    model.transformer.h[0].output[0][:] = 0

    # Access and save hidden states from a later layer
    hidden_states = model.transformer.h[-1].output[0].save()

    # Get model output
    output = model.output.save()

print(model.tokenizer.decode(output.logits.argmax(dim=-1)[0]))
```
> **💡 Tip:** Always call `.save()` on values you want to access after the trace exits. Without `.save()`, values are garbage collected. You can also use `nnsight.save(value)` as an alternative.
## Accessing Activations
```python
with model.trace("The Eiffel Tower is in the city of"):
    # Access attention output
    attn_output = model.transformer.h[0].attn.output[0].save()

    # Access MLP output
    mlp_output = model.transformer.h[0].mlp.output.save()

    # Access any layer's output (access in execution order)
    layer_output = model.transformer.h[5].output[0].save()

    # Access final logits
    logits = model.lm_head.output.save()
```
**Note:** GPT-2 transformer layers return tuples where index 0 contains the hidden states.
## Modifying Activations
### In-Place Modification
```python
with model.trace("Hello"):
    # Zero out all activations
    model.transformer.h[0].output[0][:] = 0
    # Modify specific positions
    model.transformer.h[0].output[0][:, -1, :] = 0  # Last token only
```
### Replacement
```python
import torch

with model.trace("Hello"):
    # Add noise to activations
    hs = model.transformer.h[-1].mlp.output.clone()
    noise = 0.01 * torch.randn(hs.shape)
    model.transformer.h[-1].mlp.output = hs + noise
    result = model.transformer.h[-1].mlp.output.save()
```
## Batching with Invokers
Process multiple inputs in one forward pass. Each invoke runs its code in a **separate worker thread**:
- Threads execute serially (no race conditions)
- Each thread waits for values via `.output`, `.input`, etc.
- Invokes run in the order they're defined
- Cross-invoke references work because threads run sequentially
- **Within an invoke, access modules in execution order only**
```python
with model.trace() as tracer:
    # First invoke: worker thread 1
    with tracer.invoke("The Eiffel Tower is in"):
        embeddings = model.transformer.wte.output  # Thread waits here
        output1 = model.lm_head.output.save()
    # Second invoke: worker thread 2 (runs after thread 1 completes)
    with tracer.invoke("_ _ _ _ _ _"):
        model.transformer.wte.output = embeddings  # Uses value from thread 1
        output2 = model.lm_head.output.save()
```
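The serial worker-thread model above can be sketched in plain Python. This is a conceptual illustration only — not nnsight's actual implementation — and all names in it are invented:

```python
import threading

# Each "invoke" is a thread, started only after the previous one finishes,
# so later invokes can safely read values produced by earlier ones.
shared = {}
log = []

def invoke_1():
    shared["embeddings"] = [0.1, 0.2]  # like `embeddings = wte.output`
    log.append("invoke 1 done")

def invoke_2():
    # Safe: thread 1 has already completed before this thread starts.
    log.append(f"invoke 2 saw {shared['embeddings']}")

for fn in (invoke_1, invoke_2):  # threads run one at a time
    t = threading.Thread(target=fn)
    t.start()
    t.join()  # serial execution, so no race conditions
```

Because each thread is joined before the next starts, there is no concurrency between invokes — which is why cross-invoke references like `embeddings` above work.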
### Prompt-less Invokers
Use `.invoke()` with no arguments to operate on the entire batch:
```python
with model.trace() as tracer:
    with tracer.invoke("Hello"):
        out1 = model.lm_head.output[:, -1].save()
    with tracer.invoke(["World", "Test"]):
        out2 = model.lm_head.output[:, -1].save()
    # No-arg invoke: operates on ALL 3 inputs
    with tracer.invoke():
        out_all = model.lm_head.output[:, -1].save()  # Shape: [3, vocab]
```
## Multi-Token Generation
Use `.generate()` for autoregressive generation:
```python
with model.generate("The Eiffel Tower is in", max_new_tokens=3) as tracer:
    output = model.generator.output.save()
print(model.tokenizer.decode(output[0]))
# "The Eiffel Tower is in the city of"
```
### Iterating Over Generation Steps
```python
with model.generate("Hello", max_new_tokens=5) as tracer:
    logits = list().save()
    # Iterate over all generation steps
    for step in tracer.iter[:]:
        logits.append(model.lm_head.output[0][-1].argmax(dim=-1))
print(model.tokenizer.batch_decode(logits))
```
### Conditional Interventions Per Step
```python
with model.generate("Hello", max_new_tokens=5) as tracer:
    outputs = list().save()
    for step_idx in tracer.iter[:]:
        if step_idx == 2:
            model.transformer.h[0].output[0][:] = 0  # Only on step 2
        outputs.append(model.transformer.h[-1].output[0])
```
> **⚠️ Warning:** Code after `tracer.iter[:]` never executes! The unbounded iterator waits forever for more steps. Put post-iteration code in a separate `tracer.invoke()`:
> ```python
> with model.generate("Hello", max_new_tokens=3) as tracer:
>     with tracer.invoke():  # First invoker
>         for step in tracer.iter[:]:
>             hidden = model.transformer.h[-1].output.save()
>     with tracer.invoke():  # Second invoker - runs after
>         final = model.output.save()  # Now works!
> ```
## Gradients
Gradients are accessed on **tensors** (not modules), only inside a `with tensor.backward():` context:
```python
with model.trace("Hello"):
    hs = model.transformer.h[-1].output[0]
    hs.requires_grad_(True)
    logits = model.lm_head.output
    loss = logits.sum()
    with loss.backward():
        grad = hs.grad.save()
print(grad.shape)
```
## Model Editing
Create persistent model modifications:
```python
# Create edited model (non-destructive)
with model.edit() as model_edited:
    model.transformer.h[0].output[0][:] = 0

# Original model unchanged
with model.trace("Hello"):
    out1 = model.transformer.h[0].output[0].save()

# Edited model has modification
with model_edited.trace("Hello"):
    out2 = model_edited.transformer.h[0].output[0].save()

assert not torch.all(out1 == 0)
assert torch.all(out2 == 0)
```
## Scanning (Shape Inference)
Get shapes without running the full model. Like all tracing contexts, `.save()` is required to persist values outside the block:
```python
import nnsight

with model.scan("Hello"):
    dim = nnsight.save(model.transformer.h[0].output[0].shape[-1])
print(dim)  # 768
```
## Caching Activations
Automatically cache outputs from modules:
```python
with model.trace("Hello") as tracer:
    cache = tracer.cache()

# Access cached values
layer0_out = cache['model.transformer.h.0'].output
print(cache.model.transformer.h[0].output[0].shape)
```
## Sessions
Group multiple traces for efficiency:
```python
with model.session() as session:
    with model.trace("Hello"):
        hs1 = model.transformer.h[0].output[0].save()
    with model.trace("World"):
        model.transformer.h[0].output[0][:] = hs1  # Use value from first trace
        hs2 = model.transformer.h[0].output[0].save()
```
## Remote Execution (NDIF)
Run on NDIF's remote infrastructure:
```python
from nnsight import CONFIG
CONFIG.set_default_api_key("YOUR_API_KEY")
model = LanguageModel("meta-llama/Meta-Llama-3.1-8B")
with model.trace("Hello", remote=True):
    hidden_states = model.model.layers[-1].output.save()
```
Check available models at [nnsight.net/status](https://nnsight.net/status/).
## vLLM Integration
High-performance inference with vLLM:
```python
from nnsight.modeling.vllm import VLLM
model = VLLM("gpt2", tensor_parallel_size=1, dispatch=True)
with model.trace("Hello", temperature=0.0, max_tokens=5) as tracer:
    logits = list().save()
    for step in tracer.iter[:]:
        logits.append(model.logits.output)
```
## NNsight for Any PyTorch Model
Use `NNsight` for arbitrary PyTorch models:
```python
from nnsight import NNsight
import torch
net = torch.nn.Sequential(
    torch.nn.Linear(5, 10),
    torch.nn.Linear(10, 2),
)
model = NNsight(net)

with model.trace(torch.rand(1, 5)):
    layer1_out = model[0].output.save()
    output = model.output.save()
```
## Source Tracing
Access intermediate operations inside a module's forward pass. `.source` rewrites the forward method to hook into all operations:
```python
# Discover available operations
print(model.transformer.h[0].attn.source)
# Shows forward method with operation names like:
#   attention_interface_0 -> 66 attn_output, attn_weights = attention_interface(...)
#   self_c_proj_0         -> 79 attn_output = self.c_proj(attn_output)

# Access operation values
with model.trace("Hello"):
    attn_out = model.transformer.h[0].attn.source.attention_interface_0.output.save()
```
## Ad-hoc Module Application
Apply modules out of their normal execution order:
```python
with model.trace("The Eiffel Tower is in the city of"):
    # Get intermediate hidden states
    hidden_states = model.transformer.h[-1].output[0]
    # Apply lm_head to get "logit lens" view
    logits = model.lm_head(model.transformer.ln_f(hidden_states))
    tokens = logits.argmax(dim=-1).save()
print(model.tokenizer.decode(tokens[0]))
```
---
## Core Concepts
### Deferred Execution with Thread-Based Synchronization
NNsight uses **deferred execution** with **thread-based synchronization**:
1. **Code extraction**: When you enter a `with model.trace(...)` block, nnsight captures your code (via AST) and immediately exits the block
2. **Thread execution**: Your code runs in a separate worker thread
3. **Value waiting**: When you access `.output`, the thread **waits** until the model provides that value
4. **Hook-based injection**: The model uses PyTorch hooks to provide values to waiting threads
```python
with model.trace("Hello"):
    # Code runs in a worker thread
    # Thread WAITS here until layer output is available
    hs = model.transformer.h[-1].output[0]
    # .save() marks the value to persist after the context exits
    hs = hs.save()
    # Alternative: hs = nnsight.save(hs)

# After exiting, hs contains the actual tensor
print(hs.shape)  # torch.Size([1, 2, 768])
```
**Key insight:** Your code runs directly. When you access `.output`, you get the **real tensor** - your thread just waits for it to be available.
**Important:** Within an invoke, you must access modules in execution order. Accessing layer 5's output and then layer 2's output fails: by the time layer 5's value is available, layer 2 has already executed, so its value was missed.
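The waiting mechanism can be sketched in plain Python using only the standard library. This is a conceptual illustration of the synchronization pattern, not nnsight's actual implementation, and all names in it are invented:

```python
import threading

class Future:
    """A value slot the worker thread blocks on until a hook fills it."""
    def __init__(self):
        self._event = threading.Event()
        self._value = None

    def set(self, value):
        # Called by the "forward hook" when the model produces the value.
        self._value = value
        self._event.set()

    def get(self):
        # Called by the user's traced code; blocks until the value exists.
        self._event.wait()
        return self._value

output = Future()
results = []

def user_code():
    # Like `hs = model.layer.output` -- waits for the hook to fire.
    results.append(output.get())

worker = threading.Thread(target=user_code)
worker.start()
output.set(42)   # the hook provides the real value
worker.join()
print(results)   # [42]
```

The user's code reads like ordinary sequential Python, but each access is actually a blocking wait on a value that the forward pass supplies.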
### Key Properties
Every module has these special properties. Accessing them causes the worker thread to **wait** for the value:
| Property | Description |
|----------|-------------|
| `.output` | Module's forward pass output (thread waits) |
| `.input` | First positional argument to the module |
| `.inputs` | All inputs as `(args_tuple, kwargs_dict)` |
**Note:** `.grad` is accessed on **tensors** (not modules), only inside a `with tensor.backward():` context.
### Module Hierarchy
Print the model to see its structure:
```python
print(model)
# GPT2LMHeadModel(
#   (transformer): GPT2Model(
#     (h): ModuleList(
#       (0-11): 12 x GPT2Block(
#         (attn): GPT2Attention(...)
#         (mlp): GPT2MLP(...)
#       )
#     )
#   )
#   (lm_head): Linear(...)
# )
```
---
## Troubleshooting
| Error | Cause | Fix |
|-------|-------|-----|
| `OutOfOrderError: Value was missed...` | Accessed modules in wrong order | Access modules in forward-pass execution order |
| `NameError` after `for step in tracer.iter[:]` | Code after unbounded iter doesn't run | Use separate `tracer.invoke()` for post-iteration code |
| `ValueError: Cannot return output of Envoy...` | No input provided to trace | Provide input: `model.trace(input)` or use `tracer.invoke(input)` |
For more debugging tips, see the [documentation](https://www.nnsight.net).
---
## More Resources
- **[Documentation](https://www.nnsight.net)** — Tutorials, guides, and API reference
- **[NNsight.md](./NNsight.md)** — Deep technical documentation on nnsight's internals
- **[CLAUDE.md](./CLAUDE.md)** — Comprehensive guide for AI agents working with nnsight
- **[Performance Report](./tests/performance/profile/results/performance_report.md)** — Detailed performance analysis and benchmarks
---
## Citation
If you use `nnsight` in your research, please cite:
```bibtex
@article{fiottokaufman2024nnsightndifdemocratizingaccess,
title={NNsight and NDIF: Democratizing Access to Foundation Model Internals},
author={Jaden Fiotto-Kaufman and Alexander R Loftus and Eric Todd and Jannik Brinkmann and Caden Juang and Koyena Pal and Can Rager and Aaron Mueller and Samuel Marks and Arnab Sen Sharma and Francesca Lucchetti and Michael Ripa and Adam Belfki and Nikhil Prakash and Sumeet Multani and Carla Brodley and Arjun Guha and Jonathan Bell and Byron Wallace and David Bau},
year={2024},
eprint={2407.14561},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.14561},
}
```
| text/markdown | null | Jaden Fiotto-Kaufman <jadenfk@outlook.com> | null | null | MIT License | deep learning, neural networks, interpretability, pytorch, transformers | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved ::... | [] | null | null | >=3.10 | [] | [] | [] | [
"transformers",
"astor",
"cloudpickle",
"httpx",
"python-socketio[client]",
"pydantic>=2.9.0",
"torch>=2.4.0",
"accelerate",
"toml",
"ipython",
"rich",
"zstandard",
"pytest>=6.0; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"vllm>=0.12; extra == \"vllm\"",
"triton==3.5.0; extra... | [] | [] | [] | [
"Homepage, https://github.com/ndif-team/nnsight",
"Website, https://nnsight.net/",
"Documentation, https://nnsight.net/documentation/",
"Changelog, https://github.com/ndif-team/nnsight/CHANGELOG.md",
"Repository, https://github.com/ndif-team/nnsight.git",
"Bug Tracker, https://github.com/ndif-team/nnsight... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:20:40.611560 | nnsight-0.6.0a1.tar.gz | 983,784 | de/65/6287456c890af2325764ef44bee1fcba8038044883de5b65760cc4fe731d/nnsight-0.6.0a1.tar.gz | source | sdist | null | false | 37626e02c5bb0430f43b2a21d55e3fb3 | d7b9860b186dac3753117f4d2bc6d21e2110ee5f4a7f0290290ec91afed96d93 | de656287456c890af2325764ef44bee1fcba8038044883de5b65760cc4fe731d | null | [
"LICENSE"
] | 1,576 |
2.1 | duct-tape | 0.26.7 | Duct Tape is a Python interface for downloading data, uploading data, and controlling supported Ed-Tech software. | 
# Duct Tape
Duct Tape is a Python interface for downloading data, uploading data, and controlling supported Ed-Tech software.
It is built on top of Requests and Selenium and is intended to help K-12 school system data and IT teams save
time and use "better" code to automate their workflows.
## Currently Supported Products
The following products are currently supported; some have more functionality than others.
* SchoolMint
* Google Sheets
* Lexia
* Clever
* Informed K12
* Mealtime
* Typing Agent
* Summit Learning
* SEIS
## Installing / Getting started
To use this project, complete the following steps (note: we are currently running out of master and
have not cut a release yet):
0. Set up a Chrome + Selenium environment on your computer. Instructions [here](https://medium.com/@patrick.yoho11/installing-selenium-and-chromedriver-on-windows-e02202ac2b08).
1. Download or clone the project to your computer.
2. Navigate to the root `ducttape` directory in your command line/terminal (the one with the setup.py file in it). Run `pip install ./`.
3. Check out the SchoolMint example in the [`examples`](https://github.com/SummitPublicSchools/ducttape/tree/master/examples) folder to see how easy it can be to grab your data.
## Documentation
A good number of functions have strong doc strings describing their purpose, parameters, and return types.
For now, this along with a couple of [examples](https://github.com/SummitPublicSchools/ducttape/tree/master/examples) are the primary sources of documentation.
## Features
* Downloading data from ed-tech Web UIs without human interaction
* Uploading data to ed-tech through web UIs without human interaction (coming soon)
* Controlling ed-tech web UIs through Python (limited implementation)
The original development purpose of this project was to automate data extraction from ed-tech
products in Python and return them as Pandas dataframes for analysis. Therefore, the biggest
feature set is around downloading flat files from different ed-tech products that don't provide
API or SQL access to all the data you might need. Some work is in progress around
uploading data and controlling other portions of ed-tech platforms, but it is still in
private development.
## Developing
The vision for this project is to have contributors from across different school systems help build
out a centralized, well-coded, tested library for interacting with ed-tech products that don't provide
adequate customer-facing APIs. This will be most successful if contributors come on board as developers
from different school systems; iron will sharpen iron and we will get better coverage of ed-tech products.
If you are interested in developing (and especially if you are interested in adding in support for a new
product), please reach out to mdunham@summitps.org and hshen@summitps.org.
#### Ideas for Future Development
* Add the ability to download data from a new product
* Add a missing feature to a currently supported product.
* Fully automating unit testing
## Unit Tests
Unit tests have been written for much of the functionality within this package. These are run
before any commits are made to master. However, they are context specific (in that you need
to use live instances to do the testing) and are not all fully automated (there are still cases
where a human needs to check that the downloaded data meets expected conditions since it is
being tested off of production systems).
A future area of development would be to figure out how to properly mock interacting with
these ed-tech platforms so that we could fully automate unit testing and have better coverage.
## Contributing
If you'd like to contribute new functionality, please reach out to mdunham@summitps.org and
hshen@summitps.org. If you have a bug fix or a code clean-up suggestion, feel free to fork us
and submit a pull request.
## Licensing
Please see the license file.
| text/markdown | Patrick Yoho | trickyoho@gmail.com | null | null | null | automation, education, illuminate, selenium, etl | [
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/SummitPublicSchools/ducttape | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/5.0.0 CPython/3.9.18 | 2026-02-18T18:20:06.342540 | duct_tape-0.26.7.tar.gz | 51,381 | 66/e6/68c5447e6667697b0531de6363323cfb602798a047f6a45d8ac76f8c91b2/duct_tape-0.26.7.tar.gz | source | sdist | null | false | a6b7f79c21e8eab83d944bbccb287de5 | ab504f4c0b27195e25d814baaaa2338c8750b383bede5761ab325d7233d8987a | 66e668c5447e6667697b0531de6363323cfb602798a047f6a45d8ac76f8c91b2 | null | [] | 178 |
2.4 | mini-afsd | 1.3.0 | A program for controlling a miniaturized additive friction stir deposition (AFSD) machine. | =========
mini_afsd
=========
mini_afsd is a program for controlling a miniaturized additive friction stir deposition (AFSD) machine.
.. contents:: **Contents**
   :depth: 1
Introduction
------------
This repository contains code for controlling a miniaturized AFSD machine and
is used by the `Yu group at Virginia Tech <https://yu.mse.vt.edu>`_.
Communication with the machine is achieved using `FluidNC <https://github.com/bdring/FluidNC>`_,
and future modifications to the firmware or code inputs can be helped by looking
through FluidNC's documentation.
Installation
------------
Dependencies
~~~~~~~~~~~~
Driver Dependencies
^^^^^^^^^^^^^^^^^^^
The `LJM` driver from LabJack must be installed to interface with the
LabJack for measuring the thermocouple outputs, which can be downloaded from
https://labjack.com/support/software/installers/ljm.
The driver needed for computers to properly connect to the serial
port's USB interface is available from
https://oemdrivers.com/usb-cp2104-usb-to-uart-driver.
(Change this in the future if the connector changes)
Python Dependencies
^^^^^^^^^^^^^^^^^^^
mini_afsd requires `Python <https://python.org>`_ version 3.10 or later
and the following Python libraries:
* `labjack-ljm <https://pypi.org/project/labjack-ljm/>`_
* `NumPy <https://numpy.org>`_
* `matplotlib <https://pypi.org/project/matplotlib/>`_ (>=3.4)
* `pyserial <https://pypi.org/project/pyserial/>`_
All of the required Python libraries should be automatically installed when
installing mini_afsd using any of the installation methods below.
Installing Python
~~~~~~~~~~~~~~~~~
Python can be installed multiple ways:
* If on Windows, the easiest way is to use `WinPython <https://winpython.github.io/>`_. The recommended
  installation file (as of June 10, 2022) is WinPython64-3.10.4.0 (or WinPython64-3.10.4.0dot if you don't
  want any preinstalled libraries).
* Use `Anaconda <https://www.anaconda.com/>`_, which comes with many libraries preinstalled.
* Install from Python's official source, https://www.python.org/. Follow the instructions listed at
  https://packaging.python.org/en/latest/tutorials/installing-packages/#requirements-for-installing-packages
  to ensure Python and the Python package manager `pip <https://pip.pypa.io>`_ are correctly installed.
Stable Release
~~~~~~~~~~~~~~
mini_afsd can be installed from `pypi <https://pypi.org/project/mini_afsd>`_
using `pip <https://pip.pypa.io>`_, by running the following command in the terminal:
.. code-block:: console

   pip install -U mini_afsd
Development Version
~~~~~~~~~~~~~~~~~~~
The sources for mini_afsd can be downloaded from the `GitHub repo`_.
To install the current version of mini_afsd from GitHub, run:
.. code-block:: console

   pip install https://github.com/RyTheGuy355/MiniAFSDCode/zipball/main
.. _GitHub repo: https://github.com/RyTheGuy355/MiniAFSDCode
Optional Dependencies
~~~~~~~~~~~~~~~~~~~~~
While not needed, an Arduino IDE (available from https://www.arduino.cc/en/software)
can be used when connected to the serial port of the mill to get more detailed feedback
on the messages sent to and from the port.
Quick Start
-----------
For default usage, mini_afsd can be run from a terminal (the command line if Python was
installed system-wide, an Anaconda terminal if Python was installed with Anaconda, or the
WinPython Command Prompt if Python was installed using WinPython) using:
.. code-block:: console

   python -m mini_afsd
To list out the various options when using mini_afsd from the terminal, simply do:
.. code-block:: console

   python -m mini_afsd -h
Alternatively, mini_afsd can be used from a Python file by doing the following:
.. code-block:: python

   from mini_afsd import Controller

   Controller().run()
Configuring LabJack
-------------------
For determining proper addresses to connections on the LabJack, use
the Kipling software included with LJM to find the pin addresses within
the "Register Matrix" section.
Sending Commands to FluidNC
---------------------------
Commands sent from the GUI to FluidNC for control of the mill can be split into 2 categories:
1) G-Code (and subsequent M-Codes, etc.): These are prefixed by "G", "M", etc., and follow their
   standard usage. See http://wiki.fluidnc.com/en/features/supported_gcodes for the G-Codes
   supported by FluidNC.
2) Commands to FluidNC or Grbl. These can include things like homing ("$H"), status query ("?"), or
   soft reset ("0x18" == "CTRL+X"). When adding new commands under this category, it is recommended
   to add a comment explaining what the command does, since its purpose is not immediately clear and
   undocumented commands make maintenance difficult. A full listing of commands can be found at
   http://wiki.fluidnc.com/en/features/commands_and_settings.
Log Files
---------
While the program is running, it is set up to automatically log messages sent and received from
the mill for later reference/debugging. In addition, if data collection was turned on and the
data was not subsequently saved, the data is automatically saved in order to prevent losing data.
The folder where these logs and data files are saved can be found by running the following within
a Python file:
.. code-block:: python

   from mini_afsd.controller import get_save_location

   print(get_save_location())
On Windows, this folder location likely corresponds to the local AppData folder,
ie. ``%localappdata%/mini_afsd``.
License
-------
mini_afsd is all rights reserved. For more information, refer to the license_.
.. _license: https://github.com/RyTheGuy355/MiniAFSDCode/tree/main/LICENSE.txt
Author
------
* Ryan Gottwald
| text/x-rst | Ryan Gottwald | Donald Erb <derb15@vt.edu> | null | Donald Erb <derb15@vt.edu> | null | AFSD, additive friction stir deposition, additive manufacturing, engineering | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python... | [] | null | null | >=3.7 | [] | [] | [] | [
"labjack-ljm",
"matplotlib>=3.4",
"numpy",
"pyserial",
"build; extra == \"release\"",
"bump-my-version; extra == \"release\"",
"twine; extra == \"release\""
] | [] | [] | [] | [
"Homepage, https://github.com/RyTheGuy355/MiniAFSDCode"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:19:08.763944 | mini_afsd-1.3.0.tar.gz | 27,274 | c7/20/755ee904eda6112cbd83149be1e4922949cb21acca8b5dd5c44382372a75/mini_afsd-1.3.0.tar.gz | source | sdist | null | false | a315892570ace16cfa8dbda26cdb93b7 | 7464deec30b5da75c40f1c6c0de85e3475c18607fe218e701df36a85ae491fed | c720755ee904eda6112cbd83149be1e4922949cb21acca8b5dd5c44382372a75 | null | [
"LICENSE.txt"
] | 255 |
2.4 | dtachwrap | 0.1.1 | Python wrapper for dtach with built-in binary management | # dtachwrap
Python wrapper for **dtach** with built-in binary management and multi-task CLI.
It packages the `dtach` binary in the wheel (Linux x86_64 / aarch64), so it works out of the box without compilation during installation.
## Features
- **Portable**: Includes pre-compiled `dtach` binaries (GPL compliant).
- **Easy Management**: `start`, `attach`, `ls`, `logs`, `stop` commands.
- **Logging**: Automatically captures stdout/stderr to files.
- **Recovery**: Keeps tasks running even if you disconnect.
## Installation
```bash
pip install dtachwrap
```
Or run directly with `uvx`:
```bash
uvx dtachwrap start my-task -- python script.py
```
## Usage
### Start a task
```bash
dtachwrap start train-exp1 -- python train.py --cfg exp1.yaml
```
The task runs in the background.
- Socket: `~/.dtachwrap/sockets/train-exp1`
- Logs: `~/.dtachwrap/logs/train-exp1.out`
### List tasks
```bash
dtachwrap ls
```
Shows running tasks. Use `dtachwrap ls --all` to see stopped tasks.
### View logs
```bash
dtachwrap logs train-exp1 -f
```
### Attach to a task
```bash
dtachwrap attach train-exp1
```
- Detach key: `^\` (Ctrl+\)
- Redraw: `Ctrl+l`
### Stop a task
```bash
dtachwrap stop train-exp1
```
## License
This project is licensed under MIT.
The bundled `dtach` binary is GPL-2.0. See `src/dtachwrap/_vendor/licenses/DTACH_COPYING`.
## Development
This project uses `uv` for dependency management.
1. **Setup**:
```bash
uv sync
```
2. **Run locally**:
```bash
uv run dtachwrap --help
```
3. **Build**:
```bash
uv build
```
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/HarborYuan/dtachwrap"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:19:05.113737 | dtachwrap-0.1.1.tar.gz | 92,552 | 45/3e/b08a6ae42e53b89f2c107546ee1562311a9ec53d974f2d42fead0ddc09b8/dtachwrap-0.1.1.tar.gz | source | sdist | null | false | 42b1bdfb8420b58779d7ffb720edd301 | 21bd7bc31cb74a0851a0f89f613557b8e7b480a4e89afd4413c33662de565235 | 453eb08a6ae42e53b89f2c107546ee1562311a9ec53d974f2d42fead0ddc09b8 | null | [] | 774 |
2.4 | pretiac | 0.5.0 | pretiac: A PREtty Typed Icinga2 Api Client. | .. image:: http://img.shields.io/pypi/v/pretiac.svg
:target: https://pypi.org/project/pretiac
:alt: This package on the Python Package Index
.. image:: https://github.com/Josef-Friedrich/PREtty-Typed-Icinga2-Api-Client_py/actions/workflows/tests.yml/badge.svg
:target: https://github.com/Josef-Friedrich/PREtty-Typed-Icinga2-Api-Client_py/actions/workflows/tests.yml
:alt: Tests
.. image:: https://readthedocs.org/projects/pretty-typed-icinga2-api-client-py/badge/?version=latest
:target: https://pretty-typed-icinga2-api-client-py.readthedocs.io
:alt: Documentation Status
pretiac: PREtty Typed Icinga2 Api Client
========================================
For more information about the project, please read the
`API documentation <https://pretty-typed-icinga2-api-client-py.readthedocs.io>`_.
``pretiac`` stands for **PRE** tty **T** yped **I** cinga2 **A** pi **C** lient.
This project is a fork / extension of the
`TeraIT-at/icinga2apic <https://github.com/TeraIT-at/icinga2apic>`__ api client.
The client class of ``icinga2apic`` was renamed to ``pretiac.raw_client.RawClient``.
``pretiac`` provides an additional client (``pretiac.client.Client``), which is typed.
`Pydantic <https://github.com/pydantic/pydantic>`__ is used to validate the
Icinga2 REST API and to convert the JSON
output into Python data types.
Authenticating Icinga 2 API Users with TLS Client Certificates
--------------------------------------------------------------
Source: `Blog post at icinga.com
<https://icinga.com/blog/2022/11/16/authenticating-icinga-2-api-users-with-tls-client-certificates/>`__
Icinga 2 supports a second authentication mechanism: TLS client certificates.
This is a feature of TLS that also allows the client to send a certificate, just
like the server does, allowing the server to authenticate the client as well.
You can start by generating a private key and a certificate signing request
(CSR) with the ``icinga2 pki new-cert`` command:
.. code-block::

   icinga2 pki new-cert \
      --cn my-api-client \
      --key my-api-client.key.pem \
      --csr my-api-client.csr.pem
This writes the key and CSR to the files my-api-client.key.pem and
my-api-client.csr.pem respectively. Note that you can also use other methods to
generate these files. It is only important that the CSR contains a meaningful
common name (CN). This allows you to also generate the private key on a hardware
security token for example.
Next, the CSR has to be signed by the Icinga CA. This can be achieved by copying
the CSR file to the Icinga master and running the following command:
.. code-block::

   icinga2 pki sign-csr \
      --csr my-api-client.csr.pem \
      --cert my-api-client.cert.pem
This generates a certificate, however, so far, Icinga 2 does not know what to do
with this certificate. To fix this, a new ApiUser object has to be created that
connects the certificate and its common name with some permissions.
.. code-block::

   object ApiUser "my-api-client" {
     client_cn = "my-api-client"
     permissions = [ "*" ]
   }
After reloading the Icinga 2 configuration, the certificate is now ready to use.
The following example uses curl, but any HTTPS client that supports client
certificates will do.
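A sketch of such a request (reconstructed here, since the original example is not shown; the hostname, the ``ca.crt`` filename, and the endpoint are placeholders — ``5665`` is Icinga 2's default API port):

.. code-block::

   curl --cert my-api-client.cert.pem \
        --key my-api-client.key.pem \
        --cacert ca.crt \
        'https://icinga-master:5665/v1/objects/hosts'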
Command line interface
----------------------
::
Usage: pretiac [OPTIONS] COMMAND [ARGS]...
Command line interface for the Icinga2 API.
Options:
-d, --debug Increase debug verbosity (use up to 3 times): -d: info -dd:
debug -ddd: verbose.
--help Show this message and exit.
Commands:
actions There are several actions available for Icinga 2 provided...
check Execute checks and send it to the monitoring server.
config Manage configuration packages and stages.
dump-config Dump the configuration of the pretiac client.
events Subscribe to an event stream.
objects Manage configuration objects.
status Retrieve status information and statistics for Icinga 2.
types Retrieve the configuration object types.
variables Request information about global variables.
``pretiac actions``
^^^^^^^^^^^^^^^^^^^
::
Usage: pretiac actions [OPTIONS] COMMAND [ARGS]...
There are several actions available for Icinga 2 provided by the
``/v1/actions`` URL endpoint.
Options:
--help Show this message and exit.
Commands:
send-service-check-result Send a check result for a service and create...
``pretiac actions send-service-check-result``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
Usage: pretiac actions send-service-check-result [OPTIONS] SERVICE
Send a check result for a service and create the host or the service if
necessary.
Options:
--plugin-output TEXT The plugin main output. Does **not** contain the
performance data.
--performance-data TEXT The performance data.
--exit-status TEXT For services: ``0=OK``, ``1=WARNING``,
``2=CRITICAL``, ``3=UNKNOWN``, for hosts: ``0=UP``,
``1=DOWN``.
--host TEXT The name of the host.
--help Show this message and exit.
``pretiac config``
^^^^^^^^^^^^^^^^^^
::
Usage: pretiac config [OPTIONS] COMMAND [ARGS]...
Manage configuration packages and stages.
Manage configuration packages and stages based on configuration files and
directory trees.
Options:
--help Show this message and exit.
Commands:
delete Delete a configuration package or a configuration stage entirely.
show
``pretiac config delete``
^^^^^^^^^^^^^^^^^^^^^^^^^
::
Usage: pretiac config delete [OPTIONS] PACKAGE [STAGE]
Delete a configuration package or a configuration stage entirely.
Options:
--help Show this message and exit.
``pretiac objects``
^^^^^^^^^^^^^^^^^^^
::
Usage: pretiac objects [OPTIONS] COMMAND [ARGS]...
Manage configuration objects.
Options:
--help Show this message and exit.
Commands:
delete-service Delete a service.
list List the different configuration object types.
``pretiac objects delete-service``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
Usage: pretiac objects delete-service [OPTIONS] HOST SERVICE
Delete a service.
Options:
--help Show this message and exit.
``pretiac objects list``
^^^^^^^^^^^^^^^^^^^^^^^^
::
Usage: pretiac objects list [OPTIONS] OBJECT_TYPE
List the different configuration object types.
Options:
--help Show this message and exit.
| text/x-rst | Josef Friedrich | Josef Friedrich <josef@friedrich.rocks> | null | null | null | null | [
"Topic :: Utilities",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"click>=8.1.8",
"pydantic>=2.12.5",
"pyyaml>=6.0.3",
"requests>=2.32.5",
"rich>=14.3.2",
"types-pyyaml>=6.0.12.20250915",
"types-requests>=2.32.4.20260107"
] | [] | [] | [] | [
"Repository, https://github.com/Josef-Friedrich/PREtty-Typed-Icinga2-Api-Client_py"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:19:00.325834 | pretiac-0.5.0-py3-none-any.whl | 44,698 | 8c/08/b9321b87b86f44860214636dc0f7143d78bc4824caf74c045ce1da587be3/pretiac-0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 5a3e562f33dbde670211332168edeae3 | 06729b000ab16be8ec8cc632d99217c7cfaec2445f5c28bd09673c7629742664 | 8c08b9321b87b86f44860214636dc0f7143d78bc4824caf74c045ce1da587be3 | GPL-3.0-only | [] | 251 |
2.4 | watchback | 0.1.2 | Simple backup utility | # Watchback

Simple desktop backup app: pick a source folder, pick one or more mirror folders, click `Sync`, and Watchback keeps mirrors updated.
## Quick Start
1. Install:
```bash
pip install watchback
```
2. Run:
```bash
watchback
```
3. In the app:
- Click `Add Profile`
- Add at least 2 folders
- Double-click one folder to mark it as `[GROUND]` (source of truth)
- Click `Save Profile`
- Click `Sync`
That is it. While sync is running, file changes are mirrored automatically.
## Open Existing Mirror (No Profile Needed)
If you attach a drive that already contains a Watchback mirror, you can use it directly:
1. Click `Open Mirror`
2. Select the mirror folder
3. Choose one of:
- `Explore Current`
- `Explore Versions`
- `Explore Snapshots`
You can export files from the mirror without creating a local profile first.
## What It Stores In Mirrors
Each mirror gets:
```text
mirror/
├── current/ # live copy
├── versions/ # older file versions
├── snapshots/ # periodic state history
└── objects/ # content storage used by versions/snapshots
```
## Important Notes
- Sync direction is one-way: `GROUND -> MIRROR`.
- Do not edit files inside mirror folders directly.
- Settings and logs are stored in:
- `~/.watchback/watchback.json`
- `~/.watchback/watchback.log`
## Requirements
- Python `3.9+`
- Linux, macOS, or Windows
## License
MIT. See `LICENSE`.
| text/markdown | null | Ali Aman <ali.aman.burki@gmail.com> | null | null | MIT License
Copyright (c) 2026 Ali Aman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"PySide6<7.0,>=6.0",
"watchdog<7.0,>=5.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T18:18:40.492540 | watchback-0.1.2.tar.gz | 20,681 | f7/f2/ebe005f57688e441faac892c7d6c4f41ade9466d8a20082474896867099b/watchback-0.1.2.tar.gz | source | sdist | null | false | b15bc124ef8ccba212c1a113d44ca35d | 6f2d89df76ecd239501b4ca608d24229029e94137ddb00615df725b858af846f | f7f2ebe005f57688e441faac892c7d6c4f41ade9466d8a20082474896867099b | null | [
"LICENSE"
] | 227 |
2.4 | logiscout-logger | 0.1.1 | Python logging library for ingesting logs into LogiScout | # LogiScout Logger
Python logging library for ingesting logs into [LogiScout](https://github.com/Kazim68/logiscout-logger). Built on top of `structlog` to provide context-rich, structured logs with intelligent batching.
## Features
- **Structured Logging**: JSON-formatted logs with timestamps, levels, and metadata
- **Intelligent Batching**: Automatically batches logs (200 logs or 30 seconds) to reduce network overhead
- **Correlation ID Tracking**: Automatic request correlation across your application
- **Environment Support**: DEV (console only) and PROD (console + remote) modes
- **Confidential Logging**: Mark sensitive logs to prevent them from being sent remotely
- **Framework Support**: Built-in middleware for FastAPI, Flask, and other ASGI/WSGI frameworks
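The "200 logs or 30 seconds" batching rule can be sketched roughly as follows. This is a hypothetical `LogBatcher`, not the library's internal class; the `flush` callback stands in for the HTTP upload:

```python
import time

class LogBatcher:
    """Buffer log records and flush when a size or age threshold is hit."""

    def __init__(self, flush, max_size=200, max_age=30.0):
        self.flush_fn = flush      # called with the batched records
        self.max_size = max_size   # flush after this many records...
        self.max_age = max_age     # ...or after this many seconds
        self.buffer = []
        self.started = None

    def add(self, record):
        if self.started is None:
            self.started = time.monotonic()
        self.buffer.append(record)
        if (len(self.buffer) >= self.max_size
                or time.monotonic() - self.started >= self.max_age):
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []
            self.started = None
```

Either threshold triggers a flush, so bursty traffic is sent promptly while quiet periods still drain within the age limit.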
## Installation
```bash
pip install logiscout-logger
```
## Quick Start
### Basic Usage
```python
from logiscout_logger import init, get_logger, PROD, DEV
# Initialize the logger
init(
endpoint="https://api.logiscout.com/logs",
service_name="my-service",
env=PROD # Use DEV for local development (no remote sending)
)
# Get a logger instance
logger = get_logger(__name__)
# Log messages
logger.info("User logged in", user_id=123)
logger.warning("Rate limit approaching", current=95, limit=100)
logger.error("Payment failed", order_id="abc-123", reason="insufficient_funds")
```
### FastAPI Integration
```python
from fastapi import FastAPI
from logiscout_logger import init, get_logger, asgiConfiguration, PROD
app = FastAPI()
# Initialize LogiScout
init(
endpoint="https://api.logiscout.com/logs",
service_name="my-fastapi-app",
env=PROD
)
# Add middleware for automatic correlation ID tracking
app.add_middleware(asgiConfiguration)
logger = get_logger("api")
@app.get("/users/{user_id}")
async def get_user(user_id: int):
logger.info("Fetching user", user_id=user_id)
# Your logic here
return {"user_id": user_id}
```
### Flask Integration
```python
from flask import Flask
from logiscout_logger import init, get_logger, wsgiConfiguration, PROD
app = Flask(__name__)
# Initialize LogiScout
init(
endpoint="https://api.logiscout.com/logs",
service_name="my-flask-app",
env=PROD
)
# Apply WSGI middleware
app.wsgi_app = wsgiConfiguration(app.wsgi_app)
logger = get_logger("api")
@app.route("/users/<int:user_id>")
def get_user(user_id):
logger.info("Fetching user", user_id=user_id)
return {"user_id": user_id}
```
### Django Integration
**1. Initialize in `settings.py`:**
```python
from logiscout_logger import init, PROD, DEV
init(
endpoint="https://api.logiscout.com/logs",
service_name="my-django-app",
env=PROD # Use DEV for local development
)
```
**2. Apply middleware in `wsgi.py`:**
```python
import os
from django.core.wsgi import get_wsgi_application
from logiscout_logger import wsgiConfiguration
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
application = get_wsgi_application()
application = wsgiConfiguration(application)
```
If running Django with an ASGI server (e.g., Uvicorn), apply the middleware in `asgi.py` instead:
```python
import os
from django.core.asgi import get_asgi_application
from logiscout_logger import asgiConfiguration
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
application = get_asgi_application()
application = asgiConfiguration(application)
```
**3. Use in views:**
```python
from logiscout_logger import get_logger
logger = get_logger(__name__)
def my_view(request):
logger.info("Processing request", user_id=request.user.id)
return JsonResponse({"status": "ok"})
```
## Environment Modes
### Development Mode (DEV)
In DEV mode, logs are only printed to the console. No logs are sent to the remote endpoint.
```python
from logiscout_logger import init, DEV
init(
endpoint="https://api.logiscout.com/logs",
service_name="my-service",
env=DEV # Logs only go to console
)
```
### Production Mode (PROD)
In PROD mode, logs are printed to the console AND sent to the remote endpoint with intelligent batching.
```python
from logiscout_logger import init, PROD
init(
endpoint="https://api.logiscout.com/logs",
service_name="my-service",
env=PROD # Logs go to console AND remote
)
```
## Confidential Logging
For sensitive data that should not be sent to the remote server, use the `send=False` parameter:
```python
logger = get_logger(__name__)
# This log will be sent to the remote server
logger.info("User authenticated", user_id=123)
# This log will ONLY appear in the console (not sent remotely)
logger.info("Password reset token generated", token="secret-token", send=False)
# Works with all log levels
logger.debug("Sensitive debug info", data=sensitive_data, send=False)
logger.error("Internal error details", stack_trace=trace, send=False)
```
## Log Levels
```python
logger.debug("Detailed debug information")
logger.info("General information")
logger.warning("Warning message")
logger.error("Error message")
logger.critical("Critical error message")
```
## Adding Metadata
Add any additional context to your logs:
```python
# Inline metadata
logger.info("Order created", order_id="123", total=99.99, currency="USD")
# Bound logger with persistent context
user_logger = logger.bind(user_id=123, session_id="abc")
user_logger.info("User action", action="click") # Includes user_id and session_id
```
## Standalone Usage
You can use logiscout-logger without connecting to a remote service. Logs will only go to the console:
```python
from logiscout_logger import get_logger
# No init() call needed for console-only logging
logger = get_logger("my_script")
logger.info("Script started")
logger.debug("Processing data", count=100)
logger.warning("Disk space low", available_gb=1.5)
```
## API Reference
### `init()`
Initialize the LogiScout logger.
```python
init(
endpoint: str, # Remote logging endpoint URL
service_name: str, # Service identifier
env: Environment # DEV or PROD
)
```
### `get_logger()`
Get a logger instance.
```python
logger = get_logger(name: str) # Usually __name__
```
### Logger Methods
```python
logger.debug(msg: str, send: bool = True, **kwargs)
logger.info(msg: str, send: bool = True, **kwargs)
logger.warning(msg: str, send: bool = True, **kwargs)
logger.error(msg: str, send: bool = True, **kwargs)
logger.critical(msg: str, send: bool = True, **kwargs)
```
### Middleware
```python
from logiscout_logger import asgiConfiguration, wsgiConfiguration
# For ASGI (FastAPI, Starlette, Django with ASGI)
app.add_middleware(asgiConfiguration)
# For WSGI (Flask, Django with WSGI)
app.wsgi_app = wsgiConfiguration(app.wsgi_app)
```
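Correlation-ID propagation of the kind the middleware performs is typically built on `contextvars`, which is safe across async tasks. A minimal sketch with hypothetical names (not the library's internals):

```python
import uuid
import contextvars

# One context variable per request context; isolated across async tasks.
correlation_id = contextvars.ContextVar("correlation_id", default=None)

def start_request(header_value=None):
    """Middleware entry: reuse an incoming ID or mint a new one."""
    cid = header_value or uuid.uuid4().hex
    correlation_id.set(cid)
    return cid

def enrich(event):
    """Log processor: attach the current correlation ID to each event."""
    cid = correlation_id.get()
    if cid is not None:
        event["correlation_id"] = cid
    return event
```

Every log emitted while the request is in flight then carries the same ID, which is what makes cross-service tracing possible.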
## Requirements
- Python 3.9+
- structlog >= 24.0.0
- requests >= 2.28.0
## License
MIT License - see [LICENSE](LICENSE) for details.
## Support
- Issues: https://github.com/Kazim68/logiscout-logger/issues
| text/markdown | null | Abdur Rehman Kazim <abdurrehmankazim68@gmail.com> | null | null | MIT | log-ingestion, logging, logiscout, observability, structured-logging | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python... | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.28.0",
"structlog>=24.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Kazim68/logiscout-logger",
"Documentation, https://github.com/Kazim68/logiscout-logger",
"Repository, https://github.com/Kazim68/logiscout-logger",
"Issues, https://github.com/Kazim68/logiscout-logger/issues"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-18T18:18:06.483499 | logiscout_logger-0.1.1.tar.gz | 15,730 | 60/88/b5c53a4b90e33083e9085342716b71a06fe718e9bb5a8195a13955bfead5/logiscout_logger-0.1.1.tar.gz | source | sdist | null | false | cd6f1df1037ad2a7542a8c976c9d2efe | 50b01fdcd0b653989d1ad7919320d24925ce8e5a33a773bbc73476fbb54e01d2 | 6088b5c53a4b90e33083e9085342716b71a06fe718e9bb5a8195a13955bfead5 | null | [
"LICENSE"
] | 251 |
2.4 | elephantq | 0.1.1 | ElephantQ - PostgreSQL-only async job queue - built for developer happiness. | # ElephantQ
**PostgreSQL-first background jobs for Python.**
ElephantQ is a modern, async-first job queue that uses PostgreSQL as the only backend. No Redis, no broker services, no operational sprawl. You get reliable queues, retries, scheduling, and a dashboard in a single package.
## Why ElephantQ
- One backend: PostgreSQL only. No Redis or broker to deploy or maintain.
- Async-native API: use `async def` jobs and `await` enqueue.
- Explicit worker model: predictable production behavior, easy to scale.
- Built-in features in the same package (opt-in flags).
- Strong DX: clean CLI, clear job discovery, minimal boilerplate.
- Scales well: uses Postgres `LISTEN/NOTIFY` for fast wakeups and row locking for safe concurrency.
## Quick Start
**Prerequisites:** PostgreSQL
```bash
# 1. Install
pip install elephantq
# 2. Set your database URL
export ELEPHANTQ_DATABASE_URL="postgresql://localhost/your_db"
# 3. Initialize database (creates tables)
elephantq setup
```
## Architecture at a Glance
- Your app enqueues jobs directly into PostgreSQL; no broker, no separate result store.
- Workers poll with `LISTEN/NOTIFY` plus `FOR UPDATE SKIP LOCKED`; horizontal workers never fight over rows.
- Feature flags gate dashboard, metrics, dead-letter, scheduling, signing, timeouts, dependencies, and more.
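The row-claiming step described above is conventionally a single statement built on the standard Postgres `SKIP LOCKED` pattern. A sketch of that pattern (not ElephantQ's exact SQL):

```python
# Sketch of the standard Postgres job-claim statement. Table and column
# names are illustrative, not ElephantQ's actual schema.
CLAIM_JOB_SQL = """
UPDATE jobs
SET status = 'running', started_at = now()
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'queued' AND queue = ANY(%(queues)s)
    ORDER BY priority DESC, created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED  -- concurrent workers skip rows already claimed
)
RETURNING id, payload;
"""
```

Because `SKIP LOCKED` makes each worker ignore rows another transaction has already locked, many workers can poll the same table without ever processing a job twice.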
## Design Choices
- **PostgreSQL-only stack** keeps reliability predictable and operations simple while leveraging existing transactions.
- **Explicit scheduler command (`elephantq scheduler`)** runs only when you need recurring/cron jobs. For development, `elephantq dev` bundles worker + scheduler + dashboard.
- **Feature flag gating** keeps `elephantq.enqueue`, `elephantq.start`, and the global API lean even though dashboard, metrics, signing, and webhooks live in the same repository.
- **Built-in observability**: CLI reports job counts, the dashboard paints job/queue metrics, and optional metrics/Prometheus data live under `elephantq.features.metrics`.
## Fluent Scheduling API
ElephantQ offers fluent builders so complex scheduling logic stays readable.
Use `elephantq.features.recurring.daily().at("09:00").high_priority().schedule(report_job)` for expressive recurring flows,
or `elephantq.features.scheduling.schedule_job(task).in_hours(2).enqueue(arg=...)` for ad-hoc delayed execution.
Batches via `elephantq.features.scheduling.create_batch().enqueue_all([...])` keep coordinated enqueues atomic and traceable across the same PostgreSQL transaction.
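The fluent style works because each call returns the builder itself. A toy sketch of the pattern (illustrative only, not ElephantQ's implementation):

```python
class RecurringBuilder:
    """Toy fluent builder: each method updates state and returns self."""

    def __init__(self, interval):
        self.spec = {"interval": interval, "priority": "normal"}

    def at(self, hhmm):
        self.spec["at"] = hhmm
        return self

    def high_priority(self):
        self.spec["priority"] = "high"
        return self

    def schedule(self, job):
        # A real implementation would persist this spec to PostgreSQL.
        return {"job": job.__name__, **self.spec}

def daily():
    return RecurringBuilder("daily")
```

Chaining then reads as a sentence: `daily().at("09:00").high_priority().schedule(report_job)`.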
### Minimal App (FastAPI)
```python
import elephantq
from fastapi import FastAPI
app = FastAPI()
elephantq.configure(database_url="postgresql://localhost/myapp")
@elephantq.job()
async def process_upload(file_path: str):
print(f"Processing {file_path}")
@app.post("/upload")
async def upload_file(file_path: str):
job_id = await elephantq.enqueue(process_upload, file_path=file_path)
return {"job_id": job_id}
```
### Run Workers
ElephantQ workers always run as a **separate process**.
```bash
# Terminal 1: Your app
uvicorn app:app
# Terminal 2: Workers (needs discovery)
export ELEPHANTQ_JOBS_MODULES="app"
elephantq start --concurrency 4
```
`ELEPHANTQ_JOBS_MODULES` drives the job discovery shown above. Run the scheduler daemon when you need recurring/cron-style jobs: just as Celery keeps Beat separate from its workers, ElephantQ runs recurring scheduling in a dedicated process so workers stay focused on execution.

```bash
ELEPHANTQ_SCHEDULING_ENABLED=true elephantq scheduler
```
The dashboard lives behind `ELEPHANTQ_DASHBOARD_ENABLED=true elephantq dashboard`. Add `ELEPHANTQ_DASHBOARD_WRITE_ENABLED=true` if you need retry/delete/cancel buttons.
## Dashboard Preview

## ElephantQ vs Alternatives
| Aspect | Celery | RQ | ElephantQ |
| --------------- | ---------------------------- | ------------------ | ----------------------- |
| Backend | Redis/RabbitMQ required | Redis required | ✅ PostgreSQL only |
| Scheduling | Separate `beat` process | External scheduler | ✅ Built-in scheduling |
| Concurrency | Worker pools + ack tuning | Controlled by Redis | Queue routing + unique jobs + dependency/timeouts |
| Observability | Flower/exporter dashboards | rq-dashboard | Built-in dashboard + metrics |
| Getting started | More setup | Moderate setup | ✅ Minutes to first job |
## Examples (Practical)
### Reliable Retries (Backoff)
```python
import elephantq
@elephantq.job(retries=5, retry_delay=1, retry_backoff=True, retry_max_delay=30)
async def resilient_task(user_id: int):
...
```
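With these settings, the delay presumably doubles per attempt and is capped at `retry_max_delay`. Assuming standard exponential doubling (the exact formula is not spelled out in this README), the schedule can be computed as:

```python
def backoff_delays(retries, retry_delay, retry_max_delay):
    """Exponential backoff: the delay doubles each attempt, capped at the max."""
    return [min(retry_delay * 2 ** i, retry_max_delay) for i in range(retries)]
```

For `retries=5, retry_delay=1, retry_max_delay=30` this yields delays of 1, 2, 4, 8, and 16 seconds.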
### Queue Routing
```python
import elephantq
@elephantq.job(queue="emails")
async def send_email(to: str):
...
@elephantq.job(queue="media")
async def transcode_video(video_id: str):
...
```
```bash
# Process only specific queues
elephantq start --queues emails,media
```
### Delayed Jobs (One-Off Scheduling)
```python
import elephantq
@elephantq.job()
async def remind_user(user_id: int):
...
# Run in 10 minutes
await elephantq.schedule(remind_user, run_in=600, user_id=42)
```
### Recurring Jobs (Cron)
```python
import elephantq
after_midnight = "0 2 * * *"
@elephantq.job()
async def nightly_report():
...
await elephantq.features.recurring.cron(after_midnight).schedule(nightly_report)
```
Make sure `ELEPHANTQ_SCHEDULING_ENABLED=true` is set when using recurring jobs.
### Instance-Based API (Multi-tenant or Separate DBs)
```python
from elephantq import ElephantQ
billing = ElephantQ(database_url="postgresql://localhost/billing")
@billing.job()
async def invoice_customer(customer_id: int):
...
await billing.enqueue(invoice_customer, customer_id=123)
```
## Examples Directory
Runnable examples live in `examples/`:
- `examples/basic_app.py` – minimal FastAPI enqueue flow
- `examples/recurring_jobs.py` – recurring scheduler patterns
- `examples/queue_routing.py` – multi-queue routing and worker config
- `examples/file_processing.py` – background file processing pattern
- `examples/webhook_delivery.py` – webhook delivery smoke example
## Queue Design Tips
- Use separate queues for different workloads (e.g., `emails`, `media`, `billing`).
- Keep job payloads small; store large blobs elsewhere and pass IDs.
- Use retries with backoff for flaky external APIs.
- Start with 1–4 workers locally; scale by adding worker processes.
## Optional Features (Same Package)
Advanced features live under `elephantq.features` and are **opt-in** via flags. Core job APIs (`job`, `enqueue`, `schedule`, workers) remain in the main package. The dashboard and monitoring features also require their optional dependencies (see extras below).
```bash
pip install elephantq[dashboard]
pip install elephantq[monitoring]
pip install elephantq[all]
```
Enable feature flags:
```bash
export ELEPHANTQ_DASHBOARD_ENABLED=true # enable dashboard UI
export ELEPHANTQ_DASHBOARD_WRITE_ENABLED=true # allow retry/delete actions
export ELEPHANTQ_SCHEDULING_ENABLED=true # recurring + delayed jobs
export ELEPHANTQ_DEAD_LETTER_QUEUE_ENABLED=true # dead letter queue
export ELEPHANTQ_METRICS_ENABLED=true # metrics endpoints
export ELEPHANTQ_LOGGING_ENABLED=true # structured logging
export ELEPHANTQ_WEBHOOKS_ENABLED=true # webhooks on job events
export ELEPHANTQ_DEPENDENCIES_ENABLED=true # job dependencies
export ELEPHANTQ_TIMEOUTS_ENABLED=true # job timeouts
export ELEPHANTQ_SIGNING_ENABLED=true # optional helpers (signing, secrets utils)
```
Example usage:
```python
import elephantq
metrics = await elephantq.features.metrics.get_system_metrics()
stats = await elephantq.features.dead_letter.get_stats()
```
## Troubleshooting
### "Job not registered"
If the worker says a job is not registered, it means the worker did not import your module.
✅ Fix:
```bash
export ELEPHANTQ_JOBS_MODULES="your_app_module"
elephantq start
```
## CLI
```bash
elephantq setup
elephantq start --concurrency 4 --queues default,urgent
elephantq scheduler
elephantq dashboard --port 6161 # read-only by default
elephantq metrics --hours 24
elephantq dead-letter list
```
## Documentation
- `docs/getting-started.md`
- `docs/cli.md`
- `docs/scheduling.md`
- `docs/features.md`
- `docs/production.md`
| text/markdown | null | Abhinav Saxena <abhinav@apiclabs.com> | null | null | null | async, job, queue, postgresql, task, redis-alternative, developer-experience, zero-dependencies, background-jobs | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Develop... | [] | null | null | null | [] | [] | [] | [
"asyncpg<0.32.0,>=0.30.0",
"pydantic<3.0.0,>=2.7.1",
"pydantic-settings<3.0.0,>=2.0.0",
"croniter>=1.4.0",
"aiohttp>=3.8.0",
"structlog>=22.0.0",
"cryptography>=3.4.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"httpx>=0.24.0; extra == \"dev\"",
"black>=23.0.... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T18:17:15.311832 | elephantq-0.1.1.tar.gz | 108,568 | a8/8f/0f3e63dafd02ae3d35bf91b65a522a77592f52b4a9cad903251a96351aa3/elephantq-0.1.1.tar.gz | source | sdist | null | false | 5a76d9d94814f67d8d7389213a7efa9d | f470fe552d60f225a0f61ce3ba59a24385a1d33694d6155add4a845f2fad9e34 | a88f0f3e63dafd02ae3d35bf91b65a522a77592f52b4a9cad903251a96351aa3 | MIT | [
"LICENSE"
] | 268 |
2.4 | markpact | 0.1.19 | Executable Markdown Runtime – run and manage entire projects directly from a single README.md file using specialized markpact code blocks and isolated Docker sandboxes. | 
# markpact
[](https://pypi.org/project/markpact/)
[](https://pypi.org/project/markpact/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/wronai/markpact/actions)
[](https://github.com/astral-sh/ruff)
Markpact is a minimal runtime that lets you keep an entire project in a single `README.md`.
The runtime ignores plain Markdown and executes only `markpact:*` code blocks.
## 💡 What is Markpact?
Markpact is a tool that turns a README.md file into an **executable project contract**. Instead of maintaining documentation and source code separately, everything lives in one place.
### Key capabilities:
| Feature | Description |
|---------|-------------|
| **Executable README** | Run an entire project from a single README.md file |
| **LLM Generation** | Generate a project from a text description: `markpact -p "REST API"` |
| **Multi-language** | Python, Node.js, Go, Rust, PHP, TypeScript, React |
| **Publishing** | Publish to PyPI, npm, or Docker Hub with a single command |
| **Docker Sandbox** | Run in an isolated container: `--docker` |
| **HTTP Testing** | Define HTTP tests in `markpact:test http` |
| **Auto-fix** | Automatic repair of runtime errors |
### Who is it for?
- **Developers** – rapid prototyping and running projects
- **DevOps** – CI/CD with the README as the single source of truth
- **Educators** – interactive tutorials with executable code
- **LLM/AI** – project generation and modification by AI

## 🚀 Quick Start
```bash
git clone https://github.com/wronai/markpact.git
# Install
pip install markpact[llm]
# Configure an LLM provider (pick one)
markpact config --provider ollama # local
markpact config --provider openrouter --api-key sk-or-v1-xxx # cloud
# Generate and run with a single command!
markpact -p "REST API for managing tasks with SQLite" -o todo/README.md --run
markpact -p "URL shortener with FastAPI and SQLite" -o url-test/README.md --run
# Or start from a bundled example
markpact -e todo-api -o todo/README.md --run
```
## 🤖 LLM Generation
Generate a complete project from a text description:
```bash
# List the 16 bundled examples
markpact --list-examples
# Generate from a prompt
markpact -p "URL shortener with FastAPI and SQLite" -o url/README.md
# Generate and run immediately (one-liner)
markpact -p "WebSocket chat with FastAPI" -o chat/README.md --run
# Run in an isolated Docker sandbox
markpact -p "Blog API with comments" -o blog/README.md --run --docker
```
**Supported providers:** Ollama (local), OpenRouter, OpenAI, Anthropic, Groq
Details: [docs/generator.md](docs/generator.md)
## 📦 Publishing to Registries
Publish artifacts directly from the README:
```bash
# PyPI
markpact README.md --publish --bump patch
# npm
markpact README.md --publish --registry npm
# Docker Hub
markpact README.md --publish --registry docker
# GitHub Container Registry
markpact README.md --publish --registry ghcr
```
Supported registries: **PyPI**, **npm**, **Docker Hub**, **GitHub Packages**, **GHCR**
## 📓 Notebook Conversion
Convert notebooks to the markpact format:
```bash
# List supported formats
markpact --list-notebook-formats
# Convert a Jupyter Notebook
markpact --from-notebook notebook.ipynb -o project/README.md
# Convert and run
markpact --from-notebook notebook.ipynb -o project/README.md --run
# Preview the conversion
markpact --from-notebook notebook.ipynb --convert-only
```
**Supported formats:**
| Format | Extension | Description |
|--------|-----------|-------------|
| Jupyter Notebook | `.ipynb` | Python, R, Julia |
| R Markdown | `.Rmd` | R with markdown |
| Quarto | `.qmd` | Multi-language |
| Databricks | `.dib` | Python, Scala, R |
| Zeppelin | `.zpln` | Python, Scala, SQL |
## 📚 Documentation
- [Full documentation](docs/README.md)
- [LLM Generation](docs/generator.md) ⭐ **NEW**
- [The markpact:* contract](docs/contract.md)
- [CI/CD Integration](docs/ci-cd.md)
- [Working with LLMs](docs/llm.md)
## 🎯 Examples
| Example | Description | Run |
|---------|-------------|-----|
| [FastAPI Todo](examples/fastapi-todo/) | REST API with a database | `markpact examples/fastapi-todo/README.md` |
| [Flask Blog](examples/flask-blog/) | Web app with templates | `markpact examples/flask-blog/README.md` |
| [CLI Tool](examples/cli-tool/) | Command-line tool | `markpact examples/cli-tool/README.md` |
| [Streamlit Dashboard](examples/streamlit-dashboard/) | Data dashboard | `markpact examples/streamlit-dashboard/README.md` |
| [Kivy Mobile](examples/kivy-mobile/) | Mobile app | `markpact examples/kivy-mobile/README.md` |
| [Electron Desktop](examples/electron-desktop/) | Desktop app | `markpact examples/electron-desktop/README.md` |
| [Markdown Converter](examples/markdown-converter/) | Plain-Markdown conversion | `markpact examples/markdown-converter/sample.md --convert` |
| [Go HTTP API](examples/go-http-api/) | REST API in Go | `markpact examples/go-http-api/README.md` |
| [Node Express API](examples/node-express-api/) | REST API in Node.js | `markpact examples/node-express-api/README.md` |
| [Static Frontend](examples/static-frontend/) | Static HTML/CSS/JS | `markpact examples/static-frontend/README.md` |
| [Python Typer CLI](examples/python-typer-cli/) | CLI in Python (Typer) | `markpact examples/python-typer-cli/README.md` |
| [Rust Axum API](examples/rust-axum-api/) | REST API in Rust | `markpact examples/rust-axum-api/README.md` |
| [PHP CLI](examples/php-cli/) | CLI in PHP | `markpact examples/php-cli/README.md` |
| [React TypeScript SPA](examples/react-typescript-spa/) | React + TS SPA | `markpact examples/react-typescript-spa/README.md` |
| [TypeScript Node API](examples/typescript-node-api/) | REST API in TS (Node) | `markpact examples/typescript-node-api/README.md` |
| [PyPI Publish](examples/pypi-publish/) | Publishing to PyPI | `markpact examples/pypi-publish/README.md --publish` |
| [npm Publish](examples/npm-publish/) | Publishing to npm | `markpact examples/npm-publish/README.md --publish` |
| [Docker Publish](examples/docker-publish/) | Publishing to Docker | `markpact examples/docker-publish/README.md --publish` |
| [Notebook Converter](examples/notebook-converter/) | .ipynb-to-markpact conversion | `markpact --from-notebook examples/notebook-converter/sample.ipynb --convert-only` |
## 🧪 Testing the Examples
Run the automated tests for all examples:
```bash
# Dry run (parsing only)
./scripts/test_examples.sh
# Full run
./scripts/test_examples.sh --run
# Verbose output
./scripts/test_examples.sh --verbose
```
## 🔄 Converting Plain Markdown
Markpact can automatically convert plain Markdown files (with no `markpact:*` tags) to the executable format:
```bash
# Preview the conversion
markpact README.md --convert-only
# Convert and run
markpact README.md --convert
# Auto-detect (convert if no markpact blocks are present)
markpact README.md --auto
# Save the converted file
markpact README.md --convert-only --save-converted output.md
```
The converter analyzes code blocks and uses heuristics to detect:
- **Dependencies** → `markpact:deps` (Python/Node packages)
- **Source files** → `markpact:file` (imports, classes, functions)
- **Commands** → `markpact:run` (python, uvicorn, npm, etc.)
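Such heuristics can be sketched roughly as follows. This is a simplified illustration, not the real converter, which handles many more cases:

```python
import re

def classify_block(lang, body):
    """Guess a markpact kind for a plain fenced code block."""
    lines = [l.strip() for l in body.strip().splitlines() if l.strip()]
    # Commands: shell blocks whose lines start with a known launcher.
    if lang in ("bash", "sh", "shell"):
        if any(l.split()[0] in ("python", "uvicorn", "npm", "node") for l in lines):
            return "run"
    # Dependencies: bare requirement-style lines (name, name==1.0, ...).
    if lines and all(re.fullmatch(r"[A-Za-z0-9_.\-]+([=<>!~]=?[\w.]+)?", l)
                     for l in lines):
        return "deps"
    # Source files: anything with imports, defs, or classes.
    if any(re.match(r"(import |from |def |class )", l) for l in lines):
        return "file"
    return None
```

A block that matches none of the patterns is left as plain documentation.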
## 1️⃣ Project Goals
- **A single README as the source of truth**
- **The ability to run a project without manually creating a file structure**
- **Automation**
The bootstrap creates the files in the sandbox, installs dependencies, and runs the start command.
## 2️⃣ The README Contract (`markpact:*` code blocks)
- **`markpact:bootstrap <lang>`**
Exactly one bootstrap per README. It parses the code blocks and runs the project.
- **`markpact:deps <scope>`**
A list of dependencies for the given scope (e.g. `python`).
- **`markpact:file <lang> path=...`**
Writes a file into the sandbox at the path given by `path=...`.
- **`markpact:run <lang>`**
A single start command executed in the sandbox.
---
```markpact:bootstrap python
#!/usr/bin/env python3
"""MARKPACT v0.1 – Executable Markdown Runtime"""
import os, re, subprocess, sys
from pathlib import Path

README = Path(sys.argv[1] if len(sys.argv) > 1 else "README.md")
SANDBOX = Path(os.environ.get("MARKPACT_SANDBOX", "./sandbox"))
SANDBOX.mkdir(parents=True, exist_ok=True)

# Matches fenced blocks of the form ```markpact:<kind> [<meta>] ... ```
RE = re.compile(r"^```markpact:(?P<kind>\w+)(?:\s+(?P<meta>[^\n]+))?\n(?P<body>.*?)\n^```[ \t]*$", re.DOTALL | re.MULTILINE)

def run(cmd):
    print(f"[markpact] RUN: {cmd}")
    env = os.environ.copy()
    venv = SANDBOX / ".venv" / "bin"
    if venv.exists():
        env.update(VIRTUAL_ENV=str(venv.parent), PATH=f"{venv}:{env.get('PATH','')}")
    subprocess.check_call(cmd, shell=True, cwd=SANDBOX, env=env)

def main():
    deps, run_cmd = [], None
    for m in RE.finditer(README.read_text()):
        kind, meta, body = m.group("kind"), (m.group("meta") or "").strip(), m.group("body").strip()
        if kind == "file":
            p = re.search(r"\bpath=(\S+)", meta)
            if not p:
                raise ValueError(f"markpact:file requires path=..., got {meta!r}")
            f = SANDBOX / p[1]
            f.parent.mkdir(parents=True, exist_ok=True)
            f.write_text(body)
            print(f"[markpact] wrote {f}")
        elif kind == "deps" and meta == "python":
            deps.extend(line.strip() for line in body.splitlines() if line.strip())
        elif kind == "run":
            run_cmd = body
    if deps:
        venv_pip = SANDBOX / ".venv" / "bin" / "pip"
        if os.environ.get("MARKPACT_NO_VENV") != "1" and not venv_pip.exists():
            run(f"{sys.executable} -m venv .venv")
        (SANDBOX / "requirements.txt").write_text("\n".join(deps))
        run(f"{'.venv/bin/pip' if venv_pip.exists() else 'pip'} install -r requirements.txt")
    if run_cmd:
        run(run_cmd)
    else:
        print("[markpact] No run command defined")

if __name__ == "__main__":
    main()
```
## 3️⃣ Installation
### Option A: pip package (recommended)
```bash
pip install markpact
```
Usage:
```bash
markpact README.md                  # run the project
markpact README.md --dry-run        # preview without executing
markpact README.md -s ./my-sandbox  # custom sandbox directory
```
### Option B: Local install (dev)
```bash
git clone https://github.com/wronai/markpact.git
cd markpact
make install # or: pip install -e .
```
### Option C: Bootstrap extraction (zero dependencies)
- **Extract the bootstrap to a file**
This variant is robust to the case where `` ``` `` sequences appear inside the bootstrap itself (e.g. in its regex):
```bash
sed -n '/^```markpact:bootstrap/,/^```[[:space:]]*$/p' README.md | sed '1d;$d' > markpact.py
```
- **Run it**
```bash
python3 markpact.py
```
- **Configuration (env vars)**
```bash
MARKPACT_PORT=8001 MARKPACT_SANDBOX=./.markpact-sandbox python3 markpact.py
```
## 4️⃣ Sandbox and environment
- **`MARKPACT_SANDBOX`**
Changes the sandbox directory (default `./sandbox`).
- **`MARKPACT_NO_VENV=1`**
Disables creating `.venv` in the sandbox (useful when CI/Conda manages the environment).
- **Port already in use (`[Errno 98] address already in use`)**
Set `MARKPACT_PORT` to a different port, or stop the process that is using `8000`.
## 5️⃣ Dependency management
- **Python**
The bootstrap collects `markpact:deps python` blocks, writes a `requirements.txt` in the sandbox, and installs the dependencies.
## 6️⃣ Running and workflow
- **Entry point**
`python3 markpact.py [README.md]`
- **Order of operations**
The bootstrap parses all code blocks, writes the files, and only then executes `markpact:run`.
## 6.1 Conventions and metadata format
- **Code block header**
`` ```markpact:<kind> <lang> <meta> ``
At minimum, `markpact:<kind>` is required.
`lang` is optional and purely informational (the bootstrap may ignore it).
- **Metadata**
For `markpact:file`, `path=...` is required.
Metadata may contain additional tokens (e.g. `mode=...` or `chmod=...` in the future).
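A minimal sketch of how `key=value` metadata tokens could be parsed (the `parse_meta` helper below is hypothetical, not part of the bootstrap):

```python
import re

def parse_meta(meta: str) -> dict:
    """Parse key=value tokens from a markpact code-block header line."""
    return dict(re.findall(r"\b(\w+)=(\S+)", meta))

# A header like "markpact:file python path=app/main.py mode=644" yields:
parse_meta("path=app/main.py mode=644")
# {'path': 'app/main.py', 'mode': '644'}
```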
## 6.2 CI/CD
- **Recommendation**
Run the bootstrap in a clean environment (e.g. a CI job) and point the sandbox at the job's working directory.
- **Example (shell)**
```bash
export MARKPACT_SANDBOX=./.markpact-sandbox
export MARKPACT_PORT=8001
python3 markpact.py README.md
```
- **Tips**
  - **Determinism**
    Pin versions in `markpact:deps` (e.g. `fastapi==...`).
  - **Security**
    Treat `markpact:run` like any other repo launch script: in CI, run only trusted READMEs.
  - **Caching**
    If your CI supports caching, cache the `MARKPACT_SANDBOX/.venv` directory.
## 6.3 Working with LLMs
- **Principle**
An LLM can generate or edit the project by modifying the README (the `markpact:file`, `markpact:deps`, and `markpact:run` code blocks).
- **Expectations**
  - `markpact:file` always contains the full file contents.
  - Every dependency change goes through `markpact:deps`.
  - A single start command lives in `markpact:run`.
## 7️⃣ Best practices
- **Make the bootstrap the first fenced code block in the README**
- **Each file in its own `markpact:file`**
- **Dependencies only in `markpact:deps`**
- **A single start command in `markpact:run`**
- **Bootstrap extraction**
Do not use the range `` /,/```/ `` (`` ``` `` may appear in the content, e.g. in the regex). Match `` ^```$ `` at the end instead.
### Configuration file (~/.markpact/.env)
```bash
# Markpact LLM Configuration
MARKPACT_MODEL="openrouter/nvidia/nemotron-3-nano-30b-a3b:free"
MARKPACT_API_BASE="https://openrouter.ai/api/v1"
MARKPACT_API_KEY="sk-or-v1-xxxxx"
MARKPACT_TEMPERATURE="0.7"
MARKPACT_MAX_TOKENS="4096"
```
## Supported LLM providers
### Ollama (local, default)
```bash
markpact config --provider ollama
markpact config --model ollama/qwen2.5-coder:14b
markpact -p "REST API for books"
```
### OpenRouter (free models!)
```bash
markpact config --provider openrouter --api-key sk-or-v1-xxxxx
markpact config --model openrouter/nvidia/nemotron-3-nano-30b-a3b:free
markpact -p "REST API for books"
```
## Working example (FastAPI)
### 1️⃣ Dependencies
*markpact:deps python*
```text markpact:deps python
fastapi
uvicorn
```
### 2️⃣ Application Files
*markpact:file python path=app/main.py*
```python markpact:file path=app/main.py
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
def root():
return {"message": "Hello from Executable Markdown"}
```
### 3️⃣ Run Command
*markpact:run python*
```bash markpact:run
uvicorn app.main:app --host 0.0.0.0 --port ${MARKPACT_PORT:-8088}
```
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
## Author
Created by **Tom Sapletta** - [tom@sapletta.com](mailto:tom@sapletta.com)
| text/markdown | null | Tom Sapletta <tom@sapletta.com> | null | null | null | executable, markdown, readme, runtime, sandbox | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"build; extra == \"dev\"",
"bump2version>=1.0; extra == \"dev\"",
"litellm>=1.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"twine; extra == \"dev\"",
"litellm>=1.0; extra == \"llm\""
] | [] | [] | [] | [
"Homepage, https://github.com/wronai/markpact",
"Repository, https://github.com/wronai/markpact",
"Issues, https://github.com/wronai/markpact/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T18:17:11.441163 | markpact-0.1.19.tar.gz | 350,867 | b4/00/61d6ca81e2d4de533529ea644bfaa11ca6d3ab79611123f130bae41bb2fd/markpact-0.1.19.tar.gz | source | sdist | null | false | 5baf0b9d7e8acdc87ba3df3394bf851f | d431516555a01e3d6e485e4622dcaf65c2291baa640dc448358da48d397e3aef | b40061d6ca81e2d4de533529ea644bfaa11ca6d3ab79611123f130bae41bb2fd | Apache-2.0 | [
"LICENSE"
] | 305 |
2.4 | trytond-account | 7.6.9 | Tryton module for accounting | ##############
Account Module
##############
The *Account Module* defines the fundamentals needed for basic double entry
accounting.
It also includes templates for the `basic universal chart of accounts
<https://www.ifrs-gaap.com/basic-universal-coa>`_, balance sheet and income
statement.
| null | Tryton | foundation@tryton.org | null | null | GPL-3 | tryton account | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",... | [] | http://www.tryton.org/ | http://downloads.tryton.org/7.6/ | >=3.9 | [] | [] | [] | [
"python-dateutil",
"python-sql>=0.7",
"simpleeval",
"trytond_company<7.7,>=7.6",
"trytond_currency<7.7,>=7.6",
"trytond_party<7.7,>=7.6",
"trytond<7.7,>=7.6",
"proteus<7.7,>=7.6; extra == \"test\""
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/modules-account/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://code.tryton.org/tryton"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:16:49.263138 | trytond_account-7.6.9.tar.gz | 602,181 | a4/42/0a4ba25564fd46bd1a05b8e7affe998870d6cad295064ce12b7d4531574a/trytond_account-7.6.9.tar.gz | source | sdist | null | false | bd1ad3cc6d0ccdbe70a759db1039eac8 | 97cd09afd448bd4c64dabd630e516773dde97907a420767922fa60b5b86b5e3c | a4420a4ba25564fd46bd1a05b8e7affe998870d6cad295064ce12b7d4531574a | null | [
"LICENSE"
] | 328 |
2.4 | trytond-account-invoice | 6.0.23 | Tryton module for invoicing | ######################
Account Invoice Module
######################
The *Account Invoice Module* adds the concept of invoicing to Tryton.
It allows the creation of customer and supplier invoices, and can handle the
payment terms related to the invoices and show when they have been paid.
| null | Tryton | bugs@tryton.org | null | null | GPL-3 | tryton account invoice | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",... | [] | http://www.tryton.org/ | http://downloads.tryton.org/6.0/ | >=3.6 | [] | [] | [] | [
"python-dateutil",
"python-sql>=0.4",
"trytond_account<6.1,>=6.0",
"trytond_account_product<6.1,>=6.0",
"trytond_company<6.1,>=6.0",
"trytond_currency<6.1,>=6.0",
"trytond_party<6.1,>=6.0",
"trytond_product<6.1,>=6.0",
"trytond<6.1,>=6.0"
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/modules-account-invoice/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://hg.tryton.org/modules/account_invoice"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:15:56.943279 | trytond_account_invoice-6.0.23.tar.gz | 184,240 | f0/0d/b1dabf1ed3bb4212b990dc8384df22f9684a26b05ccb01e7739e23d87203/trytond_account_invoice-6.0.23.tar.gz | source | sdist | null | false | bc74210925741591392a3ab82db0f20d | 37292647ea2640db618f8206ec29dc77f0285b93fe27d88f8bcdd63649a93acc | f00db1dabf1ed3bb4212b990dc8384df22f9684a26b05ccb01e7739e23d87203 | null | [
"LICENSE"
] | 262 |
2.4 | vortex-data | 0.59.4 | Python bindings for Vortex, an Apache Arrow-compatible toolkit for working with compressed array data. | # 🌪️ Vortex
[](https://github.com/vortex-data/vortex/actions)
[](https://www.bestpractices.dev/projects/10567)
[](https://docs.vortex.dev)
[](https://codspeed.io/vortex-data/vortex)
[](https://crates.io/crates/vortex)
[](https://pypi.org/project/vortex-data/)
[](https://central.sonatype.com/artifact/dev.vortex/vortex-spark)
[](https://codecov.io/github/vortex-data/vortex)
[Join the community on Slack!](https://vortex.dev/slack) | [Documentation](https://docs.vortex.dev/) | [Performance Benchmarks](https://bench.vortex.dev)
## Overview
Vortex is a next-generation columnar file format and toolkit designed for high-performance data processing.
It is the fastest and most extensible format for building data systems backed by object storage. It provides:
- **Blazing Fast Performance**
- 100x faster random access reads (vs. modern Apache Parquet)
- 10-20x faster scans
- 5x faster writes
- Similar compression ratios
- Efficient support for wide tables with zero-copy/zero-parse metadata
- **Extensible Architecture**
- Modeled after Apache DataFusion's extensible approach
- Pluggable encoding system, type system, compression strategy, & layout strategy
- Zero-copy compatibility with Apache Arrow
- **Open Source, Neutral Governance**
- A Linux Foundation (LF AI & Data) Project
- Apache-2.0 Licensed
- **Integrations**
- Arrow, DataFusion, DuckDB, Spark, Pandas, Polars, & more
- Apache Iceberg (coming soon)
> 🟢 **Development Status**: Library APIs may change from version to version, but we now consider
> the file format <ins>_stable_</ins>. From release 0.36.0, all future releases of Vortex should
> maintain backwards compatibility of the file format (i.e., be able to read files written by
> any earlier version >= 0.36.0).
## Key Features
### Core Capabilities
- **Logical Types** - Clean separation between logical schema and physical layout
- **Zero-Copy Arrow Integration** - Seamless conversion to/from Apache Arrow arrays
- **Extensible Encodings** - Pluggable physical layouts with built-in optimizations
- **Cascading Compression** - Support for nested encoding schemes
- **High-Performance Computing** - Optimized compute kernels for encoded data
- **Rich Statistics** - Lazy-loaded summary statistics for optimization
### Technical Architecture
#### Logical vs Physical Design
Vortex strictly separates logical and physical concerns:
- **Logical Layer**: Defines data types and schema
- **Physical Layer**: Handles encoding and storage implementation
- **Built-in Encodings**: Compatible with Apache Arrow's memory format
- **Extension Encodings**: Optimized compression schemes (RLE, dictionary, etc.)
## Quick Start
### Installation
#### Rust Crate
All features are exported through the main `vortex` crate.
```bash
cargo add vortex
```
#### Python Package
```bash
uv add vortex-data
```
#### Command Line UI (vx)
For browsing the structure of Vortex files, you can use the `vx` command-line tool.
```bash
# Install latest release
cargo install vortex-tui --locked
# Or build from source
cargo install --path vortex-tui --locked
# Usage
vx browse <file>
```
### Development Setup
#### Prerequisites (macOS)
```bash
# Optional but recommended dependencies
brew install flatbuffers protobuf # For .fbs and .proto files
brew install duckdb # For benchmarks
# Install Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# or
brew install rustup
# Initialize submodules
git submodule update --init --recursive
# Setup dependencies with uv
uv sync --all-packages
```
### Benchmarking
Use `vx-bench` to run benchmarks comparing engines (DataFusion, DuckDB) and formats (Parquet, Vortex):
```bash
# Install the benchmark orchestrator
uv tool install "bench_orchestrator @ ./bench-orchestrator/"
# Run TPC-H benchmarks
vx-bench run tpch --engine datafusion,duckdb --format parquet,vortex
# Compare results
vx-bench compare --run latest
```
See [bench-orchestrator/README.md](bench-orchestrator/README.md) for full documentation.
### Performance Optimization
For optimal performance, we suggest using [MiMalloc](https://github.com/microsoft/mimalloc):
```rust,ignore
use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL_ALLOC: MiMalloc = MiMalloc;
```
## Project Information
### License
Licensed under the Apache License, Version 2.0.
### Governance
Vortex is an independent open-source project and not controlled by any single company. The Vortex Project is a
sub-project of the Linux Foundation Projects. The governance model is documented in
[CONTRIBUTING.md](CONTRIBUTING.md) and is subject to the terms of
the [Technical Charter](https://vortex.dev/charter.pdf).
### Contributing
Please **do** read [CONTRIBUTING.md](CONTRIBUTING.md) before you contribute.
### Reporting Vulnerabilities
If you discover a security vulnerability, please email <vuln-report@vortex.dev>.
### Trademarks
Copyright © Vortex a Series of LF Projects, LLC.
For terms of use, trademark policy, and other project policies please see <https://lfprojects.org>
## Acknowledgments
The Vortex project benefits enormously from groundbreaking work from the academic & open-source communities.
### Research in Vortex
- [BtrBlocks](https://www.cs.cit.tum.de/fileadmin/w00cfj/dis/papers/btrblocks.pdf) - Efficient columnar compression
- [FastLanes](https://www.vldb.org/pvldb/vol16/p2132-afroozeh.pdf) & [FastLanes on GPU](https://dbdbd2023.ugent.be/abstracts/felius_fastlanes.pdf) - High-performance integer compression
- [FSST](https://www.vldb.org/pvldb/vol13/p2649-boncz.pdf) - Fast random access string compression
- [ALP](https://ir.cwi.nl/pub/33334/33334.pdf) & [G-ALP](https://dl.acm.org/doi/pdf/10.1145/3736227.3736242) - Adaptive lossless floating-point compression
- [Procella](https://dl.acm.org/citation.cfm?id=3360438) - YouTube's unified data system
- [Anyblob](https://www.durner.dev/app/media/papers/anyblob-vldb23.pdf) - High-performance access to object storage
- [ClickHouse](https://www.vldb.org/pvldb/vol17/p3731-schulze.pdf) - Fast analytics for everyone
- [MonetDB/X100](https://www.cidrdb.org/cidr2005/papers/P19.pdf) - Hyper-Pipelining Query Execution
- [Morsel-Driven Parallelism](https://db.in.tum.de/~leis/papers/morsels.pdf): A NUMA-Aware Query Evaluation Format for the Many-Core Age
- [The FastLanes File Format](https://github.com/cwida/FastLanes/blob/dev/docs/specification.pdf) - Expression Operators
### Vortex in Research
- [Anyblox](https://gienieczko.com/anyblox-paper) - A Framework for Self-Decoding Datasets
- [F3](https://dl.acm.org/doi/pdf/10.1145/3749163) - Open-Source Data File Format for the Future
### Open Source Inspiration
- [Apache Arrow](https://arrow.apache.org)
- [Apache DataFusion](https://github.com/apache/datafusion)
- [parquet2](https://github.com/jorgecarleitao/parquet2) by Jorge Leitao
- [DuckDB](https://github.com/duckdb/duckdb)
- [Velox](https://github.com/facebookincubator/velox) & [Nimble](https://github.com/facebookincubator/nimble)
#### Thanks to all contributors who have shared their knowledge and code with the community! 🚀
| text/markdown; charset=UTF-8; variant=GFM | Vortex Authors <hello@vortex.dev> | Vortex Authors <hello@vortex.dev> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Py... | [] | https://github.com/spiraldb/vortex | null | >=3.11 | [] | [] | [] | [
"pyarrow>=17.0.0",
"substrait>=0.23.0",
"typing-extensions>=4.5.0",
"polars>=1.31.0; extra == \"polars\"",
"pandas>=2.2.0; extra == \"pandas\"",
"numpy>=1.26.0; extra == \"numpy\"",
"duckdb>=1.1.2; extra == \"duckdb\"",
"ray>=2.48; extra == \"ray\""
] | [] | [] | [] | [
"Documentation, https://docs.vortex.dev",
"Changelog, https://github.com/vortex-data/vortex/blob/develop/CHANGELOG.md",
"Issues, https://github.com/vortex-data/vortex/issues",
"Benchmarks, https://bench.vortex.dev"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:15:49.811817 | vortex_data-0.59.4-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 32,432,107 | ad/6e/4720e9b9ed21b9cdcf4cce1a47ab545914725f35fe9bab477ca75ecabefd/vortex_data-0.59.4-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | cp311 | bdist_wheel | null | false | 8800f25c3ee2148043850e5dadd37c2c | c0e30d128efa374761c2236e8e0d9b584a7ea2774e94e44936751350eca0a5aa | ad6e4720e9b9ed21b9cdcf4cce1a47ab545914725f35fe9bab477ca75ecabefd | null | [] | 612 |
2.4 | iot-inspector | 3.0.15 | IoT Inspector Client for analyzing IoT device firmware | # IoT Inspector 3
[](https://github.com/astral-sh/ruff)
[](https://github.com/nyu-mlab/iot-inspector-client/actions/workflows/inspector_test.yaml)
[](https://codecov.io/gh/nyu-mlab/iot-inspector-client)
Simply run `./start.bash` for Linux/Mac and `start.bat` for Windows. It will take care of all the dependencies.
If the underlying dependencies are updated, run the following first:
```bash
uv cache clean
uv lock
uv sync
```
# User guide
Please review the [User Guide](https://github.com/nyu-mlab/iot-inspector-client/wiki) for instructions on how to run IoT Inspector.
# Developer Guide
If you are developing IoT Inspector, please read this section.
## Database Schema
When presenting network stats, IoT Inspector reads from an internal SQLite database.
To see how the packet collector and the database are implemented, look at the [IoT Inspector Core package](https://github.com/nyu-mlab/inspector-core-library).
You should always read from the database using the following approach:
```python
import libinspector.global_state
db_conn, rwlock = libinspector.global_state.db_conn_and_lock
with rwlock:
    db_conn.execute("SELECT * FROM devices")
```
The schema is as follows:
```sql
CREATE TABLE devices (
    mac_address TEXT PRIMARY KEY,
    ip_address TEXT NOT NULL,
    is_inspected INTEGER DEFAULT 0,
    is_gateway INTEGER DEFAULT 0,
    updated_ts INTEGER DEFAULT 0,
    metadata_json TEXT DEFAULT '{}'
);

CREATE TABLE hostnames (
    ip_address TEXT PRIMARY KEY,
    hostname TEXT NOT NULL,
    updated_ts INTEGER DEFAULT 0,
    data_source TEXT NOT NULL,
    metadata_json TEXT DEFAULT '{}'
);

CREATE TABLE network_flows (
    timestamp INTEGER,
    src_ip_address TEXT,
    dest_ip_address TEXT,
    src_hostname TEXT,
    dest_hostname TEXT,
    src_mac_address TEXT,
    dest_mac_address TEXT,
    src_port TEXT,
    dest_port TEXT,
    protocol TEXT,
    byte_count INTEGER DEFAULT 0,
    packet_count INTEGER DEFAULT 0,
    metadata_json TEXT DEFAULT '{}',
    PRIMARY KEY (
        timestamp,
        src_mac_address, dest_mac_address,
        src_ip_address, dest_ip_address,
        src_port, dest_port,
        protocol
    )
);
```
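To illustrate the kind of read query this schema supports, here is a self-contained sketch against a trimmed-down `network_flows` table (the column subset and sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A reduced version of the network_flows table, for illustration only.
conn.execute("""
    CREATE TABLE network_flows (
        timestamp INTEGER,
        src_mac_address TEXT,
        dest_mac_address TEXT,
        byte_count INTEGER DEFAULT 0
    )
""")
rows = [
    (1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", 100),
    (2, "aa:aa:aa:aa:aa:aa", "cc:cc:cc:cc:cc:cc", 50),
    (3, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", 10),
]
conn.executemany("INSERT INTO network_flows VALUES (?, ?, ?, ?)", rows)

# Total bytes sent per source device.
upload = dict(conn.execute(
    "SELECT src_mac_address, SUM(byte_count)"
    " FROM network_flows GROUP BY src_mac_address"
))
# upload == {'aa:aa:aa:aa:aa:aa': 150, 'bb:bb:bb:bb:bb:bb': 10}
```

In the real database, remember to take `rwlock` around every read, as shown above.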
# IoT Inspector Helper Scripts
We also include two scripts to help with development and debugging.
## Anonymize
After installing IoT Inspector, you can run the following command:
```bash
anonymize -i <input_pcap_file> -o <output_pcap_file>
```
Here is the help output
```text
anonymize -h
usage: anonymize [-h] [-i INPUT_FILE] [-o OUTPUT]

Anonymize MACs and filter specific control packets (DHCP, SSDP, MDNS) from a PCAP file.

options:
  -h, --help            show this help message and exit
  -i INPUT_FILE, --input INPUT_FILE
                        The path to the input PCAP file.
  -o OUTPUT, --output OUTPUT
                        The path to save the anonymized PCAP file (default: sanitized_output.pcap).
```
The output PCAP file will have:
* all MAC addresses anonymized
* all DHCP, SSDP, and MDNS packets removed
This is useful for sharing PCAP files without revealing sensitive information.
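The MAC anonymization step can be thought of as consistent pseudonymization: the same input always maps to the same fake address, but the original cannot be recovered without the salt. A rough sketch of the idea (not the tool's actual implementation):

```python
import hashlib

def anonymize_mac(mac: str, salt: str = "per-run-secret") -> str:
    """Map a real MAC address to a stable pseudonym via a salted hash."""
    digest = hashlib.sha256(f"{salt}:{mac.lower()}".encode()).hexdigest()
    # Format the first 12 hex digits as a MAC-shaped string.
    return ":".join(digest[i:i + 2] for i in range(0, 12, 2))
```

Because the mapping is deterministic per salt, flows between the same pair of devices remain correlated in the sanitized capture.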
## PCAP Time Series
After installing IoT Inspector, you can run the following command:
```bash
time-series -i <PCAP_FILE> -m <TARGET_MAC> -o <OUTPUT_PNG_FILE> -b <BIN_SIZE_IN_SECONDS>
```
Here is the help output
```text
usage: time_series [-h] -i INPUT_FILE -m TARGET_MAC [-o OUTPUT] [-b BIN_SIZE]

Analyze PCAP file to plot upload and download traffic over time for a specific MAC address.

options:
  -h, --help            show this help message and exit
  -i INPUT_FILE, --input INPUT_FILE
                        The path to the input PCAP file.
  -m TARGET_MAC, --target-mac TARGET_MAC
                        The MAC address of the device to analyze (e.g., 'aa:bb:cc:dd:ee:ff').
  -o OUTPUT, --output OUTPUT
                        The path to save the output plot PNG file (default: traffic_timeseries.png).
  -b BIN_SIZE, --bin BIN_SIZE
                        The width of time bins in seconds for aggregating traffic data (default: 0.05 seconds).
```
The output will be a PNG file showing the upload and download traffic over time for the specified MAC address. This is useful for visualizing traffic patterns of a device in a PCAP file.
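The binning step itself is straightforward; a minimal sketch of how traffic could be aggregated into fixed-width time bins (a simplified illustration, not the script's actual code):

```python
from collections import Counter

def bin_traffic(packets, bin_size=0.05):
    """Sum byte counts of (timestamp, byte_count) pairs into
    fixed-width time bins of `bin_size` seconds."""
    bins = Counter()
    for ts, nbytes in packets:
        bins[int(ts // bin_size)] += nbytes
    return dict(bins)

bin_traffic([(0.01, 100), (0.04, 50), (0.07, 10)])
# {0: 150, 1: 10}
```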
The output should look something like this on the console.
```text
INFO: Starting analysis for: TEST.pcap
INFO: Target MAC for analysis: 44:3d:54:e3:4b:6e
INFO: Time bin size: 0.05 seconds
INFO: Read 2392 packets. Starting data processing...
INFO: Generating plot...
INFO: Successfully saved plot to 'traffic_timeseries.png'
```
| text/markdown | null | Danny Huang <dhuang@nyu.edu>, Andrew Quijano <andrew.quijano@nyu.edu> | null | null | null | iot-inspector, network-traffic-analysis, network-monitoring, iot-security | [
"Topic :: System :: Networking :: Monitoring",
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"... | [] | null | null | >=3.10 | [] | [] | [] | [
"libinspector==1.0.15",
"streamlit>=1.52.1",
"matplotlib==3.10.8"
] | [] | [] | [] | [
"Homepage, https://inspector.engineering.nyu.edu/",
"Source, https://github.com/nyu-mlab/iot-inspector-client/",
"Tracker, https://github.com/nyu-mlab/iot-inspector-client/issues",
"Documentation, https://github.com/nyu-mlab/iot-inspector-client/wiki",
"Download, https://github.com/nyu-mlab/iot-inspector-cl... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:13:59.610167 | iot_inspector-3.0.15-py3-none-any.whl | 273,838 | 27/72/7ba511f892974a6e29cb7eb74ee64a14179aef4b8afffc05d1c60e4febd3/iot_inspector-3.0.15-py3-none-any.whl | py3 | bdist_wheel | null | false | c63a9ab270be2a8d6e41d3d66bc8be51 | 05b3729ce6f4d225f3c5b368b18d03250b3a286660217551be6423769d302471 | 27727ba511f892974a6e29cb7eb74ee64a14179aef4b8afffc05d1c60e4febd3 | Apache-2.0 | [
"LICENSE"
] | 105 |
2.1 | topsis-aishlee-102316083 | 0.3 | TOPSIS command-line implementation in Python |
# TOPSIS Assignment
TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is a multi-criteria decision-making method.
This package provides a command-line implementation of the TOPSIS method in Python. It helps users rank multiple alternatives based on different criteria, weights, and impacts.
## Features
- Accepts CSV input files
- Supports user-defined weights and impacts
- Handles invalid inputs and errors
- Generates ranked output with TOPSIS scores
- Easy to use command-line interface
## Installation

```bash
pip install topsis-aishlee-102316083
```

## Usage

```bash
topsis input.csv weights impacts output.csv
```

Example:

```bash
topsis data.csv 1,1,1,1,1 +,+,+,+,+ result.csv
```
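For reference, the core of the method can be sketched in a few lines of plain Python (a simplified illustration, not this package's actual implementation):

```python
import math

def topsis_scores(matrix, weights, impacts):
    """TOPSIS: vector-normalize each criterion column, apply weights, then
    score each alternative by its relative closeness to the ideal solution."""
    n = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[row[j] / norms[j] * weights[j] for j in range(n)] for row in matrix]
    cols = list(zip(*v))
    # '+' criteria are benefits (higher is better); '-' are costs.
    best = [max(c) if s == "+" else min(c) for c, s in zip(cols, impacts)]
    worst = [min(c) if s == "+" else max(c) for c, s in zip(cols, impacts)]
    return [math.dist(row, worst) / (math.dist(row, best) + math.dist(row, worst))
            for row in v]
```

A higher score means the alternative is closer to the ideal best and farther from the ideal worst; ranking the scores in descending order gives the final ranks.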
## Author
Aishlee Joshi
| text/markdown | Aishlee Joshi | ajoshi1_be23@thapar.edu | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"numpy",
"pandas"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:13:56.966615 | topsis_aishlee_102316083-0.3.tar.gz | 2,280 | 58/31/91cb841a98d8402ecde046b42049d249abb496a39f9b82f348ae1559d54b/topsis_aishlee_102316083-0.3.tar.gz | source | sdist | null | false | e1389b22729d2afc3f9617f2b300d7a8 | e7401682efca591584b7f7605fb5758ac3ac5129d44a042358ca1b5def84b6bf | 583191cb841a98d8402ecde046b42049d249abb496a39f9b82f348ae1559d54b | null | [] | 239 |
2.4 | trackers | 2.2.0 | A unified library for object tracking featuring clean room re-implementations of leading multi-object tracking algorithms | <div align="center">
<img width="200" src="https://raw.githubusercontent.com/roboflow/trackers/refs/heads/main/docs/assets/logo-trackers-violet.svg" alt="trackers logo">
<h1>trackers</h1>
<p>Plug-and-play multi-object tracking for any detection model.</p>
[](https://badge.fury.io/py/trackers)
[](https://pypistats.org/packages/trackers)
[](https://github.com/roboflow/trackers/blob/main/LICENSE.md)
[](https://badge.fury.io/py/trackers)
[](https://huggingface.co/spaces/Roboflow/Trackers)
[](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-objects-with-bytetrack-tracker.ipynb)
[](https://discord.gg/GbfgXGJ8Bk)
</div>
## Try It
No install needed. Try trackers in your browser with our [Hugging Face Playground](https://huggingface.co/spaces/roboflow/trackers).
## Install
```bash
pip install trackers
```
<details>
<summary>install from source</summary>
```bash
pip install git+https://github.com/roboflow/trackers.git
```
</details>
https://github.com/user-attachments/assets/eef9b00a-cfe4-40f7-a495-954550e3ef1f
## Track from CLI
Point at a video, webcam, RTSP stream, or image directory. Get tracked output.
Use our [interactive command builder](https://trackers.roboflow.com/develop/learn/track) to configure your tracking pipeline.
```bash
trackers track \
--source video.mp4 \
--output output.mp4 \
--model rfdetr-medium \
--tracker bytetrack \
--show-labels \
--show-trajectories
```
## Track from Python
Plug trackers into your existing detection pipeline. Works with any detector.
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker
model = get_model(model_id="rfdetr-medium")
tracker = ByteTrackTracker()
label_annotator = sv.LabelAnnotator()
trajectory_annotator = sv.TrajectoryAnnotator()
cap = cv2.VideoCapture("video.mp4")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    tracked = tracker.update(detections)

    frame = label_annotator.annotate(frame, tracked)
    frame = trajectory_annotator.annotate(frame, tracked)
```
## Evaluate
Benchmark your tracker against ground truth with standard MOT metrics.
```bash
trackers eval \
--gt-dir data/gt \
--tracker-dir data/trackers \
--metrics CLEAR HOTA Identity
```
```
Sequence MOTA HOTA IDF1 IDSW
----------------------------------------------------------
MOT17-02-FRCNN 75.600 62.300 72.100 42
MOT17-04-FRCNN 78.200 65.100 74.800 31
----------------------------------------------------------
COMBINED 75.033 62.400 72.033 73
```
## Algorithms
Clean, modular implementations of leading trackers. See the [tracker comparison](https://trackers.roboflow.com/develop/trackers/comparison/) for detailed benchmarks.
| Algorithm | MOT17 | SportsMOT | SoccerNet |
| :-------------------------------------------: | :------: | :-------: | :-------: |
| [SORT](https://arxiv.org/abs/1602.00763) | 58.4 | 70.9 | 81.6 |
| [ByteTrack](https://arxiv.org/abs/2110.06864) | **60.1** | **73.0** | **84.0** |
| [OC-SORT](https://arxiv.org/abs/2203.14360) | — | — | — |
| [BoT-SORT](https://arxiv.org/abs/2206.14651) | — | — | — |
| [McByte](https://arxiv.org/abs/2506.01373) | — | — | — |
## Contributing
We welcome contributions. Read our [contributor guidelines](https://github.com/roboflow/trackers/blob/main/CONTRIBUTING.md) to get started.
## License
The code is released under the [Apache 2.0 license](https://github.com/roboflow/trackers/blob/main/LICENSE).
| text/markdown | null | "Roboflow et al." <develop@roboflow.com> | null | Piotr Skalski <piotr@roboflow.com> | Apache License 2.0 | AI, DETR, DL, ML, Roboflow, YOLO, bytetrack, deep-learning, machine-learning, mot, sort, tracking, vision | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.0.2",
"opencv-python>=4.8.0",
"rich>=13.0.0",
"scipy>=1.13.1",
"supervision>=0.26.1",
"inference-models==0.18.6rc14; extra == \"detection\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:13:40.110600 | trackers-2.2.0.tar.gz | 329,244 | b3/69/993225848cea1d4031bb78932900949ddb4e43ae7d23956a66743458c96d/trackers-2.2.0.tar.gz | source | sdist | null | false | 2a79912dc9256d796d29ac06aed7681c | 9e5bac506d756b882f27a6785906673d7482376e29715d796a8e70740908e77e | b369993225848cea1d4031bb78932900949ddb4e43ae7d23956a66743458c96d | null | [
"LICENSE"
] | 849 |
2.4 | epymorph | 2.0.0b0 | EpiMoRPH spatial disease modeling | # epymorph
The `epymorph` package is the product of the EpiMoRPH (Epidemiological Modeling Resources for Public Health) project and aims to provide a simplified framework for completing the full lifecycle of a spatial modeling experiment. epymorph streamlines methods for building, simulating, and fitting metapopulation models of infectious pathogens. This Python package is easily accessible to beginning modelers, while also being sophisticated enough to allow rapid design and execution of complex modeling experiments by highly experienced modelers. Specific aims include dramatic streamlining of model building speed, increased model transparency, automated fitting of models to observed data, and easy transportability of models across temporal and geographic scenarios.
Read the [documentation at docs.epimorph.org](https://docs.www.epimorph.org).
For general inquiries please contact us via email at Epymorph@nau.edu
See [CONTRIBUTING.md](CONTRIBUTING.md) for more information on how to contribute to the codebase.
## Configuration
epymorph accepts configuration values provided by your system's environment variables. This may include settings which change the behavior of epymorph, or secrets like API keys needed to interface with third-party services. All values are optional unless you are using a feature which requires them.
Currently supported values include:
- `CENSUS_API_KEY`: your API key for the US Census API ([which you can request here](https://api.census.gov/data/key_signup.html))
- `EPYMORPH_CACHE_PATH`: the path epymorph should use to cache files; this defaults to a location appropriate to your operating system for cached files
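As one way to supply these for a single Python session, you can set them in `os.environ` before the library reads them (the key and path below are placeholders; setting them in your shell profile works just as well):

```python
import os

# Placeholder values — substitute your own Census API key and cache location.
os.environ["CENSUS_API_KEY"] = "your-census-api-key"
os.environ["EPYMORPH_CACHE_PATH"] = os.path.expanduser("~/.cache/epymorph")
```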
| text/markdown | null | Tyler Coles <tyler.coles@nau.edu>, Jeffrey Covington <jeffrey.covington@nau.edu>, Ye Chen <ye.chen@nau.edu>, Eck Doerry <eck.doerry@nau.edu>, Joseph Mihaljevic <joseph.mihaljevic@nau.edu> | null | null | GPL-3.0-only | epidemiology, disease modeling, metapopulation | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Lic... | [] | null | null | >=3.11 | [] | [] | [] | [
"matplotlib~=3.9.0",
"numpy~=1.26.4",
"sympy~=1.12.1",
"psutil~=5.9.8",
"pandas[excel]~=2.2.2",
"geopandas~=0.14.4",
"census~=0.8.22",
"jsonpickle~=3.2.1",
"platformdirs~=4.2.2",
"graphviz~=0.20.3",
"typing_extensions~=4.12.2",
"ipython~=8.26.0",
"rasterio~=1.3.11",
"humanize~=4.10.0",
"... | [] | [] | [] | [
"Homepage, https://docs.www.epimorph.org",
"Source, https://github.com/NAU-CCL/Epymorph",
"Issues, https://github.com/NAU-CCL/Epymorph/issues"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:13:27.314586 | epymorph-2.0.0b0-py3-none-any.whl | 277,576 | af/5c/e64a33ea9b255ac655046f9206a1c8b74b394ca544f9bee17df4cc7eb250/epymorph-2.0.0b0-py3-none-any.whl | py3 | bdist_wheel | null | false | d669cbae7902a8f84eb0ba4bd9665298 | 417bbdf16347527897fae90578ad86848ef6da22088ba452250d8af371c08d53 | af5ce64a33ea9b255ac655046f9206a1c8b74b394ca544f9bee17df4cc7eb250 | null | [] | 205 |
2.4 | adaptive-reranker | 0.1.1 | Adaptive reranker selection via per-query labeling | # adaptive-reranker
lorem ipsum dolor sit amet. | text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"flagembedding>=1.2.0",
"ir-datasets>=0.5.11",
"pandas>=2.3.3",
"pydantic>=2.0",
"rank-bm25>=0.2.2",
"sentence-transformers>=5.1.2",
"torch>=2.0",
"transformers<4.45,>=4.38"
] | [] | [] | [] | [
"Homepage, https://github.com/emirkaan5/adaptive-reranker/",
"Issues, https://github.com/emirkaan5/adaptive-reranker/Issues"
] | twine/6.2.0 CPython/3.13.0 | 2026-02-18T18:13:11.446054 | adaptive_reranker-0.1.1.tar.gz | 159,226 | f9/6c/9f4127e863ee6fca907bde9daaaf3e63d43d3bf1b668b535096db6417667/adaptive_reranker-0.1.1.tar.gz | source | sdist | null | false | cc54eea45526e30dcc5b8a0d7736c04f | 7c04a48b1c8672b0926118aadc3bc9832e3a21ce3ba3015e361491deec12565b | f96c9f4127e863ee6fca907bde9daaaf3e63d43d3bf1b668b535096db6417667 | null | [
"LICENSE"
] | 260 |
2.4 | trytond-account-statement-sepa | 7.0.1 | Tryton module to import SEPA statements | #############################
Account Statement SEPA Module
#############################
The *Account Statement SEPA Module* implements the import of CAMT.052,
CAMT.053 and CAMT.054 `SEPA <https://www.iso20022.org/>`_ files as statements.
| null | Tryton | foundation@tryton.org | null | null | GPL-3 | tryton account statement SEPA CAMT.052 CAMT.053 CAMT.054 | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",... | [] | http://www.tryton.org/ | http://downloads.tryton.org/7.0/ | >=3.8 | [] | [] | [] | [
"lxml",
"trytond_account_statement<7.1,>=7.0",
"trytond_bank<7.1,>=7.0",
"trytond<7.1,>=7.0",
"proteus<7.1,>=7.0; extra == \"test\""
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/modules-account-statement-sepa",
"Forum, https://www.tryton.org/forum",
"Source Code, https://code.tryton.org/tryton"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:12:53.457317 | trytond_account_statement_sepa-7.0.1.tar.gz | 26,961 | 66/e7/acede5ff0b012cf30c90f2aa73d41e2265ea64560a4df98f0e0ad14be3c3/trytond_account_statement_sepa-7.0.1.tar.gz | source | sdist | null | false | 204205205e77cc500d720a8a8b865952 | 4bf44259ace9f26561bf590bcbd43c4100129421602d86ff373b750c551c4c62 | 66e7acede5ff0b012cf30c90f2aa73d41e2265ea64560a4df98f0e0ad14be3c3 | null | [
"LICENSE"
] | 259 |
2.4 | trytond-incoterm | 7.8.2 | Tryton module for incoterms | ###############
Incoterm Module
###############
The *Incoterm Module* is used to manage the `Incoterms
<https://en.wikipedia.org/wiki/Incoterms>`_ on sales, purchases and shipments.
The module contains the Incoterm versions of 2000, 2010 and 2020.
| null | Tryton | foundation@tryton.org | null | null | GPL-3 | tryton incoterm | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",... | [] | http://www.tryton.org/ | http://downloads.tryton.org/7.8/ | >=3.9 | [] | [] | [] | [
"python-sql",
"trytond_company<7.9,>=7.8",
"trytond_country<7.9,>=7.8",
"trytond_party<7.9,>=7.8",
"trytond<7.9,>=7.8",
"proteus<7.9,>=7.8; extra == \"test\"",
"trytond_account<7.9,>=7.8; extra == \"test\"",
"trytond_account_invoice<7.9,>=7.8; extra == \"test\"",
"trytond_account_invoice_stock<7.9,>... | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/modules-incoterm/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://code.tryton.org/tryton"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:11:42.418364 | trytond_incoterm-7.8.2.tar.gz | 39,067 | 92/fa/b70f28636d623a746e857456af662cf953596a697abe2004db9dca70fdd3/trytond_incoterm-7.8.2.tar.gz | source | sdist | null | false | 282a1ee5f934e4f906e79597a9011b05 | 5e99848a909f897ec72effc4d0f5be2a40fe2c02127aed082926d451c5175e82 | 92fab70f28636d623a746e857456af662cf953596a697abe2004db9dca70fdd3 | null | [
"LICENSE"
] | 262 |
2.4 | trytond-marketing-email | 6.0.3 | Tryton module to manage marketing mailing lists | Marketing Email Module
######################
The marketing_email module manages mailing lists.
Mailing List
************
A mailing list groups emails under a name and a language.
Email
*****
It stores emails for a mailing list and provides links to the related party or
web user.
Two actions are available:
- *Request Subscribe* which sends an e-mail to confirm the subscription to a
list.
- *Request Unsubscribe* which sends an e-mail to confirm the unsubscription of
an email address from the list.
Message
*******
It stores a message to send to all e-mail addresses on a list. A message is
defined by:
* From: the address from which the message is sent.
* List: the list of addresses to send the message to.
* Title
* Content
* State:
* Draft
* Sending
* Sent
A wizard is available that sends a message to a unique e-mail address from the
list for test purposes.
Configuration
*************
The marketing_email module uses parameters from the section:
- `[marketing]`:
- `email_from`: The default `From` for the e-mails that get sent.
- `email_subscribe_url`: the URL to confirm the subscription to which the
parameter `token` will be added.
- `email_unsubscribe_url`: the URL to unsubscribe an e-mail address to
which the parameter `token` will be added.
- `email_spy_pixel`: A boolean to activate the spy pixel. Disabled by default.
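For example, the corresponding fragment of the trytond configuration file
might look like this (placeholder values)::

    [marketing]
    email_from = marketing@example.com
    email_subscribe_url = https://www.example.com/subscribe
    email_unsubscribe_url = https://www.example.com/unsubscribe
    email_spy_pixel = False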
| null | Tryton | bugs@tryton.org | null | null | GPL-3 | tryton marketing email list | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",... | [] | http://www.tryton.org/ | http://downloads.tryton.org/6.0/ | >=3.6 | [] | [] | [] | [
"trytond_marketing<6.1,>=6.0",
"trytond_party<6.1,>=6.0",
"trytond_web_user<6.1,>=6.0",
"trytond_web_shortener<6.1,>=6.0",
"trytond<6.1,>=6.0"
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://hg.tryton.org/modules/marketing_email"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:11:16.296628 | trytond_marketing_email-6.0.3.tar.gz | 34,219 | 47/38/9bc6452f366df13692ec88160fb0b40e5a88c09220d818bbc2dddde7194c/trytond_marketing_email-6.0.3.tar.gz | source | sdist | null | false | 8e61c5202cf619550a316f291aa2d5ae | ef06feca363ac0d9d4b74693a30436c7359ba65ba7b470d85a1bd6a657f4df10 | 47389bc6452f366df13692ec88160fb0b40e5a88c09220d818bbc2dddde7194c | null | [
"LICENSE"
] | 237 |
2.4 | teleget9527 | 0.2.0 | TeleGet - High-speed Telegram file downloader SDK with multi-connection parallel downloading | # TeleGet — Telegram Downloader SDK
<p align="center">
<strong>Multi-connection parallel file downloading for Telegram</strong><br>
Built on <a href="https://github.com/LonamiWebs/Telethon">Telethon</a> · Python 3.9+ · Windows / Linux / macOS
</p>
---
## Why TeleGet?
If you've tried downloading large files from Telegram using Telethon's built-in `download_media()`, you've likely hit these walls:
**Speed ceiling.** Telethon downloads through a single MTProto connection. On a free account this tops out around 0.3–0.5 MB/s. A 2 GB video takes over an hour — if the connection doesn't drop first.
**Silent throttling.** Telegram's server silently imposes bandwidth limits per account. You won't see an error; your download just stalls repeatedly, tanking average throughput to unusable levels.
**FloodWait punishment.** Send requests too fast and Telegram locks you out with a `FloodWaitError` — anywhere from 30 seconds to 24 hours. Most wrappers either ignore this or leave you to handle it manually.
**No resume.** Connection drops at 95%? Start over. Telethon has no built-in checkpoint mechanism for partial downloads.
**Cross-DC pain.** Files hosted on a different datacenter than your account require special handling. Telethon doesn't handle this transparently for concurrent downloads.
TeleGet solves all of these with multi-connection parallel downloading, intelligent rate limiting, checkpoint resume, and automatic cross-DC routing.
---
## Download Performance

> 2.16 GB file, free account, same DC — **7.55 MB/s**, 2214/2214 parts, zero failures.
---
## Core Features
- **Multi-connection parallel download** — splits files into chunks, multiple workers download simultaneously
- **Smart rate limiting** — avoids FloodWait and server-side throttling penalties
- **Cross-DC support** — automatically detects file datacenter and routes accordingly
- **Checkpoint resume** — interrupted downloads pick up where they left off, not from zero
- **Efficient disk I/O** — no temp files, no merge step
- **Account isolation** — accounts are fully isolated, crash-safe
- **Proxy auto-detection** — tries system proxy, common local ports, falls back to direct
- **Multi-account management** — switch between accounts without restarting
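The first feature can be sketched with plain asyncio (a conceptual illustration, not TeleGet's implementation; `fetch_part` stands in for a real MTProto file request):

```python
import asyncio

PART_SIZE = 512 * 1024  # Telegram serves files in fixed-size parts

async def fetch_part(file_id: int, offset: int, limit: int) -> bytes:
    """Stand-in for a real MTProto upload.getFile request."""
    await asyncio.sleep(0)  # yield, as real network I/O would
    return bytes(limit)     # dummy payload of the requested size

async def download(file_id: int, total_size: int, workers: int = 4) -> bytes:
    sem = asyncio.Semaphore(workers)  # cap concurrent connections

    async def bounded(offset: int) -> bytes:
        async with sem:
            limit = min(PART_SIZE, total_size - offset)
            return await fetch_part(file_id, offset, limit)

    offsets = range(0, total_size, PART_SIZE)
    parts = await asyncio.gather(*(bounded(o) for o in offsets))
    return b"".join(parts)  # gather preserves order, so parts line up

data = asyncio.run(download(file_id=1, total_size=1_300_000))
print(len(data))
```

With a real client, each worker would hold its own connection, which is where the parallel speedup comes from.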
---
## Requirements
| Requirement | Details |
|-------------|---------|
| Python | >= 3.9 |
| OS | Windows 10+, Linux, macOS |
| Network | Telegram API accessible (direct or via proxy) |
### Telegram API Credentials
You need a Telegram API ID and API Hash from [my.telegram.org](https://my.telegram.org), and at least one Telethon `.session` file.
### Dependencies
| Package | Purpose | Install |
|---------|---------|---------|
| [Telethon](https://github.com/LonamiWebs/Telethon) >= 1.38.0 | Telegram MTProto client | Required |
| [psutil](https://github.com/giampaolo/psutil) >= 5.9.0 | Process monitoring | Required |
| [cryptg](https://github.com/cher-nov/cryptg) >= 0.4.0 | Encryption acceleration (~10x faster) | Recommended |
| [PySocks](https://github.com/Anorov/PySocks) >= 1.7.0 | SOCKS proxy support | Optional |
---
## Installation
```bash
# From PyPI
pip install teleget9527
# Recommended: with encryption acceleration
pip install teleget9527[fast]
# Full install (encryption + proxy)
pip install teleget9527[all]
# From source
git clone https://github.com/xwc9527/TeleGet.git
cd TeleGet
pip install -e ".[all]"
```
---
## Quick Start
### 1. Configure
```bash
cp .env.example .env
```
Edit `.env` with your API credentials and session path. See `.env.example` for all available options.
### 2. Session Setup
Place your Telethon `.session` file in the session directory:
```
data/
└── your_account_id/
└── session.session
```
### 3. Test Login
```bash
python test_login.py
```
### 4. Test Download
```bash
python test_real_download.py
```
### 5. SDK Usage
```python
import asyncio
from tg_downloader import TGDownloader
async def main():
downloader = TGDownloader(
api_id=12345678,
api_hash="your_api_hash",
session_dir="./data",
)
await downloader.start("my_account")
request_id = await downloader.download(
chat_id=-1001234567890,
msg_id=42,
save_path="./downloads/video.mp4",
progress_callback=lambda dl, total, pct: print(f"{pct:.1f}%"),
)
# Wait for completion...
await downloader.shutdown()
asyncio.run(main())
```
The SDK provides 4 async methods: `start()`, `download()`, `cancel()`, `shutdown()`.
---
## What TeleGet Handles
If you're building a Telegram downloader and hitting these errors, TeleGet already handles them:
### Telegram API Errors — All Handled Automatically
- **`FloodWaitError: A wait of X seconds is required`** — Telegram locks you out for sending requests too fast. TeleGet auto-detects and decelerates before this happens.
- **`FloodPremiumWaitError`** — Free accounts get throttled 7–11 seconds per hit. TeleGet handles the backoff transparently.
- **`FILE_REFERENCE_EXPIRED` / `FILE_REFERENCE_INVALID`** — File metadata goes stale after ~1 hour. TeleGet auto-refreshes without restarting the download.
- **`AUTH_KEY_UNREGISTERED`** — DC auth key invalidated server-side. TeleGet rebuilds authorization automatically.
- **`AuthBytesInvalidError`** — Cross-DC auth race condition that crashes most implementations. TeleGet prevents it entirely.
- **`Server closed the connection` / `ConnectionError`** — Telegram drops sockets under load. TeleGet recovers without losing progress.
- **`0 bytes read on a total of 8 expected bytes`** — Silent connection death. TeleGet detects and reconnects.
- **`asyncio.TimeoutError`** — Requests hang indefinitely. TeleGet enforces per-request timeouts and rotates to healthy connections.
- **`WinError 32: The process cannot access the file`** — Windows file locking during rename. TeleGet retries automatically.
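For illustration, the basic shape of FloodWait handling is a retry loop that waits exactly as long as the server demands. A self-contained sketch (the `FloodWait` class here is a stand-in for Telethon's `FloodWaitError`, and the sleep function is injectable so the logic can be demonstrated without real delays):

```python
import time

class FloodWait(Exception):
    """Stand-in for telethon.errors.FloodWaitError."""
    def __init__(self, seconds: int):
        super().__init__(f"A wait of {seconds} seconds is required")
        self.seconds = seconds

def with_flood_retry(request, max_retries=3, sleep=time.sleep):
    """Retry a request, honoring server-mandated FloodWait pauses."""
    for attempt in range(max_retries + 1):
        try:
            return request()
        except FloodWait as e:
            if attempt == max_retries:
                raise
            sleep(e.seconds)  # wait exactly as long as the server demands

# Demo: fail twice with FloodWait, then succeed.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FloodWait(1)
    return "ok"

waits = []
result = with_flood_retry(flaky, sleep=waits.append)
print(result, waits)  # ok [1, 1]
```

TeleGet goes further by decelerating *before* the error occurs, but the reactive loop above is the safety net underneath.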
### Telegram Download Challenges — All Solved
- **Single-connection speed ceiling** (0.3–0.5 MB/s) → Multi-connection parallel download, 7+ MB/s on free accounts
- **Undocumented rate limits** → Multi-layer adaptive rate limiting that auto-tunes to your account's limits
- **FloodWait cascading failures** → Circuit breaker hierarchy isolates failures per-connection
- **Cross-DC file download** → Automatic DC detection, dedicated connection pool per datacenter
- **Cross-DC `AuthBytesInvalidError`** → Race condition prevention for concurrent auth exports
- **No download resume** → Part-level checkpoint persistence, survives crashes and restarts
- **Cross-session resume** → Switch accounts or restart app, download continues from where it left off
- **Connection storms after "Server closed"** → Disconnect detection with coordinated recovery
- **Download stalls at 99%** → Watchdog detects stuck workers and force-restarts them
- **Disk I/O bottleneck** → Sparse file pre-allocation, direct offset writes, no temp files or merge step
- **Multi-account auth collision** → Process-level account isolation, zero shared state
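Part-level checkpointing can be illustrated with a small state file recording which parts have already landed on disk; a resumed run then fetches only the missing parts (a stdlib-only sketch of the idea, not TeleGet's actual checkpoint format):

```python
import json
from pathlib import Path

def load_done(ckpt: Path) -> set:
    """Read the set of completed part indices, if a checkpoint exists."""
    if ckpt.exists():
        return set(json.loads(ckpt.read_text()))
    return set()

def mark_done(ckpt: Path, done: set) -> None:
    """Persist progress after each part so a crash loses at most one part."""
    ckpt.write_text(json.dumps(sorted(done)))

def download_parts(ckpt: Path, total_parts: int) -> list:
    done = load_done(ckpt)
    fetched = []
    for part in range(total_parts):
        if part in done:
            continue  # already on disk from a previous run
        fetched.append(part)  # ...write part at its byte offset here...
        done.add(part)
        mark_done(ckpt, done)
    return fetched

ckpt = Path("demo.ckpt.json")
ckpt.write_text(json.dumps([0, 1, 2]))      # pretend a previous run got 3 parts
print(download_parts(ckpt, total_parts=5))  # [3, 4]
ckpt.unlink()
```

Because the checkpoint lives next to the file rather than in the session, this style of resume also survives switching accounts.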
---
## Troubleshooting
**`FloodWaitError`** — TeleGet handles this automatically. If frequent, increase `rate_limiter_interval`.
**`FloodPremiumWaitError`** — Free account throttling. Lower `connection_pool_size` to reduce frequency.
**`FILE_REFERENCE_EXPIRED`** — Auto-refreshed up to 3 times. If persistent, the message may have been edited or deleted.
**`AUTH_KEY_UNREGISTERED`** — Auto-recovered. If persistent, your `.session` file may be corrupted — re-login.
**Download stalls** — Watchdog auto-recovers after 30s. Check logs for `[WATCHDOG]` entries.
**`WinError 32`** — Auto-retried. If persistent, another process may be holding the file open.
---
## License
AGPL-3.0 — see [LICENSE](LICENSE) for details.
| text/markdown | null | null | null | null | AGPL-3.0-only | telegram, downloader, mtproto, parallel, telethon | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"P... | [] | null | null | >=3.9 | [] | [] | [] | [
"Telethon>=1.38.0",
"psutil>=5.9.0",
"cryptg>=0.4.0; extra == \"fast\"",
"PySocks>=1.7.0; extra == \"proxy\"",
"cryptg>=0.4.0; extra == \"all\"",
"PySocks>=1.7.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/xwc9527/TeleGet",
"Repository, https://github.com/xwc9527/TeleGet",
"Issues, https://github.com/xwc9527/TeleGet/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:11:01.748955 | teleget9527-0.2.0.tar.gz | 132,913 | 6b/3d/51dc1bd9d0d0b53b641a137f942b130b09b654dc7b5ebfa0545a021fd4c8/teleget9527-0.2.0.tar.gz | source | sdist | null | false | 659aab412e1bc4a62aa14f2054f8df0d | 1e8e6040bbe4bf5b2c33d4e95cf575444ab56029039ac9b2484289bb7d9d99db | 6b3d51dc1bd9d0d0b53b641a137f942b130b09b654dc7b5ebfa0545a021fd4c8 | null | [
"LICENSE"
] | 233 |
2.4 | dworshak-secret | 1.2.10 | `dworshak-secret` is a light-weight library for local credential access. By adding `dworshak-secret` as a dependency to your Python project, you enable your program or script to leverage secure credentials. | `dworshak-secret` is a light-weight library for local credential access. By adding `dworshak-secret` as a dependency to your Python project, you enable your program or script to leverage secure credentials, typically added with the `dworshak-prompt.DworshakObtain.secret()` function or managed directly with the `dworshak` CLI.
All secrets are stored Fernet-encrypted in a SQL database file.
No opaque blobs — every entry is meaningful and decryptable via the library.
### Example
Typical package inclusion. See below for guidance concerning Termux and iSH Alpine.
```zsh
uv add "dworshak-secret[crypto]"
```
```python
from dworshak_secret import DworshakSecret, initialize_vault, list_credentials
from dworshak_prompt import DworshakObtain
# Initialize the vault (create key and DB if missing)
initialize_vault()
# Store and retrieve credentials by prompting the user on their local machine
username = DworshakObtain.secret("rjn_api", "username")
secret = DworshakObtain.secret("rjn_api", "password")
# ---
# Alternatively, store secrets with a script ....
## (NOT recommended to keep in your codebase or in system history)
DworshakSecret.set("rjn_api", "username", "davey.davidson")
DworshakSecret.set("rjn_api", "password", "s3cr3t")
## ...and then retrieve credentials in your codebase.
username = DworshakSecret.get("rjn_api", "username")
password = DworshakSecret.get("rjn_api", "password")
# ---
# List stored items
for service, item in list_credentials():
print(f"{service}/{item}")
```
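For intuition, the vault concept (a small SQL table keyed by service and item) can be sketched with the standard library. Real storage Fernet-encrypts the values; this sketch stands that in with reversible base64 encoding, which is NOT secure and for illustration only:

```python
import base64
import sqlite3

# In-memory stand-in for the vault database file.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE credentials (service TEXT, item TEXT, value BLOB,"
    " PRIMARY KEY (service, item))"
)

def set_secret(service: str, item: str, value: str) -> None:
    # Real implementation: Fernet-encrypt here instead of base64-encoding.
    token = base64.b64encode(value.encode())
    db.execute("REPLACE INTO credentials VALUES (?, ?, ?)", (service, item, token))

def get_secret(service: str, item: str) -> str:
    row = db.execute(
        "SELECT value FROM credentials WHERE service=? AND item=?", (service, item)
    ).fetchone()
    return base64.b64decode(row[0]).decode()

set_secret("rjn_api", "username", "davey.davidson")
print(get_secret("rjn_api", "username"))  # davey.davidson
```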
---
## Include Cryptography Library
Here we cover using `dworshak-secret` as a dependency in your project.
The central question is how to properly include the `cryptography` package.
On a Termux system, you can either **(A)** use the precompiled python-cryptography Termux package or **(B)** build `cryptography` from source.
### Termux Installation
#### A. Use python-cryptography
This is faster but pollutes your local venv with other system site packages.
```
pkg install python-cryptography
uv venv --system-site-packages
uv add dworshak-secret
```
#### B. Build cryptography from source (uv handles this better than pip)
```zsh
pkg install rust binutils
uv add "dworshak-secret[crypto]"
```
---
### iSH Alpine Installation
```
apk add py3-cryptography
uv venv --system-site-packages
uv add dworshak-secret
```
---
## Why Dworshak Over **keyring**?
Keyring is the go-to for desktop Python apps thanks to native OS backends, but it breaks on Termux because there's no keyring daemon or secure fallback, leaving you with insecure plaintext or install headaches.
Dworshak avoids that entirely with a portable, self-contained Fernet-encrypted SQLite vault that works the same on Linux, macOS, Windows, and Termux on Android tablets.
You get reliable programmatic access via `dworshak_secret.DworshakSecret.get()` (or `dworshak_prompt.DworshakObtain.secret()`).
The Dworshak ecosystem is field-ready for real scripting workflows like API pipelines and skip-the-playstore localhost webapps.
When keyring isn't viable, Dworshak just works.
---
## Sister Projects in the Dworshak Ecosystem
* **CLI/Orchestrator:** [dworshak](https://github.com/City-of-Memphis-Wastewater/dworshak)
* **Interactive UI:** [dworshak-prompt](https://github.com/City-of-Memphis-Wastewater/dworshak-prompt)
* **Secrets Storage:** [dworshak-secret](https://github.com/City-of-Memphis-Wastewater/dworshak-secret)
* **Plaintext Pathed Configs:** [dworshak-config](https://github.com/City-of-Memphis-Wastewater/dworshak-config)
* **Classic .env Injection:** [dworshak-env](https://github.com/City-of-Memphis-Wastewater/dworshak-env)
```zsh
pipx install dworshak
pip install dworshak-secret
pip install dworshak-config
pip install dworshak-env
pip install dworshak-prompt
```
---
## CLI
`dworshak` is the intended CLI layer, but the `dworshak-secret` CLI can also be used directly.
```
pipx install "dworshak-secret[typer,crypto]"
dworshak-secret helptree
```
<p align="center">
<img src="https://raw.githubusercontent.com/City-of-Memphis-Wastewater/dworshak-secret/main/assets/dworshak-secret_v1.2.9_helptree.svg" width="100%" alt="Screenshot of the Dworshak CLI helptree">
</p>
`helptree` is a utility function for Typer CLIs, imported from the `typer-helptree` library.
- GitHub: https://github.com/City-of-Memphis-Wastewater/typer-helptree
- PyPI: https://pypi.org/project/typer-helptree/
---
| text/markdown | null | George Clayton Bennett <george.bennett@memphistn.gov> | null | George Clayton Bennett <george.bennett@memphistn.gov> | null | credentials, security | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Progra... | [] | null | null | >=3.9 | [] | [] | [] | [
"pyhabitat>=1.2.2",
"cryptography>=46.0.3; extra == \"crypto\"",
"typer>=0.21.1; extra == \"typer\"",
"rich>=14.3.2; extra == \"typer\"",
"typer-helptree>=0.2.6; extra == \"typer\"",
"dworshak-secret[crypto,typer]; extra == \"full\""
] | [] | [] | [] | [
"Homepage, https://github.com/city-of-memphis-wastewater/dworshak-secret",
"Repository, https://github.com/city-of-memphis-wastewater/dworshak-secret"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:10:55.951704 | dworshak_secret-1.2.10.tar.gz | 20,627 | 38/fd/3b514a2542c679f5326d7db3d8a62acdb79eec318f02ddfa1cf16ef1c5c7/dworshak_secret-1.2.10.tar.gz | source | sdist | null | false | c31e5443596e0d8900f30185e393724d | 0c27a6fec977d3d62f6a7ab24743f1cc0cd9da78cbbc961e286ae8ad3dae70aa | 38fd3b514a2542c679f5326d7db3d8a62acdb79eec318f02ddfa1cf16ef1c5c7 | MIT | [
"LICENSE"
] | 326 |
2.4 | synthetmic | 0.2.0 | A Python package for generating synthetic Laguerre polycrystalline microstructures | # SynthetMic
A Python package for generating synthetic polycrystalline microstructures using Laguerre diagrams, powered by [pysdot](https://github.com/sd-ot/pysdot).
## Installation
To install the latest version of the package via `pip`, run
```
pip install synthetmic
```
> If you are using `uv` to manage your project, run the following command instead:
>
> uv add synthetmic
## Usage
To use this package to generate synthetic microstructures, you need to import the generator class as follows:
```python
from synthetmic import LaguerreDiagramGenerator
```
Create an instance of the class with the default arguments:
```python
generator = LaguerreDiagramGenerator()
```
or with custom parameters:
```python
generator = LaguerreDiagramGenerator(
tol=0.1,
n_iter=5,
damp_param=1.0,
verbose=True,
)
```
We can fit this class to some data by calling the `fit` method. For example, we can create a Laguerre tessellation of the unit cube [0, 1] x [0, 1] x [0, 1] with 1000 cells of equal volume as follows:
```python
import numpy as np
domain = np.array([[0, 1],[0, 1],[0, 1]])
domain_vol = np.prod(domain[:, 1] - domain[:, 0])
n_grains = 1000
seeds = np.column_stack(
[np.random.uniform(low=d[0], high=d[1], size=n_grains) for d in domain]
)
volumes = (np.ones(n_grains) / n_grains) * domain_vol
# call the fit method on data
generator.fit(
seeds=seeds,
volumes=volumes,
domain=domain,
)
```
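For intuition about what `fit` computes: a Laguerre (power) diagram assigns each point `x` to the cell of the seed `s_i` minimizing the power distance `|x - s_i|^2 - w_i`, and the solver adjusts the weights `w_i` until every cell reaches its target volume. A pure-Python sketch of the assignment rule (not the pysdot solver itself):

```python
def power_cell(x, seeds, weights):
    """Index of the Laguerre cell containing point x."""
    dists = [
        sum((xi - si) ** 2 for xi, si in zip(x, s)) - w
        for s, w in zip(seeds, weights)
    ]
    return dists.index(min(dists))

seeds = [(0.25, 0.5), (0.75, 0.5)]

# Equal weights: the cell boundary is the perpendicular bisector x = 0.5.
assert power_cell((0.4, 0.5), seeds, [0.0, 0.0]) == 0

# Raising a seed's weight grows its cell, which is how the solver
# enlarges cells that are below their target volume.
assert power_cell((0.4, 0.5), seeds, [0.0, 0.2]) == 1
```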
After calling the fit method, you can use the instance to get various properties of the diagram, e.g., get the centroids and vertices of the cells:
```python
centroids = generator.get_centroids()
vertices = generator.get_vertices()
print("diagram centroids:\n", centroids)
print("diagram vertices:\n", vertices)
```
You can plot the diagram in static or interactive mode by using the fitted instance:
```python
from synthetmic.plot import plot_cells_as_pyvista_fig
plot_cells_as_pyvista_fig(
generator=generator,
save_path="./example_diagram.html",
)
```
The generated HTML file can be viewed via any browser of your choice.
If you prefer a static figure, you can save it in any of the following formats: pdf, eps, ps, tex, and svg. Saving the figure as pdf looks like:
```python
plot_cells_as_pyvista_fig(
generator=generator,
save_path="./example_diagram.pdf",
)
```
To see more usage examples, see the `examples` folder or check below on how to run them via `cli.py`.
The example above uses custom data. If you would like to use one of the datasets provided by this package, they can be loaded from the `synthetmic.data.paper` and `synthetmic.data.toy` modules. The former gives access to the data for generating some figures from this [paper](https://www.tandfonline.com/doi/full/10.1080/14786435.2020.1790053) and the latter provides some useful toy data. All data creators and loaders from these modules return a `synthetmic.data.utils.SynthetMicData` object, which contains the following fields: `seeds`, `volumes`, `domain`, `periodic`, and `init_weights`.
Each of the fields of the data object can be passed to the `LaguerreDiagramGenerator().fit` method either as keyword/positional arguments or as a dictionary. For instance, let's load some data from the `synthetmic.data.paper` module and pass the fields as keyword arguments:
```python
from synthetmic.data.paper import create_example5p5_data
data = create_example5p5_data(is_periodic=False)
generator.fit(
seeds=data.seeds,
volumes=data.volumes,
domain=data.domain,
)
```
or pass the fields as dictionary:
```python
from dataclasses import asdict
generator.fit(**asdict(data))
```
## Working with source codes
### Build from source
If you would like to build this project from source either for development purposes or for any other reason, it is recommended to install [uv](https://docs.astral.sh/uv/). This is what is adopted in this project. To install uv, follow the instructions in this [link](https://docs.astral.sh/uv/getting-started/installation/).
If you don't want to use uv, you can use other alternatives like [pip](https://pip.pypa.io/en/stable/).
The following instructions use uv for building synthetmic from source.
1. Clone the repository by running
```
git clone https://github.com/synthetic-microstructures/synthetmic
```
1. Create a python virtual environment by running
```
uv venv .venv --python PYTHON_VERSION
```
> Here, PYTHON_VERSION is the supported Python version. Note that this project requires version >=3.12.3
1. Activate the virtual environment by running
```
source .venv/bin/activate
```
1. Prepare all modules and dependencies by running the following:
```
uv sync --all-extras
```
### Running examples
We created a command line interface (cli) for recreating some of the examples provided in this [paper](https://www.tandfonline.com/doi/full/10.1080/14786435.2020.1790053) (and lots more!).
To check the available commands in the cli, run
```
python cli.py --help
```
There are currently two commands available in the cli: `recreate` and `analyse`.
You can check information about each of these commands by running
```
python cli.py COMMAND --help
```
where `COMMAND` is any of the commands.
Running a command with its appropriate arguments is simple. For instance, to recreate some of the two-dimensional examples in the above-mentioned paper and save the generated plots in the `./plots` directory, run
```
python cli.py recreate --example 2d --save-dir ./plots
```
You can do the same for three-dimensional examples. You can pass the flag `--interactive` or `-i` to save the generated plots as `.html` files, which can then be opened in a browser to interact with them:
```
python cli.py recreate --example 2d --save-dir ./plots --interactive
```
> Note: by default, the generated plots will be saved as `.pdf`. Passing the `--interactive` flag in the 2d case is skipped, since interactivity adds little for 2d plots.
### Running tests
To run all tests, run
```
pytest -v tests
```
## Authors and maintainers
- [R. O. Ibraheem](https://github.com/Rasheed19)
- [D. P. Bourne](https://github.com/DPBourne)
- [S. M. Roper](https://github.com/smr29git)
## References
If you use this package in your research, please link to this project. Additionally, please consider citing the following paper:
```bibtex
@article{Bourne01112020,
author = {D. P. Bourne and P. J. J. Kok and S. M. Roper and W. D. T. Spanjer},
title = {Laguerre tessellations and polycrystalline microstructures: a fast algorithm for generating grains of given volumes},
journal = {Philosophical Magazine},
volume = {100},
number = {21},
pages = {2677--2707},
year = {2020},
publisher = {Taylor \& Francis},
doi = {10.1080/14786435.2020.1790053},
URL = {https://doi.org/10.1080/14786435.2020.1790053},
eprint = {https://doi.org/10.1080/14786435.2020.1790053}
}
```
You may also be interested in some of our other libraries:
* [LPM](https://github.com/DPBourne/Laguerre-Polycrystalline-Microstructures) - MATLAB code for generating synthetic polycrystalline microstructures using Laguerre diagrams
* [pyAPD](https://github.com/mbuze/PyAPD) - a Python library for computing *anisotropic* Laguerre diagrams
* [SynthetMic-GUI](https://github.com/synthetic-microstructures/synthetmic-gui) - a web app for generating 2D and 3D synthetic polycrystalline microstructures using Laguerre tessellations
| text/markdown | null | "R. O. Ibraheem" <ibraheem.abdulrasheed@gmail.com>, "D. P. Bourne" <D.Bourne@hw.ac.uk>, "S. M. Roper" <Steven.Roper@glasgow.ac.uk> | null | "R. O. Ibraheem" <ibraheem.abdulrasheed@gmail.com>, "D. P. Bourne" <D.Bourne@hw.ac.uk>, "S. M. Roper" <Steven.Roper@glasgow.ac.uk> | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"matplotlib>=3.10.3",
"numpy>=2.3.0",
"pysdot==0.2.38",
"pyvista[jupyter]>=0.45.2",
"scipy>=1.15.3",
"vtk>=9.4.2",
"click>=8.2.1; extra == \"cli\"",
"pytest>=8.4.1; extra == \"tests\""
] | [] | [] | [] | [
"homepage, https://github.com/synthetic-microstructures/synthetmic",
"issues, https://github.com/synthetic-microstructures/synthetmic/issues"
] | uv/0.7.15 | 2026-02-18T18:10:31.098596 | synthetmic-0.2.0.tar.gz | 134,946 | 9b/15/f8290c9cef290c30b042a1ba84a20d272512042938573ccb53367bd2c121/synthetmic-0.2.0.tar.gz | source | sdist | null | false | a4f36d1ac19528f264b5ceb0406724a0 | 5f15c45162c5f4c1f37684c014fae0bcc0224a0c249fa8e7a2c331f93b110b62 | 9b15f8290c9cef290c30b042a1ba84a20d272512042938573ccb53367bd2c121 | MIT | [
"LICENSE"
] | 252 |
2.4 | stakefish-web3-utils | 0.10.3 | Stakefish’s web3 utils for Python | .. image:: https://img.shields.io/pypi/v/stakefish-web3-utils.svg
:target: https://pypi.org/project/stakefish-web3-utils
.. image:: https://img.shields.io/pypi/pyversions/stakefish-web3-utils.svg
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: Black
See the `Installation Instructions
<https://packaging.python.org/installing/>`_ in the Python Packaging
User's Guide for instructions on installing, upgrading, and uninstalling
stakefish-web3-utils.
Questions and comments should be directed to `GitHub Discussions
<https://github.com/stakefish/web3-utils.py/discussions>`_.
Bug reports and especially tested patches may be
submitted directly to the `bug tracker
<https://github.com/stakefish/web3-utils.py/issues>`_.
| null | Michal Baranowski <mbaranovski@stake.fish>, Mateusz Sokola <mateusz@stake.fish> | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3.9"
] | [] | https://github.com/stakefish/web3-utils | null | >=3.9 | [] | [] | [] | [
"web3",
"tenacity",
"python-gitlab",
"eth-brownie; extra == \"tests\"",
"pytest-asyncio; extra == \"tests\"",
"pytest-mock; extra == \"tests\""
] | [] | [] | [] | [] | twine/6.0.0 CPython/3.10.5 | 2026-02-18T18:10:06.220463 | stakefish_web3_utils-0.10.3.tar.gz | 12,736 | 32/5c/5834297575c5ca706ef93d17e41ac1e46f423f7744c5e4c7cd446a4f7ecb/stakefish_web3_utils-0.10.3.tar.gz | source | sdist | null | false | 26051f2199321179aae2853593b1ed68 | dd19938598d8fd6ef390244d1d0614911a0e9e7013621de9d4cc1129312560c8 | 325c5834297575c5ca706ef93d17e41ac1e46f423f7744c5e4c7cd446a4f7ecb | null | [] | 259 |
2.4 | screenwright | 0.1.0b1 | A Python Screenplay pattern framework with cinematic test reporting | # Screenwright
A Python Screenplay pattern framework with cinematic test reporting.
[](https://pypi.org/project/screenwright/)
[](https://pypi.org/project/screenwright/)
[](https://github.com/Hossein-Fasihi/screenwright/blob/main/LICENSE)
[](https://github.com/Hossein-Fasihi/screenwright/actions)
Screenwright brings the Screenplay pattern to Python testing. Actors perform
Tasks, exercise Abilities through Interactions, and verify outcomes by asking
Questions -- all while generating cinematic HTML reports with Material Design 3
dark neon theming.
## Key Features
- **Screenplay Pattern Engine** -- Actor, Task, Interaction, Question, Ability,
and Fact primitives with full event sourcing
- **pytest-bdd Integration** -- Auto-registered plugin with Stage and Actor
fixtures, zero configuration required
- **Cinematic HTML Reports** -- Dashboard overview with summary cards, feature
lists, and scenario rows; theatre-style scenario presentations with actor
entrances, ability badges, interaction arrows, and narration subtitles
- **Material Design 3 Theming** -- YAML-based theme configuration with tonal
palette generation, neon glow effects, and customizable personas
- **Typed Domain Events** -- 17 frozen dataclass events with JSON serialization,
enabling full traceability of test execution
- **Fully Typed** -- py.typed marker with strict mypy compliance
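The "Typed Domain Events" bullet above describes a general pattern -- frozen dataclasses serialized to JSON -- that can be sketched as below. The event name and fields here are invented for illustration; Screenwright's actual 17 event types are not reproduced:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical event illustrating the frozen-dataclass + JSON pattern;
# Screenwright's real event classes and fields may differ.
@dataclass(frozen=True)
class ActorEntered:
    actor_name: str
    abilities: tuple[str, ...] = ()

    def to_json(self) -> str:
        # asdict() recursively converts the dataclass to plain containers
        return json.dumps(asdict(self))

event = ActorEntered("Ali", ("BrowseTheWeb",))
print(event.to_json())  # {"actor_name": "Ali", "abilities": ["BrowseTheWeb"]}
```

Freezing the dataclass makes events immutable (and hashable), which is what enables reliable event sourcing and replay.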
## Installation
```bash
pip install screenwright
```
For development:
```bash
pip install screenwright[dev]
```
## Quick Start
### 1. Write a feature file
```gherkin
# features/search.feature
Feature: Web Search
Scenario: Search for a term
Given Ali can browse the web
When Ali searches for "Screenplay pattern"
Then Ali should see results containing "Screenplay"
```
### 2. Define step implementations using the Screenplay pattern
```python
# tests/step_defs/test_search_steps.py
from pytest_bdd import scenario, given, when, then, parsers
from screenwright import Actor, Ability, Interaction, Question, task
# -- Abilities --
class BrowseTheWeb(Ability):
def __init__(self, driver):
self.driver = driver
@staticmethod
def using(driver):
return BrowseTheWeb(driver)
# -- Interactions --
class SearchFor(Interaction):
def __init__(self, term):
self.term = term
@staticmethod
def the_term(term):
return SearchFor(term)
def perform_as(self, actor):
driver = actor.ability_to(BrowseTheWeb).driver
driver.find_element("name", "q").send_keys(self.term)
driver.find_element("name", "q").submit()
# -- Questions --
class SearchResults(Question):
def answered_by(self, actor):
driver = actor.ability_to(BrowseTheWeb).driver
return driver.find_element("id", "results").text
# -- Steps --
@scenario("../features/search.feature", "Search for a term")
def test_search():
pass
@given("Ali can browse the web")
def ali_can_browse(actor, stage):
ali = stage.actor_named("Ali")
ali.who_can(BrowseTheWeb.using(create_driver()))
@when(parsers.parse('Ali searches for "{term}"'))
def ali_searches(stage, term):
ali = stage.shines_spotlight_on("Ali")
ali.attempts_to(SearchFor.the_term(term))
@then(parsers.parse('Ali should see results containing "{text}"'))
def ali_sees_results(stage, text):
ali = stage.shines_spotlight_on("Ali")
ali.should_see_that(SearchResults(), lambda answer: text in answer)
```
### 3. Run tests
```bash
pytest tests/ --screenwright-report=report.html
```
Open `report.html` to view the cinematic test report.
## Documentation
Full documentation is available at [screenwright.dev](https://screenwright.dev).
## License
Screenwright is released under the [Apache License 2.0](LICENSE).
| text/markdown | Screenwright Contributors | null | null | null | Apache-2.0 | bdd, pytest, reporting, screenplay, testing | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python ... | [] | null | null | >=3.11 | [] | [] | [] | [
"jinja2>=3.1",
"pyyaml>=6.0",
"import-linter>=2.0; extra == \"dev\"",
"mutmut>=3.0; extra == \"dev\"",
"mypy>=1.8; extra == \"dev\"",
"pre-commit>=3.0; extra == \"dev\"",
"pytest-bdd>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.3; extra ==... | [] | [] | [] | [
"Documentation, https://screenwright.dev",
"Repository, https://github.com/Hossein-Fasihi/screenwright"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:09:50.793991 | screenwright-0.1.0b1.tar.gz | 92,294 | 38/4c/26b20021b503652ee1889906f950c125d70bc09c9275685e7f8e6d901121/screenwright-0.1.0b1.tar.gz | source | sdist | null | false | 3cc67cea27dbd5560b573ff0af2e0214 | 8cdaaa55e71fd7315c9934f0d4c151a6fff065932db8b263477a56a06674752c | 384c26b20021b503652ee1889906f950c125d70bc09c9275685e7f8e6d901121 | null | [
"LICENSE"
] | 233 |
2.4 | trytond-production | 7.8.2 | Tryton module for production | #################
Production Module
#################
The *Production Module* provides the fundamental concepts required for managing
production.
This includes definitions for bills of materials and production orders.
| null | Tryton | foundation@tryton.org | null | null | GPL-3 | tryton production | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"Intended Audience :: Manufacturing",
"License :: OSI Approved :: GNU Gener... | [] | http://www.tryton.org/ | http://downloads.tryton.org/7.8/ | >=3.9 | [] | [] | [] | [
"python-sql>=0.4",
"trytond_company<7.9,>=7.8",
"trytond_product<7.9,>=7.8",
"trytond_stock<7.9,>=7.8",
"trytond<7.9,>=7.8",
"proteus<7.9,>=7.8; extra == \"test\"",
"trytond_stock_lot<7.9,>=7.8; extra == \"test\""
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/modules-production/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://code.tryton.org/tryton"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:09:32.032843 | trytond_production-7.8.2.tar.gz | 76,693 | 6d/cc/b270382ef31289c86b56a2bcd338cbab22bd186c734f3078684a60b93130/trytond_production-7.8.2.tar.gz | source | sdist | null | false | e7b82496a7911f6488f120ad8481ea58 | facfe8c550245dc775d4fedb84120a276f99dc5888e8753ad3b8556328a895ab | 6dccb270382ef31289c86b56a2bcd338cbab22bd186c734f3078684a60b93130 | null | [
"LICENSE"
] | 294 |
2.4 | StonerPlots | 1.9.3 | This is a fork of scienceplots and provides a range of matplotlib styles for plotting physics... | # Stoner Plots


[](https://stonerlab.github.io/stonerplots/)
[](https://github.com/stonerlab/stonerplots/actions/workflows/pytest.yaml)
[](https://app.codacy.com/gh/stonerlab/stonerplots/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[](https://github.com/stonerlab/stonerplots/actions/workflows/build_conda.yaml)
[](https://badge.fury.io/gh/stonerlab%2Fstonerplots)

[](https://badge.fury.io/py/StonerPlots)
[](https://anaconda.org/phygbu/stonerplots)
[](https://doi.org/10.5281/zenodo.18603124)
Stoner Plots is a fork of Science Plots with additional features that make producing scientific plots easier.

## Usage
Before using the new styles you need to import stonerplots, and you will most likely also want to use one of
the context managers, such as the `SavedFigure` class.
```python
import matplotlib.pyplot as plt
from stonerplots import SavedFigure

with SavedFigure("my_figure.pdf", style=["stoner", "aps"]):
    plt.figure()
    plt.plot(x, y, label="Dataset")
    ...
```
There are three main parts to this package:
1. A set of matplotlib style sheets for making plots with styles suitable for a variety of Physics related journals
and formats such as presentations and posters as well as reports and theses.
1. A set of Python Context managers designed to help with the process of preparing production quality figures in
matplotlib.
1. Some definitions of colours based on the Transport for London colour palette and inserted as named colours into
the matplotlib colour tables.
The package is fully documented (see link below) and comes with a set of examples that also serve as unit tests.
## Documentation
Documentation can be found on the [github pages for this repository](
https://stonerlab.github.io/stonerplots/index.html).
## Available Styles
### Core Styles
- stoner - this is the base style sheet
- poster - makes everything bigger for printing on a poster
- notebook - makes things a little bigger for a Jupyter notebook - from the original scienceplots package
- presentation - a style suitable for the main graph on a powerpoint slide
- thesis - a style that tries to look like the CM Physics group LaTeX thesis template
### Journal Styles
- nature - for Nature group journals - from the original scienceplots package
- aaas-science - Science single column style.
- ieee - for IEEE Transactions journals - from the original scienceplots package
- aps - for American Physical Society Journals (like Phys Rev Lett etc.)
- aip - for AIP journals such as Applied Physics Letters - labels in Serif Fonts
- iop - for Institute of Physics Journals.
### Modifiers
- aps1.5 - Switch to 1.5 column wide format
- aps2.0 - Switch to 2 column wide format
- aip2 - Switch to 2 column wide format for AIP journals
- stoner-dark - Switch to a dark background and lighter plotting colours.
- hi-res - Switches to 600dpi plotting (but using eps, pdf or svg is generally a better option)
- med-res - like hi-res, but switches to 300dpi plotting.
- presentation_sm - a style for making 1/2 width graphs.
- presentation_dark - tweak the weight of elements for dark presentations.
- science-2col, science-3col - Science 2 and 3 column width figures
- thesis-sm - reduces the figure width to make the axes closer to 4/3 aspect ratio.
## Context Managers
The package is designed to work by using python context managers to aid plotting. These include:
- SavedFigure - apply style sheets and then save any resulting figures to disc in one or more formats
- CentredAxes - makes a plot where the axes cross at the origin and there is no outside frame.
- StackVertical - make a multi-panel plot where the panels are arranged in a vertical stack and pushed together
so that the top-x-axis on one frame is the bottom of the next.
- MultiPanel - a general-purpose multi-panel plotting helper.
- InsetPlot - create an inset set of axes.
- DoubleYAxis - set up the right-hand y-axis for a second scale, optionally colour the y-axes differently, and
merge the legends into a single legend.
## Colour Cycles
The default colour cycle is based on the London Underground map colour scheme (why not?) and goes
- Northern
- Central
- Picadily
- District
- Metropolitan
- Bakerloo
- Jubilee
- Overground
- Victoria
- Elizabeth
- Circle
The package adds these as named colours in matplotlib, along with 90, 50, 70 and 10% shade variants of some of
them. See the [documentation page on colours](https://stonerlab.github.io/stonerplots/colours.html) for a
full list.
## Reference
This package draws heavily on [scienceplots](https://github.com/garrettj403/SciencePlots), so it
seems only fair to cite the original work....
@software{john_garrett_2023_10206719,
author = {John Garrett and
Echedey Luis and
H.-H. Peng and
Tim Cera and
gobinathj and
Josh Borrow and
Mehmet Keçeci and
Splines and
Suraj Iyer and
Yuming Liu and
cjw and
Mikhail Gasanov},
title = {garrettj403/SciencePlots: 2.1.1},
month = nov,
year = 2023,
publisher = {Zenodo},
version = {2.1.1},
doi = {10.5281/zenodo.10206719},
url = {https://doi.org/10.5281/zenodo.10206719},
}
The doi and BibTex reference for stonerplots is: <https://doi.org/10.5281/zenodo.14026874>
@software{gavin_burnell_2024_14026874,
author = {Gavin Burnell},
title = {stonerlab/stonerplots},
month = feb,
year = 2026,
publisher = {Zenodo},
version = {v1.9.2},
doi = {10.5281/zenodo.14026874},
url = {https://doi.org/10.5281/zenodo.14026874},
}
| text/markdown | null | Gavin Burnell <G.Burnell@leeds.ac.uk> | null | null | null | matplotlib-style-sheets, matplotlib-figures, scientific-papers, thesis-template, matplotlib-styles, python | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"bad_path"
] | [] | [] | [] | [
"Homepage, https://github.com/stonerlab/stonerplots/",
"Issues, https://github.com/stonerlab/stonerplots/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T18:09:21.328039 | stonerplots-1.9.3.tar.gz | 3,387,567 | 8e/08/af0622b7a26ad4fbd7ea8f45076038117444256f5c325ab72ec4987826b5/stonerplots-1.9.3.tar.gz | source | sdist | null | false | fdc80c2395b7f6d1025fed15b6fdaacd | a585c3571bcec53fa88dad35970f7ca166202fbe327ab1e47d9418a273d6d5df | 8e08af0622b7a26ad4fbd7ea8f45076038117444256f5c325ab72ec4987826b5 | null | [
"LICENSE"
] | 0 |
2.4 | trytond-production-work | 6.0.2 | Tryton module for production work | Production Work Module
######################
The production work module allows managing work orders for each production.
It also includes the work cost in the production cost.
Work Center
***********
Work center are places in the warehouse where production operations are
performed. They can be organized in a tree structure and each center can be
linked to a category. A cost can be defined on the work center with two
methods: `Per Cycle` or `Per Hour`.
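As a rough illustration of the two costing methods, consider the sketch below. The function and argument names are invented for illustration and are not part of Tryton's API:

```python
# Illustrative sketch of the two work-center costing methods described above.
def work_cost(method, rate, cycles=0, hours=0.0):
    if method == "cycle":   # `Per Cycle`: cost scales with consumed cycles
        return rate * cycles
    if method == "hour":    # `Per Hour`: cost scales with time spent
        return rate * hours
    raise ValueError("unknown cost method: %s" % method)

print(work_cost("cycle", rate=5.0, cycles=3))   # 15.0
print(work_cost("hour", rate=40.0, hours=1.5))  # 60.0
```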
Work
****
Works define, for a production, which operation to perform at which work center.
They also contain the number of cycles consumed to perform the operation.
The work can be in one of these states:
* Request
The linked production is still waiting.
* Draft
The production has started but no cycle was already consumed.
* Waiting
There are some draft cycles planned.
* Running
There is at least one running cycle.
* Finished
All the cycles are done (or cancelled).
* Done
The production is done.
Works are created for a waiting production using the linked routing. For each
step of the routing, a work is created with its operation. If the operation has
a work center category, the work center is set by choosing a child work center
of that category; if the operation has no category, the production's work
center is used.
Cycle
*****
Cycles are used to count the consumption and the duration of the work. They
also record the effective cost from the work center.
A cycle can be in one of these states:
* Draft
* Running
* Done
* Cancelled
| null | Tryton | bugs@tryton.org | null | null | GPL-3 | tryton production work | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"Intended Audience :: Manufacturing",
"License :: OSI Approved :: GNU Gener... | [] | http://www.tryton.org/ | http://downloads.tryton.org/6.0/ | >=3.6 | [] | [] | [] | [
"python-sql>=0.4",
"trytond_company<6.1,>=6.0",
"trytond_product<6.1,>=6.0",
"trytond_production<6.1,>=6.0",
"trytond_production_routing<6.1,>=6.0",
"trytond_stock<6.1,>=6.0",
"trytond<6.1,>=6.0"
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://hg.tryton.org/modules/production_work"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:09:05.793708 | trytond_production_work-6.0.2.tar.gz | 39,197 | 14/21/e890221e694060387c35cb3f98ec076a4390ae2301b811dd52c879dedb98/trytond_production_work-6.0.2.tar.gz | source | sdist | null | false | 376e44d6b951dc891240726b222c041c | f49dbe436fb38d44d223f087b75e379da8698e4d82f9cbc9dcab322f60c8604f | 1421e890221e694060387c35cb3f98ec076a4390ae2301b811dd52c879dedb98 | null | [
"LICENSE"
] | 246 |
2.4 | urdf-usd-converter | 0.1.0rc1 | A URDF to OpenUSD Data Converter | # urdf-usd-converter
# Overview
A [URDF](https://wiki.ros.org/urdf/XML) to [OpenUSD](https://openusd.org) Data Converter
> Important: This is currently an Alpha product. See the [CHANGELOG](https://github.com/newton-physics/urdf-usd-converter/blob/main/CHANGELOG.md) for features and known limitations.
Key Features:
- Converts an input URDF file into an OpenUSD Layer
- Supports data conversion of visual geometry & materials, as well as the links, collision geometry, and joints necessary for kinematic simulation.
- Available as a python module or command line interface (CLI).
- Creates a standalone, self-contained artifact with no connection to the source URDF, OBJ, DAE, or STL data.
- Structured as an [Atomic Component](https://docs.omniverse.nvidia.com/usd/latest/learn-openusd/independent/asset-structure-principles.html#atomic-model-structure-flowerpot)
- Suitable for visualization & rendering in any OpenUSD Ecosystem application.
- Suitable for import & simulation in [Newton](https://github.com/newton-physics/newton).
This project is part of [Newton](https://github.com/newton-physics), a [Linux Foundation](https://www.linuxfoundation.org) project which is community-built and maintained.
## Implementation Details & Dependencies
Specific implementation details are based on our [URDF to USD Conceptual Data Mapping](https://github.com/newton-physics/urdf-usd-converter/blob/main/docs/concept_mapping.md).
The output asset structure is based on NVIDIA's [Principles of Scalable Asset Structure in OpenUSD](https://docs.omniverse.nvidia.com/usd/latest/learn-openusd/independent/asset-structure-principles.html).
The implementation also leverages the following dependencies:
- NVIDIA's [OpenUSD Exchange SDK](https://docs.omniverse.nvidia.com/usd/code-docs/usd-exchange-sdk/latest/index.html) to author consistent & correct USD data.
- Pixar's OpenUSD python modules & native libraries (vendored via the `usd-exchange` wheel).
- [tinyobjloader](https://github.com/tinyobjloader/tinyobjloader), [pycollada](https://github.com/pycollada/pycollada), and [numpy-stl](https://numpy-stl.readthedocs.io) for parsing any mesh data referenced by the input URDF datasets.
# Get Started
To start using the converter, install the python wheel into a virtual environment using your favorite package manager:
```bash
python -m venv .venv
source .venv/bin/activate
pip install urdf-usd-converter
urdf_usd_converter /path/to/robot.urdf /path/to/usd_robot
```
See `urdf_usd_converter --help` for CLI arguments.
Alternatively, the same converter functionality can be accessed from the python module directly, which is useful when further transforming the USD data after conversion.
```python
import urdf_usd_converter
import usdex.core
from pxr import Sdf, Usd
converter = urdf_usd_converter.Converter()
asset: Sdf.AssetPath = converter.convert("/path/to/robot.urdf", "/path/to/usd_robot")
stage: Usd.Stage = Usd.Stage.Open(asset.path)
# modify further using Usd or usdex.core functionality
usdex.core.saveStage(stage, comment="modified after conversion")
```
## Specifying ROS packages
If a filename within a mesh or texture in the URDF file is specified as `package://<package_name>/<path>`, we must separately provide the actual path where the package is located.
`<package_name>` is the ROS package name and has a corresponding path.
`<path>` is a relative path within that package path.
The package path specified for `<package_name>` can be either a relative path or an absolute path.
If the package path is relative, it is resolved relative to the URDF file.
The converter also searches upward through the parent directories of the URDF file; if a file matching the specified path is found, that package name and path combination is used automatically.
If the ROS package path still cannot be found, we must manually specify the path for the package name.
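The resolution order described above can be sketched as follows. This is our own illustrative helper, not part of the converter's API:

```python
from pathlib import Path

def resolve_package_uri(uri, urdf_dir, packages):
    """Resolve a package://<package_name>/<path> URI (illustrative sketch only).

    `packages` maps package names to user-supplied paths, as given via
    --package on the CLI or `ros_packages` in Python.
    """
    name, _, rel = uri[len("package://"):].partition("/")
    if name in packages:
        root = Path(packages[name])
        if not root.is_absolute():
            # Relative package paths are resolved against the URDF's directory
            root = Path(urdf_dir) / root
        return root / rel
    # Otherwise, walk up from the URDF's directory looking for a match
    for parent in [Path(urdf_dir), *Path(urdf_dir).parents]:
        candidate = parent / name / rel
        if candidate.exists():
            return candidate
    raise FileNotFoundError(
        "cannot resolve %s; specify --package %s=<path>" % (uri, name))

resolve_package_uri(
    "package://robot_package/textures/body_image.png",
    "/path/to",  # directory containing robot.urdf
    {"robot_package": "/path/to/assets"},
)
```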
### CLI
Specify the path to the package name by using the `--package` argument in the CLI.
We can also specify multiple packages.
When a filename is specified in a URDF file as shown below, the "robot_package" following `package://` is the package name.
Each package name is assigned either an absolute path or a path relative to the URDF file.
```xml
<material name="body_mat">
<color rgba="1.0 1.0 1.0 1.0"/>
<texture filename="package://robot_package/textures/body_image.png"/>
</material>
```
When specifying the `--package` argument in the CLI as shown below, the actual path will be "/path/to/assets/textures/body_image.png".
```bash
urdf_usd_converter /path/to/robot.urdf /path/to/usd_robot --package robot_package=/path/to/assets
```
If multiple packages and paths exist, specify them as follows.
```bash
urdf_usd_converter /path/to/robot.urdf /path/to/usd_robot --package robot_package=/path/to/assets --package robot_foo=/path/to/foo
```
If the path contains spaces, please enclose it in double quotation marks.
### Python
When specifying a list of ROS package names and paths in Python, assign the package name to the "name" key and the path to the "path" key in a dictionary.
Specify the list of packages for this package in the `ros_packages` argument of `urdf_usd_converter.Converter`.
```python
import urdf_usd_converter
import usdex.core
from pxr import Sdf, Usd
packages = [
{"name": "robot_package", "path": "/path/to/assets"},
{"name": "robot_foo", "path": "/path/to/foo"},
]
converter = urdf_usd_converter.Converter(ros_packages=packages)
asset: Sdf.AssetPath = converter.convert("/path/to/robot.urdf", "/path/to/usd_robot")
```
## Loading the USD Asset
Once your asset is saved to storage, it can be loaded into an OpenUSD Ecosystem application.
We recommend starting with [usdview](https://docs.omniverse.nvidia.com/usd/latest/usdview/index.html), a simple graphics application to confirm the visual geometry & materials are working as expected. You can inspect any of the USD properties in this application, including the UsdPhysics properties.
> Tip: [OpenUSD Exchange Samples](https://github.com/NVIDIA-Omniverse/usd-exchange-samples) provides `./usdview.sh` and `.\usdview.bat` commandline tools which bootstrap usdview with the necessary third party dependencies.
However, you cannot start simulating in usdview, as there is no native simulation engine in this application.
To simulate this asset in Newton, call [newton.ModelBuilder.add_usd()](https://newton-physics.github.io/newton/api/_generated/newton.ModelBuilder.html#newton.ModelBuilder.add_usd) to parse the asset and add it to your Newton model.
Simulating in other UsdPhysics-enabled products (e.g. NVIDIA Omniverse, Unreal Engine) may provide mixed results. The rigid bodies are structured hierarchically, which maximal-coordinate solvers often do not support. In order to see faithful simulation in these applications, the USD asset will need to be modified to suit the expectations of each target runtime.
# Contribution Guidelines
Contributions from the community are welcome. See [CONTRIBUTING.md](https://github.com/newton-physics/urdf-usd-converter/blob/main/CONTRIBUTING.md) to learn about contributing via GitHub issues, as well as building the project from source and our development workflow.
General contribution guidelines for Newton repositories are available [here](https://github.com/newton-physics/newton-governance/blob/main/CONTRIBUTING.md).
# Community
For questions about this urdf-usd-converter, feel free to join or start a [GitHub Discussions](https://github.com/newton-physics/urdf-usd-converter/discussions).
For questions about OpenUSD Exchange SDK, use the [USD Exchange GitHub Discussions](https://github.com/NVIDIA-Omniverse/usd-exchange/discussions).
For general questions about OpenUSD itself, use the [Alliance for OpenUSD Forum](https://forum.aousd.org).
By participating in this community, you agree to abide by the Linux Foundation [Code of Conduct](https://lfprojects.org/policies/code-of-conduct/).
# References
- [URDF XML Docs](https://wiki.ros.org/urdf/XML)
- [NVIDIA OpenUSD Exchange SDK Docs](https://docs.omniverse.nvidia.com/usd/code-docs/usd-exchange-sdk)
- [OpenUSD API Docs](https://openusd.org/release/api/index.html)
- [OpenUSD User Docs](https://openusd.org/release/index.html)
- [NVIDIA OpenUSD Resources and Learning](https://developer.nvidia.com/usd)
# License
The urdf-usd-converter is provided under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0), as is the [OpenUSD Exchange SDK](https://docs.omniverse.nvidia.com/usd/code-docs/usd-exchange-sdk/latest/docs/licenses.html).
| text/markdown | Newton Developers | null | null | null | Apache-2.0 | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"newton-usd-schemas>=0.1.0rc2",
"numpy-stl>=3.2",
"pycollada>=0.9.2",
"tinyobjloader>=2.0.0rc13",
"usd-exchange>=2.2.0"
] | [] | [] | [] | [
"Documentation, https://github.com/newton-physics/urdf-usd-converter/#readme",
"Repository, https://github.com/newton-physics/urdf-usd-converter",
"Changelog, https://github.com/newton-physics/urdf-usd-converter/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T18:09:02.093506 | urdf_usd_converter-0.1.0rc1.tar.gz | 200,364 | 91/82/b45d08be42b4f451beb6aa5144a5599fe4ea6750cc4bbdc2438d7c3eb9e0/urdf_usd_converter-0.1.0rc1.tar.gz | source | sdist | null | false | 20d353933dfed5ada9c2b2beab18d363 | 90aa1797fe5a757690e1481d49969a64621af5e68dd3aec236a19eb60f12b7d3 | 9182b45d08be42b4f451beb6aa5144a5599fe4ea6750cc4bbdc2438d7c3eb9e0 | null | [
"LICENSE.md"
] | 238 |
2.4 | inferedge-moss | 1.0.0b15 | Python SDK for semantic search with on-device AI capabilities | # Moss client library for Python
`inferedge-moss` enables **private, on-device semantic search** in your Python applications with cloud storage capabilities.
Built for developers who want **instant, memory-efficient, privacy-first AI features** with seamless cloud integration.
## ✨ Features
- ⚡ **On-Device Vector Search** - Sub-millisecond retrieval with zero network latency
- 🔍 **Semantic, Keyword & Hybrid Search** - Embedding search blended with Keyword matching
- ☁️ **Cloud Storage Integration** - Automatic index synchronization with cloud storage
- 📦 **Multi-Index Support** - Manage multiple isolated search spaces
- 🛡️ **Privacy-First by Design** - Computation happens locally, only indexes sync to cloud
- 🚀 **High-Performance Rust Core** - Built on optimized Rust bindings for maximum speed
- 🧠 **Custom Embedding Overrides** - Provide your own document and query vectors when you need full control
## 📦 Installation
```bash
pip install inferedge-moss
```
## 🚀 Quick Start
```python
import asyncio
from inferedge_moss import MossClient, DocumentInfo, QueryOptions
async def main():
# Initialize search client with project credentials
client = MossClient("your-project-id", "your-project-key")
# Prepare documents to index
documents = [
DocumentInfo(
id="doc1",
text="How do I track my order? You can track your order by logging into your account.",
metadata={"category": "shipping"}
),
DocumentInfo(
id="doc2",
text="What is your return policy? We offer a 30-day return policy for most items.",
metadata={"category": "returns"}
),
DocumentInfo(
id="doc3",
text="How can I change my shipping address? Contact our customer service team.",
metadata={"category": "support"}
)
]
# Create an index with documents (syncs to cloud)
index_name = "faqs"
await client.create_index(index_name, documents) # Defaults to moss-minilm
print("Index created and synced to cloud!")
# Load the index (from cloud or local cache)
await client.load_index(index_name)
# Search the index
result = await client.query(
index_name,
"How do I return a damaged product?",
QueryOptions(top_k=3, alpha=0.6),
)
# Display results
print(f"Query: {result.query}")
for doc in result.docs:
print(f"Score: {doc.score:.4f}")
print(f"ID: {doc.id}")
print(f"Text: {doc.text}")
print("---")
asyncio.run(main())
```
## 🔥 Example Use Cases
- Smart knowledge base search with cloud backup
- Realtime Voice AI agents with persistent indexes
- Personal note-taking search with sync across devices
- Private in-app AI features with cloud storage
- Local semantic search in edge devices with cloud fallback
## Available Models
- `moss-minilm`: Lightweight model optimized for speed and efficiency
- `moss-mediumlm`: Balanced model offering higher accuracy with reasonable performance
## 🔧 Getting Started
### Prerequisites
- Python 3.10 or higher
- Valid InferEdge project credentials
### Environment Setup
1. **Install the package:**
```bash
pip install inferedge-moss
```
2. **Get your credentials:**
Sign up at [InferEdge Platform](https://platform.inferedge.dev) to get your `project_id` and `project_key`.
3. **Set up environment variables (optional):**
```bash
export MOSS_PROJECT_ID="your-project-id"
export MOSS_PROJECT_KEY="your-project-key"
```
### Basic Usage
```python
import asyncio
from inferedge_moss import MossClient, DocumentInfo, QueryOptions
async def main():
# Initialize client
client = MossClient("your-project-id", "your-project-key")
# Create and populate an index
documents = [
DocumentInfo(id="1", text="Python is a programming language"),
DocumentInfo(id="2", text="Machine learning with Python is popular"),
]
await client.create_index("my-docs", documents)
await client.load_index("my-docs")
# Search
results = await client.query(
"my-docs",
"programming language",
QueryOptions(alpha=1.0),
)
for doc in results.docs:
print(f"{doc.id}: {doc.text} (score: {doc.score:.3f})")
asyncio.run(main())
```
### Hybrid Search Controls
`alpha` lets you decide how much weight to give semantic similarity versus keyword relevance when running `query()`:
```python
# Pure keyword search
await client.query("my-docs", "programming language", QueryOptions(alpha=0.0))
# Mixed results (default 0.8 => semantic heavy)
await client.query("my-docs", "programming language")
# Pure embedding search
await client.query("my-docs", "programming language", QueryOptions(alpha=1.0))
```
Pick any value between 0.0 and 1.0 to tune the blend for your use case.
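Conceptually, `alpha` acts as a convex blend of the two relevance signals. Moss's internal scoring is not documented, so treat the following as an illustrative sketch; the function is ours, not part of the SDK:

```python
def hybrid_score(semantic: float, keyword: float, alpha: float = 0.8) -> float:
    """Blend semantic and keyword relevance; alpha=0.8 mirrors the default."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be between 0.0 and 1.0")
    return alpha * semantic + (1.0 - alpha) * keyword

print(hybrid_score(0.9, 0.2, alpha=1.0))  # 0.9 -- pure embedding relevance
print(hybrid_score(0.9, 0.2, alpha=0.0))  # 0.2 -- pure keyword relevance
```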
## 🧠 Providing custom embeddings
Already using your own embedding model? Supply vectors directly when managing
indexes and queries:
```python
import asyncio
from inferedge_moss import DocumentInfo, MossClient, QueryOptions
def my_embedding_model(text: str) -> list[float]:
"""Placeholder for your custom embedding generator."""
...
async def main() -> None:
client = MossClient("your-project-id", "your-project-key")
documents = [
DocumentInfo(
id="doc-1",
text="Attach a caller-provided embedding.",
embedding=my_embedding_model("doc-1"),
),
DocumentInfo(
id="doc-2",
text="Fallback to the built-in model when the field is omitted.",
embedding=my_embedding_model("doc-2"),
),
]
await client.create_index("custom-embeddings", documents) # Defaults to moss-minilm
await client.load_index("custom-embeddings")
results = await client.query(
"custom-embeddings",
"<query text>",
QueryOptions(embedding=my_embedding_model("<query text>"), top_k=10),
)
print(results.docs[0].id, results.docs[0].score)
asyncio.run(main())
```
Leaving the model argument undefined defaults to `moss-minilm`.
Pass `QueryOptions` to reuse your own embeddings or to override `top_k` on a per-query basis.
## 📄 License
This package is licensed under the [PolyForm Shield License 1.0.0](./LICENSE.txt).
- ✅ Free for testing, evaluation, internal use, and modifications.
- ❌ Not permitted for production or competing commercial use.
- 📩 For commercial licenses, contact: <contact@usemoss.dev>
## 📬 Contact
For support, commercial licensing, or partnership inquiries, contact us: [contact@usemoss.dev](mailto:contact@usemoss.dev)
| text/markdown | null | "InferEdge Inc." <contact@usemoss.dev> | null | null | null | search, semantic, embeddings, vector, usemoss, moss | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"transformers>=4.21.0",
"numpy>=1.26.4",
"typing-extensions>=4.0.0",
"httpx>=0.25.0",
"onnxruntime>=1.12.0; python_version < \"3.14\"",
"inferedge-moss-core==0.4.2",
"pytest>=8.4.2; extra == \"dev\"",
"pytest-asyncio>=1.2.0; extra == \"dev\"",
"tox>=4.0.0; extra == \"dev\"",
"black>=25.9.0; extra ... | [] | [] | [] | [
"Homepage, https://github.com/usemoss/moss-samples",
"Repository, https://github.com/usemoss/moss-samples",
"Documentation, https://docs.usemoss.dev/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T18:08:26.195350 | inferedge_moss-1.0.0b15.tar.gz | 31,482 | 3e/5d/f3e39c8bde89de79cd94db614f003e5bf603480d25de2bc454b89f99d102/inferedge_moss-1.0.0b15.tar.gz | source | sdist | null | false | 12f99096bbb123ae6c7d9faa7f841abd | cb3e82ab201c7267105499a89b35e45d1e4f3975cdbe2023943bde1be832e4b4 | 3e5df3e39c8bde89de79cd94db614f003e5bf603480d25de2bc454b89f99d102 | null | [
"LICENSE.txt"
] | 251 |
2.4 | dbmasta | 0.1.22 | A simple MariaDB/Postgres client based on SQLAlchemy core | # dbmasta (simple mariadb/postgres interface)
## Overview
This Python package provides a simple interface for interacting with MariaDB databases using SQLAlchemy Core. It abstracts some common database operations into more manageable Python methods, allowing for easy database queries, inserts, updates, and deletes.
## Installation
To install this package, run the following pip command. **Note: this requires SQLAlchemy 2.0.27 or greater**
```bash
pip install dbmasta
```
## Basic Usage
### Configuration
First, configure the database client with the necessary credentials:
```python
from dbmasta import DataBase, AsyncDataBase
# Initialize the database client
db = DataBase(
dict(
username='username',
password='password',
host='host',
port=3306,
default_database='database_name'
)
)
# Async Version
db = AsyncDataBase(
dict(
username='username',
password='password',
host='host',
port=3306,
default_database='database_name'
)
)
# Initialize using environment variables
db = DataBase.env()
# Async Version
db = AsyncDataBase.env()
```
### Executing Queries
You can execute a simple SELECT query to fetch data:
```python
import datetime as dt
# Create parameters
params = {
"date": db.before(dt.date(2024,1,1), inclusive=True)
}
# Execute the query
dbr = db.select("database", "table", params)
# Examine the results
if dbr.successful:
print(dbr.records)
else:
print(dbr.error_info)
```
### Complex Queries
The following parameters build a more complex query; the SQL it generates is shown further below:
```python
import datetime as dt
# Create parameters
params = {
"_OR_": db.or_(
[
{"date": db.after(dt.date(2020,1,1)), "category": "sales"},
{"date": db.before(dt.date(2020,1,1)), "category": db.not_(db.in_, ["purchases","adjustments","sales"])},
]
),
"_AND_": db.and_(
[
{"keyfield": db.starts_with("SJ")},
{"keyfield": db.not_(db.ends_with("2E"))}
]
),
"status": "under_review"
}
# Execute the query
dbr = db.select("database", "table", params)
# Examine the results
if dbr.successful:
print(dbr.records)
else:
print(dbr.error_info)
```
The raw text of the query is available on the `raw_query` attribute of the `DataBaseResponse`
object returned by `DataBase.select`. For the example above, it would read:
```sql
SELECT * FROM `database`.`table`
WHERE ((`date` > '2020-01-01' and `category`='sales') or
(`date` < '2020-01-01' and `category` not in ('purchases','adjustments','sales')))
AND `keyfield` LIKE 'SJ%' AND `keyfield` NOT LIKE '%2E'
AND `status`='under_review';
```
Or in simple terms...
Get all records `under_review` where the keyfield starts with `SJ`, but doesn't end with `2E`. Pull these if either:
- dated after `2020-01-01` and categorized as a `sale`
- dated before `2020-01-01` and not categorized as `sale`,`purchase` or `adjustment`.
### Result Modification from `DataBase.select`
In addition to complex conditions for filtering records, you can:
- sort records
```python
db.select(..., order_by="column_name", reverse=True)
```
- limit and offset results
```python
# for offset pagination
db.select(..., limit=100, offset=0)
```
- filter columns
```python
# only receive the data for the fields you provide
db.select(..., columns=["keyfield", "name", "date"])
```
- get textual output (without executing)
```python
# this will not execute the query, but will return the raw query needed to execute
raw_textual_query = db.select(..., textual=True)
print(raw_textual_query)
new_query = f"INSERT INTO `filteredtable` ({raw_textual_query[:-1]});"
dbr = db.run(new_query)
```
- get model output by providing a model factory
```python
from pydantic import BaseModel
import datetime as dt
class Record(BaseModel):
keyfield: str
date: dt.date
status: str
model_factory = lambda row: Record(**row)
# each row is passed through the factory you provide
dbr = db.select(..., response_model=model_factory)
# each record will be an instance of Record
```
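The `limit`/`offset` options compose into a simple pagination loop. Here is a sketch of the pattern, with a stub standing in for the real `db.select("database", "table", params, limit=limit, offset=offset).records` call (hypothetical wiring — adapt to your own response handling) so the logic runs standalone:

```python
def paginate(fetch_page, limit=100):
    """Yield all records by walking limit/offset pages until exhausted.

    `fetch_page(limit, offset)` stands in for a db.select(...) call with
    limit/offset arguments, returning the page's records.
    """
    offset = 0
    while True:
        records = fetch_page(limit, offset)
        yield from records
        if len(records) < limit:  # short page => last page
            break
        offset += limit

# Stub "table" demonstrating the loop
rows = list(range(250))
fetch = lambda limit, offset: rows[offset:offset + limit]
assert list(paginate(fetch, limit=100)) == rows
```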
| text/markdown | null | Matt Owen <matt@dealerclear.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Prog... | [] | null | null | >=3.7 | [] | [] | [] | [
"sqlalchemy>=2.0.41",
"aiomysql",
"pymysql",
"asyncmy",
"asyncpg",
"psycopg-binary"
] | [] | [] | [] | [
"Homepage, https://github.com/mastamatto/dbmasta",
"Repository, https://github.com/mastamatto/dbmasta",
"Issues, https://github.com/mastamatto/dbmasta/issues"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-18T18:07:44.896692 | dbmasta-0.1.22.tar.gz | 33,899 | 9f/99/9539829bcde402f336631d2c2022650ba087702c7d3ae86fb54df26141fc/dbmasta-0.1.22.tar.gz | source | sdist | null | false | 81521a15462a2b81727906c352ad000b | f7efae6228844aaf8dc03aeb91e8aac4182479a3565ca50cffdb0ea9e38ff538 | 9f999539829bcde402f336631d2c2022650ba087702c7d3ae86fb54df26141fc | null | [] | 258 |
2.4 | agentgog | 0.1.16 | Add your description here | # agentgog
CLI assistant that classifies short messages into **CALENDAR**, **TASK**, or **MEMO** and then:
- **CALENDAR** → extracts event details and inserts into **Google Calendar**
- **TASK** → extracts task details and inserts into **Google Tasks**
- **MEMO** → extracts memo details and saves to **Simplenote**
It also provides a general **chat** command, plus extra utilities (**translator**, **qrpayment**, **codeagent**).
## Install
This repo uses `uv`.
```bash
uv sync
```
Run from source:
```bash
uv run agentgog --help
```
## AI Provider setup
### OpenRouter (default)
Set API key via env var (preferred):
```bash
export OPENROUTER_API_KEY="<your_key>"
```
Or put the key in:
- `~/.openai_openrouter.key`
### Ollama (local)
Install and start Ollama:
```bash
ollama serve
```
Then use `-p ollama`.
## Google setup (Calendar + Tasks)
1. Create OAuth client credentials in Google Cloud Console.
2. Save the JSON to:
- `~/.config/google/credentials.json`
On first run, a browser window opens for authorization and a token is cached at:
- `~/.config/google/token.json`
If you ever change scopes and get “insufficient authentication scopes”, delete the token and re-run:
```bash
rm -f ~/.config/google/token.json
```
## Simplenote setup (MEMO)
Set credentials via environment variables:
```bash
export SIMPLENOTE_LOCAL_USER="user@example.com"
export SIMPLENOTE_LOCAL_PASSWORD="<your_password>"
```
## Usage
### Classify and execute
```bash
# Calendar event → Google Calendar
uv run agentgog classify "Meeting with Alice tomorrow at 10am"
# Task → Google Tasks (list: "My Tasks"; falls back to default)
uv run agentgog classify "Buy groceries tomorrow"
# Memo → Simplenote
uv run agentgog classify "Remember that my passport number is 123456789"
```
### Choose provider
```bash
# Use OpenRouter explicitly
uv run agentgog classify "Buy groceries" -p openrouter
# Use local Ollama
uv run agentgog classify "Buy groceries" -p ollama
```
### Chat
Single prompt:
```bash
uv run agentgog chat "Explain the difference between TCP and UDP"
```
Interactive chat (keeps conversation history in-memory):
```bash
uv run agentgog chat -i
```
Interactive commands:
- Quit: `/q`, `/quit`, `/exit`, `quit`, `exit`
- Clear conversation: `/c`, `/clear`, `/reset`, `/r`
Input line editing + persistent command history:
- History file: `~/.agentgog_history`
### Translator (SRT → Czech)
Uses `smolagents` (OpenRouter only):
```bash
uv run agentgog translator -s subtitles.srt -m google/gemma-3-27b-it:free
```
### QR payment
Uses `smolagents` (OpenRouter only):
```bash
uv run agentgog qrpayment "Transfer 500 CZK to account 1234567890/0300" -m google/gemma-3-27b-it:free
```
### Code agent
Uses `smolagents`:
```bash
uv run agentgog codeagent "Write a Python function to compute fibonacci" -m google/gemma-3-27b-it:free -x 10
```
## Logging
- Log file: `~/agentgog.log`
| text/markdown | null | coder <jaromrax@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"beautifulsoup4>=4.14.3",
"click",
"console",
"google-api-python-client",
"google-auth-httplib2",
"google-auth-oauthlib",
"litellm>=1.81.3",
"matplotlib>=3.10.8",
"numpy>=2.4.1",
"pandas>=3.0.0",
"prompt-toolkit",
"qrcode>=8.2",
"requests",
"simplenote>=2.1.4",
"smolagents[openai]>=1.24.... | [] | [] | [] | [] | uv/0.9.7 | 2026-02-18T18:06:56.946434 | agentgog-0.1.16.tar.gz | 90,641 | 58/d6/e46c7d7e17791a3718003631041063fc2e5e50bed033c58a67605ccda17b/agentgog-0.1.16.tar.gz | source | sdist | null | false | 75f6546fbbee8d4a87585c25c3504ea8 | a36fc4973fdc53b0866e3c8cdd97973aba92b065a3ac7bef5e4cfee34328dff8 | 58d6e46c7d7e17791a3718003631041063fc2e5e50bed033c58a67605ccda17b | null | [] | 259 |
2.4 | PyAnimCLI | 0.1.1 | Animações de terminal em Python (progress bar, spinner, loaders) | # PyAnim
Lib de animações para terminal em Python.
## Instalação
```bash
pip install PyAnimCLI
```
| text/markdown | null | João <joao6ag@gmail.com> | null | null | MIT | terminal, animation, progressbar, cli | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T18:05:45.231173 | pyanimcli-0.1.1.tar.gz | 2,470 | f5/a4/9ccd759d93c930c4e582673397ffefedaabcb38981d17d60f60ded62edd7/pyanimcli-0.1.1.tar.gz | source | sdist | null | false | 093e63574071d2e125cdc94785900ef7 | de33caa7d8318aff5d94b7df146e4009e25d6660f78a2c579da922d5a0877fbe | f5a49ccd759d93c930c4e582673397ffefedaabcb38981d17d60f60ded62edd7 | null | [
"LICENSE"
] | 0 |
2.1 | meridian-tfp-frozen | 0.26.0.1 | Probabilistic modeling and statistical inference in TensorFlow | # TensorFlow Probability
TensorFlow Probability is a library for probabilistic reasoning and statistical
analysis in TensorFlow. As part of the TensorFlow ecosystem, TensorFlow
Probability provides integration of probabilistic methods with deep networks,
gradient-based inference via automatic differentiation, and scalability to
large datasets and models via hardware acceleration (e.g., GPUs) and distributed
computation.
__TFP also works as "Tensor-friendly Probability" in pure JAX!__:
`from tensorflow_probability.substrates import jax as tfp` --
Learn more [here](https://www.tensorflow.org/probability/examples/TensorFlow_Probability_on_JAX).
Our probabilistic machine learning tools are structured as follows.
__Layer 0: TensorFlow.__ Numerical operations. In particular, the LinearOperator
class enables matrix-free implementations that can exploit special structure
(diagonal, low-rank, etc.) for efficient computation. It is built and maintained
by the TensorFlow Probability team and is now part of
[`tf.linalg`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/ops/linalg)
in core TF.
__Layer 1: Statistical Building Blocks__
* Distributions ([`tfp.distributions`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/distributions)):
A large collection of probability
distributions and related statistics with batch and
[broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
semantics. See the
[Distributions Tutorial](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb).
* Bijectors ([`tfp.bijectors`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/bijectors)):
Reversible and composable transformations of random variables. Bijectors
provide a rich class of transformed distributions, from classical examples
like the
[log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution)
to sophisticated deep learning models such as
[masked autoregressive flows](https://arxiv.org/abs/1705.07057).
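The log-normal example captures the core bijector idea: push a base distribution through an invertible map and correct the density with the Jacobian of the inverse. A plain-Python sketch of that change-of-variables computation (for intuition only, not TFP code):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Normal density p_X(x)
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def lognormal_pdf(y, mu=0.0, sigma=1.0):
    # Y = exp(X): p_Y(y) = p_X(log y) * |d(log y)/dy| = p_X(log y) / y
    return normal_pdf(math.log(y), mu, sigma) / y

# Agrees with the textbook log-normal density at y = 2
y = 2.0
textbook = math.exp(-(math.log(y) ** 2) / 2) / (y * math.sqrt(2 * math.pi))
assert abs(lognormal_pdf(y) - textbook) < 1e-12
```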
__Layer 2: Model Building__
* Joint Distributions (e.g., [`tfp.distributions.JointDistributionSequential`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/distributions/joint_distribution_sequential.py)):
Joint distributions over one or more possibly-interdependent distributions.
For an introduction to modeling with TFP's `JointDistribution`s, check out
[this colab](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Modeling_with_JointDistribution.ipynb)
* Probabilistic Layers ([`tfp.layers`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/layers)):
Neural network layers with uncertainty over the functions they represent,
extending TensorFlow Layers.
__Layer 3: Probabilistic Inference__
* Markov chain Monte Carlo ([`tfp.mcmc`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/mcmc)):
Algorithms for approximating integrals via sampling. Includes
[Hamiltonian Monte Carlo](https://en.wikipedia.org/wiki/Hamiltonian_Monte_Carlo),
random-walk Metropolis-Hastings, and the ability to build custom transition
kernels.
* Variational Inference ([`tfp.vi`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/vi)):
Algorithms for approximating integrals via optimization.
* Optimizers ([`tfp.optimizer`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/optimizer)):
Stochastic optimization methods, extending TensorFlow Optimizers. Includes
[Stochastic Gradient Langevin Dynamics](http://www.icml-2011.org/papers/398_icmlpaper.pdf).
* Monte Carlo ([`tfp.monte_carlo`](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/python/monte_carlo)):
Tools for computing Monte Carlo expectations.
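For intuition about what `tfp.mcmc` automates, here is a toy random-walk Metropolis sampler targeting a standard normal, in plain Python (illustrative only — the real kernels handle gradients, step-size adaptation, and batched chains):

```python
import math
import random

def rw_metropolis(log_prob, x0=0.0, steps=5000, scale=1.0, seed=0):
    """Random-walk Metropolis: propose x + N(0, scale), accept with
    probability min(1, p(proposal) / p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Target: standard normal, known only up to a constant
samples = rw_metropolis(lambda x: -0.5 * x * x)
mean = sum(samples) / len(samples)
assert abs(mean) < 0.3  # loose check: the chain centers near 0
```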
TensorFlow Probability is under active development. Interfaces may change at any
time.
## Examples
See [`tensorflow_probability/examples/`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/)
for end-to-end examples. It includes tutorial notebooks such as:
* [Linear Mixed Effects Models](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb).
A hierarchical linear model for sharing statistical strength across examples.
* [Eight Schools](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb).
A hierarchical normal model for exchangeable treatment effects.
* [Hierarchical Linear Models](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb).
Hierarchical linear models compared among TensorFlow Probability, R, and Stan.
* [Bayesian Gaussian Mixture Models](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb).
Clustering with a probabilistic generative model.
* [Probabilistic Principal Components Analysis](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_PCA.ipynb).
Dimensionality reduction with latent variables.
* [Gaussian Copulas](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb).
Probability distributions for capturing dependence across random variables.
* [TensorFlow Distributions: A Gentle Introduction](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb).
Introduction to TensorFlow Distributions.
* [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb).
How to distinguish between samples, batches, and events for arbitrarily shaped
probabilistic computations.
* [TensorFlow Probability Case Study: Covariance Estimation](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_Case_Study_Covariance_Estimation.ipynb).
A user's case study in applying TensorFlow Probability to estimate covariances.
It also includes example scripts such as:
* [Variational Autoencoder](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/vae.py).
Representation learning with a latent code and variational inference.
* [Vector-Quantized Autoencoder](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/vq_vae.py).
Discrete representation learning with vector quantization.
* [Disentangled Sequential Variational Autoencoder](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/disentangled_vae.py)
Disentangled representation learning over sequences with variational inference.
* [Bayesian Neural Networks](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/bayesian_neural_network.py).
Neural networks with uncertainty over their weights.
* [Bayesian Logistic Regression](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/logistic_regression.py).
Bayesian inference for binary classification.
## Installation
For additional details on installing TensorFlow, guidance installing
prerequisites, and (optionally) setting up virtual environments, see the
[TensorFlow installation guide](https://www.tensorflow.org/install).
### Stable Builds
To install the latest stable version, run the following:
```shell
# Notes:
# - The `--upgrade` flag ensures you'll get the latest version.
# - The `--user` flag ensures the packages are installed to your user directory
# rather than the system directory.
# - TensorFlow 2 packages require a pip >= 19.0
python -m pip install --upgrade --user pip
python -m pip install --upgrade --user tensorflow tensorflow_probability
```
For CPU-only usage (and a smaller install), install with `tensorflow-cpu`.
To use a pre-2.0 version of TensorFlow, run:
```shell
python -m pip install --upgrade --user "tensorflow<2" "tensorflow_probability<0.9"
```
Note: Since [TensorFlow](https://www.tensorflow.org/install) is *not* included
as a dependency of the TensorFlow Probability package (in `setup.py`), you must
explicitly install the TensorFlow package (`tensorflow` or `tensorflow-cpu`).
This allows us to maintain one package instead of separate packages for CPU and
GPU-enabled TensorFlow. See the
[TFP release notes](https://github.com/tensorflow/probability/releases) for more
details about dependencies between TensorFlow and TensorFlow Probability.
### Nightly Builds
There are also nightly builds of TensorFlow Probability under the pip package
`tfp-nightly`, which depends on one of `tf-nightly` or `tf-nightly-cpu`.
Nightly builds include newer features, but may be less stable than the
versioned releases. Both stable and nightly docs are available
[here](https://www.tensorflow.org/probability/api_docs/python/tfp?version=nightly).
```shell
python -m pip install --upgrade --user tf-nightly tfp-nightly
```
### Installing from Source
You can also install from source. This requires the [Bazel](
https://bazel.build/) build system. It is highly recommended that you install
the nightly build of TensorFlow (`tf-nightly`) before trying to build
TensorFlow Probability from source. The most recent version of Bazel that TFP
currently supports is 6.4.0; support for 7.0.0+ is WIP.
```shell
# sudo apt-get install bazel git python-pip # Ubuntu; others, see above links.
python -m pip install --upgrade --user tf-nightly
git clone https://github.com/tensorflow/probability.git
cd probability
bazel build --copt=-O3 --copt=-march=native :pip_pkg
PKGDIR=$(mktemp -d)
./bazel-bin/pip_pkg $PKGDIR
python -m pip install --upgrade --user $PKGDIR/*.whl
```
## Community
As part of TensorFlow, we're committed to fostering an open and welcoming
environment.
* [Stack Overflow](https://stackoverflow.com/questions/tagged/tensorflow): Ask
or answer technical questions.
* [GitHub](https://github.com/tensorflow/probability/issues): Report bugs or
make feature requests.
* [TensorFlow Blog](https://blog.tensorflow.org/): Stay up to date on content
from the TensorFlow team and best articles from the community.
* [Youtube Channel](http://youtube.com/tensorflow/): Follow TensorFlow shows.
* [tfprobability@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/tfprobability):
Open mailing list for discussion and questions.
See the [TensorFlow Community](https://www.tensorflow.org/community/) page for
more details. Check out our latest publicity here:
+ [Coffee with a Googler: Probabilistic Machine Learning in TensorFlow](
https://www.youtube.com/watch?v=BjUkL8DFH5Q)
+ [Introducing TensorFlow Probability](
https://medium.com/tensorflow/introducing-tensorflow-probability-dca4c304e245)
## Contributing
We're eager to collaborate with you! See [`CONTRIBUTING.md`](CONTRIBUTING.md)
for a guide on how to contribute. This project adheres to TensorFlow's
[code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to
uphold this code.
## References
If you use TensorFlow Probability in a paper, please cite:
+ _TensorFlow Distributions._ Joshua V. Dillon, Ian Langmore, Dustin Tran,
Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt
Hoffman, Rif A. Saurous.
[arXiv preprint arXiv:1711.10604, 2017](https://arxiv.org/abs/1711.10604).
(We're aware there's a lot more to TensorFlow Probability than Distributions, but the Distributions paper lays out our vision and is a fine thing to cite for now.)
| text/markdown | Google LLC | no-reply@google.com | null | null | Apache 2.0 | tensorflow probability statistics bayesian machine learning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language ::... | [] | http://github.com/tensorflow/probability | null | >=3.9 | [] | [] | [] | [
"absl-py",
"six>=1.10.0",
"numpy>=1.13.3",
"decorator",
"cloudpickle>=1.3",
"gast>=0.3.2",
"dm-tree",
"jax; extra == \"jax\"",
"jaxlib; extra == \"jax\"",
"tf-keras-nightly; extra == \"tf\"",
"tfds-nightly; extra == \"tfds\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:05:26.956251 | meridian_tfp_frozen-0.26.0.1-py2.py3-none-any.whl | 6,975,558 | 62/11/a106a232bb1ea8936bd19ef5b63481bca44f39600f45e2867f42e951da70/meridian_tfp_frozen-0.26.0.1-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | 8c123fc7ee8837ea57df687e00add41f | 16e6c4e428a469a87a85206e47fdb2a7ac7d01ae6e7c15981322dff86f0cab06 | 6211a106a232bb1ea8936bd19ef5b63481bca44f39600f45e2867f42e951da70 | null | [] | 141 |
2.4 | svg-path-to-shapely | 0.1.0 | Library for loading SVG paths into shapely objects. | # Library for Loading SVG Paths into Shapely Objects
This library provides some utility functions to find all paths in a SVG document and create [`shapely`](https://shapely.readthedocs.io) geometry objects from them.
It uses [`svg.path`](https://github.com/regebro/svg.path), [`shapely`](https://shapely.readthedocs.io) and [`numpy`](https://numpy.org) under the hood.
## Installation
The package is installable via [PyPI](https://pypi.org/project/svg-path-to-shapely).
## Basic Usage
The intended workflow is as follows.
First, load an SVG document and use the `find_all_paths_in_svg` function to query the element tree for `path` elements featuring a `d` attribute.
The optional argument `with_namespace` determines whether the SVG namespace shall be respected in the query (if false, all `path` elements are found regardless of namespace).
As the first argument you may give a string containing SVG code (which will be parsed by [`xml.etree.ElementTree`](https://docs.python.org/3/library/xml.etree.elementtree.html)), or already parsed `xml.etree.ElementTree.ElementTree`/`xml.etree.ElementTree.Element` instances.
Alternatively you may read directly from file with `find_all_paths_in_file`.
```python
from svg_path_to_shapely import find_all_paths_in_svg
paths = find_all_paths_in_svg("your svg code...", with_namespace=True)
```
or
```python
from svg_path_to_shapely import find_all_paths_in_svg
from xml.etree.ElementTree import parse
et = parse("some path-like...")
paths = find_all_paths_in_svg(et, with_namespace=True)
```
or
```python
from svg_path_to_shapely import find_all_paths_in_svg
from xml.etree.ElementTree import fromstring
et = fromstring("your svg code...")
paths = find_all_paths_in_svg(et, with_namespace=True)
```
or
```python
from svg_path_to_shapely import find_all_paths_in_file
paths = find_all_paths_in_file("some path-like...", with_namespace=True)
```
`paths` will then be a list of `xml.etree.ElementTree.Element` instances representing all `path` elements in the document.
Then, create [`svg.path.Path`](https://github.com/regebro/svg.path) instances from those by use of `parse_path`.
This step is intentionally left explicit to be able to query for additional attributes as needed (such as `id`) on the path elements.
You may supply the element instance directly or a string with a valid value of the `d` attribute.
```python
from svg_path_to_shapely import parse_path
parsed = [parse_path(p) for p in paths]
```
Last, convert those path instances to [`shapely`](https://shapely.readthedocs.io) geometries using `convert_path_to_line_string`.
This function may return a `LineString`, `LinearRing` or `MultiLineString`, depending on whether the path is open, closed or multi-part (with multiple `M`/`m` directives), respectively.
The optional parameter `count` determines the number of evenly spaced discrete points to approximate arcs and Bezier curves with (as `shapely` does only know linear strings).
```python
from svg_path_to_shapely import convert_path_to_line_string
geoms = [convert_path_to_line_string(p, count=11) for p in parsed]
```
The latter will check if the path is multi-part and split it accordingly.
To spare this effort, if you know the path is single-part, you may use `convert_single_part_path_to_line_string` instead.
This will essentially treat multiple move directives (`M`/`m`) as if they were line directives (`L`/`l`).
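The `count` parameter reflects how curved segments become linear strings: sample the curve at evenly spaced parameter values and connect the points with straight lines. An illustrative cubic Bezier sampler in plain Python (the points are complex numbers, as in `svg.path`; this is a sketch of the idea, not the library's actual code):

```python
def cubic_bezier_point(p0, p1, p2, p3, t):
    """Bernstein-polynomial evaluation of a cubic Bezier at t in [0, 1]."""
    s = 1.0 - t
    return s**3 * p0 + 3 * s**2 * t * p1 + 3 * s * t**2 * p2 + t**3 * p3

def discretize(p0, p1, p2, p3, count=11):
    """Approximate the curve with `count` evenly spaced points."""
    return [cubic_bezier_point(p0, p1, p2, p3, i / (count - 1))
            for i in range(count)]

pts = discretize(0 + 0j, 1 + 2j, 3 + 2j, 4 + 0j, count=11)
assert pts[0] == 0 + 0j and pts[-1] == 4 + 0j  # endpoints are exact
assert len(pts) == 11
```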
The library exports some more of its lower-level functions.
Have a look into their docstrings for information on how to use them.
## Application Examples
You find example SVG documents and respective code in the `test` directory.
### Powder Particle Shape Analysis
A micrograph of copper powder particles was imported in [Inkscape](https://inkscape.org) and paths were manually drawn around the particles to determine their contours.

The paths were extracted and converted to `shapely` geometries to be able to analyse their geometric properties further (for the sake of example just centered at (0, 0) and plotted again).

## Building and Testing
Project dependencies and build process are maintained using [`uv`](https://docs.astral.sh/uv).
Build the package using `uv build`.
Test are run using `uv run pytest`.
## License
The software is distributed under the terms of the [MIT License](LICENSE).
## Contributing
Issues and pull requests are welcome without any special contribution guidelines.
| text/markdown | Max Weiner | Max Weiner <max.weiner@posteo.de> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.0",
"shapely>=2.0",
"svg-path>=7.0"
] | [] | [] | [] | [
"Homepage, https://codeberg.org/axtimhaus/svg-path-to-shapely",
"Repository, https://codeberg.org/axtimhaus/svg-path-to-shapely"
] | uv/0.9.22 {"installer":{"name":"uv","version":"0.9.22","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Manjaro Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T18:05:11.726430 | svg_path_to_shapely-0.1.0.tar.gz | 4,289 | 9d/2f/d72c9df5a97e8136c1d99c99dc1ec2bfd5a7bc5adb057ca572f3b59a9dec/svg_path_to_shapely-0.1.0.tar.gz | source | sdist | null | false | 722308d7d9eefba3d1ad1a54fdf4fa78 | 2604252754b4af9df7b1bedd2da0c41f2c57f11fb6f8e5babfa59b77c6ad5f07 | 9d2fd72c9df5a97e8136c1d99c99dc1ec2bfd5a7bc5adb057ca572f3b59a9dec | MIT | [] | 237 |
2.4 | trytond-stock-package-shipping | 7.6.2 | The package shipping module of the Tryton application platform. | Stock Package Shipping Module
#############################
This module is the base module required to interact with shipping service
providers.
Carrier
*******
The Carrier model adds the following field:
- *Shipping Service*: The shipping service of the carrier.
This field is programmatically filled by the modules providing support for
shipping companies.
Package Type
************
The following fields have been added to the Package Type model:
- *Length*: The length of the packages of this type
- *Length Unit*: The unit of measure of this length
- *Length Digits*: The precision of length
- *Height*: The height of the packages of this type
- *Height Unit*: The unit of measure of this height
- *Height Digits*: The precision of height
- *Width*: The width of the packages of this type
- *Width Unit*: The unit of measure of this width
- *Width Digits*: The precision of width
Package
*******
The following fields have been added to the Package model:
- *Shipping Reference*: The shipping reference provided by the shipping service
- *Shipping Label*: The shipping label provided by the shipping service
- *Weight*: A function field computing the weight of the package with its
content
Shipment Out
************
Once in the Packed state, the Shipment Out model checks whether the shipment is
valid for the shipping service. It does this by calling a method that is, by
convention, named ``validate_packing_<shipping service>``.
Once a shipment is packed, the user can create the shipping for each package
with the shipping service by clicking the *Create Shipping* button. This
button triggers a wizard that is overridden in shipping-service-specific
modules. The starting state of the wizard is a ``StateTransition``; its linked
method is overridden in shipping service modules in order to communicate with
the service.
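The naming convention can be illustrated with a small, self-contained Python
sketch. This is plain Python, not actual Tryton code: the class layout and the
``acme`` service name are hypothetical, and real modules register these methods
on the actual Shipment Out model.

```python
class ShipmentOut:
    """Sketch of the ``validate_packing_<shipping service>`` convention."""

    def __init__(self, shipping_service):
        self.shipping_service = shipping_service

    def validate_packing(self):
        # Dispatch to the method named after the carrier's shipping service,
        # e.g. validate_packing_acme for a service called "acme".
        method = getattr(
            self, 'validate_packing_%s' % self.shipping_service, None)
        if method is not None:
            return method()
        # No service-specific validation registered for this service.
        return True

    def validate_packing_acme(self):
        # A shipping-service module would implement its own checks here.
        return 'packing validated for acme'


result = ShipmentOut('acme').validate_packing()
```

A shipping-service module therefore only needs to define a method following
the naming pattern; the base module finds it dynamically.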
| null | Tryton | foundation@tryton.org | null | null | GPL-3 | tryton stock package shipping | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",... | [] | http://www.tryton.org/ | http://downloads.tryton.org/7.6/ | >=3.9 | [] | [] | [] | [
"python-sql>=0.4",
"trytond_carrier<7.7,>=7.6",
"trytond_product<7.7,>=7.6",
"trytond_stock<7.7,>=7.6",
"trytond_stock_package<7.7,>=7.6",
"trytond_stock_shipment_measurements<7.7,>=7.6",
"trytond_stock_shipment_cost<7.7,>=7.6",
"trytond_product_measurements<7.7,>=7.6",
"trytond<7.7,>=7.6",
"prote... | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://code.tryton.org/tryton"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:03:44.779209 | trytond_stock_package_shipping-7.6.2.tar.gz | 30,182 | 44/f1/69612d9339b90f614ebad07d49ed7b5f483e1f98b39a338bf305883a041a/trytond_stock_package_shipping-7.6.2.tar.gz | source | sdist | null | false | 84664a4834a7ee4ae072c4a04aa09e73 | 3b5b598403a4037a5295e0015b553880882c1ba11372f4ffb0a23b30e4e2467f | 44f169612d9339b90f614ebad07d49ed7b5f483e1f98b39a338bf305883a041a | null | [
"LICENSE"
] | 229 |
2.4 | trytond-web-user | 6.0.2 | Tryton module to manage Web users | Web User Module
###############
The web_user module provides facilities to manage external users accessing
from the web.
User
****
A user is uniquely identified by an email address and is authenticated using a
hashed password. The user can be linked to a Party.
Two actions are available:
- The *Validate E-mail* action, which sends the user an e-mail with a link to
  a URL that ensures the address exists.
- The *Reset Password* action, which sends the user an e-mail with a link to a
  URL where a new password can be set.
Configuration
*************
The web_user module uses parameters from different sections:
- `web`:
- `reset_password_url`: the URL to reset the password to which the
parameters `email` and `token` will be added.
- `email_validation_url`: the URL for email validation to which the
parameter `token` will be added.
- `email`:
- `from`: the origin address to send emails.
- `session`:
- `web_timeout`: defines in seconds the validity of the web session.
Default: 30 days.
- `web_timeout_reset`: defines in seconds the validity of the reset password
  token. Default: 1 day.
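For example, a server configuration fragment setting these parameters might
look like the following (the URLs are illustrative, and the timeout values
are the documented defaults expressed in seconds):

```ini
[web]
reset_password_url = https://www.example.com/reset-password
email_validation_url = https://www.example.com/validate-email

[email]
from = noreply@example.com

[session]
; 30 days
web_timeout = 2592000
; 1 day
web_timeout_reset = 86400
```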
| null | Tryton | bugs@tryton.org | null | null | GPL-3 | web user | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Tryton",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Legal Industry",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",... | [] | http://www.tryton.org/ | http://downloads.tryton.org/6.0/ | >=3.6 | [] | [] | [] | [
"trytond_party<6.1,>=6.0",
"trytond<6.1,>=6.0",
"bcrypt; extra == \"bcrypt\"",
"html2text; extra == \"html2text\""
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/modules-web-user",
"Forum, https://www.tryton.org/forum",
"Source Code, https://hg.tryton.org/modules/web_user"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T18:02:41.521531 | trytond_web_user-6.0.2.tar.gz | 36,492 | b6/b4/c5aab0e7680971562092655374ea75c390145c245c2d7a408277f61077c8/trytond_web_user-6.0.2.tar.gz | source | sdist | null | false | a274ddb7804352cd6b65572e387d4684 | 29bab4a6293cc333d4b266b0f6c888859509a577e49ebb6a6ef0a8476a42d7ce | b6b4c5aab0e7680971562092655374ea75c390145c245c2d7a408277f61077c8 | null | [
"LICENSE"
] | 241 |
2.4 | takopi-matrix | 0.3.0 | Matrix transport backend for Takopi | # 🐙 takopi-matrix
Matrix transport backend for [takopi](https://github.com/banteg/takopi).
## Features
- Matrix protocol support via [matrix-nio](https://github.com/matrix-nio/matrix-nio)
- End-to-end encryption (E2EE) by default
- Voice message transcription (OpenAI Whisper)
- File download support
- Interactive onboarding wizard
- Multi-room support with per-room engine defaults
- Project-to-room binding
## Requirements
- Python ≥3.14
- [libolm](https://gitlab.matrix.org/matrix-org/olm) 3.x (for E2EE)
- takopi ≥0.18
## Installation
### 1. Install libolm
| Platform | Command |
|----------|---------|
| Debian/Ubuntu | `sudo apt-get install libolm-dev` |
| Fedora | `sudo dnf install libolm-devel` |
| Arch Linux | `sudo pacman -S libolm` |
| openSUSE | `sudo zypper install libolm-devel` |
| macOS (Homebrew) | `brew install libolm` |
### 2. Install takopi-matrix
```bash
pip install takopi-matrix
```
Or with uv:
```bash
uv tool install takopi --with takopi-matrix
```
## Configuration
### Interactive Setup
```bash
takopi --onboard
```
### Manual Configuration
Add to `~/.takopi/takopi.toml`:
```toml
transport = "matrix"
[transports.matrix]
homeserver = "https://matrix.example.org"
user_id = "@bot:example.org"
access_token = "syt_your_access_token"
room_ids = ["!roomid:example.org"]
# Optional: per-room engine defaults
[transports.matrix.room_engines]
"!room1:example.org" = "claude"
"!room2:example.org" = "codex"
# Optional: project-to-room binding
[transports.matrix.room_projects]
"!room1:example.org" = "myproject"
```
## Documentation
- [Matrix Transport Reference](docs/matrix.md) - Full configuration options
- [Architecture Overview](docs/architecture/overview.md) - System design
- [Development Setup](docs/development/setup.md) - Contributing guide
## License
MIT
| text/markdown | null | null | null | null | MIT License
Copyright (c) 2025
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"anyio>=4.12.0",
"httpx>=0.28.1",
"markdown-it-py",
"matrix-nio[e2e]>=0.24",
"pydantic-settings>=2.12.0",
"pydantic>=2.12.5",
"questionary>=2.1.1",
"rich>=14.2.0",
"structlog>=25.5.0",
"takopi>=0.20.0",
"tomli-w>=1.2.0",
"tomlkit>=0.13.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Zorro909/takopi-matrix",
"Repository, https://github.com/Zorro909/takopi-matrix",
"Issues, https://github.com/Zorro909/takopi-matrix/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:00:46.314022 | takopi_matrix-0.3.0.tar.gz | 142,105 | 2f/8d/242d6421513f6fdcebd145fc90cab6d14f2459f8f0936bc830e9aa37b40f/takopi_matrix-0.3.0.tar.gz | source | sdist | null | false | 9f69fcb3c196fdaa3e2605d40f610f10 | 51a217282177796273138941f3ae7008b52eff3beb6c95052011a8e2ff4d91d7 | 2f8d242d6421513f6fdcebd145fc90cab6d14f2459f8f0936bc830e9aa37b40f | null | [
"LICENSE"
] | 236 |
2.4 | ScandEval | 16.15.0 | The robust European language model benchmark. | <!-- This disables the requirement that the first line is a top-level heading -->
<!-- markdownlint-configure-file { "MD041": false } -->
<div align='center'>
<img
src="https://raw.githubusercontent.com/EuroEval/EuroEval/main/gfx/euroeval.png"
height="500"
width="372"
>
</div>
### The robust European language model benchmark
(formerly known as ScandEval)
______________________________________________________________________
[](https://euroeval.com)
[](https://pypi.org/project/euroeval/)
[](https://arxiv.org/abs/2304.00906)
[](https://arxiv.org/abs/2406.13469)
[](https://github.com/EuroEval/EuroEval/blob/main/LICENSE)
[](https://github.com/EuroEval/EuroEval/commits/main)
[](https://github.com/EuroEval/EuroEval/tree/main/tests)
[](https://github.com/EuroEval/EuroEval/blob/main/CODE_OF_CONDUCT.md)
## Maintainer
- Dan Saattrup Smart ([@saattrupdan](https://github.com/saattrupdan), <dan.smart@alexandra.dk>)
## Installation and usage
See the [documentation](https://euroeval.com/python-package/) for more information.
## Reproducing the evaluation datasets
All datasets used in this project are generated using the scripts located in the
[src/scripts](src/scripts) folder. To reproduce a dataset, run the corresponding script
with the following command
```bash
uv run src/scripts/<name-of-script>.py
```
Replace `<name-of-script>` with the specific script you wish to execute, e.g.,
```bash
uv run src/scripts/create_allocine.py
```
## Contributors :pray:
A huge thank you to all the contributors who have helped make this project a success!
<a href="https://github.com/peter-sk">
<img
src="https://avatars.githubusercontent.com/u/6168908"
width=50
alt="Contributor avatar for peter-sk"
/>
</a>
<a href="https://github.com/AJDERS">
<img
src="https://avatars.githubusercontent.com/u/38854604"
width=50
alt="Contributor avatar for AJDERS"
/>
</a>
<a href="https://github.com/oliverkinch">
<img
src="https://avatars.githubusercontent.com/u/71556498"
width=50
alt="Contributor avatar for oliverkinch"
/>
</a>
<a href="https://github.com/versae">
<img
src="https://avatars.githubusercontent.com/u/173537"
width=50
alt="Contributor avatar for versae"
/>
</a>
<a href="https://github.com/KennethEnevoldsen">
<img
src="https://avatars.githubusercontent.com/u/23721977"
width=50
alt="Contributor avatar for KennethEnevoldsen"
/>
</a>
<a href="https://github.com/viggo-gascou">
<img
src="https://avatars.githubusercontent.com/u/94069687"
width=50
alt="Contributor avatar for viggo-gascou"
/>
</a>
<a href="https://github.com/mathiasesn">
<img
src="https://avatars.githubusercontent.com/u/27091759"
width=50
alt="Contributor avatar for mathiasesn"
/>
</a>
<a href="https://github.com/Alkarex">
<img
src="https://avatars.githubusercontent.com/u/1008324"
width=50
alt="Contributor avatar for Alkarex"
/>
</a>
<a href="https://github.com/marksverdhei">
<img
src="https://avatars.githubusercontent.com/u/46672778"
width=50
alt="Contributor avatar for marksverdhei"
/>
</a>
<a href="https://github.com/Mikeriess">
<img
src="https://avatars.githubusercontent.com/u/19728563"
width=50
alt="Contributor avatar for Mikeriess"
/>
</a>
<a href="https://github.com/ThomasKluiters">
<img
src="https://avatars.githubusercontent.com/u/8137941"
width=50
alt="Contributor avatar for ThomasKluiters"
/>
</a>
<a href="https://github.com/BramVanroy">
<img
src="https://avatars.githubusercontent.com/u/2779410"
width=50
alt="Contributor avatar for BramVanroy"
/>
</a>
<a href="https://github.com/peregilk">
<img
src="https://avatars.githubusercontent.com/u/9079808"
width=50
alt="Contributor avatar for peregilk"
/>
</a>
<a href="https://github.com/Rijgersberg">
<img
src="https://avatars.githubusercontent.com/u/8604946"
width=50
alt="Contributor avatar for Rijgersberg"
/>
</a>
<a href="https://github.com/duarteocarmo">
<img
src="https://avatars.githubusercontent.com/u/26342344"
width=50
alt="Contributor avatar for duarteocarmo"
/>
</a>
<a href="https://github.com/slowwavesleep">
<img
src="https://avatars.githubusercontent.com/u/44175589"
width=50
alt="Contributor avatar for slowwavesleep"
/>
</a>
<a href="https://github.com/mrkowalski">
<img
src="https://avatars.githubusercontent.com/u/6357044"
width=50
alt="Contributor avatar for mrkowalski"
/>
</a>
<a href="https://github.com/simonevanbruggen">
<img
src="https://avatars.githubusercontent.com/u/24842609"
width=50
alt="Contributor avatar for simonevanbruggen"
/>
</a>
<a href="https://github.com/tvosch">
<img
src="https://avatars.githubusercontent.com/u/110661769"
width=50
alt="Contributor avatar for tvosch"
/>
</a>
<a href="https://github.com/Touzen">
<img
src="https://avatars.githubusercontent.com/u/1416265"
width=50
alt="Contributor avatar for Touzen"
/>
</a>
<a href="https://github.com/caldaibis">
<img
src="https://avatars.githubusercontent.com/u/16032437"
width=50
alt="Contributor avatar for caldaibis"
/>
</a>
<a href="https://github.com/SwekeR-463">
<img
src="https://avatars.githubusercontent.com/u/114919896?v=4"
width=50
alt="Contributor avatar for SwekeR-463"
/>
</a>
### Contribute to EuroEval
We welcome contributions to EuroEval! Whether you're fixing bugs, adding features, or
contributing new datasets, your help makes this project better for everyone.
- **General contributions**: Check out our [contribution guidelines](CONTRIBUTING.md)
for information on how to get started.
- **Adding datasets**: If you're interested in adding a new dataset to EuroEval, we have
a [dedicated guide](NEW_DATASET_GUIDE.md) with step-by-step instructions.
### Special thanks
- Thanks to [Google](https://google.com/) for sponsoring Gemini credits as part of their
[Google Cloud for Researchers Program](https://cloud.google.com/edu/researchers).
- Thanks [@Mikeriess](https://github.com/Mikeriess) for evaluating many of the larger
models on the leaderboards.
- Thanks to [OpenAI](https://openai.com/) for sponsoring OpenAI credits as part of their
[Researcher Access Program](https://openai.com/form/researcher-access-program/).
- Thanks to [UWV](https://www.uwv.nl/) and [KU
Leuven](https://www.arts.kuleuven.be/ling/ccl) for sponsoring the Azure OpenAI
credits used to evaluate GPT-4-turbo in Dutch.
- Thanks to [Miðeind](https://mideind.is/en) for sponsoring the OpenAI
credits used to evaluate GPT-4-turbo in Icelandic and Faroese.
- Thanks to [CHC](https://chc.au.dk/) for sponsoring the OpenAI credits used to
evaluate GPT-4-turbo in German.
## Citing EuroEval
If you want to cite the framework then feel free to use this:
```bibtex
@article{smart2024encoder,
title={Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks},
author={Smart, Dan Saattrup and Enevoldsen, Kenneth and Schneider-Kamp, Peter},
journal={arXiv preprint arXiv:2406.13469},
year={2024}
}
@inproceedings{smart2023scandeval,
author = {Smart, Dan Saattrup},
booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)},
month = may,
pages = {185--201},
title = {{ScandEval: A Benchmark for Scandinavian Natural Language Processing}},
year = {2023}
}
```
| text/markdown | null | Dan Saattrup Smart <dan.smart@alexandra.dk> | null | Dan Saattrup Smart <dan.smart@alexandra.dk> | MIT License Copyright (c) 2022-2026 Dan Saattrup Smart Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"accelerate>=1.9.0",
"bert-score>=0.3.13",
"click>=8.1.3",
"cloudpickle>=3.1.1",
"datasets>=3.5.0",
"demjson3>=3.0.6",
"evaluate>=0.4.1",
"huggingface-hub>=0.30.1",
"langdetect>=1.0.9",
"levenshtein>=0.24.0",
"litellm>=1.75.6",
"mistral-common[soundfile]",
"more-itertools>=10.5.0",
"nltk>=... | [] | [] | [] | [
"Repository, https://github.com/EuroEval/EuroEval",
"Issues, https://github.com/EuroEval/EuroEval/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T18:00:33.395712 | scandeval-16.15.0-py3-none-any.whl | 243,833 | 1c/85/0c640840cecb0cf08a983effac674c94ee11fcbc4b9ee8b35d8f796a1392/scandeval-16.15.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 93a914e500d01a765127566aaeb5562d | ee2f19fe10991ece43ccd09a807e13798b9d4a3d7b759cda20b57807fcf601c3 | 1c850c640840cecb0cf08a983effac674c94ee11fcbc4b9ee8b35d8f796a1392 | null | [
"LICENSE"
] | 0 |
2.4 | EuroEval | 16.15.0 | The robust European language model benchmark. | <!-- This disables the requirement that the first line is a top-level heading -->
<!-- markdownlint-configure-file { "MD041": false } -->
<div align='center'>
<img
src="https://raw.githubusercontent.com/EuroEval/EuroEval/main/gfx/euroeval.png"
height="500"
width="372"
>
</div>
### The robust European language model benchmark
(formerly known as ScandEval)
______________________________________________________________________
[](https://euroeval.com)
[](https://pypi.org/project/euroeval/)
[](https://arxiv.org/abs/2304.00906)
[](https://arxiv.org/abs/2406.13469)
[](https://github.com/EuroEval/EuroEval/blob/main/LICENSE)
[](https://github.com/EuroEval/EuroEval/commits/main)
[](https://github.com/EuroEval/EuroEval/tree/main/tests)
[](https://github.com/EuroEval/EuroEval/blob/main/CODE_OF_CONDUCT.md)
## Maintainer
- Dan Saattrup Smart ([@saattrupdan](https://github.com/saattrupdan), <dan.smart@alexandra.dk>)
## Installation and usage
See the [documentation](https://euroeval.com/python-package/) for more information.
## Reproducing the evaluation datasets
All datasets used in this project are generated using the scripts located in the
[src/scripts](src/scripts) folder. To reproduce a dataset, run the corresponding script
with the following command
```bash
uv run src/scripts/<name-of-script>.py
```
Replace `<name-of-script>` with the specific script you wish to execute, e.g.,
```bash
uv run src/scripts/create_allocine.py
```
## Contributors :pray:
A huge thank you to all the contributors who have helped make this project a success!
<a href="https://github.com/peter-sk">
<img
src="https://avatars.githubusercontent.com/u/6168908"
width=50
alt="Contributor avatar for peter-sk"
/>
</a>
<a href="https://github.com/AJDERS">
<img
src="https://avatars.githubusercontent.com/u/38854604"
width=50
alt="Contributor avatar for AJDERS"
/>
</a>
<a href="https://github.com/oliverkinch">
<img
src="https://avatars.githubusercontent.com/u/71556498"
width=50
alt="Contributor avatar for oliverkinch"
/>
</a>
<a href="https://github.com/versae">
<img
src="https://avatars.githubusercontent.com/u/173537"
width=50
alt="Contributor avatar for versae"
/>
</a>
<a href="https://github.com/KennethEnevoldsen">
<img
src="https://avatars.githubusercontent.com/u/23721977"
width=50
alt="Contributor avatar for KennethEnevoldsen"
/>
</a>
<a href="https://github.com/viggo-gascou">
<img
src="https://avatars.githubusercontent.com/u/94069687"
width=50
alt="Contributor avatar for viggo-gascou"
/>
</a>
<a href="https://github.com/mathiasesn">
<img
src="https://avatars.githubusercontent.com/u/27091759"
width=50
alt="Contributor avatar for mathiasesn"
/>
</a>
<a href="https://github.com/Alkarex">
<img
src="https://avatars.githubusercontent.com/u/1008324"
width=50
alt="Contributor avatar for Alkarex"
/>
</a>
<a href="https://github.com/marksverdhei">
<img
src="https://avatars.githubusercontent.com/u/46672778"
width=50
alt="Contributor avatar for marksverdhei"
/>
</a>
<a href="https://github.com/Mikeriess">
<img
src="https://avatars.githubusercontent.com/u/19728563"
width=50
alt="Contributor avatar for Mikeriess"
/>
</a>
<a href="https://github.com/ThomasKluiters">
<img
src="https://avatars.githubusercontent.com/u/8137941"
width=50
alt="Contributor avatar for ThomasKluiters"
/>
</a>
<a href="https://github.com/BramVanroy">
<img
src="https://avatars.githubusercontent.com/u/2779410"
width=50
alt="Contributor avatar for BramVanroy"
/>
</a>
<a href="https://github.com/peregilk">
<img
src="https://avatars.githubusercontent.com/u/9079808"
width=50
alt="Contributor avatar for peregilk"
/>
</a>
<a href="https://github.com/Rijgersberg">
<img
src="https://avatars.githubusercontent.com/u/8604946"
width=50
alt="Contributor avatar for Rijgersberg"
/>
</a>
<a href="https://github.com/duarteocarmo">
<img
src="https://avatars.githubusercontent.com/u/26342344"
width=50
alt="Contributor avatar for duarteocarmo"
/>
</a>
<a href="https://github.com/slowwavesleep">
<img
src="https://avatars.githubusercontent.com/u/44175589"
width=50
alt="Contributor avatar for slowwavesleep"
/>
</a>
<a href="https://github.com/mrkowalski">
<img
src="https://avatars.githubusercontent.com/u/6357044"
width=50
alt="Contributor avatar for mrkowalski"
/>
</a>
<a href="https://github.com/simonevanbruggen">
<img
src="https://avatars.githubusercontent.com/u/24842609"
width=50
alt="Contributor avatar for simonevanbruggen"
/>
</a>
<a href="https://github.com/tvosch">
<img
src="https://avatars.githubusercontent.com/u/110661769"
width=50
alt="Contributor avatar for tvosch"
/>
</a>
<a href="https://github.com/Touzen">
<img
src="https://avatars.githubusercontent.com/u/1416265"
width=50
alt="Contributor avatar for Touzen"
/>
</a>
<a href="https://github.com/caldaibis">
<img
src="https://avatars.githubusercontent.com/u/16032437"
width=50
alt="Contributor avatar for caldaibis"
/>
</a>
<a href="https://github.com/SwekeR-463">
<img
src="https://avatars.githubusercontent.com/u/114919896?v=4"
width=50
alt="Contributor avatar for SwekeR-463"
/>
</a>
### Contribute to EuroEval
We welcome contributions to EuroEval! Whether you're fixing bugs, adding features, or
contributing new datasets, your help makes this project better for everyone.
- **General contributions**: Check out our [contribution guidelines](CONTRIBUTING.md)
for information on how to get started.
- **Adding datasets**: If you're interested in adding a new dataset to EuroEval, we have
a [dedicated guide](NEW_DATASET_GUIDE.md) with step-by-step instructions.
### Special thanks
- Thanks to [Google](https://google.com/) for sponsoring Gemini credits as part of their
[Google Cloud for Researchers Program](https://cloud.google.com/edu/researchers).
- Thanks [@Mikeriess](https://github.com/Mikeriess) for evaluating many of the larger
models on the leaderboards.
- Thanks to [OpenAI](https://openai.com/) for sponsoring OpenAI credits as part of their
[Researcher Access Program](https://openai.com/form/researcher-access-program/).
- Thanks to [UWV](https://www.uwv.nl/) and [KU
Leuven](https://www.arts.kuleuven.be/ling/ccl) for sponsoring the Azure OpenAI
credits used to evaluate GPT-4-turbo in Dutch.
- Thanks to [Miðeind](https://mideind.is/en) for sponsoring the OpenAI
credits used to evaluate GPT-4-turbo in Icelandic and Faroese.
- Thanks to [CHC](https://chc.au.dk/) for sponsoring the OpenAI credits used to
evaluate GPT-4-turbo in German.
## Citing EuroEval
If you want to cite the framework then feel free to use this:
```bibtex
@article{smart2024encoder,
title={Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks},
author={Smart, Dan Saattrup and Enevoldsen, Kenneth and Schneider-Kamp, Peter},
journal={arXiv preprint arXiv:2406.13469},
year={2024}
}
@inproceedings{smart2023scandeval,
author = {Smart, Dan Saattrup},
booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)},
month = may,
pages = {185--201},
title = {{ScandEval: A Benchmark for Scandinavian Natural Language Processing}},
year = {2023}
}
```
| text/markdown | null | Dan Saattrup Smart <dan.smart@alexandra.dk> | null | Dan Saattrup Smart <dan.smart@alexandra.dk> | MIT License Copyright (c) 2022-2026 Dan Saattrup Smart Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"accelerate>=1.9.0",
"bert-score>=0.3.13",
"click>=8.1.3",
"cloudpickle>=3.1.1",
"datasets>=3.5.0",
"demjson3>=3.0.6",
"evaluate>=0.4.1",
"huggingface-hub>=0.30.1",
"langdetect>=1.0.9",
"levenshtein>=0.24.0",
"litellm>=1.75.6",
"mistral-common[soundfile]",
"more-itertools>=10.5.0",
"nltk>=... | [] | [] | [] | [
"Repository, https://github.com/EuroEval/EuroEval",
"Issues, https://github.com/EuroEval/EuroEval/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T18:00:28.581550 | euroeval-16.15.0-py3-none-any.whl | 243,631 | b2/dd/322a19197dd7b9f170ade148afd537282b2f8d70a00c61cee9caf11dc30e/euroeval-16.15.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 698a809b1f36d3146766c6f95410348d | 88ebe4cbf2023d26828b1e70fc1501e9ab073205550271f9c2856840bb7d8a59 | b2dd322a19197dd7b9f170ade148afd537282b2f8d70a00c61cee9caf11dc30e | null | [
"LICENSE"
] | 0 |
2.3 | agentex-sdk | 0.9.4 | The official Python library for the agentex API |
# Agentex Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/agentex-sdk/)
The Agentex Python library provides convenient access to the Agentex REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.gp.scale.com](https://docs.gp.scale.com). The full API of this library can be found in [api.md](https://github.com/scaleapi/scale-agentex-python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install agentex-sdk
```
## Usage
The full API of this library can be found in [api.md](https://github.com/scaleapi/scale-agentex-python/tree/main/api.md).
```python
import os
from agentex import Agentex
client = Agentex(
api_key=os.environ.get("AGENTEX_SDK_API_KEY"), # This is the default and can be omitted
# defaults to "production".
environment="development",
)
tasks = client.tasks.list()
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `AGENTEX_SDK_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncAgentex` instead of `Agentex` and use `await` with each API call:
```python
import os
import asyncio
from agentex import AsyncAgentex
client = AsyncAgentex(
api_key=os.environ.get("AGENTEX_SDK_API_KEY"), # This is the default and can be omitted
# defaults to "production".
environment="development",
)
async def main() -> None:
tasks = await client.tasks.list()
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
## Debugging
AgentEx provides built-in debugging support for **Temporal projects** during local development.
```bash
# Basic debugging
uv run agentex agents run --manifest manifest.yaml --debug-worker
# Wait for debugger to attach before starting
uv run agentex agents run --manifest manifest.yaml --debug-worker --wait-for-debugger
# Custom debug port
uv run agentex agents run --manifest manifest.yaml --debug-worker --debug-port 5679
```
For **VS Code**, add this configuration to `.vscode/launch.json`:
```json
{
"name": "Attach to AgentEx Worker",
"type": "debugpy",
"request": "attach",
"connect": { "host": "localhost", "port": 5678 },
"pathMappings": [{ "localRoot": "${workspaceFolder}", "remoteRoot": "." }],
"justMyCode": false,
"console": "integratedTerminal"
}
```
The debug server automatically finds an available port starting from 5678 and prints connection details when starting.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install agentex-sdk[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from agentex import DefaultAioHttpClient
from agentex import AsyncAgentex
async def main() -> None:
async with AsyncAgentex(
api_key=os.environ.get("AGENTEX_SDK_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
tasks = await client.tasks.list()
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
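For instance, a request-parameter TypedDict lets a type checker flag misspelled or wrongly typed keys at edit time. The shape below is illustrative only, not an actual `agentex` type:

```python
from typing import TypedDict

class TaskListParams(TypedDict, total=False):
    # total=False makes every key optional, mirroring optional request params
    limit: int
    cursor: str

params: TaskListParams = {"limit": 10}
# params: TaskListParams = {"limit": "10"}  # flagged by the type checker: str is not int
```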
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `agentex.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `agentex.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `agentex.APIError`.
```python
import agentex
from agentex import Agentex

client = Agentex()

try:
    client.tasks.list()
except agentex.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except agentex.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except agentex.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from agentex import Agentex

# Configure the default for all requests:
client = Agentex(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).tasks.list()
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx
from agentex import Agentex

# Configure the default for all requests:
client = Agentex(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Agentex(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).tasks.list()
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/scaleapi/scale-agentex-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `AGENTEX_LOG` to `info`.
```shell
$ export AGENTEX_LOG=info
```
Or to `debug` for more verbose logging.
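The same effect can be achieved programmatically with the standard `logging` module (assuming here that the SDK logs under a logger named `"agentex"`):

```python
import logging

# Roughly equivalent to AGENTEX_LOG=debug, assuming an "agentex" logger name
logging.basicConfig(level=logging.WARNING)  # default level for everything else
logging.getLogger("agentex").setLevel(logging.DEBUG)
```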
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from agentex import Agentex

client = Agentex()
response = client.tasks.with_raw_response.list()
print(response.headers.get('X-My-Header'))

task = response.parse()  # get the object that `tasks.list()` would have returned
print(task)
```
These methods return an [`APIResponse`](https://github.com/scaleapi/scale-agentex-python/tree/main/src/agentex/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/scaleapi/scale-agentex-python/tree/main/src/agentex/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.tasks.with_streaming_response.list() as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and the other HTTP verb methods.
Client options (such as retries) are respected when making these requests.
```py
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
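Pydantic's behavior here can be seen with a minimal model that allows extra fields (a generic Pydantic v2 sketch, not an actual SDK model):

```python
from pydantic import BaseModel, ConfigDict

class Task(BaseModel):
    model_config = ConfigDict(extra="allow")  # keep unknown keys instead of dropping them
    id: str

task = Task.model_validate({"id": "t_123", "unknown_prop": 42})
print(task.unknown_prop)   # extra fields are reachable as attributes
print(task.model_extra)    # {'unknown_prop': 42}
```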
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from agentex import Agentex, DefaultHttpxClient

client = Agentex(
    # Or use the `AGENTEX_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from agentex import Agentex

with Agentex() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/scaleapi/scale-agentex-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import agentex
print(agentex.__version__)
```
## Requirements
Python 3.12 or higher.
## Contributing
See [the contributing documentation](https://github.com/scaleapi/scale-agentex-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Agentex <roxanne.farhad@scale.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | <4,>=3.12 | [] | [] | [] | [
"aiohttp<4,>=3.10.10",
"anthropic>=0.40.0",
"anyio<5,>=3.5.0",
"claude-agent-sdk>=0.1.0",
"cloudpickle>=3.1.1",
"datadog>=0.52.1",
"ddtrace>=3.13.0",
"distro<2,>=1.7.0",
"fastapi<0.116,>=0.115.0",
"httpx<0.28,>=0.27.2",
"ipykernel>=6.29.5",
"jinja2<4,>=3.1.3",
"json-log-formatter>=1.1.1",
... | [] | [] | [] | [
"Homepage, https://github.com/scaleapi/scale-agentex-python",
"Repository, https://github.com/scaleapi/scale-agentex-python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-18T18:00:17.317162 | agentex_sdk-0.9.4.tar.gz | 879,600 | 3f/e9/f5d81d87e0f743ffab3aac0477db21eba31d740bb28434e7836e0231c53a/agentex_sdk-0.9.4.tar.gz | source | sdist | null | false | 75214c5c110a3fe25d9f8abb38233fe1 | dd5133c8bc878c006063cc9504a409a40e1683200fa5a02f692f5466676f44a8 | 3fe9f5d81d87e0f743ffab3aac0477db21eba31d740bb28434e7836e0231c53a | null | [] | 1,430 |
2.4 | jac-client | 0.2.19 | Build full-stack web applications with Jac - one language for frontend and backend. | # Jac Client
Build full-stack web applications with Jac - one language for frontend and backend.
Jac Client enables you to write React-like components, manage state, and build interactive UIs all in Jac. No need for separate frontend frameworks, HTTP clients, or complex build configurations.
---
## Features
- **Single Language**: Write frontend and backend in Jac
- **No HTTP Client**: Use `jacSpawn()` instead of fetch/axios
- **React Hooks**: Use standard React `useState` and `useEffect` hooks (useState is auto-injected when using `has` variables)
- **Component-Based**: Build reusable UI components with JSX
- **Graph Database**: Built-in graph data model eliminates need for SQL/NoSQL
- **Type Safety**: Type checking across frontend and backend
- **Vite-Powered**: Optimized production bundles with Vite
---
## Quick Start
### Installation
```bash
pip install jac-client
```
### Create a New App
```bash
jac create --use client my-app
cd my-app
jac start src/app.jac
```
Visit `http://localhost:8000` to see your app! (The `app` component is served at the root by default.)
You can also access the app at `http://localhost:8000/cl/app`.
> **Note**: The `--use client` flag creates a client-side project with an organized folder structure. Without it, `jac create` creates a standard Jac project.
---
## Documentation
For detailed guides and tutorials, see the **[docs folder](jac_client/docs/)**:
- **[Getting Started Guide](jac_client/docs/README.md)** - Complete beginner's guide
- **[Routing](jac_client/docs/routing.md)** - Multi-page applications with declarative routing (`<Router>`, `<Routes>`, `<Route>`)
- **[Lifecycle Hooks](jac_client/docs/lifecycle-hooks.md)** - Using React hooks (`useState`, `useEffect`)
- **[Advanced State](jac_client/docs/advanced-state.md)** - Managing complex state with React hooks
- **[Imports](jac_client/docs/imports.md)** - Importing third-party libraries (React, Ant Design, Lodash), Jac files, and JavaScript modules
---
## Example
### Simple Counter with React Hooks
```jac
# Note: useState is auto-injected when using has variables in cl blocks
# Only useEffect needs explicit import
cl import from react { useEffect }

cl {
    def Counter() -> JsxElement {
        # useState is automatically available - no import needed!
        [count, setCount] = useState(0);

        useEffect(lambda -> None {
            console.log("Count changed:", count);
        }, [count]);

        return <div>
            <h1>Count: {count}</h1>
            <button onClick={lambda e: any -> None {
                setCount(count + 1);
            }}>
                Increment
            </button>
        </div>;
    }

    def app() -> JsxElement {
        return Counter();
    }
}
```
> **Note:** When using `has` variables in `cl {}` blocks or `.cl.jac` files, the `useState` import is automatically injected. You only need to explicitly import other hooks like `useEffect`.
### Full-Stack Todo App
```jac
# useState is auto-injected, only import useEffect
cl import from react { useEffect }
cl import from '@jac/runtime' { jacSpawn }

# Backend: Jac nodes and walkers
node Todo {
    has text: str;
    has done: bool = False;
}

walker create_todo {
    has text: str;

    can create with Root entry {
        new_todo = here ++> Todo(text=self.text);
        report new_todo;
    }
}

walker read_todos {
    can read with Root entry {
        visit [-->(?:Todo)];
    }
}

# Frontend: React component
cl {
    def app() -> JsxElement {
        # useState is automatically available - no import needed!
        [todos, setTodos] = useState([]);

        useEffect(lambda -> None {
            async def loadTodos() -> None {
                result = await jacSpawn("read_todos", "", {});
                setTodos(result.reports);
            }
            loadTodos();
        }, []);

        return <div>
            <h1>My Todos</h1>
            {todos.map(lambda todo: any -> any {
                return <div key={todo._jac_id}>{todo.text}</div>;
            })}
        </div>;
    }
}
```
---
## Requirements
- Python: 3.12+
- Bun: For package management and Vite bundling ([install](https://bun.sh))
- Jac Language: `jaclang` (installed automatically)
---
## How It Works
Jac Client is a plugin that:
1. Compiles your `.jac` client code to JavaScript
2. Bundles dependencies with Vite for optimal performance
3. Provides a runtime for reactive state and components
4. Integrates seamlessly with Jac's backend graph operations
---
## Learn More
- **Full Documentation**: See [docs/](jac_client/docs/) for comprehensive guides
- **Examples**: Check `jac_client/examples/` for working examples
- **Issues**: Report bugs on [GitHub Issues](https://github.com/Jaseci-Labs/jaseci/issues)
---
## License
MIT License - see [LICENSE](../LICENSE) file.
---
**Happy coding with Jac!**
| text/markdown | null | Jason Mars <jason@mars.ninja> | null | Jason Mars <jason@mars.ninja> | null | jac, jaclang, jaseci, frontend, full-stack, web-development | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"jaclang>=0.10.3",
"python-dotenv==1.0.1; extra == \"dev\"",
"pytest==8.3.5; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/Jaseci-Labs/jaseci",
"Homepage, https://jaseci.org",
"Documentation, https://jac-lang.org"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:00:09.743402 | jac_client-0.2.19.tar.gz | 640,069 | 3d/79/b2e1c8968e9195231227c99a7201a335376948d3b336f64a2905b8836ccb/jac_client-0.2.19.tar.gz | source | sdist | null | false | d0a0a70cf76fedfd386c0b1e9ea00b5b | 01ecccde13cea942827703c5f526cf05a3bb2017167ffa39dd7e7581cf958f88 | 3d79b2e1c8968e9195231227c99a7201a335376948d3b336f64a2905b8836ccb | MIT | [] | 357 |
2.4 | jac-super | 0.1.4 | Enhanced console output for Jac CLI with Rich formatting | # Jac Super
Enhanced console output plugin for Jac CLI with Rich formatting.
## Installation
```bash
pip install jac-super
```
Once installed, the plugin automatically registers and enhances all Jac CLI command output.
## Usage
No configuration required. After installation, jac-super automatically enhances output for all Jac commands:
- `jac create` - Enhanced project creation messages
- `jac start` - Server startup and status messages
- `jac run` - Formatted execution output
- `jac config` - Styled configuration display
## Environment Variables
| Variable | Effect |
|----------|--------|
| `NO_COLOR` | Disables colors (fallback to base console) |
| `NO_EMOJI` | Disables emojis (uses text labels) |
| `TERM=dumb` | Disables both colors and emojis |
| text/markdown | null | Jason Mars <jason@mars.ninja> | null | Jason Mars <jason@mars.ninja> | null | jac, jaclang, jaseci, console, rich, cli | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"jaclang>=0.10.3",
"rich>=13.0.0"
] | [] | [] | [] | [
"Repository, https://github.com/Jaseci-Labs/jaseci",
"Homepage, https://jaseci.org",
"Documentation, https://jac-lang.org"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:00:06.166116 | jac_super-0.1.4.tar.gz | 15,878 | dc/3a/1305540f6a011bdab3f097587b382de7e5e493a5b661ef0b406c57894baa/jac_super-0.1.4.tar.gz | source | sdist | null | false | 8131acd9e4ad3197322b46b3ac420c22 | 19a4bfff957e55bb9e42203cdbbf78cd8239d66e2f4b60a761aadc77256988f4 | dc3a1305540f6a011bdab3f097587b382de7e5e493a5b661ef0b406c57894baa | MIT | [] | 277 |
2.4 | mlrun | 1.11.0rc36 | Tracking and config of machine learning runs | <a id="top"></a>
[](https://github.com/mlrun/mlrun/actions/workflows/build.yaml?query=branch%3Adevelopment)
[](https://opensource.org/licenses/Apache-2.0)
[](https://pypi.python.org/pypi/mlrun/)
[](https://mlrun.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/astral-sh/ruff)
[](https://github.com/mlrun/mlrun/commits/main)
[](https://github.com/mlrun/mlrun/releases)
[](https://mlopslive.slack.com)
<div>
<span>
<picture>
<img align="left" src="./docs/_static/images/MLRun-logo.png" alt="MLRun logo" width="150"/>
</picture>
</span>
<span>
<picture>
<img align="right" src="./docs/_static/images/maintenance_logo.svg" alt="Maintenance logo" width="250"/>
</picture>
</span>
<br clear="all"/>
</div>
# Using MLRun
MLRun is an open source AI orchestration platform for quickly building and managing continuous (gen) AI applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications.
MLRun significantly reduces engineering efforts, time to production, and computation resources.
With MLRun, you can choose any IDE on your local machine or on the cloud. MLRun breaks the silos between data, ML, software, and DevOps/MLOps teams, enabling collaboration and fast continuous improvements.
Get started with the MLRun [**Tutorials and Examples**](https://docs.mlrun.org/en/stable/tutorials/index.html) and the [**Set up your client environment**](https://docs.mlrun.org/en/stable/setup-guide.md), or read about the [**MLRun Architecture**](https://docs.mlrun.org/en/stable/architecture.html).
This page explains how MLRun addresses the [**gen AI tasks**](#genai-tasks), [**MLOps tasks**](#mlops-tasks), and presents the [**MLRun core components**](#core-components).
See the data stores, development tools, services, platforms, etc. supported by MLRun's open architecture at **https://docs.mlrun.org/en/stable/ecosystem.html**.
## Gen AI tasks
<p align="center"><img src="./docs/_static/images/ai-tasks.png" alt="ai-tasks" width="800"/></p><br>
Use MLRun to develop, scale, deploy, and monitor your AI model across your enterprise. The [**gen AI development workflow**](https://docs.mlrun.org/en/stable/genai/genai-flow.html)
section describes the different tasks and stages in detail.
### Data management
MLRun supports batch or realtime data processing at scale, data lineage and versioning, structured and unstructured data, and more.
Removing inappropriate data at an early stage saves resources that would otherwise be required later on.
**Docs:**
[Using LLMs to process unstructured data](https://docs.mlrun.org/en/stable/genai/data-mgmt/unstructured-data.html),
[Vector databases](https://docs.mlrun.org/en/stable/genai/data-mgmt/vector-databases.html),
[Guardrails for data management](https://docs.mlrun.org/en/stable/genai/data-mgmt/guardrails-data.html)
**Demo:**
[Call center demo](https://github.com/mlrun/demo-call-center)
**Video:**
[Call center](https://youtu.be/YycMbxRgLBA)
### Development
Use MLRun to build an automated ML pipeline to: collect data,
preprocess (prepare) the data, run the training pipeline, and evaluate the model.
**Docs:**
[Working with RAG](https://docs.mlrun.org/en/stable/genai/development/working-with-rag.html), [Evaluating LLMs](https://docs.mlrun.org/en/stable/genai/development/evaluating-llms.html), [Fine-tuning LLMs](https://docs.mlrun.org/en/stable/genai/development/fine-tuning-llms.html)
**Demos:**
[Call center demo](https://github.com/mlrun/demo-call-center),
[Banking agent demo](https://github.com/mlrun/demo-banking-agent)
**Video:**
[Call center](https://youtu.be/YycMbxRgLBA)
### Deployment
MLRun serving can productize the newly trained LLM using real-time, auto-scaling Nuclio serverless functions.
The application pipeline includes all the steps from accepting events or data, contextualizing it with state, preparing the required model features,
inferring results using one or more models, and driving actions.
**Docs:**
[Serving gen AI models](https://docs.mlrun.org/en/stable/genai/deployment/genai_serving.html), [GPU utilization](https://docs.mlrun.org/en/stable/genai/deployment/gpu_utilization.html), [Gen AI realtime serving graph](https://docs.mlrun.org/en/stable/genai/deployment/genai_serving_graph.html)
**Tutorial:**
[Deploy LLM using MLRun](https://docs.mlrun.org/en/stable/tutorials/genai-01-basic-tutorial.html)
**Demos:**
[Call center demo](https://github.com/mlrun/demo-call-center),
[Banking agent demo](https://github.com/mlrun/demo-banking-agent)
**Video:**
[Call center](https://youtu.be/YycMbxRgLBA)
### Live Ops
Monitor all resources, data, model and application metrics to ensure performance. Then identify risks, control costs, and measure business KPIs.
Collect production data, metadata, and metrics to tune the model and application further, and to enable governance and explainability.
**Docs:**
[Model monitoring](https://docs.mlrun.org/en/stable/concepts/monitoring.html), [Alerts and notifications](https://docs.mlrun.org/en/stable/concepts/alerts-notifications.html)
**Tutorials:**
[Deploy LLM using MLRun](https://docs.mlrun.org/en/stable/tutorials/genai-01-basic-tutorial.html), [Model monitoring using LLM](https://docs.mlrun.org/en/stable/tutorials/genai-02-monitoring-llm.html)
**Demo:**
[Banking agent demo](https://github.com/mlrun/demo-banking-agent)
<a id="mlops-tasks"></a>
## MLOps tasks
<p align="center"><img src="./docs/_static/images/mlops-task.png" alt="mlrun-tasks" width="800"/></p><br>
The [**MLOps development workflow**](https://docs.mlrun.org/en/stable/mlops-dev-flow.html) section describes the different tasks and stages in detail.
MLRun can be used to automate and orchestrate all the different tasks or just specific tasks (and integrate them with what you have already deployed).
### Project management and CI/CD automation
In MLRun the assets, metadata, and services (data, functions, jobs, artifacts, models, secrets, etc.) are organized into projects.
Projects can be imported/exported as a whole, mapped to git repositories or IDE projects (in PyCharm, VSCode, etc.), which enables versioning, collaboration, and CI/CD.
Project access can be restricted to a set of users and roles.
**Docs:** [Projects and Automation](https://docs.mlrun.org/en/stable/projects/project.html), [CI/CD Integration](https://docs.mlrun.org/en/stable/projects/ci-integration.html)
**Tutorials:** [Quick start](https://docs.mlrun.org/en/stable/tutorials/01-mlrun-basics.html), [Automated ML Pipeline](https://docs.mlrun.org/en/stable/tutorials/04-pipeline.html)
**Video:** [Quick start](https://youtu.be/xI8KVGLlj7Q).
### Ingest and process data
MLRun provides abstract interfaces to various offline and online [**data sources**](https://docs.mlrun.org/en/stable/store/datastore.html), supports batch or realtime data processing at scale, data lineage and versioning, structured and unstructured data, and more.
In addition, the MLRun [**Feature Store**](https://docs.mlrun.org/en/stable/feature-store/feature-store.html) automates the collection, transformation, storage, catalog, serving, and monitoring of data features across the ML lifecycle and enables feature reuse and sharing.
**Docs:** [Ingest and process data](https://docs.mlrun.org/en/stable/data-prep/index.html), [Feature Store](https://docs.mlrun.org/en/stable/feature-store/feature-store.html), [Data & Artifacts](https://docs.mlrun.org/en/stable/concepts/data.html)
**Tutorials:** [Quick start](https://docs.mlrun.org/en/stable/tutorials/01-mlrun-basics.html), [Feature Store](https://docs.mlrun.org/en/stable/feature-store/basic-demo.html).
### Develop and train models
MLRun lets you easily build ML pipelines that take data from various sources or the Feature Store and process it, train models at scale with multiple parameters, test models, track each experiment, and register, version, and deploy models. MLRun provides scalable built-in or custom model training services that integrate with any framework and can work with third-party training/auto-ML services. You can also bring your own pre-trained model and use it in the pipeline.
**Docs:** [Develop and train models](https://docs.mlrun.org/en/stable/development/index.html), [Model Training and Tracking](https://docs.mlrun.org/en/stable/development/model-training-tracking.html), [Batch Runs and Workflows](https://docs.mlrun.org/en/stable/concepts/runs-workflows.html)
**Tutorials:** [Train, compare, and register models](https://docs.mlrun.org/en/stable/tutorials/02-model-training.html), [Automated ML Pipeline](https://docs.mlrun.org/en/stable/tutorials/04-pipeline.html)
**Video:** [Train and compare models](https://youtu.be/bZgBsmLMdQo).
### Deploy models and applications
MLRun rapidly deploys and manages production-grade real-time or batch application pipelines using elastic and resilient serverless functions. MLRun addresses the entire ML application: intercepting application/user requests, running data processing tasks, inferencing using one or more models, driving actions, and integrating with the application logic.
**Docs:** [Deploy models and applications](https://docs.mlrun.org/en/stable/deployment/index.html), [Realtime Pipelines](https://docs.mlrun.org/en/stable/serving/serving-graph.html), [Batch Inference](https://docs.mlrun.org/en/stable/deployment/batch_inference.html)
**Tutorials:** [Realtime Serving](https://docs.mlrun.org/en/stable/tutorials/03-model-serving.html), [Batch Inference](https://docs.mlrun.org/en/stable/tutorials/07-batch-infer.html), [Advanced Pipeline](https://docs.mlrun.org/en/stable/tutorials/07-batch-infer.html)
**Video:** [Serving pre-trained models](https://youtu.be/OUjOus4dZfw).
### Model Monitoring
Observability is built into the different MLRun objects (data, functions, jobs, models, pipelines, etc.), eliminating the need for complex integrations and code instrumentation. With MLRun, you can observe the application/model resource usage and model behavior (drift, performance, etc.), define custom app metrics, and trigger alerts or retraining jobs.
**Docs:** [Model monitoring](https://docs.mlrun.org/en/stable/concepts/model-monitoring.html), [Model Monitoring Overview](https://docs.mlrun.org/en/stable/monitoring/model-monitoring-deployment.html)
**Tutorials:** [Model Monitoring & Drift Detection](https://docs.mlrun.org/en/stable/tutorials/05-model-monitoring.html).
<a id="core-components"></a>
## MLRun core components
<p align="center"><img src="./docs/_static/images/mlops-core.png" alt="mlrun-core" width="800"/></p><br>
MLRun includes the following major components:
[**Project Management:**](https://docs.mlrun.org/en/stable/projects/project.html) A service (API, SDK, DB, UI) that manages the different project assets (data, functions, jobs, workflows, secrets, etc.) and provides central control and metadata layer.
[**Functions:**](https://docs.mlrun.org/en/stable/runtimes/functions.html) automatically deployed software package with one or more methods and runtime-specific attributes (such as image, libraries, command, arguments, resources, etc.).
[**Data & Artifacts:**](https://docs.mlrun.org/en/stable/concepts/data.html) Glueless connectivity to various data sources, metadata management, catalog, and versioning for structured/unstructured artifacts.
[**Batch Runs & Workflows:**](https://docs.mlrun.org/en/stable/concepts/runs-workflows.html) Execute one or more functions with specific parameters and collect, track, and compare all their results and artifacts.
[**Real-Time Serving Pipeline:**](https://docs.mlrun.org/en/stable/serving/serving-graph.html) Rapid deployment of scalable data and ML pipelines using real-time serverless technology, including API handling, data preparation/enrichment, model serving, ensembles, driving and measuring actions, etc.
[**Model monitoring:**](https://docs.mlrun.org/en/stable/monitoring/index.html) monitors data, models, resources, and production components and provides a feedback loop for exploring production data, identifying drift, alerting on anomalies or data quality issues, triggering retraining jobs, measuring business impact, etc.
[**Alerts and notifications:**](https://docs.mlrun.org/en/stable/concepts/model-monitoring.html) Use alerts to identify and inform you of possible problem situations. Use notifications to report status on runs and pipelines.
[**Feature Store:**](https://docs.mlrun.org/en/stable/feature-store/feature-store.html) automatically collects, prepares, catalogs, and serves production data features for development (offline) and real-time (online) deployment using minimal engineering effort.
| text/markdown | Yaron Haviv | yaronh@iguazio.com | null | null | Apache License 2.0 | mlrun, mlops, data-science, machine-learning, experiment-tracking | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python",
"Topi... | [] | https://github.com/mlrun/mlrun | null | <3.12,>=3.11 | [] | [] | [] | [
"urllib3~=2.6",
"v3io-frames~=0.13.11",
"GitPython>=3.1.41,~=3.1",
"aiohttp~=3.11",
"aiohttp-retry~=2.9",
"click~=8.1",
"nest-asyncio~=1.0",
"ipython~=8.10",
"nuclio-jupyter~=0.13.2",
"numpy<1.27.0,>=1.26.4",
"pandas<2.2,>=1.2",
"pyarrow<18,>=10.0",
"pyyaml<7,>=6.0.2",
"requests~=2.32",
... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:58:34.026233 | mlrun-1.11.0rc36-py3-none-any.whl | 1,371,419 | b7/9d/27acac37ccad9fd315543d2b4e48884855184a3b9eeeb850cca43e743298/mlrun-1.11.0rc36-py3-none-any.whl | py3 | bdist_wheel | null | false | 679b19a467bbdc35d8a02e3669ec1bdb | d664b96b8be1a850cd04af5bf1a776356a615d584c54968f9c6bb9fb3580943b | b79d27acac37ccad9fd315543d2b4e48884855184a3b9eeeb850cca43e743298 | null | [
"LICENSE"
] | 627 |
2.4 | poks | 0.8.0 | A lightweight archive downloader for pre-built binary dependencies. | # Poks
<p align="center">
<a href="https://github.com/cuinixam/poks/actions/workflows/ci.yml?query=branch%3Amain">
<img src="https://img.shields.io/github/actions/workflow/status/cuinixam/poks/ci.yml?branch=main&label=CI&logo=github&style=flat-square" alt="CI Status" >
</a>
<a href="https://codecov.io/gh/cuinixam/poks">
<img src="https://img.shields.io/codecov/c/github/cuinixam/poks.svg?logo=codecov&logoColor=fff&style=flat-square" alt="Test coverage percentage">
</a>
</p>
<p align="center">
<a href="https://github.com/astral-sh/uv">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json" alt="uv">
</a>
<a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff">
</a>
<a href="https://github.com/cuinixam/pypeline">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/cuinixam/pypeline/refs/heads/main/assets/badge/v0.json" alt="pypeline">
</a>
<a href="https://github.com/pre-commit/pre-commit">
<img src="https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white&style=flat-square" alt="pre-commit">
</a>
</p>
<p align="center">
<a href="https://pypi.org/project/poks/">
<img src="https://img.shields.io/pypi/v/poks.svg?logo=python&logoColor=fff&style=flat-square" alt="PyPI Version">
</a>
<img src="https://img.shields.io/pypi/pyversions/poks.svg?style=flat-square&logo=python&logoColor=fff" alt="Supported Python versions">
<img src="https://img.shields.io/pypi/l/poks.svg?style=flat-square" alt="License">
</p>
---
**Source Code**: <a href="https://github.com/cuinixam/poks" target="_blank">https://github.com/cuinixam/poks</a>
---
A lightweight, cross-platform archive downloader for pre-built binary dependencies. Inspired by [Scoop](https://scoop.sh/), Poks provides a uniform way to install and manage developer tools using simple JSON manifests.
While Poks includes a CLI, its **main purpose is to be used programmatically** to manage dependencies in your Python projects and automation scripts.
## Features
- **Programmatic API**: First-class Python support for integrating into your tools
- **Cross-Platform**: Works on Windows, Linux, and macOS
- **No Admin Rights**: Installs tools in user space
- **Deterministic**: Pin exact versions in manifests for reproducible builds
- **Relocatable**: The apps directory is self-contained and portable
## Installation
```bash
pip install poks
```
For CLI-only usage:
```bash
pipx install poks
```
## Concepts
- **App**: A tool or dependency you want to install (e.g., CMake, a compiler toolchain). Each app has a name and one or more versions.
- **Manifest**: A JSON file that describes an app — its download URLs, checksums, and platform-specific archives. One manifest per app (e.g., `cmake.json`). See [examples/cmake.json](examples/cmake.json).
- **Bucket**: A git repository containing a collection of manifests. Buckets are how manifests are shared and distributed.
- **Config file**: A JSON file (`poks.json`) that ties it all together — it lists which buckets to use and which apps (with versions) to install from them. See [examples/poks.json](examples/poks.json).
## Installing apps
### From a config file
Use a config file when you want to define a reproducible set of tools for a project. The config references one or more buckets and lists the apps to install from them.
```bash
poks install --config poks.json
```
### From a bucket
Install a single app directly, without a config file. Poks looks up the app's manifest in the specified bucket.
```bash
poks install --app cmake --version 3.28.1 --bucket main
poks install --app cmake --version 3.28.1 --bucket https://github.com/poks/main-bucket.git
poks install --app cmake --version 3.28.1 # searches all local buckets
```
### From a manifest file
Install directly from a local manifest file — no bucket needed. Useful for testing a manifest before publishing it to a bucket. The app name is derived from the filename.
```bash
poks install --manifest cmake.json --version 4.2.3
```
### Platform filtering
Apps in a config file can be restricted to specific operating systems or architectures using the `os` and `arch` fields. Apps that don't match the current platform are silently skipped.
```json
{
"apps": [
{ "name": "cmake", "version": "3.28.1", "bucket": "main" },
{ "name": "mingw-tools", "version": "1.0.0", "bucket": "extras", "os": ["windows"] },
{ "name": "build-essential", "version": "1.0.0", "bucket": "extras", "os": ["linux", "macos"] }
]
}
```
Supported values — `os`: `windows`, `linux`, `macos`; `arch`: `x86_64`, `aarch64`. When omitted, the app is installed on all platforms.
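The skip logic can be pictured in a few lines of stdlib Python. This is an illustrative sketch of the matching rule described above, not Poks's actual implementation (the mapping tables are assumptions):

```python
import platform

# Map Python's platform identifiers onto the os/arch values used in
# poks config files. Illustrative only -- not poks's internal code.
_OS_MAP = {"Windows": "windows", "Linux": "linux", "Darwin": "macos"}
_ARCH_MAP = {"AMD64": "x86_64", "x86_64": "x86_64",
             "arm64": "aarch64", "aarch64": "aarch64"}

def matches_current_platform(app: dict) -> bool:
    """Return True if the app entry applies to the machine we run on."""
    current_os = _OS_MAP.get(platform.system())
    current_arch = _ARCH_MAP.get(platform.machine())
    if "os" in app and current_os not in app["os"]:
        return False  # restricted to other operating systems -> skip
    if "arch" in app and current_arch not in app["arch"]:
        return False  # restricted to other architectures -> skip
    return True

# An app without os/arch restrictions is installed everywhere
print(matches_current_platform({"name": "cmake", "version": "3.28.1"}))  # True
```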
### Other commands
```bash
poks uninstall cmake@3.28.1 # specific version
poks uninstall cmake # all versions
poks uninstall --all # everything
poks search cmake # search across local buckets
poks list # list installed apps
```
## Python API
Poks is designed to be used programmatically. See [examples/](examples/) for complete scripts.
```python
from pathlib import Path
from poks.poks import Poks
poks = Poks(root_dir=Path.home() / ".poks")
poks.install(Path("poks.json")) # from config file
poks.install_app("cmake", "3.28.1", bucket="main") # from bucket
poks.install_from_manifest(Path("cmake.json"), "3.28.1") # from manifest file
```
## Manifest format
For the manifest schema and detailed specifications, see [docs/specs.md](docs/specs.md).
## Contributing
This project uses [pypeline](https://github.com/cuinixam/pypeline) for build automation and `uv` for dependency management.
```bash
# Install pypeline
uv tool install pypeline-runner
# Run full pipeline (lint + tests)
pypeline run
```
For AI agents, see [AGENTS.md](AGENTS.md).
## Credits
[](https://github.com/copier-org/copier)
This package was created with
[Copier](https://copier.readthedocs.io/) and the
[browniebroke/pypackage-template](https://github.com/browniebroke/pypackage-template)
project template.
| text/markdown | cuinixam | me@cuinixam.com | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :... | [] | null | null | >=3.10 | [] | [] | [] | [
"gitpython<4,>=3",
"mashumaro<4,>=3.13",
"py-app-dev<3,>=2.1",
"py7zr<1,>=0",
"requests<3,>=2.32",
"typer<1,>=0",
"zstandard<1,>=0.20"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/cuinixam/poks/issues",
"Changelog, https://github.com/cuinixam/poks/blob/main/CHANGELOG.md",
"Repository, https://github.com/cuinixam/poks"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:58:29.896942 | poks-0.8.0.tar.gz | 23,890 | 19/25/017be3794c1719e2a91f791bdd761bb6d7fd0913d24a20b4284892e541a3/poks-0.8.0.tar.gz | source | sdist | null | false | 313558c244ec4adb2425d58289795910 | 18eb0059a41c75a124615d1b75186882314703f4537a66a179860c8ffefabcf8 | 1925017be3794c1719e2a91f791bdd761bb6d7fd0913d24a20b4284892e541a3 | MIT | [
"LICENSE"
] | 252 |
2.4 | empfin | 1.9 | Empirical Finance Tools |
# empfin - Empirical Finance Tools in Python
`empfin` is a Python toolkit for empirical asset pricing models and risk premia estimation. This library is in active development and aims to implement models from all corners of the literature.
# What's Inside
Currently available models for estimation of risk premia:
- `TimeseriesReg`: single-pass OLS time-series regression, described in [Cochrane (2005)](https://press.princeton.edu/books/hardcover/9780691121376/asset-pricing?srsltid=AfmBOoobXP_DmuPEfu1g7gm1ppk4h69GFHtwJqq0ugoZwSYKW60gLXZ6), Section 12.1
- `CrossSectionReg`: two-pass cross-sectional regression, described in [Cochrane (2005)](https://press.princeton.edu/books/hardcover/9780691121376/asset-pricing?srsltid=AfmBOoobXP_DmuPEfu1g7gm1ppk4h69GFHtwJqq0ugoZwSYKW60gLXZ6), Section 12.2
- `NonTradableFactors`: iterative maximum-likelihood estimator for non-tradable factors, described in [Campbell, Lo & MacKinlay (2012)](https://www.amazon.com/Econometrics-Financial-Markets-John-Campbell/dp/0691043019), Section 6.2.3
- `RiskPremiaTermStructure`: term structure of risk premia with a single factor, tradable or not, following [Bryzgalova, Huang & Julliard (2024)](https://doi.org/10.2139/ssrn.4752696). I would like to thank the authors for sharing their replication files.
# Examples
For each model, there is a jupyter notebook with [examples](https://github.com/gusamarante/empfin/tree/main/examples) of their use.
# Installation
```bash
pip install empfin
```
# References
Bryzgalova, Huang, and Julliard (2024) [“_Macro Strikes Back: Term Structure of Risk Premia_”](https://doi.org/10.2139/ssrn.4752696) Working Paper
Cochrane (2005) ["_Asset Pricing: Revised Edition_"](https://press.princeton.edu/books/hardcover/9780691121376/asset-pricing?srsltid=AfmBOoobXP_DmuPEfu1g7gm1ppk4h69GFHtwJqq0ugoZwSYKW60gLXZ6). Princeton University Press.
Campbell, Lo, and MacKinlay (2012) ["_The Econometrics of Financial Markets_"](https://www.amazon.com/Econometrics-Financial-Markets-John-Campbell/dp/0691043019)
# Library Citation
> Gustavo Amarante (2026). empfin - Empirical Finance Tools in Python. Retrieved from https://github.com/gusamarante/empfin
| text/markdown | Gustavo Amarante | null | Gustavo Amarante | gustavoca2@insper.edu.br | null | asset pricing, empirical asset pricing, empirical finance, factor models, finance, risk premia | [] | [] | null | null | null | [] | [] | [] | [
"matplotlib",
"numpy",
"pandas",
"scikit_learn",
"scipy",
"seaborn",
"setuptools",
"statsmodels",
"tqdm"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:58:07.133440 | empfin-1.9.tar.gz | 15,196 | c1/5f/cb94aac0ddadfbfadab0d2f842121f395cb53f90831628120cd47bf01ff9/empfin-1.9.tar.gz | source | sdist | null | false | 2e5a29ac74163b1ed47c10a78f92f96e | dd7669287d5c7bca76741e8b82070b6a5d434235982f76224c40aacc00f3462b | c15fcb94aac0ddadfbfadab0d2f842121f395cb53f90831628120cd47bf01ff9 | null | [
"LICENSE"
] | 254 |
2.4 | kora-sdk | 1.3.0 | Python SDK for the Kora authorization engine — deterministic spend authorization for AI agents | # Kora Python SDK
Python SDK for the Kora authorization engine. Handles Ed25519 signing, nonce generation, canonical JSON serialization, idempotent retry, and offline seal verification.
## Installation
```bash
pip install kora-sdk
```
Or install from source:
```bash
pip install -e sdk/python
```
**Requirements:** Python 3.9+, PyNaCl >= 1.5.0, requests >= 2.28.0
## Quick Start
```python
from kora import Kora
# Initialize with the secret key returned from agent creation
kora = Kora("kora_agent_sk_...")
# Authorize a spend
auth = kora.authorize(
mandate="mandate_abc123",
amount=50_00, # EUR 50.00
currency="EUR",
vendor="aws",
category="compute", # required if mandate has category_allowlist
)
if auth.approved:
print(f"Approved: {auth.decision_id}")
print(f"Daily remaining: {auth.limits_after_approval['daily_remaining_cents']}")
else:
print(f"Denied: {auth.reason_code}")
print(f"Hint: {auth.denial.hint}")
```
## Usage
### Authorize a Spend
```python
from kora import Kora
kora = Kora(
"kora_agent_sk_...",
base_url="http://localhost:8000", # default
ttl=300, # default TTL in seconds
max_retries=2, # automatic idempotent retry on network error
)
result = kora.authorize(
mandate="mandate_abc123",
amount=50_00,
currency="EUR",
vendor="aws",
category="compute",
)
```
### Result Properties
```python
result.approved # bool — True if APPROVED
result.decision # "APPROVED" or "DENIED"
result.decision_id # UUID of the authorization decision
result.reason_code # "OK", "DAILY_LIMIT_EXCEEDED", etc.
result.executable # bool — True if payment can be executed
result.is_valid # bool — True if TTL has not expired
result.is_enforced # bool — True if enforcement_mode == "enforce"
result.enforcement_mode # "enforce" or "log_only"
# On denial:
result.denial.hint # Human-readable suggestion
result.denial.actionable # Machine-readable corrective values
result.denial.failed_check # Which pipeline step failed
# On approval:
result.limits_after_approval # Remaining daily/monthly budget
# Evaluation trace:
result.evaluation_trace.steps # List of pipeline step results
result.evaluation_trace.total_duration_ms # Total evaluation time
# Notary seal:
result.notary_seal.signature # Ed25519 signature (base64)
result.notary_seal.public_key_id
result.notary_seal.algorithm # "Ed25519"
# Trace URL (for debugging denials):
result.trace_url # e.g. http://localhost:8000/v1/authorizations/<id>/trace
```
### Handle Denials
```python
result = kora.authorize(
mandate="mandate_abc123",
amount=999_99,
currency="EUR",
vendor="aws",
)
if not result.approved:
print(f"Denied: {result.reason_code}")
print(f"Hint: {result.denial.hint}")
# Machine-readable corrective values
if result.reason_code == "DAILY_LIMIT_EXCEEDED":
available = result.denial.actionable["available_cents"]
print(f"Available budget: {available} cents")
if result.reason_code == "VENDOR_NOT_ALLOWED":
allowed = result.denial.actionable["allowed_vendors"]
print(f"Allowed vendors: {allowed}")
# Full trace URL for debugging
print(f"Trace: {result.trace_url}")
```
### Verify Notary Seal (Offline)
```python
from base64 import b64decode
# Kora's public key (from your deployment)
kora_public_key = b64decode("...")
is_valid = kora.verify_seal(result, kora_public_key)
print(f"Seal valid: {is_valid}")
```
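Seal verification only works if client and server serialize the signed payload to identical bytes, which is what canonical JSON serialization is for. Below is a minimal sketch of one common canonicalization scheme (sorted keys, compact separators); Kora's actual canonical form may differ:

```python
import json
import hashlib

def canonical_bytes(payload: dict) -> bytes:
    """One common canonical-JSON scheme: sorted keys, no whitespace.
    Illustrative only -- not necessarily Kora's exact serialization."""
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode("utf-8")

payload = {"mandate": "mandate_abc123", "amount": 5000,
           "currency": "EUR", "vendor": "aws"}
msg = canonical_bytes(payload)

# The digest is stable regardless of the original key order
print(hashlib.sha256(msg).hexdigest()[:16])
```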
### Simulation Mode
Test denial scenarios without affecting state. Requires an admin key with `simulation_access=true`.
```python
result = kora.authorize(
mandate="mandate_abc123",
amount=100,
currency="EUR",
vendor="aws",
simulate="DAILY_LIMIT_EXCEEDED",
admin_key="kora_admin_...",
)
assert result.simulated is True
assert result.decision == "DENIED"
assert result.reason_code == "DAILY_LIMIT_EXCEEDED"
assert result.notary_seal is None # no seal in simulation
```
### OpenAI Function Tool Schema
Generate an OpenAI-compatible function tool definition for use with LLM agents:
```python
tool = kora.as_tool("mandate_abc123")
# Returns:
# {
# "type": "function",
# "function": {
# "name": "kora_authorize_spend",
# "description": "Authorize a spend against a Kora mandate...",
# "parameters": {
# "type": "object",
# "properties": {
# "amount_cents": {"type": "integer", "description": "..."},
# "currency": {"type": "string", "description": "..."},
# "vendor_id": {"type": "string", "description": "..."},
# },
# "required": ["amount_cents", "currency", "vendor_id"]
# }
# }
# }
# With category enum constraint:
tool = kora.as_tool("mandate_abc123", category_enum=["compute", "api_services"])
```
Use with OpenAI:
```python
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Buy $50 of AWS compute"}],
tools=[kora.as_tool("mandate_abc123")],
)
```
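When the model decides to call the tool, the returned arguments still have to be mapped back onto `authorize`. A sketch of that dispatch step, using a stubbed authorizer so the mapping is visible without a running server (the tool-call dict mirrors the OpenAI response shape; the mandate is bound separately, since `as_tool` fixes it at schema-generation time):

```python
import json

def dispatch_tool_call(tool_call: dict, mandate: str, authorize):
    """Translate an OpenAI-style tool call into an authorize() invocation."""
    args = json.loads(tool_call["function"]["arguments"])
    return authorize(
        mandate=mandate,
        amount=args["amount_cents"],
        currency=args["currency"],
        vendor=args["vendor_id"],
    )

# Stubbed tool call and authorizer, just to show the argument mapping
fake_call = {
    "function": {
        "name": "kora_authorize_spend",
        "arguments": '{"amount_cents": 5000, "currency": "EUR", "vendor_id": "aws"}',
    }
}
result = dispatch_tool_call(fake_call, "mandate_abc123", lambda **kw: kw)
print(result)  # {'mandate': 'mandate_abc123', 'amount': 5000, 'currency': 'EUR', 'vendor': 'aws'}
```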
## Agent Self-Correction Pattern
```python
from kora import Kora
kora = Kora("kora_agent_sk_...")
# First attempt — too large
auth = kora.authorize(mandate="mandate_abc123", amount=999_99, currency="EUR", vendor="aws")
if not auth.approved and auth.reason_code == "DAILY_LIMIT_EXCEEDED":
# Read the actionable hint
available = auth.denial.actionable["available_cents"]
print(f"Budget available: {available} cents, retrying...")
# Retry with corrected amount
auth = kora.authorize(mandate="mandate_abc123", amount=available, currency="EUR", vendor="aws")
print(f"Second attempt: {auth.decision}") # APPROVED
```
## API Reference
### `Kora(key_string, base_url=None, ttl=300, max_retries=2)`
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `key_string` | str | required | Agent secret key (`kora_agent_sk_...`) |
| `base_url` | str | `http://localhost:8000` | Kora API base URL |
| `ttl` | int | 300 | Default TTL for decisions (seconds) |
| `max_retries` | int | 2 | Automatic retries on network error |
### `kora.authorize(**kwargs) -> AuthorizationResult`
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `mandate` | str | yes | Mandate ID |
| `amount` | int | yes | Amount in cents |
| `currency` | str | yes | 3-letter currency code |
| `vendor` | str | yes | Vendor identifier |
| `category` | str | no | Spending category |
| `simulate` | str | no | Force denial reason code (simulation mode) |
| `admin_key` | str | no | Admin key for simulation access |
### `kora.verify_seal(result, public_key) -> bool`
Verify the Ed25519 notary seal offline.
### `kora.as_tool(mandate, category_enum=None) -> dict`
Generate OpenAI function tool schema.
| text/markdown | Kora Protocol | null | null | null | AGPL-3.0-or-later | kora, authorization, ai-agents, spending, ed25519, deterministic, fintech | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"pynacl>=1.5.0",
"requests>=2.28.0",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kora-protocol/kora",
"Repository, https://github.com/kora-protocol/kora"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-18T17:58:04.091933 | kora_sdk-1.3.0.tar.gz | 28,429 | b8/7a/56da736dd7b6f657d4d9183a0830dc53d66661bd1618488189384ea6dc0f/kora_sdk-1.3.0.tar.gz | source | sdist | null | false | 2de35f6290aa797662ec4e53936ea1a5 | 2f280c2a474c97e5b7fc17294eeba6c6791d865655691253d7df60108dd4b4f3 | b87a56da736dd7b6f657d4d9183a0830dc53d66661bd1618488189384ea6dc0f | null | [] | 236 |
2.4 | inferedge-moss-core | 0.4.2 | Proprietary core index logic for Moss | # InferEdge MOSS Core
`inferedge-moss-core` is the high-performance Rust-based core library that powers the MOSS semantic search SDK.
## Overview
This package is designed to be used as a dependency by higher-level SDKs.
**Note**: For most use cases, you should use [`inferedge-moss`](https://pypi.org/project/inferedge-moss/) instead, which provides a complete Python SDK with cloud integration and a user-friendly API.
## Installation
```bash
pip install inferedge-moss-core
```
## Usage
This is a low-level library. For typical usage, install the main SDK:
```bash
pip install inferedge-moss
```
## Related Packages
- [`inferedge-moss`](https://pypi.org/project/inferedge-moss/) - Complete Python SDK with cloud integration
- [`@inferedge/moss`](https://www.npmjs.com/package/@inferedge/moss) - JavaScript/TypeScript SDK
## 📄 License
This package is licensed under the [PolyForm Shield License 1.0.0](./LICENSE).
- ✅ Free for testing, evaluation, internal use, and modifications.
- ❌ Not permitted for production or competing commercial use.
- 📩 For commercial licenses, contact: <contact@inferedge.dev>
## 📬 Contact
For support, commercial licensing, or partnership inquiries, contact us: [contact@inferedge.dev](mailto:contact@inferedge.dev)
| text/markdown; charset=UTF-8; variant=GFM | InferEdge, Inc. | null | null | null | SEE LICENSE IN LICENSE | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming ... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/InferEdge-Inc/moss",
"Issues, https://github.com/InferEdge-Inc/moss/issues"
] | maturin/1.12.2 | 2026-02-18T17:57:18.706738 | inferedge_moss_core-0.4.2-cp312-cp312-manylinux_2_38_aarch64.whl | 5,419,093 | 2e/a5/03742fba58ee194cff31f14b46452b015ddeceb3f4f9cbe7700a7d5fe072/inferedge_moss_core-0.4.2-cp312-cp312-manylinux_2_38_aarch64.whl | cp312 | bdist_wheel | null | false | a7ba9980a5a2c39f2bacb4d9ba42789c | 691729a20f9fddc2062c78f7fdd63fed4126c8f4fd5d865d5645f2bdf7d31cbb | 2ea503742fba58ee194cff31f14b46452b015ddeceb3f4f9cbe7700a7d5fe072 | null | [] | 1,837 |
2.2 | certora-cli-beta-mirror | 8.10.0 | Runner for the Certora Prover | Commit 0040758. Build and Run scripts for executing the Certora Prover on Solidity smart contracts.
| text/markdown | Certora | support@certora.com | null | null | GPL-3.0-only | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | https://pypi.org/project/certora-cli-beta-mirror | null | >=3.9 | [] | [] | [] | [
"click",
"json5",
"pycryptodome",
"requests",
"rich",
"sly",
"tabulate",
"tqdm",
"StrEnum",
"jinja2",
"wcmatch",
"typing_extensions"
] | [] | [] | [] | [
"Documentation, https://docs.certora.com/en/latest/",
"Source, https://github.com/Certora/CertoraProver"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:56:53.085037 | certora_cli_beta_mirror-8.10.0.tar.gz | 42,911,216 | 2d/f6/717706fe1b3097889cccfec3034a74d87ce48ba7f2f62cb318fc6b03c75a/certora_cli_beta_mirror-8.10.0.tar.gz | source | sdist | null | false | 1b3ebc4ffac2bd9f9154a77002a8a012 | 91fbe715cbb2bb74f7784e226e4380649d349b695efcdd073c0556ce9fda3143 | 2df6717706fe1b3097889cccfec3034a74d87ce48ba7f2f62cb318fc6b03c75a | null | [] | 415 |
2.2 | certora-cli-beta | 8.10.0 | Runner for the Certora Prover | Commit 0040758. Build and Run scripts for executing the Certora Prover on Solidity smart contracts.
| text/markdown | Certora | support@certora.com | null | null | GPL-3.0-only | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | https://pypi.org/project/certora-cli-beta | null | >=3.9 | [] | [] | [] | [
"click",
"json5",
"pycryptodome",
"requests",
"rich",
"sly",
"tabulate",
"tqdm",
"StrEnum",
"jinja2",
"wcmatch",
"typing_extensions"
] | [] | [] | [] | [
"Documentation, https://docs.certora.com/en/latest/",
"Source, https://github.com/Certora/CertoraProver"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:56:46.603954 | certora_cli_beta-8.10.0.tar.gz | 42,910,980 | 8f/37/3d4d38a8a779ae73426f5450c068f82b02276cdb0d7e75bfcbbd205110b8/certora_cli_beta-8.10.0.tar.gz | source | sdist | null | false | c7be218b5790cb1f3149176b9af2796f | c07304103d9d98cbde258ce811bcd6168671862ba89dbebe9774b3942ac90ca2 | 8f373d4d38a8a779ae73426f5450c068f82b02276cdb0d7e75bfcbbd205110b8 | null | [] | 604 |
2.4 | pocketdock | 1.2.6 | Portable, offline-first container sandboxes for LLM agents and dev workflows | # pocketdock
[](https://github.com/deftio/pocketdock/actions/workflows/ci.yml)
[](https://pypi.org/project/pocketdock/)
[](https://github.com/deftio/pocketdock/actions/workflows/ci.yml)
[](https://deftio.github.io/pocketdock/)
[](https://opensource.org/licenses/BSD-2-Clause)
**Portable, offline-first container sandboxes for LLM agents and dev workflows.**
One Container class. Podman-first, Docker-compatible. Python SDK + CLI. Zero cloud. Zero API keys.
## Why pocketdock?
Managed sandbox platforms require API keys, cloud accounts, and an internet connection. Rolling your own container glue means rewriting hundreds of lines of boilerplate every time. pocketdock sits in between: a clean Python SDK that talks directly to your container engine over its Unix socket, works entirely offline, and has zero external dependencies for the core SDK.
## Features
- **Three execution modes** — blocking, streaming, and detached (background) with ring buffer
- **File operations** — read, write, list, push, and pull files between host and container
- **Persistent sessions** — long-lived shell sessions with state (cwd, env vars, history)
- **Resource limits** — memory caps, CPU throttling, per-container isolation
- **Port mapping** — expose container ports on the host (e.g., `ports={8080: 80}`)
- **Container persistence** — stop/resume, snapshot to image, volume mounts
- **Project management** — `.pocketdock/` project directories with config, logging, and health checks
- **Image profiles** — six pre-baked Dockerfiles: minimal-python, minimal-node, minimal-bun, dev, agent, embedded
- **Full CLI** — 22 commands for container lifecycle, file ops, and project management
- **Async-first** — sync facade over async core; use either API style
- **Callbacks** — register handlers for stdout, stderr, and exit events
## Quick Example
```python
from pocketdock import create_new_container
with create_new_container() as c:
result = c.run("echo hello")
print(result.stdout) # "hello\n"
print(result.ok) # True
```
## Install
```bash
pip install pocketdock # SDK + CLI (includes click, rich)
pip install pocketdock[agent] # + LLM agent (litellm, python-dotenv)
```
Single-file downloads (no pip required) are available from [GitHub Releases](https://github.com/deftio/pocketdock/releases).
Requires [Podman](https://podman.io/getting-started/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
```bash
# Build the minimal-python image (~25MB, <500ms startup)
pocketdock build minimal-python
```
## Documentation
Full documentation is available at **[deftio.github.io/pocketdock](https://deftio.github.io/pocketdock/)**.
- [Quickstart](https://deftio.github.io/pocketdock/quickstart/) — install, build, run your first container
- [User Guide](https://deftio.github.io/pocketdock/guide/containers/) — containers, commands, files, sessions, persistence, profiles
- [CLI Reference](https://deftio.github.io/pocketdock/cli/) — all 22 commands with examples
- [API Reference](https://deftio.github.io/pocketdock/reference/api/) — full SDK reference
## Architecture
```
User Code / LLM Agent / CLI
|
v
pocketdock SDK
+--------------------------------------+
| Container (sync) -> AsyncContainer | facade pattern
| +- _socket_client (raw HTTP/Unix) |
+- ProjectManager (.pocketdock/) |
+- Persistence (resume, snapshot) |
+- Sessions (persistent shells) |
+--------------------------------------+
| raw HTTP over Unix socket
| (one connection per operation)
v
Podman (rootless) / Docker Engine
```
**Design principles:**
- **Connection-per-operation** — each API call opens its own Unix socket. No pooling.
- **Async core, sync facade** — `AsyncContainer` does all real work. `Container` is a sync wrapper.
- **No cached state** — always polls live from the engine.
- **Minimal dependencies** — stdlib-only for the core SDK.
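The connection-per-operation principle amounts to opening a fresh Unix socket and writing one plain HTTP/1.1 request per call. A rough stdlib-only sketch of the idea — the socket path and endpoint in the comment are examples, and pocketdock's internal client differs in detail:

```python
import socket

def build_request(method: str, path: str, host: str = "localhost") -> bytes:
    """Assemble a minimal HTTP/1.1 request for the engine's REST API."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def engine_get(socket_path: str, path: str) -> bytes:
    """One connection per operation: connect, send, read, close."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(socket_path)
        s.sendall(build_request("GET", path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# e.g. engine_get("/run/user/1000/podman/podman.sock", "/v1.41/containers/json")
print(build_request("GET", "/containers/json").decode())
```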
## Development
```bash
uv sync --dev # Install dependencies
uv run pytest # Run tests (100% coverage enforced)
uv run ruff check . # Lint (zero warnings)
uv run mypy --strict python/ # Type checking (strict mode)
uv run mkdocs serve # Local docs site
```
## License
BSD-2-Clause. Copyright (c) deftio llc.
| text/markdown | deftio llc | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"pyyaml>=6.0",
"rich>=13.0",
"tomli>=2.0; python_version < \"3.11\"",
"litellm; extra == \"agent\"",
"python-dotenv; extra == \"agent\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:56:35.260772 | pocketdock-1.2.6.tar.gz | 173,110 | c8/9d/5ea9e914c35a6da30ec9617198e033925d6dfa256c953c81d0bd54c1039d/pocketdock-1.2.6.tar.gz | source | sdist | null | false | d2fa9f32aea2aaf70e57601b23d45e74 | 1103a67dc402b0822800660db3eea712e83dd9a6aaa9f7120a2bd47f7a46bb6d | c89d5ea9e914c35a6da30ec9617198e033925d6dfa256c953c81d0bd54c1039d | BSD-2-Clause | [
"LICENSE"
] | 243 |
2.4 | starcatpy | 1.0.10 | Implements *CellAnnotator (aka *CAT/starCAT), annotating scRNA-Seq with predefined gene expression programs | ## starCAT <img src="https://drive.google.com/uc?export=view&id=1W1in9vldkKdNe6ncwsHD6L6MSvfcKV6M" width="130px" align="right" />
Implements starCellAnnoTator (AKA starCAT), annotating scRNA-Seq with predefined gene expression programs
<br>
## Citation
If you use starCAT, please cite our [manuscript](https://www.nature.com/articles/s41592-025-02793-1).
## Installation
You can install starCAT and its dependencies via the Python Package Index.
```bash
pip install starcatpy
```
We tested it with scikit-learn 1.3.2, AnnData 0.9.2, and Python 3.8. To run the tutorials, you also need jupyter or jupyterlab, as well as scanpy and cnmf:
```bash
pip install jupyterlab scanpy cnmf
```
## Basic starCAT usage
Please see our tutorials in [python](Examples/starCAT_vignette.ipynb) and [R](Examples/starCAT_vignette_R.ipynb). A sample pipeline using the pre-built reference programs (TCAT.V1) is shown below.
```python
# Load the default TCAT reference from the starCAT database
tcat = starCAT(reference='TCAT.V1')
# tcat.ref.iloc[:5, :5]
# A1BG AARD AARSD1 ABCA1 ABCB1
# CellCycle-G2M 2.032614 22.965553 17.423538 3.478179 2.297279
# Translation 35.445282 0.000000 9.245893 0.477994 0.000000
# HLA 18.192997 14.632670 2.686475 3.937182 0.000000
# ISG 0.436212 0.000000 18.078197 17.354506 0.000000
# Mito 10.293049 0.000000 52.669895 14.615502 3.341488
# Load cell x genes counts data
adata = tcat.load_counts(datafn)
# Run starCAT
# expects the input data to be raw counts and to be stored in adata.X
# rather than adata.layers['counts']
usage, scores = tcat.fit_transform(adata)
usage.iloc[0:2, 0:4]
# CellCycle-G2M Translation HLA ISG
# CATGCCTAGTCGATAA-1-gPlexA4 0.000039 0.001042 0.001223 0.000162
# AAGACCTGTAGCGTCC-1-gPlexC6 0.000246 0.100023 0.002991 0.042354
scores.iloc[0:2, :]
# ASA Proliferation ASA_binary \
# CATGCCTAGTCGATAA-1-gPlexA4 0.001556 0.00052 False
# AAGACCTGTAGCGTCC-1-gPlexC6 0.012503 0.01191 False
# Proliferation_binary Multinomial_Label
# CATGCCTAGTCGATAA-1-gPlexA4 False CD8_TEMRA
# AAGACCTGTAGCGTCC-1-gPlexC6 False CD4_Naive
```
starCAT can also be run from the command line.
```bash
starcat --reference "TCAT.V1" --counts {counts_fn} --output-dir {output_dir} --name {output_name}
```
* --reference - name of a default reference to download (e.g. TCAT.V1) OR filepath to a reference set of GEPs by genes (*.tsv/.csv/.txt); default is 'TCAT.V1'
* --counts - filepath to the input (cell x gene) counts matrix as a matrix market file (.mtx.gz), tab-delimited text file, or AnnData file (.h5ad)
* --scores - optional path to a yaml file for calculating score add-ons; not necessary for pre-built references
* --output-dir - the output directory; all output will be placed in {output-dir}/{name}... Default directory is '.'
* --name - the output analysis prefix name; default is 'starCAT'
For code to reproduce figures and analyses from our manuscript, please refer to the [TCAT analysis](https://github.com/immunogenomics/TCAT_analysis) Github.
## Alternate implementation
For small datasets (smaller than ~50,000 cells or 700 MB), try running starCAT without installing any packages on our [website](https://immunogenomics.io/starcat/).
## Creating your own reference
We provide example scripts for constructing custom starCAT references from [a single cNMF run](./Examples/build_reference_vignette.ipynb) or [multiple cNMF runs](./Examples/build_multidataset_reference_vignette.ipynb).
__Please let us know if you are interested in making your reference publicly available for others to use, analogous to our TCAT.V1 reference. You can email me at dkotliar@broadinstitute.org__
| text/markdown | Dylan Kotliar, Michelle Curtis | dylkot@gmail.com, curtism@broadinstitute.org | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/immunogenomics/starCAT | null | null | [] | [] | [] | [
"scikit-learn>=1.0",
"anndata",
"pandas",
"numpy",
"scipy",
"pyyaml",
"requests"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/immunogenomics/starCAT/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T17:56:20.104954 | starcatpy-1.0.10.tar.gz | 18,076 | 26/8f/b99f2a6e4d0d9596e3c2a9b0415ec564cfb9ac69e1ce1a32b2d9ca217045/starcatpy-1.0.10.tar.gz | source | sdist | null | false | 88cb12d60a772af2533b4b1adcfb74b2 | ff1b7e7a6d3e9432a7a8443bff810a44780bda188722388a6565ae09c03d4186 | 268fb99f2a6e4d0d9596e3c2a9b0415ec564cfb9ac69e1ce1a32b2d9ca217045 | null | [
"LICENSE"
] | 358 |
2.4 | svg-path-extractor | 0.1.0 | Library for loading SVG paths into shapely objects. | # Library for Loading SVG Paths into Shapely Objects
This library provides some utility functions to find all paths in a SVG document and create [`shapely`](https://shapely.readthedocs.io) geometry objects from them.
It uses [`svg.path`](https://github.com/regebro/svg.path), [`shapely`](https://shapely.readthedocs.io) and [`numpy`](https://numpy.org) under the hood.
## Installation
The package is installable from [PyPI](https://pypi.org/project/svg-path-to-shapely) via `pip install svg-path-extractor`.
## Basic Usage
The intended workflow is as follows.
First, load an SVG document and use the `find_all_paths_in_svg` function to query the element tree for `path` elements featuring a `d` attribute.
The optional argument `with_namespace` determines whether the SVG namespace shall be respected in the query (if false, all `path` elements are found, regardless of namespace).
As the first argument you may give a string containing SVG code (which will be parsed by [`xml.etree.ElementTree`](https://docs.python.org/3/library/xml.etree.elementtree.html)), or already parsed `xml.etree.ElementTree.ElementTree`/`xml.etree.ElementTree.Element` instances.
Alternatively, you may read directly from a file with `find_all_paths_in_file`.
```python
from svg_path_to_shapely import find_all_paths_in_svg
paths = find_all_paths_in_svg("your svg code...", with_namespace=True)
```
or
```python
from svg_path_to_shapely import find_all_paths_in_svg
from xml.etree.ElementTree import parse
et = parse("some path-like...")
paths = find_all_paths_in_svg(et, with_namespace=True)
```
or
```python
from svg_path_to_shapely import find_all_paths_in_svg
from xml.etree.ElementTree import fromstring
et = fromstring("your svg code...")
paths = find_all_paths_in_svg(et, with_namespace=True)
```
or
```python
from svg_path_to_shapely import find_all_paths_in_file
paths = find_all_paths_in_file("some path-like...", with_namespace=True)
```
`paths` will then be a list of `xml.etree.ElementTree.Element` instances representing all `path` elements in the document.
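For illustration, the namespace-aware query can be approximated with the standard library alone (a rough sketch of the behaviour, not the library's actual implementation):

```python
from xml.etree.ElementTree import fromstring

SVG_NS = "http://www.w3.org/2000/svg"
doc = fromstring(
    f'<svg xmlns="{SVG_NS}">'
    '<path d="M 0 0 L 10 0 L 10 10 Z" id="triangle"/>'
    "<rect width='5' height='5'/>"
    "</svg>"
)

# Namespace-aware query (analogous to with_namespace=True): collect all
# <path> elements in the SVG namespace that carry a "d" attribute.
paths = [el for el in doc.iter(f"{{{SVG_NS}}}path") if "d" in el.attrib]
print(len(paths))          # 1
print(paths[0].get("id"))  # triangle
```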
Then, create [`svg.path.Path`](https://github.com/regebro/svg.path) instances from those by use of `parse_path`.
This step is intentionally left explicit to be able to query for additional attributes as needed (such as `id`) on the path elements.
You may supply the element instance directly or a string with a valid value of the `d` attribute.
```python
from svg_path_to_shapely import parse_path
parsed = [parse_path(p) for p in paths]
```
Last, convert those path instances to [`shapely`](https://shapely.readthedocs.io) geometries using `convert_path_to_line_string`.
This function may return a `LineString`, `LinearRing` or `MultiLineString`, depending on whether the path is open, closed or multi-part (with multiple `M`/`m` directives), respectively.
The optional parameter `count` determines the number of evenly spaced discrete points to approximate arcs and Bezier curves with (as `shapely` does only know linear strings).
```python
from svg_path_to_shapely import convert_path_to_line_string
geoms = [convert_path_to_line_string(p, count=11) for p in parsed]
```
The latter will check whether the path is multi-part and split it accordingly.
If you know the path is single-part, you may spare this effort and use `convert_single_part_path_to_line_string` instead.
This will essentially treat multiple move directives (`M`/`m`) as if they were line directives (`L`/`l`).
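The discretization idea behind `count` can be sketched in plain Python by evaluating a curve at evenly spaced parameter values (a hand-written quadratic Bezier for illustration; the library itself presumably evaluates `svg.path` segments via their `point(t)` method):

```python
def quad_bezier(p0, p1, p2, t):
    """De Casteljau evaluation of a quadratic Bezier at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Sample `count` evenly spaced points, as the conversion functions do
# when approximating arcs and Bezier curves with line strings.
count = 11
pts = [quad_bezier((0, 0), (5, 10), (10, 0), i / (count - 1)) for i in range(count)]
print(pts[0], pts[-1])  # (0.0, 0.0) (10.0, 0.0)
print(pts[5])           # apex of the sampled arc: (5.0, 5.0)
```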
The library exports some more of its lower-level functions.
Have a look into their docstrings for information on how to use them.
## Application Examples
You find example SVG documents and respective code in the `test` directory.
### Powder Particle Shape Analysis
A micrograph of copper powder particles was imported in [Inkscape](https://inkscape.org) and paths were manually drawn around the particles to determine their contours.

The paths were extracted and converted to `shapely` geometries to be able to analyse their geometric properties further (for the sake of example just centered at (0, 0) and plotted again).

## Building and Testing
Project dependencies and build process are maintained using [`uv`](https://docs.astral.sh/uv).
Build the package using `uv build`.
Tests are run using `uv run pytest`.
## License
The software is distributed under the terms of the [MIT License](LICENSE).
## Contributing
Issues and pull requests are welcome without any special contribution guidelines.
| text/markdown | Max Weiner | Max Weiner <max.weiner@posteo.de> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.0",
"shapely>=2.0",
"svg-path>=7.0"
] | [] | [] | [] | [
"Homepage, https://codeberg.org/axtimhaus/svg-path-to-shapely",
"Repository, https://codeberg.org/axtimhaus/svg-path-to-shapely"
] | uv/0.9.22 {"installer":{"name":"uv","version":"0.9.22","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Manjaro Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T17:56:13.414294 | svg_path_extractor-0.1.0.tar.gz | 4,290 | 0d/c1/00ad98d7c6907efddb683e701b139266464561cde6cec1e901512695ad76/svg_path_extractor-0.1.0.tar.gz | source | sdist | null | false | d17e38dd5e5a2ce8f77b15e552ecae58 | 9b66203703bc04fa68e85877b921ea6bf7411d7f73e505d18cb2735f44d4de77 | 0dc100ad98d7c6907efddb683e701b139266464561cde6cec1e901512695ad76 | MIT | [] | 216 |
2.3 | frogml-cli | 0.2.0 | Frogml CLI for frogml models | # Frogml CLI
Frogml CLI is an end-to-end production ML platform designed to allow data scientists to build, deploy, and monitor their models in production with minimal engineering friction.
| text/markdown | JFrog Ltd. | contact@jfrog.com | null | null | Apache-2.0 | mlops, ml, deployment, serving, model, jfrog | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Pro... | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"croniter==1.4.1",
"tabulate>=0.8.0",
"python-json-logger<4.0.0,>=3.2.1",
"yaspin>=2.0.0",
"rich>=13.0.0",
"cookiecutter",
"gitpython>=2.1.0",
"boto3<2.0.0,>=1.24.116; extra == \"batch\" or extra == \"feedback\" or extra == \"local-deployment\"",
"pandas<3.0.0,>=2.3.0; extra == \"batch\" or extra ==... | [] | [] | [] | [
"Home page, https://www.jfrog.com/"
] | poetry/2.1.3 CPython/3.9.25 Linux/6.12.66-88.122.amzn2023.x86_64 | 2026-02-18T17:55:25.789886 | frogml_cli-0.2.0.tar.gz | 140,955 | 4e/8c/8256be2e363a848f0d079c533629c8b2b9bb6197b4059e56f5ea917fcfa2/frogml_cli-0.2.0.tar.gz | source | sdist | null | false | 55908e4dad8a31e1c0b44bdaf5b24aac | 721a285032d0a5647747571235e5d2c14eb8fbbe3348be0a2703c096e8432271 | 4e8c8256be2e363a848f0d079c533629c8b2b9bb6197b4059e56f5ea917fcfa2 | null | [] | 638 |
2.4 | baltra-sdk | 1.0.52 | Internal SDK to share Baltra domain and infrastructure | # Baltra SDK - Centralized and Decoupled Architecture
Internal SDK package for sharing Baltra domain models, infrastructure adapters, and common utilities across microservices.
## Changelog
### Version 1.0.8 (Current)
#### Database Models
- **HiringObjectives**: Added `role_id` foreign key field referencing `roles.role_id` with CASCADE delete
- **HiringObjectives**: Added `objective_amount` integer field (required, non-nullable)
These changes enable hiring objectives to be linked to specific roles and track target hiring amounts per role.
### Version 1.0.7
- Initial stable release of the SDK
### Version 1.0.6
- Foundation models and database adapters
### Version 1.0.5
- Core screening models and utilities
### Version 1.0.4
- Initial database models migration
### Version 1.0.3
- Basic SDK structure and packaging
### Version 1.0.2
- Initial repository setup
### Version 1.0.1
- Project initialization
---
## Overview
This SDK consolidates business logic, domain contracts, and common integrations (databases, Meta, Step Functions, external providers) into a single reusable package, avoiding duplication across microservices and reducing maintenance time when models or schemas change.
## Scope
- Python library (`baltra-sdk`) distributed via private PyPI or Git repository
- Exports services, models, repositories, and external clients used by Baltra microservices
- Includes shared utilities (logging, configuration, validations, unit of work)
- Defines explicit contracts between domain and infrastructure layers to preserve backward compatibility
## Design Principles
- **Domain independent of infrastructure**: services consume ports (interfaces) without knowing concrete implementations
- **No side effects on import**: initializations (sessions, clients) execute under factories or context managers
- **Idempotency and backward compatibility**: schema changes must be exposed without breaking existing versions
- **Semantic versioning**: any breaking change must increment the major version and trigger coordinated migrations
- **Built-in observability**: SDK exposes hooks for metrics, logging, and tracing without coupling to a particular provider
## Current Architecture
```
baltra_sdk/
├── backend/
│ └── db/
│ ├── models.py # Legacy database models
│ └── screening_models.py # Screening domain models (26 models)
├── lambdas/
│ ├── db/
│ │ └── sql_utils.py # SQL utilities for Lambda functions
│ ├── services/
│ │ ├── openai_utils.py # OpenAI integration utilities
│ │ ├── whatsapp_messages.py # WhatsApp message handling
│ │ └── whatsapp_utils.py # WhatsApp utilities
│ └── utils/
│ └── candidate_data_fetcher.py # Candidate data fetching logic
├── shared/
│ ├── elevenlabs/
│ │ └── elevenlabs_prompt.py # ElevenLabs integration
│ ├── email_templates/
│ │ └── email_templates.py # Email template utilities
│   └── funnel_states/
│       └── funnel_states.py # Funnel state definitions
└── __init__.py # Public entry points
```
## Database Models
The SDK provides SQLAlchemy models for the screening domain through `baltra_sdk.backend.db.screening_models`. The main models include:
### Core Models
- **Users**: User authentication and profile data
- **CompanyGroups**: Company group configurations
- **BusinessUnits**: Business unit definitions
- **Roles**: Role definitions with eligibility criteria
- **Locations**: Location data for jobs and interviews
### Hiring Models
- **HiringObjectives**: Hiring goals with role association and target amounts
- **Candidates**: Candidate profiles and tracking
- **CandidateFunnelLog**: Funnel state transitions
- **CandidateReferences**: Candidate reference tracking
- **ReferenceMessages**: Reference message logs
### Screening Models
- **QuestionSets**: Question set definitions
- **ScreeningQuestions**: Individual screening questions
- **ScreeningAnswers**: Candidate answers to screening questions
- **PhoneInterviews**: Phone interview records
- **PhoneInterviewQuestions**: Phone interview question sets
### Communication Models
- **MessageTemplates**: WhatsApp message templates
- **ScreeningMessages**: Screening conversation messages
- **WhatsappStatusUpdates**: WhatsApp message status tracking
- **EmailLogs**: Email communication logs
- **OnboardingResponses**: Onboarding form responses
### Additional Models
- **ProductUsage**: Product usage tracking
- **DashboardConfigurations**: Dashboard configuration settings
- **CandidateMedia**: Candidate media files
- **ResponseTiming**: Response time analytics
- **EligibilityEvaluationLog**: Eligibility evaluation tracking
- **AdTemplate**: Advertisement template definitions
### Database Utilities
The SDK provides `DBShim` for database session management outside Flask contexts:
```python
from baltra_sdk.backend.db.screening_models import DBShim, build_db_url_from_settings
from config.settings import settings
db_shim = DBShim.from_settings(settings)
session = db_shim.session
```
## Dependencies
### Core Dependencies
- SQLAlchemy >=2.0,<3.0
- Flask-SQLAlchemy >=3.0,<4.0
- psycopg2-binary >=2.9,<3.0
- python-dotenv >=0.21,<1.0
- boto3 >=1.26
- requests >=2.28,<3.0
- PyJWT >=2.0,<3.0
- Flask >=2.2,<3.0
### Optional Dependencies
#### Web Extras
```bash
pip install baltra-sdk[web]
```
Includes: Jinja2, gunicorn
#### Auth Extras
```bash
pip install baltra-sdk[auth]
```
Includes: authlib, bcrypt
#### MSSQL Extras
```bash
pip install baltra-sdk[mssql]
```
Includes: pyodbc
#### Scheduler Extras
```bash
pip install baltra-sdk[scheduler]
```
Includes: APScheduler
#### Reporting Extras
```bash
pip install baltra-sdk[reporting]
```
Includes: pandas, numpy, matplotlib, Pillow, playwright
#### AI Extras
```bash
pip install baltra-sdk[ai]
```
Includes: openai, aiohttp
#### All Extras
```bash
pip install baltra-sdk[all]
```
Includes all optional dependencies
## Installation
### Production
```bash
pip install --no-cache-dir --upgrade "baltra-sdk==1.0.8" \
--extra-index-url "${PIP_EXTRA_INDEX_URL}"
```
### Development (Editable Mode)
```bash
pip install -e ./baltra-sdk
```
### Docker Development
Mount the SDK as a volume for hot-reload in `entrypoint.sh`:
```bash
set -euo pipefail
if [ -d "/sdk" ]; then
pip install -e /sdk
else
pip install --no-cache-dir --upgrade "baltra-sdk==${SDK_VERSION:-1.0.*}" \
--extra-index-url "${PIP_EXTRA_INDEX_URL}"
fi
exec "$@"
```
## Usage Examples
### Database Session Management
```python
from baltra_sdk.backend.db.screening_models import DBShim
from config.settings import settings
db_shim = DBShim.from_settings(settings)
session = db_shim.session
try:
from baltra_sdk.backend.db.screening_models import Candidates
candidates = session.query(Candidates).filter_by(business_unit_id=123).all()
finally:
db_shim.remove_session()
```
### Lambda Function Usage
```python
from baltra_sdk.lambdas.services.whatsapp_messages import process_message
from baltra_sdk.lambdas.utils.candidate_data_fetcher import CandidateDataFetcher
def lambda_handler(event, context):
fetcher = CandidateDataFetcher()
candidate_data = fetcher.fetch(phone_number="+1234567890")
return process_message(event, candidate_data)
```
## Versioning Strategy
- **MAJOR**: Breaking changes in contracts (database models, service interfaces)
- **MINOR**: New features that maintain backward compatibility
- **PATCH**: Bug fixes without API changes
## Release Process
1. Merge to `main` triggers packaging pipeline (`python -m build`)
2. Publication to private PyPI / Git release with changelog
3. Semantic tag (`v1.0.8`) and signed `.whl` artifact
4. Downstream pipelines update images that depend on the SDK
## Migration Notes
When upgrading between versions:
1. Check the changelog for database schema changes
2. Run database migrations if required
3. Update imports if any module paths changed
4. Test integration points before production deployment
### Breaking Changes Policy
- Major version increments indicate breaking changes
- Breaking changes are documented in the changelog
- Migration guides are provided for major version upgrades
- Deprecated features are marked and removed in the next major version
## Quality and Observability
- Unit tests per module (domain isolated with stubs, infra with database fixtures in Docker)
- Contracts validated with type hints and runtime checks
- Integration tests with in-memory SQLite for database operations
- Logging structured through Python's logging module
## Development Guidelines
- Use `DBShim` for database sessions outside Flask contexts
- Do not read environment variables at import time; use cached `get_settings()` functions
- Models use Flask-SQLAlchemy with explicit foreign key relationships
- Repositories should expose idempotent and transactional methods
- Use adapters per service for complex scenarios
## Support
For issues, questions, or contributions, please contact the SDK maintainers or open an issue in the repository.
## License
Proprietary - Baltra Internal Use Only
| text/markdown | null | Baltra <soporte@baltra.ai> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"SQLAlchemy<3.0,>=2.0",
"python-dotenv<1.0,>=0.21",
"psycopg2-binary<3.0,>=2.9",
"requests<3.0,>=2.28",
"boto3>=1.26",
"mixpanel>=4.10",
"typing-extensions>=4.0",
"pytz>=2022.0",
"PyJWT<3.0,>=2.0",
"click<9.0,>=8.0",
"Flask<3.0,>=2.2",
"Flask-Cors>=3.0",
"Flask-Session>=0.4",
"Flask-SQLAlc... | [] | [] | [] | [
"Repository, https://github.com/Baltra-ai/baltra-sdk"
] | twine/6.2.0 CPython/3.11.13 | 2026-02-18T17:54:43.400749 | baltra_sdk-1.0.52.tar.gz | 45,204 | 9c/bc/723298e72b3a9c982ead86af0d3d21d0ff4a5dcb164d448aeead10d0931a/baltra_sdk-1.0.52.tar.gz | source | sdist | null | false | 2520c36d5a0c25bd0aca07a39a7fdd22 | bf1f141d4603b95cb5b2ffcb427902db74e2164149630785bc4be2f4bdfac9cb | 9cbc723298e72b3a9c982ead86af0d3d21d0ff4a5dcb164d448aeead10d0931a | LicenseRef-Proprietary | [] | 542 |
2.4 | amsdal_ml | 1.0.1 | amsdal_ml plugin for AMSDAL Framework | # AMSDAL ML
[](https://github.com/amsdal/amsdal_ml/actions/workflows/ci.yml)
[](https://www.python.org/downloads/)
Machine learning plugin for the AMSDAL Framework, providing embeddings, vector search, semantic retrieval, and AI agents with support for OpenAI models.
## Features
- **Vector Embeddings**: Generate and store embeddings for any AMSDAL model with automatic chunking
- **Semantic Search**: Query your data using natural language with tag-based filtering
- **AI Agents**: Build Q&A systems with streaming support and citation tracking
- **Async-First**: Optimized for high-performance async operations
- **MCP Integration**: Expose and consume tools via Model Context Protocol (stdio/HTTP)
- **File Attachments**: Process and embed documents with built-in loaders
- **Extensible**: Abstract base classes for custom models, retrievers, and ingesters
## Installation
```bash
pip install amsdal-ml
```
### Requirements
- Python 3.11 or higher
- AMSDAL Framework 0.5.6+
- OpenAI API key (for default implementations)
## Quick Start
### 1. Configuration
Create a `.env` file in your project root:
```env
OPENAI_API_KEY=sk-your-api-key-here
async_mode=true
ml_model_class=amsdal_ml.ml_models.openai_model.OpenAIModel
ml_retriever_class=amsdal_ml.ml_retrievers.openai_retriever.OpenAIRetriever
ml_ingesting_class=amsdal_ml.ml_ingesting.openai_ingesting.OpenAIIngesting
```
Create a `config.yml` for AMSDAL connections:
```yaml
application_name: my-ml-app
async_mode: true
connections:
- name: sqlite_state
backend: sqlite-state-async
credentials:
- db_path: ./warehouse/state.sqlite3
- check_same_thread: false
- name: lock
backend: amsdal_data.lock.implementations.thread_lock.ThreadLock
resources_config:
repository:
default: sqlite_state
lock: lock
```
### 2. Generate Embeddings
```python
from amsdal_ml.ml_ingesting.openai_ingesting import OpenAIIngesting
from amsdal_ml.ml_config import ml_config
# Initialize ingesting
ingester = OpenAIIngesting(
model=MyModel,
embedding_field='embedding',
)
# Generate embeddings for an instance
instance = MyModel(content='Your text here')
embeddings = await ingester.agenerate_embeddings(instance)
await ingester.asave(embeddings, instance)
```
### 3. Semantic Search
```python
from amsdal_ml.ml_retrievers.openai_retriever import OpenAIRetriever
retriever = OpenAIRetriever()
# Search for relevant content
results = await retriever.asimilarity_search(
query='What is machine learning?',
k=5,
include_tags=['documentation']
)
for chunk in results:
print(f'{chunk.object_class}:{chunk.object_id} - {chunk.raw_text}')
```
### 4. Build an AI Agent
```python
from amsdal_ml.agents.default_qa_agent import DefaultQAAgent
agent = DefaultQAAgent()
# Ask questions
output = await agent.arun('Explain vector embeddings')
print(output.answer)
print(f'Used tools: {output.used_tools}')
# Stream responses
async for chunk in agent.astream('What is semantic search?'):
print(chunk, end='', flush=True)
```
### 5. Functional Calling Agent with Python Tools
```python
from amsdal_ml.agents.functional_calling_agent import FunctionalCallingAgent
from amsdal_ml.agents.python_tool import PythonTool
from amsdal_ml.ml_models.openai_model import OpenAIModel
llm = OpenAIModel()
agent = FunctionalCallingAgent(model=llm, tools=[search_tool, render_tool])
result = await agent.arun(user_query="Find products with price > 100", history=[])
```
### 6. Natural Language Query Retriever
```python
from amsdal_ml.ml_retrievers.query_retriever import NLQueryRetriever
retriever = NLQueryRetriever(llm=llm, queryset=Product.objects.all())
documents = await retriever.invoke("Show me red products", limit=10)
```
### 7. Document Ingestion Pipeline
```python
from amsdal_ml.ml_ingesting import ModelIngester
from amsdal_ml.ml_ingesting.pipeline import DefaultIngestionPipeline
from amsdal_ml.ml_ingesting.loaders.pdf_loader import PdfLoader
from amsdal_ml.ml_ingesting.processors.text_cleaner import TextCleaner
from amsdal_ml.ml_ingesting.splitters.token_splitter import TokenSplitter
from amsdal_ml.ml_ingesting.embedders.openai_embedder import OpenAIEmbedder
from amsdal_ml.ml_ingesting.stores.embedding_data import EmbeddingDataStore
pipeline = DefaultIngestionPipeline(
loader=PdfLoader(), # Uses pymupdf for PDF processing
cleaner=TextCleaner(),
splitter=TokenSplitter(max_tokens=800, overlap_tokens=80),
embedder=OpenAIEmbedder(),
store=EmbeddingDataStore(),
)
ingester = ModelIngester(
pipeline=pipeline,
base_tags=["document"],
base_metadata={"source": "pdf"},
)
```
## Architecture
### Core Components
- **`MLModel`**: Abstract interface for LLM inference (invoke, stream, with attachments)
- **`MLIngesting`**: Generate text and embeddings from data objects with chunking
- **`MLRetriever`**: Semantic similarity search with tag-based filtering
- **`Agent`**: Q&A and task-oriented agents with streaming and citations
- **`EmbeddingModel`**: Database model storing 1536-dimensional vectors linked to source objects
- **`PythonTool`**: Tool for executing Python functions within agents
- **`FunctionalCallingAgent`**: Agent specialized in functional calling with configurable tools
- **`NLQueryRetriever`**: Retriever for natural language queries on AMSDAL querysets
- **`DefaultIngestionPipeline`**: Pipeline for document ingestion including loader, cleaner, splitter, embedder, and store
- **`ModelIngester`**: High-level ingester for processing models with customizable pipelines and metadata
- **`PdfLoader`**: Document loader using pymupdf for PDF processing
- **`TextCleaner`**: Processor for cleaning and normalizing text
- **`TokenSplitter`**: Splitter for dividing text into chunks based on token count
- **`OpenAIEmbedder`**: Embedder for generating embeddings via OpenAI API
- **`EmbeddingDataStore`**: Store for saving embedding data linked to source objects
- **MCP Server/Client**: Expose retrievers as tools or consume external MCP services
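The overlapping-chunk idea behind a splitter such as `TokenSplitter(max_tokens=800, overlap_tokens=80)` can be sketched in a few lines (integer "tokens" stand in for a real tokenizer; this is an illustration, not the actual implementation):

```python
def split_tokens(tokens, max_tokens, overlap_tokens):
    """Split a token sequence into chunks of at most max_tokens,
    where each chunk shares overlap_tokens tokens with its predecessor."""
    step = max_tokens - overlap_tokens
    return [tokens[i:i + max_tokens] for i in range(0, len(tokens), step)]

chunks = split_tokens(list(range(10)), max_tokens=4, overlap_tokens=1)
print(chunks)  # [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9], [9]]
```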
### Configuration
All settings are managed via `MLConfig` in `.env`:
```env
# Model Configuration
llm_model_name=gpt-4o
llm_temperature=0.0
embed_model_name=text-embedding-3-small
# Chunking Parameters
embed_max_depth=2
embed_max_chunks=10
embed_max_tokens_per_chunk=800
# Retrieval Settings
retriever_default_k=8
```
## Development
### Setup
```bash
# Install dependencies
pip install --upgrade uv hatch==1.14.2
hatch env create
hatch run sync
```
### Testing
```bash
# Run all tests with coverage
hatch run cov
# Run specific tests
hatch run test tests/test_openai_model.py
# Watch mode
pytest tests/ -v
```
### Code Quality
```bash
# Run all checks (style + typing)
hatch run all
# Format code
hatch run fmt
# Type checking
hatch run typing
```
### AMSDAL CLI
```bash
# Generate a new model
amsdal generate model MyModel --format py
# Generate property
amsdal generate property --model MyModel embedding_field
# Generate transaction
amsdal generate transaction ProcessEmbeddings
# Generate hook
amsdal generate hook --model MyModel on_create
```
## MCP Server
Run the retriever as an MCP server for integration with Claude Desktop or other MCP clients:
```bash
python -m amsdal_ml.mcp_server.server_retriever_stdio \
--amsdal-config "$(echo '{"async_mode": true, ...}' | base64)"
```
The server exposes a `search` tool for semantic search in your knowledge base.
## Release Workflow
1. Develop on a feature branch, create PR to `main` — CI runs lint + tests
2. When ready to release, create a `release/X.Y.Z` branch, bump version in `amsdal_ml/__about__.py`, update `CHANGELOG.md`
3. Merge `release/*` to `main` — CD workflow automatically creates tag, builds, publishes to PyPI, and creates GitHub Release with changelog
See [RELEASE.md](RELEASE.md) for the full step-by-step guide.
## License
See [LICENSE.txt](LICENSE.txt) for the AMSDAL End User License Agreement.
For third-party dependency licenses, see `amsdal_ml/Third-Party Materials - AMSDAL Dependencies - License Notices.md`.
## Links
- [AMSDAL Framework](https://github.com/amsdal/amsdal)
- [Documentation](https://docs.amsdal.com)
- [Issue Tracker](https://github.com/amsdal/amsdal_ml/issues) | text/markdown | null | null | null | null | AMSDAL End User License Agreement
Version: 1.0
Last Updated: February 6, 2026
PREAMBLE
This Agreement is a legally binding agreement between you and AMSDAL regarding the Library. Read this Agreement carefully before accepting it, or downloading or using the Library.
By downloading, installing, running, executing, or otherwise using the Library, by paying the License Fees, or by explicitly accepting this Agreement, whichever is earlier, you agree to be bound by this Agreement without modifications or reservations.
If you do not agree to be bound by this Agreement, you shall not download, install, run, execute, accept, use or permit others to download, install, run, execute, accept, or otherwise use the Library.
If you are acting for or on behalf of an entity, then you accept this Agreement on behalf of such entity and you hereby represent that you are authorized to accept this Agreement and enter into a binding agreement with us on such entity’s behalf.
1. INTERPRETATION
1.1. The following definitions shall apply, unless otherwise expressly stated in this Agreement:
“Additional Agreement” means a written agreement executed between you and us that supplements and/or modifies this Agreement by specifically referring hereto.
“Agreement” means this AMSDAL End User License Agreement as may be updated or supplemented from time to time.
“AMSDAL”, “we”, “us” means AMSDAL INC., a Delaware corporation having its principal place of business in the State of New York.
“Communications” means all and any notices, requests, demands and other communications required or may be given under the terms of this Agreement or in connection herewith.
“Consumer” means, unless otherwise defined under the applicable legislation, a person who purchases or uses goods or services for personal, family, or household purposes.
“Documentation” means the technical, user, or other documentation, as may be updated from time to time, such as manuals, guidelines, which is related to the Library and provided or distributed by us or on our behalf, if any.
“Free License Plan” means the License Plan that is provided free of charge, with no License Fee due.
“Library” means the AMSDAL ML plugin and its components, as may be updated from time to time, including the amsdal_ml package and its dependencies.
“License Fee” means the consideration to be paid by you to us for the License as outlined herein.
“License Plan” means a predetermined set of functionality, restrictions, or services applicable to the Library.
“License” has the meaning outlined in Clause 2.1.
“Parties” means AMSDAL and you.
“Party” means either AMSDAL or you.
“Product Page” means our website page related to the Library, if any.
“Third-Party Materials” means the code, software or other content that is distributed by third parties under free or open-source software licenses (such as MIT, Apache 2.0, BSD) that allow for editing, modifying, or reusing such content.
“Update” means an update, patch, fix, support release, modification, or limited functional enhancement to the Library, including but not limited to error corrections to the Library, which does not, in our opinion, constitute an upgrade or a new/separate product.
“U.S. Export Laws” means the United States Export Administration Act and any other export law, restriction, or regulation.
“Works” means separate works, such as software, that are developed using the Library. The Works should not merely be a fork, alternative, copy, or derivative work of the Library or its part.
“You” means either you as a single individual or a single entity you represent.
1.2. Unless the context otherwise requires, a reference to one gender shall include a reference to the other genders; words in the singular shall include the plural and in the plural shall include the singular; any words following the terms including, include, in particular, for example, or any similar expression shall be construed as illustrative and shall not limit the sense of the words, description, definition, phrase or term preceding those terms; except where a contrary intention appears, a reference to a Section or Clause is a reference to a Section or Clause of this Agreement; Section and Clause headings do not affect the interpretation of this Agreement.
1.3. Each provision of this Agreement shall be construed as though both Parties participated equally in the drafting of same, and any rule of construction that a document shall be construed against the drafting Party, including without limitation, the doctrine commonly known as “contra proferentem”, shall not apply to the interpretation of this Agreement.
2. LICENSE, RESTRICTIONS
2.1. License Grant. Subject to the terms and conditions contained in this Agreement, AMSDAL hereby grants to you a non-exclusive, non-transferable, revocable, limited, worldwide, and non-sublicensable license (the “License”) to install, run, and use the Library, as well as to modify and customize the Library to implement it in the Works.
2.2. Restrictions. As per the License, you shall not, except as expressly permitted herein, (i) sell, resell, transfer, assign, pledge, rent, rent out, lease, assign, distribute, copy, or encumber the Library or the rights in the Library, (ii) use the Library other than as expressly authorized in this Agreement, (iii) remove any copyright notice, trademark notice, and/or other proprietary legend or indication of confidentiality set forth on or contained in the Library, if any, (iv) use the Library in any manner that violates the laws of the United States of America or any other applicable law, (v) circumvent any feature, key, or other licensing control mechanism related to the Library that ensures compliance with this Agreement, (vi) reverse engineer, decompile, disassemble, decrypt or otherwise seek to obtain the source code to the Library, (vii) with respect to the Free License Plan, use the Library to provide a service to a third party, and (viii) permit others to do anything from the above.
2.3. Confidentiality. The Library, including any of its elements and components, shall at all times be treated by you as confidential and proprietary. You shall not disclose, transfer, or otherwise share the Library to any third party without our prior written consent. You shall also take all reasonable precautions to prevent any unauthorized disclosure and, in any event, shall use your best efforts to protect the confidentiality of the Library. This Clause does not apply to the information and part of the Library that (i) is generally known to the public at the time of disclosure, (ii) is legally received by you from a third party which rightfully possesses such information, (iii) becomes generally known to the public subsequent to the time of such disclosure, but not as a result of unauthorized disclosure hereunder, (iv) is already in your possession prior to obtaining the Library, or (v) is independently developed by you or on your behalf without use of or reference to the Library.
2.4. Third-Party Materials. By entering into this Agreement, you acknowledge and confirm that the Library includes the Third-Party Materials. The information regarding the Third-Party Materials will be provided to you along with the Library. If and where necessary, you shall comply with the terms and conditions applicable to the Third-Party Materials.
2.5. Title. The Library is protected by law, including without limitation the copyright laws of the United States of America and other countries, and by international treaties. AMSDAL or its licensors reserve all rights not expressly granted to you in this Agreement. You agree that AMSDAL and/or its licensors own all right, title, interest, and intellectual property rights associated with the Library, including related applications, plugins or extensions, and you will not contest such ownership.
2.6. No Sale. The Library provided hereunder is licensed, not sold. Therefore, the Library is exempt from the “first sale” doctrine, as defined in the United States copyright laws or any other applicable law. For purposes of clarification only, you accept, acknowledge and agree that this is a license agreement and not an agreement for sale, and you shall have no ownership rights in any intellectual or tangible property of AMSDAL or its licensors.
2.7. Works. We do not obtain any rights, title or interest in and to the Works. Once and if the Library components lawfully become a part of the Works, you are free to choose the terms governing the Works. If the License is terminated you shall not use the Library within the Works.
2.8. Statistics. You hereby acknowledge and agree that we reserve the right to track and analyze the Library usage statistics and metrics.
3. LICENSE PLANS
3.1. Plans. The Library, as well as its functionality and associated services, may be subject to certain restrictions and limitations depending on the License Plan. The License Plan’s description, including any terms, such as term, License Fees, features, etc., are or will be provided by us including via the Product Page.
3.2. Plan Change. The Free License Plan is your default License Plan. You may change your License Plan by following our instructions that may be provided on the Product Page or otherwise. Downgrades are available only after the end of the respective prepaid License Plan.
3.3. Validity. You may have only one valid License Plan at a time. The License Plan is valid when it is fully prepaid by you (except for the Free License Plan which is valid only if and as long as we grant the License to you) and this Agreement is not terminated in accordance with the terms hereof.
3.4. Terms Updates. The License Plan’s terms may be updated by us at our sole discretion with or without prior notice to you. The License Plan updates that worsen terms and conditions of your valid License Plan will only be effective for the immediately following License Plan period, if any.
3.5. Free License Plan. We may from time to time at our discretion with or without notice and without liability to you introduce, update, suspend, or terminate the Free License Plan. The Free License Plan allows you to determine if the Library suits your particular needs. The Library provided under the Free License Plan is not designed to and shall not be used in trade, commercial activities, or your normal course of business.
4. PAYMENTS
4.1. License Fees. In consideration for the License provided hereunder, you shall, except for the Free License Plan, pay the License Fee in accordance with the terms of the chosen License Plan or Additional Agreement, if any.
4.2. Updates. We reserve the right at our sole discretion to change any License Fees, as well as to introduce or change any new payments at any time. The changes will not affect the prepaid License Plans; however they will apply starting from the immediately following License Plan period.
4.3. Payment Terms. Unless otherwise agreed in the Additional Agreement, the License Fees are paid fully in advance.
4.4. Precondition. Except for the Free License Plan, payment of the License Fee shall be the precondition for the License. Therefore, if you fail to pay the License Fee in full in accordance with the terms hereof, this Agreement, as well as the License, shall immediately terminate.
4.5. Currency and Fees. Unless expressly provided, prices are quoted in U.S. dollars. All currency conversion fees shall be paid by you. Each Party shall cover its own commissions and fees applicable to the transactions contemplated hereunder.
4.6. Refunds. There shall be no partial or total refunds of the License Fees that were already paid to us, including without limitation if you failed to download or use the Library.
4.7. Taxes. Unless expressly provided, all amounts are exclusive of taxes, including value added tax, sales tax, goods and services tax or other similar tax, each of which, where chargeable by us, shall be payable by you at the rate and in the manner prescribed by law. All other taxes, duties, customs, or similar charges shall be your responsibility.
5. UPDATES, AVAILABILITY, SUPPORT
5.1. Updates. Except for the Free License Plan, you are eligible to receive all relevant Updates during the valid License Plan at no additional charge. The Library may be updated at our sole discretion with or without notice to you. However, we shall not be obligated to make any Updates.
5.2. Availability. We do not guarantee that any particular feature or functionality of the Library will be available at any time.
5.3. Support. Unless otherwise decided by us at our sole discretion, we do not provide any support services. There is no representation or warranty that any functionality or Library as such will be supported by us.
5.4. Termination. We reserve the right at our sole discretion to discontinue the Library distribution and support at any time by providing prior notice to you. However, we will continue to maintain the Library until the end of the then-current License Plan.
6. TERM, TERMINATION
6.1. Term. Unless terminated earlier on the terms outlined herein, this Agreement shall be in force as long as you have a valid License Plan. Once your License Plan expires, this Agreement shall automatically expire.
6.2. Termination Without Cause. You may terminate this Agreement for convenience at any time.
6.3. Termination For Breach. If you are in breach of this Agreement and fail to cure such breach promptly, and in any event no later than ten (10) days following our notice, we may immediately terminate this Agreement.
6.4. Termination For Material Breach. If you are in material breach of this Agreement, we may immediately terminate this Agreement upon written notice to you.
6.5. Termination of Free License Plan. If you are using the Library under the Free License Plan, this Agreement may be terminated by us at any time with or without notice and without any liability to you.
6.6. Effect of Termination. Once this Agreement is terminated or expired, (i) the License shall terminate or expire, (ii) you shall immediately cease using the Library, (iii) you shall permanently erase the Library and its copies that are in your possession or control, (iv) if technically possible, we will discontinue the Library operation, (v) all our obligations under this Agreement shall cease, and (vi) the License Fees or any other amounts that were paid to us hereunder, if any, shall not be reimbursed.
6.7. Survival. Clauses and Sections 2.2-2.5, 4.6, 4.7, 6.6, 6.7, 7.7, 8, 9.2, 10-12 shall survive any termination or expiration of this Agreement regardless of the reason.
7. REPRESENTATIONS, WARRANTIES
7.1. Mutual Representation. Each Party represents that it has the legal power and authority to enter into this Agreement. If you act on behalf of an entity, you hereby represent that you are authorized to accept this Agreement and enter into a binding agreement with us on such entity’s behalf.
7.2. Not a Consumer. You represent that you are not entering into this Agreement as a Consumer and that you do not intend to use the Library as a Consumer. The Library is not intended to be used by Consumers, therefore you shall not enter into this Agreement, and download and use the Library if you act as a Consumer.
7.3. Sanctions and Restrictions. You represent that you are not (i) a citizen or resident of, or person subject to jurisdiction of, Iran, Syria, Venezuela, Cuba, North Korea, or Russia, or (ii) a person subject to any sanctions administered or enforced by the United States Office of Foreign Assets Control or United Nations Security Council.
7.4. IP Warranty. Except for the Free License Plan, we warrant that, to our knowledge, the Library does not violate or infringe any third-party intellectual property rights, including copyright, rights in patents, trade secrets, and/or trademarks, and that to our knowledge no legal action has been taken in relation to the Library for any infringement or violation of any third party intellectual property rights.
7.5. No Harmful Code Warranty. Except for the Free License Plan, we warrant that we will use commercially reasonable efforts to protect the Library from, and the Library shall not knowingly include, malware, viruses, trap doors, back doors, or other means or functions which will detrimentally interfere with or otherwise adversely affect your use of the Library or which will damage or destroy your data or other property. You represent that you will use commercially reasonable efforts and industry standard tools to prevent the introduction of, and you will not knowingly introduce, viruses, malicious code, malware, trap doors, back doors or other means or functions by accessing the Library, the introduction of which may detrimentally interfere with or otherwise adversely affect the Library or which will damage or destroy data or other property.
7.6. Documentation Compliance Warranty. Except for the Free License Plan, we warrant to you that as long as you maintain a valid License Plan the Library shall perform substantially in accordance with the Documentation. Your exclusive remedy, and our sole liability, with respect to any breach of this warranty, will be for us to use commercially reasonable efforts to promptly correct the non-compliance (provided that you promptly notify us in writing and allow us a reasonable cure period).
7.7. Disclaimer of Warranties. Except for the warranties expressly stated above in this Section, the Library is provided “as is”, with all faults and deficiencies. We disclaim all warranties, express or implied, including, but not limited to, warranties of merchantability, fitness for a particular purpose, title, availability, error-free or uninterrupted operation, and any warranties arising from course of dealing, course of performance, or usage of trade. To the extent that we may not, as a matter of applicable law, disclaim any implied warranty, the scope and duration of such warranty will be the minimum permitted under applicable law.
8. LIABILITY
8.1. Limitation of Liability. To the maximum extent permitted by applicable law, in no event shall AMSDAL be liable under any theory of liability for any indirect, incidental, special, or consequential damages of any kind (including, without limitation, any such damages arising from breach of contract or warranty or from negligence or strict liability), including, without limitation, loss of profits, revenue, data, or use, or for interrupted communications or damaged data, even if AMSDAL has been advised or should have known of the possibility of such damages.
8.2. Liability Cap. In any event, our aggregate liability under this Agreement, whether in contract, negligence, strict liability, or any other theory, at law or in equity, will be limited to the total License Fees paid by you under this Agreement for the License Plan valid at the time when the relevant event happened.
8.3. Force Majeure. Neither Party shall be held liable for non-performance or undue performance of this Agreement caused by force majeure. Force majeure means an event or set of events, which is unforeseeable, unavoidable, and beyond control of the respective Party, for instance fire, flood, hostilities, declared or undeclared war, military actions, revolutions, act of God, explosion, strike, embargo, introduction of sanctions, act of government, act of terrorism.
8.4. Exceptions. Nothing contained herein limits our liability to you in the event of death, personal injury, gross negligence, willful misconduct, or fraud.
8.5. Remedies. In addition to, and not in lieu of the termination provisions set forth in Section 6 above, you agree that, in the event of a threatened or actual breach of a provision of this Agreement by you, (i) monetary damages alone will be an inadequate remedy, (ii) such breach will cause AMSDAL great, immediate, and irreparable injury and damage, and (iii) AMSDAL shall be entitled to seek and obtain, from any court of competent jurisdiction (without the requirement of the posting of a bond, if applicable), immediate injunctive and other equitable relief in addition to, and not in lieu of, any other rights or remedies that AMSDAL may have under applicable laws.
9. INDEMNITY
9.1. Our Indemnity. Except for Free License Plan users, we will defend, indemnify, and hold you harmless from any claim, suit, or action brought against you based on our alleged violation of the IP Warranty provided in Clause 7.4 above, provided you (i) notify us in writing promptly upon notice of such claim and (ii) cooperate fully in the defense of such claim, suit, or action. We shall, at our own expense, defend such a claim, suit, or action, and you shall have the right to participate in the defense at your own expense. Free License Plan users use the Library at their own risk and expense, and we have no indemnification obligations to them.
9.2. Your Indemnity. You will defend, indemnify, and hold us harmless from any claim, suit, or action brought against us based on your alleged violation of this Agreement, provided we notify you in writing promptly upon notice of such claim, suit, or action. You shall, at your own expense, defend such a claim, suit, or action.
10. GOVERNING LAW, DISPUTE RESOLUTION
10.1. Law. This Agreement shall be governed by the laws of the State of New York, USA, without reference to conflicts of laws principles. Provisions of the United Nations Convention on the International Sale of Goods shall not apply to this Agreement.
10.2. Negotiations. The Parties shall seek to solve amicably any disputes, controversies, claims, or demands arising out of or relating to this Agreement, as well as those related to execution, breach, termination, or invalidity hereof. If the Parties do not reach an amicable resolution within thirty (30) days, any dispute, controversy, claim or demand shall be finally settled by the competent court as outlined below.
10.3. Jurisdiction. The Parties agree that the exclusive jurisdiction and venue for any dispute arising out of or related to this Agreement shall be the courts of the State of New York and the courts of the United States of America sitting in the County of New York.
10.4. Class Actions Waiver. The Parties agree that any dispute arising out of or related to this Agreement shall be pursued individually. Neither Party shall act as a plaintiff or class member in any supposed purported class or representative proceeding, including, but not limited to, a federal or state class action lawsuit, against the other Party in relation herewith.
10.5. Costs. In the event of any legal proceeding between the Parties arising out of or related to this Agreement, the prevailing Party shall be entitled to recover, in addition to any other relief awarded or granted, its reasonable costs and expenses (including attorneys’ and expert witness’ fees) incurred in such proceeding.
11. COMMUNICATION
11.1. Communication Terms. Any Communications shall be in writing. When sent physically, Communication shall be delivered in person or by certified or registered mail and shall be deemed delivered upon receipt by the recipient. When sent by electronic mail (email), Communication shall be deemed delivered on the day following the day of transmission. Any Communication given by email in accordance with the terms hereof shall be of full legal force and effect.
11.2. Contact Details. Your contact details must be provided by you to us. AMSDAL contact details are as follows: PO Box 940, Bedford, NY 10506; ams@amsdal.com. Either Party shall keep its contact details correct and up to date. Either Party may update its contact details by providing a prior written notice to the other Party in accordance with the terms hereof.
12. MISCELLANEOUS
12.1. Export Restrictions. The Library originates from the United States of America and may be subject to the United States export administration regulations. You agree that you will not (i) transfer or export the Library into any country or (ii) use the Library in any manner prohibited by the U.S. Export Laws. You shall comply with the U.S. Export Laws, as well as all applicable international and national laws related to the export or import regulations that apply in relation to your use of the Library.
12.2. Entire Agreement. This Agreement shall constitute the entire agreement between the Parties, supersede and extinguish all previous agreements, promises, assurances, warranties, representations and understandings between them, whether written or oral, relating to its subject matter.
12.3. Additional Agreements. AMSDAL and you are free to enter into any Additional Agreements. In the event of conflict, unless otherwise explicitly stated, the Additional Agreement shall control.
12.4. Modifications. We may modify, supplement or update this Agreement from time to time at our sole and absolute discretion. If we make changes to this Agreement, we will (i) update the “Version” and “Last Updated” date at the top of this Agreement and (ii) notify you in advance before the changes become effective. Your continued use of the Library is deemed acceptance of the amended Agreement. If you do not agree to any part of the amended Agreement, you shall immediately discontinue any use of the Library, which shall be your sole remedy.
12.5. Assignment. You shall not assign or transfer any rights or obligations under this Agreement without our prior written consent. We may upon prior written notice unilaterally transfer or assign this Agreement, including any rights and obligations hereunder at any time and no such transfer or assignment shall require your additional consent or approval.
12.6. Severance. If any provision or part-provision of this Agreement is or becomes invalid, illegal or unenforceable, it shall be deemed modified to the minimum extent necessary to make it valid, legal, and enforceable. If such modification is not possible, the relevant provision or part-provision shall be deemed deleted. If any provision or part-provision of this Agreement is deemed deleted under the previous sentence, AMSDAL will in good faith replace such provision with a new one that, to the greatest extent possible, achieves the intended commercial result of the original provision. Any modification to or deletion of a provision or part-provision under this Clause shall not affect the validity and enforceability of the rest of this Agreement.
12.7. Waiver. No failure or delay by a Party to exercise any right or remedy provided under this Agreement or by law shall constitute a waiver of that or any other right or remedy, nor shall it preclude or restrict the further exercise of that or any other right or remedy.
12.8. No Partnership or Agency. Nothing in this Agreement is intended to, or shall be deemed to, establish any partnership, joint venture or employment relations between the Parties, constitute a Party the agent of another Party, or authorize a Party to make or enter into any commitments for or on behalf of any other Party. | null | [
"License :: Other/Proprietary License"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.13.3",
"amsdal-cli>=0.7.0",
"amsdal-server>=0.7.0",
"amsdal>=0.7.0",
"mcp>=1.26.0",
"openai>=2.20.0",
"pydantic-settings~=2.12",
"pydantic~=2.12",
"pymupdf>=1.24.10",
"python-magic>=0.4.27",
"tenacity>=8.0.0"
] | [] | [] | [] | [] | python-httpx/0.28.1 | 2026-02-18T17:54:32.339879 | amsdal_ml-1.0.1.tar.gz | 1,073,268 | 85/ff/5b2e8a5e13ded3f59e0cbd0229e2b2ee7acea4c3ea424c94d6f4561d7f59/amsdal_ml-1.0.1.tar.gz | source | sdist | null | false | 32cc6ecbd088bf62e46e97cfcc68ca3b | 5e911cf7de6251df71b80b2346baf8e7fad67eac2c45cd4cb2b0a0bd5a042506 | 85ff5b2e8a5e13ded3f59e0cbd0229e2b2ee7acea4c3ea424c94d6f4561d7f59 | null | [
"LICENSE.txt"
] | 0 |
2.4 | fixos | 2.1.9 | AI-powered Linux/Windows diagnostics and repair – audio, hardware, system issues | ```
___ _ ___ ____
/ _(_)_ __ / _ \/ ___|
| |_| \ \/ / | | | \___ \
| _| |> < | |_| |___) |
|_| |_/_/\_\ \___/|____/
AI-powered OS Diagnostics • v2.2.0
```
# fixOS v2.2 🔧🤖
[](https://www.python.org/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/wronai/fixos)
[](https://github.com/wronai/fixos#-dostępni-providerzy-llm-12)
[](https://github.com/wronai/fixos)
[](https://github.com/wronai/fixos/actions)
**AI-powered system diagnostics and repair** – Linux, Windows, macOS
with data anonymization, HITL/Autonomous modes, a problem graph, and 12 LLM providers.
> 🔗 **GitHub**: https://github.com/wronai/fixos
---
## 🌍 Cross-Platform Support
| System | Package Manager | Audio | Hardware | System |
|:--|:--|:--:|:--:|:--:|
| **Linux** (Fedora, Ubuntu, Arch, Debian) | dnf / apt / pacman | ✅ ALSA/PipeWire/SOF | ✅ DMI/sensors | ✅ systemd/journal |
| **Windows** 10/11 | winget / choco | ✅ WMI Audio | ✅ WMI Hardware | ✅ Event Log |
| **macOS** 12+ | brew | ✅ CoreAudio | ✅ system_profiler | ✅ launchd |
---
## Quick Start (3 steps)
```bash
# 1. Install
pip install -e ".[dev]"
# 2. Pick a provider and get an API key
fixos llm                  # list of 12 providers with links
# 3. Save the key and run
fixos token set AIzaSy...  # Gemini (free, default)
fixos fix
```
---
## CLI Commands
```
fixos                 – welcome screen with command list and status
fixos fix             – diagnosis + AI repair session (HITL)
fixos scan            – system diagnostics without AI
fixos orchestrate     – advanced orchestration (problem DAG)
fixos llm             – list of 12 LLM providers + API key links
fixos token set KEY   – save an API key to .env (provider auto-detection)
fixos token show      – show the current token (masked)
fixos token clear     – remove the token from .env
fixos config show     – show the configuration
fixos config init     – create .env from the template
fixos config set K V  – set a value in .env
fixos providers       – short provider list
fixos test-llm        – test the LLM connection
```
### Usage examples
```bash
# Audio diagnostics only + save the report to a file
fixos scan --audio --output /tmp/audio-report.json
# Fix audio and thumbnails (HITL – asks for confirmation)
fixos fix --modules audio,thumbnails
# Autonomous mode (the agent repairs on its own, max 5 actions)
fixos fix --mode autonomous --max-fixes 5
# Advanced orchestration with a dependency graph
fixos orchestrate --dry-run
# Show only free LLM providers
fixos llm --free
# Set Groq as the provider (ultra-fast, free)
fixos token set gsk_... --provider groq
fixos fix --provider groq
# 30-minute timeout
fixos fix --timeout 1800
```
---
## 🤖 Available LLM Providers (12)
| # | Provider | Tier | Default model | API key |
|:--|:--|:--:|:--|:--|
| 1 | **gemini** | 🟢 FREE | gemini-2.5-flash | [aistudio.google.com](https://aistudio.google.com/app/apikey) |
| 2 | **openrouter** | 🟢 FREE | openai/gpt-4o-mini | [openrouter.ai/settings/keys](https://openrouter.ai/settings/keys) |
| 3 | **mistral** | 🟢 FREE | mistral-small-latest | [console.mistral.ai](https://console.mistral.ai/api-keys/) |
| 4 | **groq** | 🟢 FREE | llama-3.1-8b-instant | [console.groq.com/keys](https://console.groq.com/keys) |
| 5 | **together** | 🟢 FREE | llama-3.2-11B | [api.together.ai](https://api.together.ai/settings/api-keys) |
| 6 | **cohere** | 🟢 FREE | command-r | [dashboard.cohere.com](https://dashboard.cohere.com/api-keys) |
| 7 | **cerebras** | 🟢 FREE | llama3.1-8b | [cloud.cerebras.ai](https://cloud.cerebras.ai/platform/) |
| 8 | **ollama** | 🟢 LOCAL | llama3.2 | [ollama.com/download](https://ollama.com/download) |
| 9 | **openai** | 💰 PAID | gpt-4o-mini | [platform.openai.com](https://platform.openai.com/api-keys) |
| 10 | **anthropic** | 💰 PAID | claude-3-haiku | [console.anthropic.com](https://console.anthropic.com/settings/keys) |
| 11 | **xai** | 💰 PAID | grok-beta | [console.x.ai](https://console.x.ai/) |
| 12 | **deepseek** | 💰 PAID | deepseek-chat | [platform.deepseek.com](https://platform.deepseek.com/api_keys) |
```bash
fixos llm         # full list with descriptions and ready-to-run commands
fixos llm --free  # free providers only
```
---
## Agent modes
### 👤 Human-in-the-Loop (HITL) – default
```
LLM suggests → You decide → Script executes
fixos [00:58:42] ❯ 1                          ← fix problem no. 1
fixos [00:58:30] ❯ A                          ← fix all problems
fixos [00:58:20] ❯ !systemctl status pipewire ← run your own command
fixos [00:58:10] ❯ search sof-firmware lenovo ← search external sources
fixos [00:57:55] ❯ D                          ← describe your own problem
fixos [00:57:40] ❯ ?                          ← ask for details
fixos [00:57:30] ❯ q                          ← quit
```
Colorized output: 🔴 critical / 🟡 important / 🟢 minor, with box-drawing frames around code blocks.
### 🤖 Autonomous – the agent acts on its own
```bash
fixos fix --mode autonomous --max-fixes 10
```
- JSON protocol: `{ "action": "EXEC|SEARCH|SKIP|DONE", "command": "...", "reason": "..." }`
- Safeguards: a denylist of forbidden commands (`rm -rf /`, `mkfs`, `fdisk`, `dd if=...`)
- Every `EXEC` is logged with its result and the LLM's assessment
- Requires an explicit `yes` at startup
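The forbidden-command safeguard can be sketched as a simple denylist check. This is an illustrative helper, not the actual fixos implementation (the real executor lives in `fixos/orchestrator/executor.py`), and the patterns below are assumptions based on the commands listed above:

```python
import re

# Hypothetical denylist mirroring the forbidden commands listed above.
FORBIDDEN_PATTERNS = [
    r"\brm\s+-rf\s+/(\s|$)",  # "rm -rf /" (but not "rm -rf /some/subdir")
    r"\bmkfs\b",              # filesystem creation
    r"\bfdisk\b",             # partition editing
    r"\bdd\s+if=",            # raw disk reads/writes
]

def is_allowed(command: str) -> bool:
    """Return False if the proposed EXEC command matches any forbidden pattern."""
    return not any(re.search(p, command) for p in FORBIDDEN_PATTERNS)
```

An `EXEC` action would only run when `is_allowed(...)` returns `True`; anything else is rejected before it ever reaches a shell.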
### 🎼 Orchestrate – problem graph (DAG)
```bash
fixos orchestrate
fixos orchestrate --dry-run   # preview without executing
```
- Builds a dependency graph between problems
- Re-diagnoses after every fix and detects newly surfaced problems
- The LLM evaluates each command's result (JSON structured output)
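The dependency-ordered repair loop can be illustrated with a minimal topological sort using Python's stdlib `graphlib`. The problem names below are hypothetical examples; the real graph logic is in `fixos/orchestrator/graph.py`:

```python
from graphlib import TopologicalSorter

# Hypothetical problem graph: each problem maps to the problems that must be
# fixed before it (e.g. audio output needs sof-firmware installed first).
problems = {
    "no-audio-output": {"pipewire-failed", "missing-sof-firmware"},
    "pipewire-failed": {"missing-sof-firmware"},
    "missing-sof-firmware": set(),
}

# static_order() yields dependencies first, so fixes run in a safe order.
repair_order = list(TopologicalSorter(problems).static_order())
```

After each fix the orchestrator re-diagnoses, so newly discovered problems simply become new nodes in the next iteration's graph.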
---
## 🔒 Data Anonymization
Always shown before anything is sent to the LLM. Masked categories:
| Category | Example | Replacement |
|:--|:--|:--|
| Hostname | `moj-laptop` | `[HOSTNAME]` |
| Username | `jan` | `[USER]` |
| /home paths | `/home/jan/.pyenv/versions/3.12/bin/python` | `/home/[USER]/...` |
| IPv4 addresses | `192.168.1.100` | `192.168.XXX.XXX` |
| MAC addresses | `aa:bb:cc:dd:ee:ff` | `XX:XX:XX:XX:XX:XX` |
| API tokens | `sk-abc123...` | `[API_TOKEN_REDACTED]` |
| Hardware UUIDs | `a1b2c3d4-...` | `[UUID-REDACTED]` |
| Serial numbers | `SN: PF1234567` | `Serial: [SERIAL-REDACTED]` |
| Passwords in env | `PASSWORD=secret` | `PASSWORD=[REDACTED]` |
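A few of these masking rules can be approximated with plain regular expressions. This is a simplified sketch, not the full anonymizer in `fixos/utils/anonymizer.py`, and the `anonymize` helper name is hypothetical:

```python
import re

def anonymize(text: str, username: str = "jan") -> str:
    """Mask a few of the categories from the table above (simplified sketch)."""
    # /home paths for the detected user
    text = re.sub(rf"/home/{re.escape(username)}\b", "/home/[USER]", text)
    # IPv4 addresses: keep the first two octets, mask the rest
    text = re.sub(r"\b(\d{1,3}\.\d{1,3})\.\d{1,3}\.\d{1,3}\b", r"\1.XXX.XXX", text)
    # MAC addresses
    text = re.sub(r"\b([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}\b", "XX:XX:XX:XX:XX:XX", text)
    # API tokens with a known prefix
    text = re.sub(r"\bsk-[A-Za-z0-9]+", "[API_TOKEN_REDACTED]", text)
    return text
```

The real anonymizer also produces a report of what was masked, so you can verify nothing sensitive slipped through before the payload leaves your machine.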
---
## Diagnostic modules
| Module | Linux | Windows | macOS | What it checks |
|:--|:--:|:--:|:--:|:--|
| `system` | ✅ | ✅ | ✅ | CPU, RAM, disks, services, updates, SELinux, firewall |
| `audio` | ✅ | ✅ | ✅ | ALSA/PipeWire/SOF (Linux), WMI Audio (Win), CoreAudio (Mac) |
| `thumbnails` | ✅ | ➖ | ➖ | ffmpegthumbnailer, cache, GNOME gsettings |
| `hardware` | ✅ | ✅ | ✅ | DMI/WMI/system_profiler, BIOS, GPU, sensors, battery |
| `security` | ✅ | ✅ | ✅ | Firewall, open ports, SELinux/AppArmor, SSH config, fail2ban, SUID |
| `resources` | ✅ | ✅ | ✅ | Disk-space usage, top CPU/RAM processes, autostart, OOM events |
```bash
# Security only
fixos scan --modules security
# Resources – what is using disk and memory
fixos scan --modules resources
# Full diagnostics with repair
fixos fix --modules system,security,resources
```
```
---
## External knowledge sources (fallback)
When the LLM doesn't know a solution, fixos automatically searches:
- **Fedora Bugzilla** – database of reported bugs
- **ask.fedoraproject.org** – community forum
- **Arch Wiki** – an excellent source for general Linux problems
- **GitHub Issues** – PipeWire, ALSA, linux-hardware repos
- **DuckDuckGo** – general search (no API key needed)
- **Google via SerpAPI** – best results (optional `SERPAPI_KEY`)
---
## Configuration (.env)
```bash
fixos config init    # create .env from the template
fixos config show    # check the current configuration
```
```env
LLM_PROVIDER=gemini        # gemini|openai|openrouter|groq|mistral|...
GEMINI_API_KEY=AIzaSy...   # Gemini key (free)
AGENT_MODE=hitl            # hitl|autonomous
SHOW_ANONYMIZED_DATA=true  # show data before sending
ENABLE_WEB_SEARCH=true     # fallback to external sources
SESSION_TIMEOUT=3600       # session timeout (1h)
SERPAPI_KEY=               # optional – better search results
```
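A `.env` file like the one above is just plain `KEY=VALUE` lines. A minimal parser, shown only to illustrate the format (fixos itself relies on the `python-dotenv` dependency for this), might look like:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines; '#' starts a comment, other lines are skipped."""
    settings: dict[str, str] = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```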
---
## Tests and Docker
### Running the tests
```bash
# All unit tests (no API, fast)
pytest tests/unit/ -v
# E2e tests with a mock LLM
pytest tests/e2e/ -v
# Only tests hitting the real API (requires a token in .env)
pytest tests/e2e/ -v -m real_api
# Code coverage
pytest --cov=fixos --cov-report=html
make test-coverage
```
### Docker – simulated environments
```bash
# Build all images
docker compose -f docker/docker-compose.yml build
# Broken scenarios
docker compose -f docker/docker-compose.yml run broken-audio
docker compose -f docker/docker-compose.yml run broken-thumbnails
docker compose -f docker/docker-compose.yml run broken-network
docker compose -f docker/docker-compose.yml run broken-full
# Run the e2e tests in Docker
docker compose -f docker/docker-compose.yml run e2e-tests
```
### Docker environments
| Image | Scenario |
|:--|:--|
| `fixos-broken-audio` | Missing sof-firmware, PipeWire failed, no ALSA cards |
| `fixos-broken-thumbnails` | No thumbnailers, empty cache, missing GStreamer |
| `fixos-broken-network` | NetworkManager failed, DNS broken, rfkill blocked |
| `fixos-broken-full` | All problems at once + pending updates + failed services |
---
## Project structure
```
fixos/
├── fixos/
│   ├── cli.py               # CLI commands (Click) – fixos, fix, scan, llm, ...
│   ├── config.py            # Configuration + 12 LLM providers
│   ├── platform_utils.py    # Cross-platform (Linux/Win/Mac)
│   ├── agent/
│   │   ├── hitl.py          # HITL with colorized markdown output
│   │   └── autonomous.py    # Autonomous mode with a JSON protocol
│   ├── diagnostics/
│   │   └── system_checks.py # Modules: system, audio, thumbnails, hardware
│   ├── fixes/
│   │   ├── knowledge_base.py # Known-bug database with heuristics
│   │   └── heuristics.py    # Matcher diagnostics → known fixes
│   ├── orchestrator/
│   │   ├── graph.py         # Problem graph (DAG)
│   │   ├── executor.py      # Safe command executor
│   │   └── orchestrator.py  # Main orchestration loop
│   ├── providers/
│   │   └── llm.py           # Multi-provider LLM client
│   └── utils/
│       ├── anonymizer.py    # Anonymization with a report
│ └── web_search.py # Bugzilla/AskFedora/ArchWiki/GitHub/DDG
├── tests/
│ ├── conftest.py # Fixtures + mock diagnostics
│ ├── e2e/
│ │ ├── test_audio_broken.py
│ │ ├── test_thumbnails_broken.py
│ │ ├── test_network_broken.py
│ │ ├── test_executor.py
│ │ └── test_cli.py
│ └── unit/
│ ├── test_core.py
│ ├── test_anonymizer.py
│ └── test_executor.py
├── docker/
│ ├── base/Dockerfile
│ ├── broken-audio/Dockerfile
│ ├── broken-thumbnails/Dockerfile
│ ├── broken-network/Dockerfile
│ └── broken-full/Dockerfile
├── .env.example
├── pytest.ini
└── pyproject.toml
```
---
## 🚀 Planned features (Roadmap)
### v2.3 – Heuristics without an LLM
- `fixos quickfix` – instant fixes without an API (database of 30+ known bugs)
- Heuristic matching of diagnostics against known patterns
- Works offline, zero tokens
### v2.4 – Reports and history
- `fixos report` – export a session to HTML/PDF/Markdown
- `fixos history` – repair history with results
- Before/after comparison of system state
### v2.5 – Integrations
- `fixos watch` – background monitoring, notifications when problems appear
- Slack/Discord webhook when critical errors are detected
- Prometheus/Grafana integration (diagnostic metrics)
### v3.0 – Multi-agent
- Parallel agents for different modules (audio, network, disk)
- A coordinator that prioritizes problems
- Learning from repair history (fine-tuning local models)
---
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
## Author
Created by **Tom Sapletta** - [tom@sapletta.com](mailto:tom@sapletta.com)
| text/markdown | fixos contributors | Tom Sapletta <tom@sapletta.com> | null | null | null | linux, windows, diagnostics, ai, llm, audio, system-repair, cross-platform | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Topic :: System :: Systems Administration",
"Topic :: ... | [] | https://github.com/wronai/fixos | null | >=3.10 | [] | [] | [] | [
"openai>=1.35.0",
"prompt_toolkit>=3.0.43",
"psutil>=5.9.0",
"pyyaml>=6.0",
"click>=8.1.0",
"python-dotenv>=1.0.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/wronai/fixos",
"Bug Tracker, https://github.com/wronai/fixos/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T17:54:25.254308 | fixos-2.1.9.tar.gz | 104,440 | 78/56/3979123d92ce746bc7e854c8f002c20d3bf4d8d133c805b88ea133472d82/fixos-2.1.9.tar.gz | source | sdist | null | false | 67807a0523461ca997d4aa366a9f1251 | e6773a547e379ada99771cb3b543dbadcf580e87615ab3998073896f2e07ee3e | 78563979123d92ce746bc7e854c8f002c20d3bf4d8d133c805b88ea133472d82 | Apache-2.0 | [
"LICENSE"
] | 235 |
2.3 | griptape-nodes | 0.74.5 | Add your description here | <picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/assets/img/griptape_nodes_from_foundry_white.svg">
<img alt="Griptape Nodes" src="docs/assets/img/griptape_nodes_from_foundry_black.svg" width="600">
</picture>
Griptape Nodes is a powerful, visual, node-based workflow builder designed for professional artists and creators. Build and execute complex AI workflows through the cloud-based [Griptape Nodes IDE](https://app.nodes.griptape.ai/) - an intuitive drag-and-drop interface.
This repository contains the Griptape Nodes Engine - the local component that runs securely on your machine, providing a performant foundation for workflow execution.
[](https://vimeo.com/1064451891)
*(Clicking the image opens the video on Vimeo)*
**✨ Key Features:**
- **🎯 Visual Workflow Editor:** Design and connect nodes representing different AI tasks, tools, and logic through the cloud-based IDE
- **🏠 Local Engine:** Run workflows securely on your own machine or infrastructure
- **🐍 Portable Python Workflows:** Workflows are saved as self-executable Python files for portability, debuggability, and learning
- **🌐 Multi-Device Access:** Client/server architecture lets you access your workflows from any device
- **🧩 Extensible:** Build your own custom nodes and libraries to extend functionality
- **⚡ Scriptable Interface:** Interact with and control flows programmatically
**🔗 Learn More:**
- **📚 Full Documentation:** [docs.griptapenodes.com](https://docs.griptapenodes.com)
- **⚙️ Installation:** [docs.griptapenodes.com/en/latest/installation/](https://docs.griptapenodes.com/en/latest/installation/)
- **🔧 Engine Configuration:** [docs.griptapenodes.com/en/latest/configuration/](https://docs.griptapenodes.com/en/latest/configuration/)
- **📋 Migration Guide:** [MIGRATION.md](MIGRATION.md) - Guide for migrating from deprecated nodes
**🧩 Extending Griptape Nodes:**
Want to create custom nodes for your specific workflow needs? Griptape Nodes is designed to be extensible through custom libraries:
- **📦 Custom Library Template:** Get started with the [Griptape Nodes Library Template](https://github.com/griptape-ai/griptape-nodes-library-template)
- **🛠️ Build Custom Nodes:** Create specialized nodes tailored to your artistic and creative workflows
______________________________________________________________________
## 🚀 Quick Installation
Follow these steps to get the Griptape Nodes engine running on your system:
1. **🔐 Login:** Visit [Griptape Nodes](https://app.nodes.griptape.ai/) and log in or sign up using your Griptape Cloud credentials.
1. **💾 Install Command:** Once logged in, you'll find a setup screen. Copy the installation command provided in the "New Installation" section. It will look similar to this (use the **exact** command provided on the website):
```bash
curl -LsSf https://raw.githubusercontent.com/griptape-ai/griptape-nodes/main/install.sh | bash
```
1. **⚡ Run Installer:** Open a terminal on your machine (local or cloud environment) and paste/run the command. The installer uses `uv` for fast installation; if `uv` isn't present, the script will typically handle installing it.
1. **⚙️ Initial Configuration (Automatic on First Run):**
- The first time you run the engine command (`griptape-nodes` or `gtn`), it will guide you through the initial setup:
- **📁 Workspace Directory:** You'll be prompted to choose a directory where Griptape Nodes will store configurations, project files, secrets (`.env`), and generated assets. You can accept the default (`<current_directory>/GriptapeNodes`) or specify a custom path.
- **🔑 Griptape Cloud API Key:** Return to the [Griptape Nodes setup page](https://app.nodes.griptape.ai/) in your browser, click "Generate API Key", copy the key, and paste it when prompted in the terminal.
1. **🚀 Start the Engine:** After configuration, start the engine by running:
```bash
griptape-nodes
```
*(or the shorter alias `gtn`)*
1. **🔗 Connect Workflow Editor:** Refresh the Griptape Nodes Workflow Editor page in your browser. It should now connect to your running engine.
You're now ready to start building flows! 🎉 For more detailed setup options and troubleshooting, see the full [Documentation](https://docs.griptapenodes.com/).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.13,>=3.12.0 | [] | [] | [] | [
"griptape>=1.9.1",
"pydantic>=2.10.6",
"python-dotenv>=1.0.1",
"xdg-base-dirs>=6.0.2",
"httpx<1.0.0,>=0.28.0",
"websockets<17.0.0,>=15.0.1",
"tomlkit>=0.13.2",
"uv>=0.6.16",
"fastapi>=0.115.12",
"uvicorn>=0.34.2",
"packaging>=25.0",
"python-multipart>=0.0.20",
"json-repair>=0.46.1",
"mcp[w... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:53:20.145015 | griptape_nodes-0.74.5.tar.gz | 639,376 | 49/5a/502eeb47c1e40666e4af48f9d8f61bc653ea5a5b017b75bb52be3757feb9/griptape_nodes-0.74.5.tar.gz | source | sdist | null | false | 2cd37429bd4dea738a668de032457007 | 5f098b74f77cd6c7f36cea97876a551cdd041675248c5bc9b3938de52430e082 | 495a502eeb47c1e40666e4af48f9d8f61bc653ea5a5b017b75bb52be3757feb9 | null | [] | 632 |
2.4 | tomoscan | 2.3.6 | Utility to access tomography data at ESRF | # tomoscan
This library offers an abstraction to:
* access tomography data from spec acquisitions (EDF) and bliss acquisitions (HDF5)
* read and write volumes from / to HDF5, JP2K, TIFF and EDF
## installation
### using pypi
To install the latest 'tomoscan' pip package
``` bash
pip install tomoscan
```
### using gitlab repository
``` bash
pip install git+https://gitlab.esrf.fr/tomotools/tomoscan.git
```
## documentation
General documentation can be found here: [https://tomotools.gitlab-pages.esrf.fr/tomoscan/](https://tomotools.gitlab-pages.esrf.fr/tomoscan/)
| text/markdown | null | Henri Payno <henri.payno@esrf.fr>, Pierre Paleo <pierre.paleo@esrf.fr>, Pierre-Olivier Autran <pierre-olivier.autran@esrf.fr>, Jérôme Lesaint <jerome.lesaint@esrf.fr>, Alessandro Mirone <mirone@esrf.fr> | null | null | MIT | NXtomo, nexus, tomography, esrf, bliss-tomo | [
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Environment :: Cons... | [] | null | null | >=3.10 | [] | [] | [] | [
"defusedxml",
"h5py>=3.0",
"silx>=2.0",
"lxml",
"dicttoxml",
"psutil",
"nxtomo>=3.0.0dev0",
"numpy",
"packaging>=22.0",
"pint",
"platformdirs",
"hdf5plugin; extra == \"compression-plugins\"",
"blosc2-grok; extra == \"compression-plugins\"",
"glymur; extra == \"full-no-compression-plugins\"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.2 | 2026-02-18T17:52:45.001219 | tomoscan-2.3.6.tar.gz | 177,516 | 2f/16/e68952a43f96672eea68d43f78d024939199be2e78f38ed9763749fc251d/tomoscan-2.3.6.tar.gz | source | sdist | null | false | 06c970b1eaa8e67b945b6dfcfaecdfe4 | d51171f045def44d7302ca851b1bbb78997d611a5d6be705ef9ad7755223811b | 2f16e68952a43f96672eea68d43f78d024939199be2e78f38ed9763749fc251d | null | [
"LICENSE"
] | 528 |
2.4 | pycuda | 2025.1.3 | Python wrapper for Nvidia CUDA | PyCUDA: Pythonic Access to CUDA, with Arrays and Algorithms
=============================================================
.. image:: https://gitlab.tiker.net/inducer/pycuda/badges/main/pipeline.svg
:alt: Gitlab Build Status
:target: https://gitlab.tiker.net/inducer/pycuda/commits/main
.. image:: https://badge.fury.io/py/pycuda.png
:target: https://pypi.org/project/pycuda
.. image:: https://zenodo.org/badge/1575319.svg
:alt: Zenodo DOI for latest release
:target: https://zenodo.org/badge/latestdoi/1575319
PyCUDA lets you access `Nvidia <https://nvidia.com>`_'s `CUDA
<https://nvidia.com/cuda/>`_ parallel computation API from Python.
Several wrappers of the CUDA API already exist, so what's so special
about PyCUDA?
* Object cleanup tied to lifetime of objects. This idiom, often
called
`RAII <https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization>`_
in C++, makes it much easier to write correct, leak- and
crash-free code. PyCUDA knows about dependencies, too, so (for
example) it won't detach from a context before all memory
allocated in it is also freed.
* Convenience. Abstractions like pycuda.driver.SourceModule and
pycuda.gpuarray.GPUArray make CUDA programming even more
convenient than with Nvidia's C-based runtime.
* Completeness. PyCUDA puts the full power of CUDA's driver API at
your disposal, if you wish. It also includes code for
interoperability with OpenGL.
* Automatic Error Checking. All CUDA errors are automatically
translated into Python exceptions.
* Speed. PyCUDA's base layer is written in C++, so all the niceties
above are virtually free.
* Helpful `Documentation <https://documen.tician.de/pycuda>`_.
Relatedly, like-minded computing goodness for `OpenCL <https://www.khronos.org/registry/OpenCL/>`_
is provided by PyCUDA's sister project `PyOpenCL <https://pypi.org/project/pyopencl>`_.
| null | Andreas Kloeckner | inform@tiker.net | null | null | MIT | null | [
"Environment :: Console",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Other Audience",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: C++",
"Program... | [] | http://mathema.tician.de/software/pycuda | null | ~=3.8 | [] | [] | [] | [
"pytools>=2011.2",
"platformdirs>=2.2.0",
"mako"
] | [] | [] | [] | [
"Source, https://github.com/inducer/pycuda"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T17:52:25.127035 | pycuda-2025.1.3.tar.gz | 1,690,651 | 09/07/2b1eea34f1b620db10fe05b50d8d7620e858fe2c42da984f07e49021a1e3/pycuda-2025.1.3.tar.gz | source | sdist | null | false | b9b4b90809bfc6e52deeeafe6312a60c | ff16d807b4601bb8a5c3adadb6a4726774e5dd8ffdf61c9b23a41858748fd77a | 09072b1eea34f1b620db10fe05b50d8d7620e858fe2c42da984f07e49021a1e3 | null | [
"LICENSE"
] | 236 |
2.4 | swh.graph.libs | 11.0.0 | Rust libraries for the Software Heritage graph service (Luigi tasks) | Software Heritage - libraries for the graph service
===================================================
Rust libraries and utilities built around `swh-graph <https://docs.softwareheritage.org/devel/swh-graph/>`_.
* `swh-graph-stdlib <https://docs.rs/swh-graph-stdlib/>`_: algorithms to
mine information from, and run common algorithms on, swh-graph
* `swh_graph_topology <https://docs.rs/swh_graph_topology>`_: currently
just an implementation of reading nodes in (precomputed) topological order
| text/x-rst | null | Software Heritage developers <swh-devel@inria.fr> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Development Status :: 3 - Alpha"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"swh.graph>=v8.0.0",
"luigi!=3.5.2,!=3.7.0,!=3.7.1; extra == \"luigi\"",
"pyarrow<19.0.0; extra == \"luigi\"",
"tqdm; extra == \"luigi\"",
"swh.export[luigi]>=v1.2.0; extra == \"luigi\"",
"swh.graph[luigi]>=v8.0.0; extra == \"luigi\"",
"datafusion!=43.1.0,!=44.0.0; extra == \"testing\"",
"pyarrow-stub... | [] | [] | [] | [
"Homepage, https://gitlab.softwareheritage.org/swh/devel/swh-graph-libs",
"Bug Reports, https://gitlab.softwareheritage.org/swh/devel/swh-graph-libs/-/issues",
"Funding, https://www.softwareheritage.org/donate",
"Documentation, https://docs.softwareheritage.org/devel/swh-graph-libs/",
"Source, https://gitla... | twine/6.2.0 CPython/3.11.12 | 2026-02-18T17:52:19.480342 | swh_graph_libs-11.0.0.tar.gz | 105,320 | 7a/a1/c615b1ab2e464b1cdaf3b3196bbc2a5e08117d9d65eeba58f669a2d97077/swh_graph_libs-11.0.0.tar.gz | source | sdist | null | false | 6e54bada549835b77fc696f2142b0a60 | ef08d595eb0db322bba98485d3aa6bf713dc5b63d059d78df24b2370f74230fb | 7aa1c615b1ab2e464b1cdaf3b3196bbc2a5e08117d9d65eeba58f669a2d97077 | null | [
"LICENSE",
"AUTHORS"
] | 0 |
2.4 | mcp-shadow | 0.0.1 | Reserved package name for Shadow MCP. | # mcp-shadow
Reserved package name for Shadow MCP.
| text/markdown | Shadow MCP | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T17:52:16.390325 | mcp_shadow-0.0.1.tar.gz | 1,045 | 85/41/8b6dcc373be4771ff4501fba6ad65d93e76773dd194300e824efd15331dd/mcp_shadow-0.0.1.tar.gz | source | sdist | null | false | 9f599fabe8484188b75175e981a31d8d | 68ebcc217fca97c9e9731cffeb0e68d587b0d3fbf974d508d1ff4e0b899a617f | 85418b6dcc373be4771ff4501fba6ad65d93e76773dd194300e824efd15331dd | null | [] | 256 |
2.4 | DB-First | 5.1.3 | Web-framework independent CRUD tools for working with database via SQLAlchemy. | # DB-First
Web-framework independent CRUD tools for working with database via SQLAlchemy.
<!--TOC-->
- [DB-First](#db-first)
- [Features](#features)
- [Installation](#installation)
- [Examples](#examples)
- [Full example](#full-example)
<!--TOC-->
## Features
* DBAL - database access layer.
* Actions templates.
* Bulk methods for create, read, update and delete object from database.
* Method of paginating data.
* StatementMaker class for create query 'per-one-model'.
* Marshmallow (https://github.com/marshmallow-code/marshmallow) schemas for serializing input data.
* Marshmallow schemas for deserializing SQLAlchemy result objects to `dict`.
* Datetime with UTC timezone validation in `BaseSchema`.
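The README does not show the pagination method itself; as a rough illustration of the offset/limit idea behind it (a hypothetical helper, not DB-First's actual API):

```python
# Hypothetical offset/limit pagination helper, illustrating the idea only;
# DB-First's actual pagination method is not shown in this README.
def paginate(items, page: int, per_page: int) -> dict:
    start = (page - 1) * per_page
    return {
        "page": page,
        "per_page": per_page,
        "total": len(items),
        "items": items[start:start + per_page],
    }

print(paginate(list(range(10)), page=2, per_page=3))
# {'page': 2, 'per_page': 3, 'total': 10, 'items': [3, 4, 5]}
```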
## Installation
Using the latest version of Python is recommended. DB-First supports Python 3.12 and newer.
Install and update using `pip`:
```shell
$ pip install -U db_first
```
## Examples
### Full example
```python
from db_first.actions import BaseAction
from db_first.base_model import ModelMixin
from db_first.dbal import SqlaDBAL
from db_first.dbal.exceptions import DBALObjectNotFoundException
from marshmallow import fields
from marshmallow import Schema
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import Session
engine = create_engine('sqlite://', echo=True, future=True)
session = Session(engine)
Base = declarative_base()
class Items(ModelMixin, Base):
__tablename__ = 'items'
data: Mapped[str] = mapped_column(comment='Data of item.')
Base.metadata.create_all(engine)
class ItemsDBAL(SqlaDBAL[Items]):
"""Items DBAL."""
class ItemSchema(Schema):
id = fields.UUID()
data = fields.String()
created_at = fields.DateTime()
class CreateItemAction(BaseAction):
def validate(self) -> None:
ItemSchema(exclude=['id', 'created_at']).load(self._data)
def action(self) -> Items:
return ItemsDBAL(self._session).create(**self._data)
class ReadItemAction(BaseAction):
def validate(self) -> None:
ItemSchema().load(self._data)
def action(self) -> Items:
return ItemsDBAL(self._session).read(**self._data)
class UpdateItemAction(BaseAction):
def validate(self) -> None:
ItemSchema(only=['id', 'data']).load(self._data)
def action(self) -> Items:
return ItemsDBAL(self._session).update(**self._data)
class DeleteItemAction(BaseAction):
def validate(self) -> None:
ItemSchema(only=['id']).load(self._data)
def action(self) -> None:
return ItemsDBAL(self._session).delete(**self._data)
if __name__ == '__main__':
new_item = CreateItemAction(session, {'data': 'data'}).run()
print('New item:', new_item)
item = ReadItemAction(session, {'id': new_item.id}).run()
print('Item:', item)
updated_item = UpdateItemAction(session, {'id': new_item.id, 'data': 'updated_data'}).run()
print('Updated item:', updated_item)
DeleteItemAction(session, {'id': new_item.id}).run()
try:
item = ReadItemAction(session, {'id': new_item.id}).run()
except DBALObjectNotFoundException:
print('Deleted item.')
```
| text/markdown | null | Konstantin Fadeev <fadeev@legalact.pro> | null | null | MIT License
Copyright (c) 2024 Konstantin Fadeev
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Database",
"Topic ::... | [] | null | null | >=3.11 | [] | [] | [] | [
"SQLAlchemy>=2.0.0",
"marshmallow>=3.14.1",
"build==1.4.0; extra == \"dev\"",
"psycopg[binary]==3.3.2; extra == \"dev\"",
"pre-commit==4.5.1; extra == \"dev\"",
"pytest==9.0.2; extra == \"dev\"",
"pytest-cov==7.0.0; extra == \"dev\"",
"python-dotenv==1.2.1; extra == \"dev\"",
"tox==4.34.1; extra == ... | [] | [] | [] | [
"changelog, https://github.com/flask-pro/db-first/blob/master/CHANGES.md",
"repository, https://github.com/flask-pro/db-first"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T17:51:59.918343 | db_first-5.1.3.tar.gz | 14,654 | 71/ac/4eaf878de9d47e760c2b1ffaddc43f8f09e22dbea5413d9eb654cd87c663/db_first-5.1.3.tar.gz | source | sdist | null | false | 560b04656f6cdbf3241948434c1f7dd3 | fde4430f240a81946fb6950db78a2e34bbfa704a709ec337600ff73b76136741 | 71ac4eaf878de9d47e760c2b1ffaddc43f8f09e22dbea5413d9eb654cd87c663 | null | [
"LICENSE"
] | 0 |
2.4 | mcpplay | 0.1.0 | The Python-native playground for MCP servers | <p align="center">
<img src="docs/logo.svg" alt="mcpplay" width="80" />
</p>
<h1 align="center">mcpplay</h1>
<p align="center">
<em>The <code>FastAPI /docs</code> experience, for MCP servers.</em>
</p>
<p align="center">
<a href="https://pypi.org/project/mcpplay/"><img src="https://img.shields.io/pypi/v/mcpplay?color=blue" alt="PyPI version" /></a>
<a href="https://pypi.org/project/mcpplay/"><img src="https://img.shields.io/pypi/pyversions/mcpplay" alt="Python versions" /></a>
<a href="https://github.com/gauthierpiarrette/mcpplay/blob/main/LICENSE"><img src="https://img.shields.io/github/license/gauthierpiarrette/mcpplay" alt="License" /></a>
</p>
<p align="center">
<a href="https://mcpplay.dev">Documentation</a> • <a href="https://mcpplay.dev/getting-started/">Getting Started</a> • <a href="https://github.com/gauthierpiarrette/mcpplay/issues">Issues</a>
</p>
<p align="center">
<img src="https://github.com/user-attachments/assets/822c7871-3d89-4133-9ec3-59bfee697ff5" alt="mcpplay in action" width="800" />
</p>
**mcpplay** gives you a browser-based playground for any [MCP](https://modelcontextprotocol.io) server. One command, and you get auto-generated forms, live results, and a full execution timeline.
## Installation
```bash
pip install mcpplay
mcpplay demo
```
Your browser opens to `http://localhost:8321`. That's it!
<p align="center">
<img src="docs/assets/mcpplay_demo.png" alt="mcpplay screenshot" width="800" />
</p>
### Point it at your own server
```bash
mcpplay run server.py
mcpplay run --command "node server.js"
mcpplay run --command "uvx my-mcp-server"
```
### Options
```bash
mcpplay run server.py --port 9000 # Custom port (default: 8321)
mcpplay run server.py --no-open # Don't auto-open browser
mcpplay run server.py --no-reload # Disable hot reload
mcpplay run server.py --env API_KEY=xxx # Pass env vars to server
```
## Features
- **Schema-aware forms** - auto-generated from your tool's JSON Schema, with enums, nested objects, arrays, and defaults.
- **Live execution** - run tools and see structured results instantly.
- **Persistent timeline** - every call logged with inputs, outputs, and latency. Replay with one click.
- **Hot reload** - edit your server, mcpplay restarts it. Session and timeline preserved.
- **Localhost-only** - bound to `127.0.0.1` with Origin validation. No remote exposure.
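Schema-aware form generation, as described above, boils down to walking a tool's JSON Schema and deriving an initial value for each field. A simplified sketch of the idea (not mcpplay's actual implementation):

```python
# Simplified sketch of deriving initial form values from a JSON Schema;
# not mcpplay's actual implementation.
def initial_value(schema: dict):
    if "default" in schema:
        return schema["default"]
    if "enum" in schema:
        return schema["enum"][0]          # preselect the first enum choice
    t = schema.get("type")
    if t == "object":
        props = schema.get("properties", {})
        return {name: initial_value(sub) for name, sub in props.items()}
    if t == "array":
        return []
    return {"string": "", "integer": 0, "number": 0.0, "boolean": False}.get(t)

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "limit": {"type": "integer", "default": 10},
        "mode": {"type": "string", "enum": ["fast", "full"]},
    },
}
print(initial_value(schema))
# {'query': '', 'limit': 10, 'mode': 'fast'}
```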
## How It Works
```
Browser UI ←→ mcpplay (Python proxy) ←→ Your MCP Server
(Svelte) (uvicorn + WebSocket) (stdio)
```
mcpplay spawns your server as a child process, connects via stdio, and proxies everything through a WebSocket to the browser. Executions are logged to a local SQLite database.
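A persistent, replayable timeline like the one described can be kept in SQLite with nothing but the standard library. A minimal sketch — the table layout here is hypothetical, not mcpplay's actual schema:

```python
import json
import sqlite3
import time

# Minimal sketch of an execution timeline in SQLite; the table layout is
# hypothetical, not mcpplay's actual schema.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE executions ("
    "  id INTEGER PRIMARY KEY,"
    "  tool TEXT, inputs TEXT, outputs TEXT, latency_ms REAL, ts REAL)"
)

def log_execution(tool, inputs, outputs, latency_ms):
    db.execute(
        "INSERT INTO executions (tool, inputs, outputs, latency_ms, ts)"
        " VALUES (?, ?, ?, ?, ?)",
        (tool, json.dumps(inputs), json.dumps(outputs), latency_ms, time.time()),
    )

def replay_args(execution_id):
    # "Replay" = fetch the stored inputs so the same call can be re-issued.
    row = db.execute(
        "SELECT tool, inputs FROM executions WHERE id = ?", (execution_id,)
    ).fetchone()
    return row[0], json.loads(row[1])

log_execution("search", {"query": "cats"}, {"hits": 3}, 12.5)
print(replay_args(1))
# ('search', {'query': 'cats'})
```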
## Comparison
| | **mcpplay** | MCP Inspector | MCPJam Inspector |
|---|---|---|---|
| **Install** | `pip install` | `npx` (Node required) | Docker |
| **Python native** | ✅ | ❌ | ❌ |
| **Form generation** | Full JSON Schema | Basic inputs | Basic inputs |
| **Hot reload** | ✅ | ❌ | ❌ |
| **Execution timeline** | Persistent, replayable | Log | Log |
## Development
```bash
git clone https://github.com/gauthierpiarrette/mcpplay
cd mcpplay
# Python backend
uv sync --dev
uv run pytest
# Frontend (Svelte)
cd frontend
npm install
npm run dev # Dev server with HMR
npm run build # Build to src/mcpplay/static/
```
## License
Apache 2.0
| text/markdown | Gauthier Piarrette | null | null | null | null | agent, developer-tools, fastmcp, llm, mcp, mcp client, mcp server, mcp-tools, model context protocol, playground | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intellig... | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0",
"mcp>=1.0",
"starlette>=0.27",
"uvicorn[standard]>=0.20",
"watchfiles>=0.20"
] | [] | [] | [] | [
"Homepage, https://github.com/gauthierpiarrette/mcpplay",
"Documentation, https://gauthierpiarrette.github.io/mcpplay/",
"Repository, https://github.com/gauthierpiarrette/mcpplay",
"Issues, https://github.com/gauthierpiarrette/mcpplay/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:51:55.115820 | mcpplay-0.1.0.tar.gz | 469,496 | e7/46/1a5af75cb8b8552644a6130092a753f32f5f2c811da33c09f89a693131f6/mcpplay-0.1.0.tar.gz | source | sdist | null | false | 6b546584ad54b78fe8439b2e7a757ca3 | 4cbe9ddcb5742be1abe0d689fee58e5d001f30e6197bc2eba497f878e3c06a1b | e7461a5af75cb8b8552644a6130092a753f32f5f2c811da33c09f89a693131f6 | Apache-2.0 | [
"LICENSE"
] | 254 |
2.4 | fastapi-vue | 1.1.1 | Serves Vue assets on a FastAPI app. Use fastapi-vue-setup tool to add Vue build to your package. | # fastapi-vue
Runtime helpers for FastAPI + Vite/Vue projects.
## Overview
This package provides:
- `fastapi_vue.Frontend`: serves built SPA assets (with SPA support, caching, and optional zstd)
- `fastapi_vue.server.run`: a small Uvicorn runner with convenient `listen` endpoint parsing
## Quickstart
Serve built frontend assets from `frontend-build/`:
```python
from pathlib import Path
from contextlib import asynccontextmanager
from fastapi import FastAPI
from fastapi_vue import Frontend
frontend = Frontend(Path(__file__).with_name("frontend-build"), spa=True)
@asynccontextmanager
async def lifespan(app: FastAPI):
await frontend.load()
yield
app = FastAPI(lifespan=lifespan)
# Add API routes here...
# Final catch-all route for frontend files (keep at end of file)
frontend.route(app, "/")
```
## Frontend
`Frontend` serves a directory with:
- RAM caching, with zstd compression when smaller than original
- Browser caching: ETag + Last-Modified, Immutable assets
- Favicon mapping (serve PNG or other images there instead)
- SPA routing (serve `index.html` to browsers at all paths not otherwise handled)
Dev-mode behavior with `FastAPI(debug=True)`: requests fail with HTTP 409 and a message telling you to use the Vite dev server instead. This avoids accidentally using an outdated `frontend-build` during development.
- `directory`: Path on local filesystem
- `index`: Index file name (default: `index.html`)
- `spa`: Serve index at any path (default: `False`)
- `catch_all`: Register a single catch-all handler instead of a route to each file; default for SPA
- `cached`: Path prefixes treated as immutable (default: `/assets/`)
- `favicon`: Optional path or glob (e.g. `/assets/logo*.png`)
- `zstdlevel`: Compression level (default: 18)
ℹ️ Even when your page has a meta tag giving the favicon location, browsers still try loading `/favicon.ico` when visiting other pages. We find it more convenient to simply serve the image where the browser expects it, with the correct MIME type. This also allows having a default favicon for your application that can be easily overridden at the reverse proxy (Caddy, Nginx) to serve company branding if needed in deployment.
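The caching behaviour `Frontend` provides (ETag plus immutable `/assets/`) can be sketched with the standard library; the header logic below is illustrative, not fastapi-vue's exact implementation:

```python
import hashlib

# Illustrative header logic for the caching described above; not the exact
# fastapi-vue implementation (which also handles Last-Modified and zstd).
def cache_headers(path: str, body: bytes, cached_prefixes=("/assets/",)) -> dict:
    headers = {"ETag": '"%s"' % hashlib.sha256(body).hexdigest()[:16]}
    if path.startswith(cached_prefixes):
        # Hashed build assets never change at a given URL.
        headers["Cache-Control"] = "public, max-age=31536000, immutable"
    else:
        # index.html and friends must be revalidated on each load.
        headers["Cache-Control"] = "no-cache"
    return headers

print(cache_headers("/assets/app-abc123.js", b"console.log('hi')"))
```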
## Server runner
When you need more flexibility than `fastapi` CLI can provide (e.g. CLI arguments to your own program), you may use this convenience to run FastAPI app with Uvicorn startup on given `listen` endpoints. Runs in the same process if possible but delegates to `uvicorn.run()` for auto-reloads and multiple workers. This would typically be called from your CLI main, which can set its own env variables to pass information to the FastAPI instances that run (Python imports only work in same-process mode).
```python
from fastapi_vue import server
server.run("my_app.app:app", listen=["localhost:8000"])
```
- As a deployment option, environment `FORWARDED_ALLOW_IPS` controls `X-Forwarded` trusted IPs (default: `127.0.0.1,::1`).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"blake3>=1.0.8",
"fastapi>=0.115.0",
"zstandard>=0.23.0"
] | [] | [] | [] | [
"Homepage, https://git.zi.fi/LeoVasanko/fastapi-vue",
"Repository, https://github.com/LeoVasanko/fastapi-vue"
] | uv/0.9.21 {"installer":{"name":"uv","version":"0.9.21","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T17:51:00.139174 | fastapi_vue-1.1.1.tar.gz | 7,548 | 9d/22/e8ff347e97bdb8aadffca9e347045905fda8ce0f9b1501f004140fa8d6a4/fastapi_vue-1.1.1.tar.gz | source | sdist | null | false | 216242430ab22cf6cfaf37af900fdfcb | 9ae6f6ad92c3bc57b574c1b88030e22a1b1090d4943af5c6ce084bf25b3e9258 | 9d22e8ff347e97bdb8aadffca9e347045905fda8ce0f9b1501f004140fa8d6a4 | null | [] | 269 |
2.4 | fastapi-vue-setup | 1.1.1 | Tool to create or patch FastAPI+Vue projects with integrated build/dev systems | # fastapi-vue-setup
Create or patch a FastAPI + Vue project with an integrated dev/build workflow.
- Development: one command runs Vite + FastAPI (reloads)
- Production: `uv build` bakes the built Vue assets into the Python package (no Node/JS runtime needed to *run* the installed package)
## Quick start
Install [UV](https://docs.astral.sh/uv/) and any JS runtime (node, deno, or bun).
This README uses `my-app` as the example project name:
- project directory: `my-app/`
- Python module: `my_app`
- env prefix: `MY_APP`
- CLI command: `my-app`
Create a new project in `./my-app`:
```sh
uvx fastapi-vue-setup my-app
```
Once in your source tree, you will typically use `.` for the path. If there is an existing project, `fastapi-vue-setup` will do its best to find and patch a backend module and create or patch a Vue project in `frontend/`. The integration can be upgraded by running a new version of `fastapi-vue-setup` on it, preserving earlier default ports and user customizations.
## In your project
ℹ️ Everything below is meant to be run within your project source tree.
The setup creates a CLI entry point for your package, so it is run as a command rather than as a Python module or via `fastapi myapp...`. The CLI main can be customized, although `--listen` should be kept for devserver compatibility.
You can choose the JS runtime with environment `JS_RUNTIME` (e.g. `node`, `deno`, `bun`, or path to one). This is used by the build and the devserver scripts. By default any available runtime on the system is chosen.
### Development server (Vite + FastAPI)
```sh
uv run scripts/devserver.py [args]
```
ℹ️ Arguments are forwarded to the main CLI, except that `--listen` controls where Vite listens, and `--backend` is passed to main CLI as `--listen`.
### Production
Build the Python package (this compiles the Vue frontend) and run the production server:
```sh
uv build && uv run my-app [args]
```
Once happy with it, publish the package
```sh
uv build && uv publish
```
Afterwards, you can easily run it anywhere, no JS runtimes required:
```sh
uvx my-app [args]
```
ℹ️ Instead of `uvx` you may consider `uv tool install`, oldskool `pip install` or whatever best suits you.
### Vite plugin
The generated Vite plugin lives in `frontend/vite-plugin-fastapi.js` and defaults to proxying `/api`.
It reads `MY_APP_BACKEND_URL` to know where to proxy; if unset it falls back to your configured default backend port.
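The lookup the plugin performs can be sketched in Python. The names follow this README's `my-app` example; the exact fallback logic is an assumption about the generated code:

```python
import os

# Sketch of the backend-URL lookup the generated Vite plugin performs.
# The exact fallback logic is an assumption based on this README.
def env_prefix(project_name: str) -> str:
    # my-app -> MY_APP
    return project_name.replace("-", "_").upper()

def backend_url(project_name: str, default_port: int, environ=os.environ) -> str:
    return environ.get(
        f"{env_prefix(project_name)}_BACKEND_URL",
        f"http://localhost:{default_port}",
    )

print(backend_url("my-app", 8000, environ={}))
# http://localhost:8000
```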
## Project layout (typical)
```
my-app/
├── frontend/ # Vue app (Vite)
│ ├── src/
│ ├── vite-plugin-fastapi.js
│ └── package.json
├── my_app/ # Python package
│ ├── __main__.py # CLI entrypoint
│ ├── app.py # FastAPI app
│ └── frontend-build/ # built assets (included in distributions)
├── pyproject.toml
└── scripts/
├── devserver.py # Run Vite and FastAPI together in dev mode
└── fastapi-vue/ # Dev utilities (only on the source tree)
├── build-frontend.py
├── buildutil.py
└── devutil.py
```
## The fastapi-vue runtime module
The backend runs the FastAPI app and serves the frontend build using the companion package in [fastapi-vue/README.md](fastapi-vue/README.md). Your project will depend on FastAPI and this lightweight module.
ℹ️ Development functionality is in `scripts/fastapi-vue/` directly in your source tree, and is not to be confused with this runtime module. Only the runtime is installed with your package.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"ruff>=0.14.13",
"tomlkit>=0.12.0"
] | [] | [] | [] | [
"Homepage, https://git.zi.fi/LeoVasanko/fastapi-vue-setup",
"Repository, https://github.com/LeoVasanko/fastapi-vue-setup"
] | uv/0.9.21 {"installer":{"name":"uv","version":"0.9.21","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T17:50:58.281561 | fastapi_vue_setup-1.1.1-py3-none-any.whl | 26,295 | df/94/185e76173277495d5a84e75583d0a3acd46b4d657404d6e94126d7bd10e1/fastapi_vue_setup-1.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 7c819efb00bb2961eb0e9d4b010685d9 | 6ca791c0d3e86e9888d10c22d1b52f12cad0d02b5d6b0b94528d70f99b73bba6 | df94185e76173277495d5a84e75583d0a3acd46b4d657404d6e94126d7bd10e1 | null | [] | 234 |
2.4 | pypkg-recdep | 0.5 | Recursively find dependencies for PyPI packages | # pypkg-recdep
## Background
Some organizations need to have Python packages available in an internal network with no connection to the internet. It is not enough to download and copy the Python packages you know you will use; you also have to include all dependencies, and the dependencies of the dependencies, recursively.
This Python package automates the collection of information needed to maintain Python packages in such a setup, with an internal network and no direct internet connection.
## Using it
If you want to use it, install it with pip from [https://pypi.org/project/pypkg-recdep](https://pypi.org/project/pypkg-recdep). There is no need to download anything from Bitbucket to use the application.
### Installing on mac and Linux
````sh
pip3 install --upgrade pypkg-recdep
````
### Installing on Microsoft Windows
````sh
pip install --upgrade pypkg-recdep
````
## What it does
* It creates a list of all dependencies of a package on PyPI and includes other information (like licences) that might be needed to evaluate whether use of the package is OK in a specific context.
* All dependencies include not only the direct dependencies but also the indirect dependencies of the dependencies.
* It can also make a list of all packages available on an internal Azure DevOps (ADO) server hosting the internal package repository.
* It can also make a list of all packages available on an internal server that uses PyPI protocols to host the internal package repository.
* The lists of internally available packages can be used as input when creating the list of all dependencies of a package on PyPI. The produced output then differentiates between packages already available internally and packages that need to be processed to become available internally.
## Output formats
In version 0.3 the only output file format for the dependency information was Markdown. Starting with version 0.4, several output file formats are supported.
To limit the number of dependencies of pypkg-recdep (as you might want to install it in an environment where each extra dependency causes extra work), it lists only mformat [https://pypi.org/project/mformat](https://pypi.org/project/mformat) as a dependency. This gives a limited number of output file formats (like Markdown and HTML). By installing mformat-ext [https://pypi.org/project/mformat-ext](https://pypi.org/project/mformat-ext) together with pypkg-recdep you automatically get additional output formats (like Microsoft Word docx and LibreOffice Open Document Text odt).
## Version history
| Version | Date | Python version | Description |
|---------|-------------|-----------------|-------------------------------------|
| 0.5 | 18 Feb 2026 | 3.12 or newer | Improved with newer dependencies |
| 0.4 | 01 Feb 2026 | 3.12 or newer | Additional output formats |
| 0.3 | 18 Dec 2025 | 3.12 or newer | Fixed edge cases |
| 0.2.1 | 14 Dec 2025 | 3.12 or newer | Bug fix release |
| 0.1 | 15 Aug 2025 | 3.12 or newer | First released version |
## How to use it
### Use help
On mac and Linux:
```` text
% python3 -m pypkg_recdep --help
% python3 -m pypkg_recdep printdeps --help
% python3 -m pypkg_recdep listinternal --help
% python3 -m pypkg_recdep listado --help
````
On Microsoft Windows:
```` text
% python -m pypkg_recdep --help
% python -m pypkg_recdep printdeps --help
% python -m pypkg_recdep listinternal --help
% python -m pypkg_recdep listado --help
````
### Get information on first packages
So you have set up a service to store Python packages in your isolated network. The first thing you need to do is to get information from PyPI.org about a few packages that are candidates for your internal network. However, you are using the old Python 3.12 in the internal network, so all packages should run on that version. Also, the service that stores Python packages has a bug for metadata versions greater than 2.3, so you need to select package versions accordingly. You also want a list of PURLs [https://github.com/package-url/purl-spec#purl](https://github.com/package-url/purl-spec#purl) to feed into some script you have.
On a computer with internet access run on mac or Linux:
```` text
% python3 -m pypkg_recdep printdeps --package setuptools --pythonversion 3.12 --metadatamax 2.3 --listpurls purls.txt --output setuptools.md
% python3 -m pypkg_recdep printdeps --package selenium --pythonversion 3.12 --metadatamax 2.3 --listpurls purls.txt --output selenium.md
File purls.txt exists. Appending to it.
````
or using the short version of the command line flags on mac and Linux:
```` text
% python3 -m pypkg_recdep printdeps -p setuptools -y 3.12 -m 2.3 -l purls.txt -o setuptools.md
% python3 -m pypkg_recdep printdeps -p selenium -y 3.12 -m 2.3 -l purls.txt -o selenium.md
File purls.txt exists. Appending to it.
````
On a computer with internet access run on Microsoft Windows:
```` text
% python -m pypkg_recdep printdeps --package setuptools --pythonversion 3.12 --metadatamax 2.3 --listpurls purls.txt --output setuptools.md
% python -m pypkg_recdep printdeps --package selenium --pythonversion 3.12 --metadatamax 2.3 --listpurls purls.txt --output selenium.md
File purls.txt exists. Appending to it.
````
or using the short version of the command line flags on Microsoft Windows:
```` text
% python -m pypkg_recdep printdeps -p setuptools -y 3.12 -m 2.3 -l purls.txt -o setuptools.md
% python -m pypkg_recdep printdeps -p selenium -y 3.12 -m 2.3 -l purls.txt -o selenium.md
File purls.txt exists. Appending to it.
````
As you can see, you need to let the program create a separate output file with information for each top-level package you want information on. This output file contains information on that package, on all packages it depends on, on all packages the dependencies depend on, and so on.
However, you can use the same PURL list file name for several top-level packages.
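Because the PURL file is appended to across runs, it can accumulate duplicates. Below is a small sketch of deduplicating it before feeding it to downstream scripts (hypothetical helper, not part of pypkg-recdep; it assumes one PURL per line):

```python
from pathlib import Path


def unique_purls(path: str) -> list[str]:
    """Return the PURLs from a list file, keeping first occurrences in order."""
    seen: dict[str, None] = {}
    for line in Path(path).read_text().splitlines():
        purl = line.strip()
        if purl and purl not in seen:
            seen[purl] = None  # dict preserves insertion order
    return list(seen)
```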
### Knowing what is on your server
Once you have downloaded a number of packages and uploaded them to your local server on your isolated network, you want to know what you already have locally, so you can avoid analyzing and downloading versions of packages you already have.
To solve this, pypkg-recdep can list what you have on your local server.
#### Using PyPI REST API to list your local server
On a Linux or mac computer on your local network run:
```` text
% python3 -m pypkg_recdep listinternal --url https://pypi.local --output internal.csv
````
or if you need to use a personal access token, run:
```` text
% python3 -m pypkg_recdep listinternal --url https://pypi.local --output internal.csv --patfile my_pat_file
````
Of course this can also be done using the short versions of the command line flags (with or without the personal access token):
```` text
% python3 -m pypkg_recdep listinternal -u https://pypi.local -o internal.csv
````
or if you need to use a personal access token, run:
```` text
% python3 -m pypkg_recdep listinternal -u https://pypi.local -o internal.csv -a my_pat_file
````
To run these on a computer running Microsoft Windows just type "python" instead of "python3".
#### Listing python packages on Azure Devops Server
```` text
% python3 -m pypkg_recdep listado --help
usage: pypkg_recdep listado [-h] [-o OUTPUT] [-s INSTANCE] [-c COLLECTION]
                            [-p PROJECT] [-i INCLUDE_TYPES] [-a PATFILE]
List python packages in Azure Dev Ops (ADO) server matching types.
options:
-h, --help show this help message and exit
-o, --output OUTPUT Name of output CSV file (default: ado.csv).
-s, --instance INSTANCE
Instance or server name. (Default: "devops.local.net".
Set environment variable ADO_INSTANCE to change
default value.)
-c, --collection COLLECTION
Collection. (Default: "Python". Set environment
variable ADO_COLLECTION to change default value.)
-p, --project PROJECT
Project. (Default: None. Set environment variable
ADO_PROJECT to change default value.)
-i, --include-types INCLUDE_TYPES
Comma separated list of package types to include.
(Default: "wheel". Set environment variable ADO_TYPES
to change default value.)
-a, --patfile PATFILE
File name of file with personal access token.
(Default: None. Set environment variable PATFILE to
change default value.)
Useful when having internal ADO server mirroring part of PyPI.org.
````
### Get information on additional packages
The CSV file listing the packages on your local server (created in the text above) can be used as an exclude file, so that dependent packages you already have are excluded (i.e. not downloaded this time) when getting information on additional packages from PyPI.org.
On a computer with internet access run on mac or Linux:
```` text
% python3 -m pypkg_recdep printdeps --package excel-list-transform --pythonversion 3.12 --metadatamax 2.3 --listpurls purls.txt --output excel-list-transform.md --excludecsv internal.csv --excludetext 'already on internal server'
````
Of course this can also be done using the short versions of the command line flags:
```` text
% python3 -m pypkg_recdep printdeps -p excel-list-transform -y 3.12 -m 2.3 -l purls.txt -o excel-list-transform.md -e internal.csv -t 'already on internal server'
````
To run these on a computer running Microsoft Windows just type "python" instead of "python3".
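For scripting around the exclude file, the internal-server CSV can be loaded into a set of names. A minimal sketch (hypothetical helper, assuming the package name is in the first column after a header row):

```python
import csv


def internal_package_names(csv_path: str) -> set[str]:
    """Collect normalized package names from an internal-server CSV listing."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row, if present
        return {row[0].strip().lower() for row in reader if row}
```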
## Source code
Source code and tests are available at [https://bitbucket.org/tom-bjorkholm/pypkg-recdep](https://bitbucket.org/tom-bjorkholm/pypkg-recdep).
## Test summary
* Test result: 461 passed, 2 skipped in 5s
* No Flake8 warnings.
* No mypy errors found.
* 0.5 built and tested using python version: Python 3.14.3
| text/markdown | Tom Björkholm | Tom Björkholm <klausuler_linnet0q@icloud.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"mformat>=0.3",
"pypi-simple>=1.8.0",
"packaging>=26.0",
"argcomplete>=3.6.3",
"requests>=2.32.3",
"types-requests>=2.32.4.20250913",
"pip>=25.3",
"setuptools>=80.9.0",
"build>=1.3.0",
"wheel>=0.45.1"
] | [] | [] | [] | [
"Source code, https://bitbucket.org/tom-bjorkholm/pypkg-recdep"
] | twine/6.0.1 CPython/3.12.6 | 2026-02-18T17:50:55.471748 | pypkg_recdep-0.5.tar.gz | 26,045 | 90/8f/d7062df57a6cd145953632f133c8562c86b724b3d5f0ed09b0c20e3c5cf8/pypkg_recdep-0.5.tar.gz | source | sdist | null | false | 529c7305493c424458326fad339cfa0d | d015795806975c01f453501ea3729fee4a0bbcfb70ea3054db3d07303b1d5e5e | 908fd7062df57a6cd145953632f133c8562c86b724b3d5f0ed09b0c20e3c5cf8 | null | [] | 229 |
2.4 | gh-backup | 0.3.0 | Backup a GitHub organization or user: repos, issues, and pull requests | # gh-backup
[](https://github.com/eoin-obrien/gh-backup/actions/workflows/ci.yml)
[](https://codecov.io/gh/eoin-obrien/gh-backup)
[](https://pypi.org/project/gh-backup/)
[](https://www.python.org/downloads/)
[](LICENSE.md)
[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/uv)
[](https://github.com/pre-commit/pre-commit)
[](https://conventionalcommits.org)
Backup a GitHub organization or user: repos, issues, and pull requests.
Clones all repositories with full git history and exports issues/PRs as JSON, then compresses everything into a `.tar.zst` archive.
## Requirements
- [GitHub CLI (`gh`)](https://cli.github.com/) — authenticated with a token that has `repo` and `read:org` scopes
## Installation
```bash
# uv (recommended)
uv tool install gh-backup
# pip
pip install gh-backup
# pipx
pipx install gh-backup
```
## Authentication
```bash
gh-backup auth
```
Checks that `gh` is authenticated and reports the active account and token scopes.
## Usage
### Export an organization
```bash
gh-backup export myorg --output /backups
```
### Export a user account
```bash
gh-backup export myusername --output /backups
```
Account type (org or user) is detected automatically.
### Options
| Option | Short | Description |
|---|---|---|
| `--output PATH` | `-o` | Directory to write exports into (required) |
| `--workers N` | `-w` | Parallel clone workers (default: 4, max: 32) |
| `--repos NAME` | `-r` | Only export specific repos (repeatable) |
| `--format` | | Archive format: `zst` (default), `gz`, or `xz` |
| `--no-compress` | | Keep raw export directory, skip archiving |
| `--keep-dir` | | Keep uncompressed directory after archiving |
| `--shallow` | | Shallow clone (`--depth 1`); faster but no full history |
| `--gc` | | Run `git gc --aggressive` on each clone to shrink pack files |
| `--dry-run` | | List repos that would be exported without writing anything |
| `--skip-forks` | | Exclude forked repositories |
| `--skip-archived` | | Exclude archived repositories |
| `--visibility` | | Only export repos with this visibility: `all` (default), `public`, or `private` |
| `--skip-issues` | | Skip issues and pull request export |
| `--verbose` | `-v` | Enable debug logging |
### Examples
```bash
# Export an org with more workers
gh-backup export myorg --output /backups --workers 8
# Export specific repos only
gh-backup export myorg --output /backups --repos frontend --repos backend
# Export a user account, skip issues, no compression
gh-backup export myusername --output /backups --skip-issues --no-compress
# Export with GZ compression instead of ZST
gh-backup export myorg --output /backups --format gz
```
Each run is saved to a timestamped subdirectory under the output directory, then compressed into a single archive.
## Output structure
The archive unpacks to:
```
<org>-<timestamp>/
├── metadata.json # Export config and repo stats
├── repos/
│ ├── repo1.git # Bare mirror clone (git clone --mirror)
│ └── ...
└── issues/
├── repo1/
│ ├── issues.json
│ └── pulls.json
└── ...
```
## Exit codes
| Code | Meaning |
|---|---|
| `0` | Success |
| `1` | Export failed |
| `2` | Partial failure — some repos failed, others succeeded |
| `130` | Cancelled with Ctrl+C |
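In automation (e.g. a nightly cron job), the exit codes above can be mapped to alert levels. A minimal wrapper sketch (function names are hypothetical, not part of gh-backup):

```python
import subprocess


def classify(code: int) -> str:
    """Map a gh-backup exit code to a status label, per the documented table."""
    return {0: "success", 1: "failed", 2: "partial", 130: "cancelled"}.get(code, "unknown")


def run_backup(org: str, output: str) -> str:
    """Run gh-backup for an org and classify the result by its exit code."""
    proc = subprocess.run(["gh-backup", "export", org, "--output", output])
    return classify(proc.returncode)
```

A `partial` result (exit code 2) is worth alerting on, since some repositories were still exported successfully.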
## Development
Requires [uv](https://docs.astral.sh/uv/).
```bash
make install # Install dependencies and set up pre-commit hooks
make test # Run the test suite
make lint # Check linting and formatting (ruff)
make lint-fix # Auto-fix linting and formatting issues
make commit # Create a conventional commit (via commitizen)
```
| text/markdown | null | Eoin O'Brien <eoin@eoin.ai> | null | null | null | archive, backup, cli, git, github | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Topic :: Software Development :: Version Control :: Git",
"Topic :: System :: Archiving :: Back... | [] | null | null | >=3.13 | [] | [] | [] | [
"rich>=13",
"tenacity>=9",
"typer>=0.15",
"zstandard>=0.23"
] | [] | [] | [] | [
"Homepage, https://github.com/eoin-obrien/gh-backup",
"Repository, https://github.com/eoin-obrien/gh-backup",
"Changelog, https://github.com/eoin-obrien/gh-backup/blob/master/CHANGELOG.md",
"Bug Tracker, https://github.com/eoin-obrien/gh-backup/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:50:43.684982 | gh_backup-0.3.0-py3-none-any.whl | 28,306 | 99/de/2404aae943a13e337feabfcad98f7c8b990924bc09a035a1d0cf47b81cbd/gh_backup-0.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 74de2452ba6e6bd3965e6ac075437abf | 4597e2e18b2b8da9a167274cd3dab0ebb4f6bd1c96a8b2b58be7c203325110c5 | 99de2404aae943a13e337feabfcad98f7c8b990924bc09a035a1d0cf47b81cbd | AGPL-3.0-only | [
"LICENSE.md"
] | 223 |
2.4 | dyslexai-taxonomy | 0.1.0 | Shared taxonomy definitions for computer vision inspection domains | # dyslexai-taxonomy
Shared taxonomy definitions for computer vision inspection domains.
## Overview
This package provides standardized category definitions, compliance rules, and calibration data for computer vision models targeting inspection domains. Both data generation tools (kubric-stair) and runtime inference applications (spatial-vision-app) import from this shared taxonomy.
## Installation
```bash
pip install dyslexai-taxonomy
```
For development:
```bash
pip install -e ".[dev]"
```
## Usage
```python
from dyslexai_taxonomy import get_domain, list_domains
# List available domains
domains = list_domains()
# ['spatial_vision']
# Get a specific domain
domain = get_domain("spatial_vision")
# Access categories
coarse = domain.get_coarse_categories()
fine = domain.get_fine_categories()
# Get compliance rules for a category
rules = domain.get_compliance_rules("outlet")
```
### Direct imports
```python
from dyslexai_taxonomy.domains.spatial_vision import (
COARSE_CATEGORIES,
FINE_CATEGORIES,
get_coarse_for_fine,
get_compliance_rules,
CALIBRATION_ANCHORS,
)
# Get parent category for a fine category
coarse = get_coarse_for_fine("gfci") # Returns "outlet"
# Get standard dimensions for calibration
outlet_dims = CALIBRATION_ANCHORS["outlet"]
```
## Architecture
```
dyslexai-taxonomy (standalone)
↑ ↑
│ │
kubric-stair spatial-vision-app
(data generation) (runtime inference)
```
Both projects import from dyslexai-taxonomy independently. Neither depends on the other.
## Domains
### spatial_vision
Home inspection and code compliance domain.
**Coarse Categories (15-18):**
- Wall components: outlet, switch, vent, panel, thermostat, junction_box
- Openings: door, window, cabinet_door
- Vertical transport: stair, ramp
- HVAC equipment: furnace, water_heater, ac_unit
- Safety: smoke_detector, co_detector
- Background: wall_surface, floor_surface
**Fine Categories (150+):**
Each coarse category expands to multiple fine subcategories. For example:
- outlet → duplex, gfci, usb, 240v, outdoor...
- stair → straight, l_shaped, spiral, exterior...
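The fine-to-coarse expansion above amounts to a reverse lookup. A minimal sketch of how such a mapping could be modeled (illustrative data only, not the package's actual tables):

```python
# Illustrative subset of a fine -> coarse taxonomy table
FINE_TO_COARSE = {
    "duplex": "outlet",
    "gfci": "outlet",
    "usb": "outlet",
    "straight": "stair",
    "spiral": "stair",
}


def coarse_for_fine(fine: str) -> str:
    """Return the parent coarse category for a fine category."""
    try:
        return FINE_TO_COARSE[fine]
    except KeyError:
        raise ValueError(f"Unknown fine category: {fine}")
```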
**Compliance Rules:**
- NEC (National Electrical Code)
- IRC (International Residential Code)
- ADA (Americans with Disabilities Act)
- NFPA (Fire safety)
## License
Apache-2.0
| text/markdown | dyslexai-cardnl | null | null | null | null | building-codes, computer-vision, detection, inspection, taxonomy | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml>=6.0",
"black>=23.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/dyslexai-cardnl/dyslexai-taxonomy",
"Repository, https://github.com/dyslexai-cardnl/dyslexai-taxonomy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T17:49:47.436328 | dyslexai_taxonomy-0.1.0.tar.gz | 19,830 | 13/57/766e7f002ad6d71d9e5f1f1fb9bef4cad82aaed844270238c9b3c6a85df6/dyslexai_taxonomy-0.1.0.tar.gz | source | sdist | null | false | 39047180761379bee97202a335ba7f05 | 9bed955713862101c3a76a0650f956910adce6af9518c23096649159cfdef767 | 1357766e7f002ad6d71d9e5f1f1fb9bef4cad82aaed844270238c9b3c6a85df6 | Apache-2.0 | [
"LICENSE"
] | 277 |
2.4 | mujoco-usd-converter | 0.1.0a6 | A MuJoCo to OpenUSD Data Converter | # mujoco-usd-converter
# Overview
A [MuJoCo](https://mujoco.org) to [OpenUSD](https://openusd.org) Data Converter
> Important: This is currently an Alpha product. See the [CHANGELOG](https://github.com/newton-physics/mujoco-usd-converter/blob/main/CHANGELOG.md) for features and known limitations.
Key Features:
- Converts an input MJCF file into an OpenUSD Layer
- Supports data conversion of visual geometry & materials, as well as the bodies, collision geometry, sites, joints, and actuators necessary for kinematic simulation.
- Available as a python module or command line interface (CLI).
- Creates a standalone, self-contained artifact with no connection to the source MJCF, OBJ, or STL data.
- Structured as an [Atomic Component](https://docs.omniverse.nvidia.com/usd/latest/learn-openusd/independent/asset-structure-principles.html#atomic-model-structure-flowerpot)
- Suitable for visualization & rendering in any OpenUSD Ecosystem application.
- Suitable for [import & simulation directly in MuJoCo Simulate](#loading-usd-in-mujoco-simulate).
This project is part of [Newton](https://github.com/newton-physics), a [Linux Foundation](https://www.linuxfoundation.org) project which is community-built and maintained.
## Menagerie Benchmarks
We run regular benchmarks on the [MuJoCo Menagerie](https://github.com/google-deepmind/mujoco_menagerie). See the latest results [here](https://github.com/newton-physics/mujoco-usd-converter/blob/main/benchmarks.md).
## Implementation Details & Dependencies
Specific implementation details are based on the "MJC to USD Conceptual Data Mapping" document, which is a collaboration between Google DeepMind and NVIDIA. This document will be made public once the project moves out of the alpha phase.
One important detail is that this document recommends nested rigid bodies within articulations, as it more faithfully matches the kinematic tree in MuJoCo and meets the needs of reduced coordinate solvers. Support for nested bodies in UsdPhysics is fairly new (as of USD 25.11), and some existing applications may not support this style of nesting.
The output asset structure is based on NVIDIA's [Principles of Scalable Asset Structure in OpenUSD](https://docs.omniverse.nvidia.com/usd/latest/learn-openusd/independent/asset-structure-principles.html).
The implementation also leverages the following dependencies:
- NVIDIA's [OpenUSD Exchange SDK](https://docs.omniverse.nvidia.com/usd/code-docs/usd-exchange-sdk/latest/index.html) to author consistent & correct USD data.
- Pixar's OpenUSD python modules & native libraries (vendored via the `usd-exchange` wheel).
- Google DeepMind's `mujoco` python module for parsing MJCF into MjSpec
- A codeless version of the [MjcPhysics USD schema](https://mujoco.readthedocs.io/en/latest/OpenUSD/mjcPhysics.html) from MuJoCo to author the MuJoCo specific Prims, Applied APIs, and Attributes.
- [tinyobjloader](https://github.com/tinyobjloader/tinyobjloader) and [numpy-stl](https://numpy-stl.readthedocs.io) for parsing any mesh data referenced by the input MJCF datasets.
# Get Started
To start using the converter, install the python wheel into a virtual environment using your favorite package manager:
```bash
python -m venv .venv
source .venv/bin/activate
pip install mujoco-usd-converter
mujoco_usd_converter /path/to/robot.xml /path/to/usd_robot
```
See `mujoco_usd_converter --help` for CLI arguments.
Alternatively, the same converter functionality can be accessed from the python module directly, which is useful when further transforming the USD data after conversion.
```python
import mujoco_usd_converter
import usdex.core
from pxr import Sdf, Usd
converter = mujoco_usd_converter.Converter()
asset: Sdf.AssetPath = converter.convert("/path/to/robot.xml", "/path/to/usd_robot")
stage: Usd.Stage = Usd.Stage.Open(asset.path)
# modify further using Usd or usdex.core functionality
usdex.core.saveStage(stage, comment="modified after conversion")
```
## Loading the USD Asset
Once your asset is saved to storage, it can be loaded into an OpenUSD Ecosystem application, including a custom build of MuJoCo itself.
We recommend starting with [usdview](https://docs.omniverse.nvidia.com/usd/latest/usdview/index.html), a simple graphics application to confirm the visual geometry & materials are working as expected. You can inspect any of the USD properties in this application, including the UsdPhysics and MjcPhysics properties.
> Tip: [OpenUSD Exchange Samples](https://github.com/NVIDIA-Omniverse/usd-exchange-samples) provides `./usdview.sh` and `.\usdview.bat` commandline tools which bootstrap usdview with the necessary third party dependencies.
However, you cannot start simulating in usdview, as there is no native simulation engine in this application.
To simulate this asset in Newton, call [newton.ModelBuilder.add_usd()](https://newton-physics.github.io/newton/api/_generated/newton.ModelBuilder.html#newton.ModelBuilder.add_usd) to parse the asset and add it to your Newton model.
It is also possible to simulate this asset directly in [MuJoCo itself](#loading-usd-in-mujoco-simulate).
Simulating in other UsdPhysics-enabled products (e.g. NVIDIA Omniverse, Unreal Engine, etc.) may provide mixed results. The MJC physics data is structured hierarchically, which maximal-coordinate solvers often do not support. Similarly, many of the important simulation settings are authored via the MjcPhysics schemas, a USD plugin developed by Google DeepMind, which needs to be deployed and supported for import by the target runtime. In order to see faithful simulation in these applications, the USD asset will need to be modified to suit the expectations of each target runtime.
## Loading USD in MuJoCo Simulate
Loading any USD Layer into MuJoCo Simulate requires a USD enabled build of MuJoCo (i.e. built from source against your own OpenUSD distribution).
> Important: USD support in MuJoCo is currently listed as experimental
To build MuJoCo with USD support, follow the usual CMake build instructions & provide the `USD_DIR` argument when configuring cmake. If you do not have a local USD distribution you will need to build or acquire one.
> Tip: OpenUSD Exchange provides a commandline tool to acquire many precompiled distributions of OpenUSD across several platforms & python versions. See the [install_usdex](https://docs.omniverse.nvidia.com/usd/code-docs/usd-exchange-sdk/latest/docs/devtools.html#install-usdex) documentation for details.
Once MuJoCo is compiled, you can launch the `./bin/simulate` app and drag & drop your USD asset into the viewport. The robot should load & simulate just as if you were using the original MJCF dataset.
# Contribution Guidelines
Contributions from the community are welcome. See [CONTRIBUTING.md](https://github.com/newton-physics/mujoco-usd-converter/blob/main/CONTRIBUTING.md) to learn about contributing via GitHub issues, as well as building the project from source and our development workflow.
General contribution guidelines for Newton repositories are available [here](https://github.com/newton-physics/newton-governance/blob/main/CONTRIBUTING.md).
# Community
For questions about this mujoco-usd-converter, feel free to join or start a [GitHub Discussions](https://github.com/newton-physics/mujoco-usd-converter/discussions).
For questions about OpenUSD Exchange SDK, use the [USD Exchange GitHub Discussions](https://github.com/NVIDIA-Omniverse/usd-exchange/discussions).
For questions about MuJoCo or the MjcPhysics USD Schemas, use the [MuJoCo Forums](https://github.com/google-deepmind/mujoco/discussions/categories/asking-for-help).
For general questions about OpenUSD itself, use the [Alliance for OpenUSD Forum](https://forum.aousd.org).
By participating in this community, you agree to abide by the Linux Foundation [Code of Conduct](https://lfprojects.org/policies/code-of-conduct/).
# References
- [MuJoCo Docs](https://mujoco.readthedocs.io/en/latest/overview.html)
- [NVIDIA OpenUSD Exchange SDK Docs](https://docs.omniverse.nvidia.com/usd/code-docs/usd-exchange-sdk)
- [OpenUSD API Docs](https://openusd.org/release/api/index.html)
- [OpenUSD User Docs](https://openusd.org/release/index.html)
- [NVIDIA OpenUSD Resources and Learning](https://developer.nvidia.com/usd)
# License
The mujoco-usd-converter is provided under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0), as is the [OpenUSD Exchange SDK](https://docs.omniverse.nvidia.com/usd/code-docs/usd-exchange-sdk/latest/docs/licenses.html) and [MuJoCo](https://github.com/google-deepmind/mujoco/blob/main/LICENSE).
| text/markdown | Newton Developers | null | null | null | Apache-2.0 | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"mujoco>=3.5.0",
"newton-usd-schemas>=0.1.0rc2",
"numpy-stl>=3.2",
"tinyobjloader>=2.0.0rc13",
"usd-exchange>=2.2.0"
] | [] | [] | [] | [
"Documentation, https://github.com/newton-physics/mujoco-usd-converter/#readme",
"Repository, https://github.com/newton-physics/mujoco-usd-converter",
"Changelog, https://github.com/newton-physics/mujoco-usd-converter/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T17:48:49.581805 | mujoco_usd_converter-0.1.0a6.tar.gz | 154,926 | 70/99/47cdf8f46b9bfb7f7d7516262095de8008a321e7e526c7a42ec37cc88daa/mujoco_usd_converter-0.1.0a6.tar.gz | source | sdist | null | false | 8764809a9ff56aecfe5aaa02596e0827 | 4e4140485da4eb3c0d05954d05178a0a72b13ee73e34e9a63d174de3aeb6d25c | 709947cdf8f46b9bfb7f7d7516262095de8008a321e7e526c7a42ec37cc88daa | null | [
"LICENSE.md"
] | 232 |
2.4 | unicex | 0.18.3 | Unified Crypto Exchange API | # Unified Crypto Exchange API
`unicex` is an asynchronous library for working with cryptocurrency exchanges, implementing a unified interface on top of the raw REST and WebSocket APIs of different exchanges. It supports the spot and USDT-futures markets.
## ✅ Implementation Status
| Exchange | Client | Auth | WS Manager | User WS | Uni Client | Uni WS Manager | ExchangeInfo |
|-----------------|--------|------|------------|---------|------------|----------------|--------------|
| **Aster** | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| **Binance** | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| **Bitget** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ |
| **Bybit** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ |
| **Gateio** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ |
| **Hyperliquid** | ✓ | ✓ | ✓ | ✓ | ✓ | | |
| **Mexc** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ |
| **Okx** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ |
| **Kucoin** | | | | | ✓ | | |
| **BingX** | | | | | ✓ | | |
---
### 📖 Column Descriptions
- **Client** - Wrappers over the HTTP methods for the following sections: market, order, position, account.
- **Auth** - Support for authorization and private endpoints.
- **WS Manager** - Wrappers over the exchange's websockets.
- **User WS** - Support for user data websockets.
- **UniClient** - Unified client.
- **UniWebsocketManager** - Unified websocket manager.
- **ExchangeInfo** - Exchange information used for rounding prices and quantities.
---
## 🚀 Quick Start
- Installation: `pip install unicex`, or from source: `pip install -e .`
- The library is fully asynchronous. Import examples:
  - Raw clients: `from unicex.binance import Client`
  - Unified clients: `from unicex.binance import UniClient`
  - Websocket managers: `from unicex.binance import WebsocketManager, UniWebsocketManager`
### Example: Fetching market data via the API
```python
import asyncio

from unicex import Exchange, Timeframe, get_uni_client

# Pick the exchange to work with.
# Supported: Binance, Bybit, Bitget, Mexc, Gateio, Hyperliquid, and others.
exchange = Exchange.BYBIT


async def main() -> None:
    """A simple usage example of the unicex unified client."""
    # 1️⃣ Create a client for the chosen exchange
    client = await get_uni_client(exchange).create()

    # 2️⃣ Fetch open interest for all contracts
    open_interest = await client.open_interest()
    print(open_interest)
    # Example output:
    # {
    #     "BTCUSDT": {"t": 1759669833728, "v": 61099320.0},
    #     "ETHUSDT": {"t": 1759669833728, "v": 16302340.0},
    #     "SOLUSDT": {"t": 1759669833728, "v": 3427780.0},
    #     ...
    # }

    # 3️⃣ Other data can be fetched in the same unified format:
    await client.tickers()  # list of all tickers
    await client.futures_tickers()  # futures tickers
    await client.ticker_24hr()  # 24-hour statistics (spot)
    await client.futures_ticker_24hr()  # 24-hour statistics (futures)
    await client.klines("BTCUSDT", Timeframe.MIN_5)  # spot candles
    await client.futures_klines("BTCUSDT", Timeframe.HOUR_1)  # futures candles
    await client.funding_rate()  # funding rates


if __name__ == "__main__":
    asyncio.run(main())
```
### Example: Receiving real-time data via the WebSocket API
```python
import asyncio

from unicex import Exchange, TradeDict, get_uni_websocket_manager
from unicex.enums import Timeframe

# Pick the exchange to work with.
# Supported: Binance, Bybit, Bitget, Mexc, Gateio, Hyperliquid and others.
exchange = Exchange.BITGET


async def main() -> None:
    """A simple example of using the unified UniCEX WebSocket manager."""
    # 1️⃣ Create a WebSocket manager for the chosen exchange
    ws_manager = get_uni_websocket_manager(exchange)()

    # 2️⃣ Subscribe to the trade stream (aggTrades)
    aggtrades_ws = ws_manager.aggtrades(
        callback=callback,
        symbols=["BTCUSDT", "ETHUSDT"],
    )
    # Start receiving data
    await aggtrades_ws.start()

    # 3️⃣ Examples of other stream types:
    futures_aggtrades_ws = ws_manager.futures_aggtrades(
        callback=callback,
        symbols=["BTCUSDT", "ETHUSDT"],
    )
    klines_ws = ws_manager.klines(
        callback=callback,
        symbols=["BTCUSDT", "ETHUSDT"],
        timeframe=Timeframe.MIN_5,
    )
    futures_klines_ws = ws_manager.futures_klines(
        callback=callback,
        symbols=["BTCUSDT", "ETHUSDT"],
        timeframe=Timeframe.MIN_1,
    )

    # 💡 Each exchange also has its own WebsocketManager:
    # unicex.<exchange>.websocket_manager.WebsocketManager
    # It implements the remaining methods of the WS API.


async def callback(trade: TradeDict) -> None:
    """Handle incoming WebSocket data."""
    print(trade)
    # Example output:
    # {'t': 1759670527594, 's': 'BTCUSDT', 'S': 'BUY', 'p': 123238.87, 'v': 0.05}
    # {'t': 1759670527594, 's': 'BTCUSDT', 'S': 'BUY', 'p': 123238.87, 'v': 0.04}
    # {'t': 1759670346828, 's': 'ETHUSDT', 'S': 'SELL', 'p': 4535.0, 'v': 0.0044}
    # {'t': 1759670347087, 's': 'ETHUSDT', 'S': 'BUY', 'p': 4534.91, 'v': 0.2712}


if __name__ == "__main__":
    asyncio.run(main())
```
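A callback like the one above typically feeds some aggregation. As a minimal stdlib-only sketch (no exchange connection; the dicts simply mirror the `TradeDict` example output shown above), accumulating traded volume per symbol might look like:

```python
from collections import defaultdict

# Trades shaped like the TradeDict example output above
trades = [
    {"t": 1759670527594, "s": "BTCUSDT", "S": "BUY", "p": 123238.87, "v": 0.05},
    {"t": 1759670527594, "s": "BTCUSDT", "S": "BUY", "p": 123238.87, "v": 0.04},
    {"t": 1759670346828, "s": "ETHUSDT", "S": "SELL", "p": 4535.0, "v": 0.0044},
]

# Sum the "v" (volume) field per "s" (symbol)
volume_by_symbol: dict[str, float] = defaultdict(float)
for trade in trades:
    volume_by_symbol[trade["s"]] += trade["v"]

print(dict(volume_by_symbol))
```

In a real application the body of the loop would live inside the async `callback` passed to the WebSocket manager.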
### Example: Rounding prices with the background ExchangeInfo class
```python
import asyncio

from unicex import start_exchanges_info, get_exchange_info, Exchange


async def main() -> None:
    # ⏳ Start the background tasks that collect market parameters for all exchanges:
    # - number of decimal places for price and quantity
    # - contract multipliers for futures
    await start_exchanges_info()

    # A short pause so the data has time to load
    await asyncio.sleep(1)

    # 1️⃣ Example 1: Rounding a price for OKX futures
    okx_exchange_info = get_exchange_info(Exchange.OKX)
    okx_rounded_price = okx_exchange_info.round_futures_price("BTC-USDT-SWAP", 123456.1234567890)
    print(okx_rounded_price)  # >> 123456.1

    # 2️⃣ Example 2: Rounding a quantity for Binance spot
    binance_exchange_info = get_exchange_info(Exchange.BINANCE)
    binance_rounded_quantity = binance_exchange_info.round_quantity("BTCUSDT", 1.123456789)
    print(binance_rounded_quantity)  # >> 1.12345

    # 3️⃣ Example 3: Getting a contract multiplier (e.g. Mexc futures)
    mexc_exchange_info = get_exchange_info(Exchange.MEXC)
    mexc_contract_multiplier = mexc_exchange_info.get_futures_ticker_info("BTC_USDT")["contract_size"]
    print(mexc_contract_multiplier)  # >> 0.0001

    # 4️⃣ Example 4: A real use case — computing a take-profit by hand
    # Say a position was opened at 123123.1 USDT and we want a +3.5% take-profit:
    take_profit_raw = 123123.1 * 1.035
    print("Before rounding:", take_profit_raw)  # >> 127432.40849999999

    # The exchange requires the price in an allowed format, so round it:
    take_profit = okx_exchange_info.round_futures_price("BTC-USDT-SWAP", take_profit_raw)
    print("After rounding:", take_profit)  # >> 127432.4

    # This number can now be passed to the API safely, without errors:
    # await client.create_order(symbol="BTC-USDT-SWAP", price=take_profit, ...)


if __name__ == "__main__":
    asyncio.run(main())
```
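Conceptually, what `round_futures_price` / `round_quantity` do is truncate a value to the instrument's allowed number of decimals. That idea can be sketched with the stdlib `decimal` module (a hypothetical helper for illustration, not unicex's actual implementation, which also accounts for contract multipliers):

```python
from decimal import Decimal, ROUND_DOWN


def round_to_precision(value: float, decimals: int) -> float:
    """Truncate `value` to `decimals` decimal places, as exchanges typically require."""
    quantum = Decimal(1).scaleb(-decimals)  # e.g. decimals=1 -> Decimal("0.1")
    return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_DOWN))


print(round_to_precision(123456.1234567890, 1))  # 123456.1
print(round_to_precision(1.123456789, 5))        # 1.12345
```

Truncation (`ROUND_DOWN`) rather than nearest-rounding matters here: rounding a quantity up could exceed the account balance, and most exchanges reject prices that do not sit on the tick grid.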
| text/markdown | null | LoveBloodAndDiamonds <ayazshakirzyanov27@gmail.com> | null | null | BSD 3-Clause License
Copyright (c) 2025, LoveBloodAndDiamonds
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.12.15",
"eth-account>=0.13.7",
"loguru>=0.7.3",
"msgpack>=1.1.1",
"orjson>=3.11.3",
"protobuf>=6.32.1",
"websockets>=15.0.1"
] | [] | [] | [] | [
"Github, https://github.com/LoveBloodAndDiamonds/uni-cex-api",
"Author, https://t.me/LoveBloodAndDiamonds",
"Readthedocs, https://unicex.readthedocs.io/ru/latest/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:48:28.618609 | unicex-0.18.3.tar.gz | 181,422 | 6f/01/a6046d48dc45a55cb31a00aa3eee0b2f4277bebb2cf6ebbfd18d93e7c2b1/unicex-0.18.3.tar.gz | source | sdist | null | false | c1b2065732b05cd527a1fde953382adf | cbea978535f3312b0b5d954015ef6306153054fe00bbf3d1f26a3848771c0be5 | 6f01a6046d48dc45a55cb31a00aa3eee0b2f4277bebb2cf6ebbfd18d93e7c2b1 | null | [
"LICENSE"
] | 260 |
2.4 | convert-poetry2uv | 0.3.12 | Poetry to uv tool. To migrate from a poetry managed repo to uv. | # convert-poetry2uv
The `convert_poetry2uv.py` script converts a `pyproject.toml` managed by `poetry` into one that can be consumed by `uv`.
> Poetry v2 came out after this tool. The tool has been modified to work with poetry v2 format as well. Please create an issue/PR if you find any issues.
It has a dry-run flag that writes the output to a temporary file so you can validate it. Without the dry-run flag, the original file is saved with a `.org` extension.
uv run convert_poetry2uv.py <path to file> [-n]
You may need to make some manual changes.
The layout might not be exactly to your liking. I would recommend using [Even better toml](https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml) in VSCode. Just open the newly generated toml file and save. It will format the file according to the toml specification.
## Caveats
* If you were using the poetry build-system, it is removed in the generated pyproject.toml.
* If you had optional dev groups, the dev group libraries are used and the optional flag is removed.
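For illustration, the general shape of the conversion — shown here for a simple project; the tool's exact output may differ — moves Poetry's metadata and dependency tables into PEP 621 `[project]` fields:

```toml
# Before (Poetry v1 style)
[tool.poetry]
name = "myproject"
version = "0.1.0"

[tool.poetry.dependencies]
python = "^3.10"
requests = "^2.31"

# After (uv / PEP 621 style)
[project]
name = "myproject"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["requests>=2.31,<3"]
```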
# Using as a tool
The script can be run as a tool using [`uvx`](https://docs.astral.sh/uv/guides/tools/)
uvx convert-poetry2uv --help
## uv instructions
Once the pyproject.toml is converted, you can use `uv` to manage your project. To start fresh, remove the .venv directory, then recreate and sync it:
rm -rf .venv
uv venv # or 'uv venv -p 3.12' to specify a python version
uv sync
With this you are good to go and are able to validate the migration was a success.
## Pypi
The script is also available on pypi as [convert-poetry2uv](https://pypi.org/project/convert-poetry2uv/)
pip install convert-poetry2uv
# Contribute
Though I've tried to make it as complete as possible, it is not guaranteed to work for all cases. Feel free to contribute to the code or create an issue with the toml file that is not converted correctly.
## Versions/Releases
The version is automatically updated with `python-semantic-release`. Take note of the `pyproject.toml` to see which keywords can be added to the commit message to ensure the correct version is released. The release is created when merged to main.
## Trusted Publisher
> Note to self: When a new github workflow is required, don't forget to add the new workflow to the trusted publisher list.
# Links
* [Writing pyproject.toml](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/)
* [uv pyproject.toml](https://docs.astral.sh/uv/concepts/projects/layout/)
* [Poetry pyproject.toml](https://python-poetry.org/docs/pyproject/)
* [Real python blog: Python and toml](https://realpython.com/python-toml/#write-toml-documents-with-tomli_w)
* [tomlkit docs](https://tomlkit.readthedocs.io/en/latest/quickstart/#)
* [Taskfile installation](https://taskfile.dev/installation/)
* [uv installation](https://docs.astral.sh/uv/getting-started/installation/)
| text/markdown | null | Bart <bart@bamweb.nl> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"tomlkit>=0.13.2"
] | [] | [] | [] | [
"Pull Requests, https://github.com/bartdorlandt/convert_poetry2uv/pulls",
"Bug Tracker, https://github.com/bartdorlandt/convert_poetry2uv/issues",
"Changelog, https://github.com/bartdorlandt/convert_poetry2uv/blob/main/CHANGELOG.md",
"Repository, https://github.com/bartdorlandt/convert_poetry2uv"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T17:48:26.348218 | convert_poetry2uv-0.3.12.tar.gz | 9,985 | 8a/a0/dffe3360e32d747a2f9ab43325e0ee6112a15cca7a66b51ad61c01b46845/convert_poetry2uv-0.3.12.tar.gz | source | sdist | null | false | 6bbec495a381f0ba1f3cf2001155c0d2 | aa15fa2166574db0ebb3977a9766c41cedfb91b3be42444ffd407a0b26d12900 | 8aa0dffe3360e32d747a2f9ab43325e0ee6112a15cca7a66b51ad61c01b46845 | null | [] | 236 |
2.4 | temporalio | 1.23.0 | Temporal.io Python SDK | 
[](https://pypi.org/project/temporalio)
[](https://pypi.org/project/temporalio)
[](LICENSE)
**📣 News: Integration between OpenAI Agents SDK and Temporal is now in public preview. [Learn more](temporalio/contrib/openai_agents/README.md).**
[Temporal](https://temporal.io/) is a distributed, scalable, durable, and highly available orchestration engine used to
execute asynchronous, long-running business logic in a scalable and resilient way.
"Temporal Python SDK" is the framework for authoring workflows and activities using the Python programming language.
Also see:
* [Application Development Guide](https://docs.temporal.io/application-development?lang=python) - Once you've tried our
[Quick Start](#quick-start), check out our guide on how to use Temporal in your Python applications, including
information around Temporal core concepts.
* [Python Code Samples](https://github.com/temporalio/samples-python)
* [API Documentation](https://python.temporal.io) - Complete Temporal Python SDK Package reference.
In addition to features common across all Temporal SDKs, the Python SDK also has the following interesting features:
**Type Safe**
This library uses the latest typing and MyPy support with generics to ensure all calls can be typed. For example,
starting a workflow with an `int` parameter when it accepts a `str` parameter would cause MyPy to fail.
**Different Activity Types**
The activity worker has been developed to work with `async def`, threaded, and multiprocess activities. Threaded activities are the initial recommendation, and further guidance can be found in [the docs](https://docs.temporal.io/develop/python/python-sdk-sync-vs-async).
**Custom `asyncio` Event Loop**
The workflow implementation basically turns `async def` functions into workflows backed by a distributed, fault-tolerant
event loop. This means task management, sleep, cancellation, etc have all been developed to seamlessly integrate with
`asyncio` concepts.
See the [blog post](https://temporal.io/blog/durable-distributed-asyncio-event-loop) introducing the Python SDK for an
informal introduction to the features and their implementation.
---
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Contents**
- [Quick Start](#quick-start)
- [Installation](#installation)
- [Implementing a Workflow](#implementing-a-workflow)
- [Running a Workflow](#running-a-workflow)
- [Next Steps](#next-steps)
- [Usage](#usage)
- [Client](#client)
- [Data Conversion](#data-conversion)
- [Pydantic Support](#pydantic-support)
- [Custom Type Data Conversion](#custom-type-data-conversion)
- [Workers](#workers)
- [Workflows](#workflows)
- [Definition](#definition)
- [Running](#running)
- [Invoking Activities](#invoking-activities)
- [Invoking Child Workflows](#invoking-child-workflows)
- [Timers](#timers)
- [Conditions](#conditions)
- [Asyncio and Determinism](#asyncio-and-determinism)
- [Asyncio Cancellation](#asyncio-cancellation)
- [Workflow Utilities](#workflow-utilities)
- [Exceptions](#exceptions)
- [Signal and update handlers](#signal-and-update-handlers)
- [External Workflows](#external-workflows)
- [Testing](#testing)
- [Automatic Time Skipping](#automatic-time-skipping)
- [Manual Time Skipping](#manual-time-skipping)
- [Mocking Activities](#mocking-activities)
- [Workflow Sandbox](#workflow-sandbox)
- [How the Sandbox Works](#how-the-sandbox-works)
- [Avoiding the Sandbox](#avoiding-the-sandbox)
- [Customizing the Sandbox](#customizing-the-sandbox)
- [Passthrough Modules](#passthrough-modules)
- [Invalid Module Members](#invalid-module-members)
- [Known Sandbox Issues](#known-sandbox-issues)
- [Global Import/Builtins](#global-importbuiltins)
- [Sandbox is not Secure](#sandbox-is-not-secure)
- [Sandbox Performance](#sandbox-performance)
- [Extending Restricted Classes](#extending-restricted-classes)
- [Certain Standard Library Calls on Restricted Objects](#certain-standard-library-calls-on-restricted-objects)
- [is_subclass of ABC-based Restricted Classes](#is_subclass-of-abc-based-restricted-classes)
- [Activities](#activities)
- [Definition](#definition-1)
- [Types of Activities](#types-of-activities)
- [Synchronous Activities](#synchronous-activities)
- [Synchronous Multithreaded Activities](#synchronous-multithreaded-activities)
- [Synchronous Multiprocess/Other Activities](#synchronous-multiprocessother-activities)
- [Asynchronous Activities](#asynchronous-activities)
- [Activity Context](#activity-context)
- [Heartbeating and Cancellation](#heartbeating-and-cancellation)
- [Worker Shutdown](#worker-shutdown)
- [Testing](#testing-1)
- [Interceptors](#interceptors)
- [Nexus](#nexus)
- [Plugins](#plugins)
- [Usage](#usage-1)
- [Plugin Implementations](#plugin-implementations)
- [Advanced Plugin Implementations](#advanced-plugin-implementations)
- [Client Plugins](#client-plugins)
- [Worker Plugins](#worker-plugins)
- [Workflow Replay](#workflow-replay)
- [Observability](#observability)
- [Metrics](#metrics)
- [OpenTelemetry Tracing](#opentelemetry-tracing)
- [Protobuf 3.x vs 4.x](#protobuf-3x-vs-4x)
- [Known Compatibility Issues](#known-compatibility-issues)
- [gevent Patching](#gevent-patching)
- [Development](#development)
- [Building](#building)
- [Prepare](#prepare)
- [Build](#build)
- [Use](#use)
- [Local SDK development environment](#local-sdk-development-environment)
- [Testing](#testing-2)
- [Proto Generation and Testing](#proto-generation-and-testing)
- [Style](#style)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Quick Start
We will guide you through the Temporal basics to create a "hello, world!" script on your machine. It is intentionally
very simplified and decidedly not the only way to use Temporal. For more information, check out the docs references in
"Next Steps" below the quick start.
## Installation
Install the `temporalio` package from [PyPI](https://pypi.org/project/temporalio).
These steps can be followed to use with a virtual environment and `pip`:
* [Create a virtual environment](https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments)
* Update `pip` - `python -m pip install -U pip`
* Needed because older versions of `pip` may not pick the right wheel
* Install Temporal SDK - `python -m pip install temporalio`
The SDK is now ready for use. To build from source, see "Building" near the end of this documentation.
**NOTE: This README is for the current branch and not necessarily what's released on `PyPI`.**
## Implementing a Workflow
Create the following in `activities.py`:
```python
from temporalio import activity


@activity.defn
def say_hello(name: str) -> str:
    return f"Hello, {name}!"
```
Create the following in `workflows.py`:
```python
from datetime import timedelta

from temporalio import workflow

# Import our activity, passing it through the sandbox
with workflow.unsafe.imports_passed_through():
    from activities import say_hello


@workflow.defn
class SayHello:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            say_hello, name, schedule_to_close_timeout=timedelta(seconds=5)
        )
```
Create the following in `run_worker.py`:
```python
import asyncio
import concurrent.futures

from temporalio.client import Client
from temporalio.worker import Worker

# Import the activity and workflow from our other files
from activities import say_hello
from workflows import SayHello


async def main():
    # Create client connected to server at the given address
    client = await Client.connect("localhost:7233")

    # Run the worker
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor:
        worker = Worker(
            client,
            task_queue="my-task-queue",
            workflows=[SayHello],
            activities=[say_hello],
            activity_executor=activity_executor,
        )
        await worker.run()


if __name__ == "__main__":
    asyncio.run(main())
```
Assuming you have a [Temporal server running on localhost](https://docs.temporal.io/docs/server/quick-install/), this
will run the worker:
python run_worker.py
## Running a Workflow
Create the following script at `run_workflow.py`:
```python
import asyncio

from temporalio.client import Client

# Import the workflow from the previous code
from workflows import SayHello


async def main():
    # Create client connected to server at the given address
    client = await Client.connect("localhost:7233")

    # Execute a workflow
    result = await client.execute_workflow(SayHello.run, "my name", id="my-workflow-id", task_queue="my-task-queue")

    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())
```
Assuming you have `run_worker.py` running from before, this will run the workflow:
python run_workflow.py
The output will be:
Result: Hello, my name!
## Next Steps
Temporal can be implemented in your code in many different ways, to suit your application's needs. The links below will
give you much more information about how Temporal works with Python:
* [Code Samples](https://github.com/temporalio/samples-python) - If you want to start with some code, we have provided
some pre-built samples.
* [Application Development Guide](https://docs.temporal.io/application-development?lang=python) Our Python specific
Developer's Guide will give you much more information on how to build with Temporal in your Python applications than
our SDK README ever could (or should).
* [API Documentation](https://python.temporal.io) - Full Temporal Python SDK package documentation.
---
# Usage
From here, you will find reference documentation about specific pieces of the Temporal Python SDK that were built around
Temporal concepts. *This section is not intended as a how-to guide* -- For more how-to oriented information, check out
the links in the [Next Steps](#next-steps) section above.
### Client
A client can be created and used to start a workflow like so:
```python
from temporalio.client import Client


async def main():
    # Create client connected to server at the given address and namespace
    client = await Client.connect("localhost:7233", namespace="my-namespace")

    # Start a workflow
    handle = await client.start_workflow(MyWorkflow.run, "some arg", id="my-workflow-id", task_queue="my-task-queue")

    # Wait for result
    result = await handle.result()
    print(f"Result: {result}")
```
Some things to note about the above code:
* A `Client` does not have an explicit "close"
* To enable TLS, the `tls` argument to `connect` can be set to `True` or a `TLSConfig` object
* A single positional argument can be passed to `start_workflow`. If there are multiple arguments, only the
non-type-safe form of `start_workflow` can be used (i.e. the one accepting a string workflow name) and it must be in
the `args` keyword argument.
* The `handle` represents the workflow that was started and can be used for more than just getting the result
* Since we are just getting the handle and waiting on the result, we could have called `client.execute_workflow` which
does the same thing
* Clients can have many more options not shown here (e.g. data converters and interceptors)
* A string can be used instead of the method reference to call a workflow by name (e.g. if defined in another language)
* Clients do not work across forks
Clients also provide a shallow copy of their config for use in making slightly different clients backed by the same
connection. For instance, given the `client` above, this is how to have a client in another namespace:
```python
config = client.config()
config["namespace"] = "my-other-namespace"
other_ns_client = Client(**config)
```
#### Data Conversion
Data converters are used to convert raw Temporal payloads to/from actual Python types. A custom data converter of type
`temporalio.converter.DataConverter` can be set via the `data_converter` parameter of the `Client` constructor. Data
converters are a combination of payload converters, payload codecs, and failure converters. Payload converters convert
Python values to/from serialized bytes. Payload codecs convert bytes to bytes (e.g. for compression or encryption).
Failure converters convert exceptions to/from serialized failures.
The default data converter supports converting multiple types including:
* `None`
* `bytes`
* `google.protobuf.message.Message` - As JSON when encoding, but has ability to decode binary proto from other languages
* Anything that can be converted to JSON including:
* Anything that [`json.dump`](https://docs.python.org/3/library/json.html#json.dump) supports natively
* [dataclasses](https://docs.python.org/3/library/dataclasses.html)
* Iterables including ones JSON dump may not support by default, e.g. `set`
* [IntEnum, StrEnum](https://docs.python.org/3/library/enum.html) based enumerates
* [UUID](https://docs.python.org/3/library/uuid.html)
* `datetime.datetime`
To use pydantic model instances, see [Pydantic Support](#pydantic-support).
`datetime.date` and `datetime.time` can only be used with the Pydantic data converter.
Although workflows, updates, signals, and queries can all be defined with multiple input parameters, users are strongly
encouraged to use a single `dataclass` or Pydantic model parameter, so that fields with defaults can be easily added
without breaking compatibility. Similar advice applies to return values.
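The single-parameter advice is about evolvability: a field added with a default still decodes payloads produced before the field existed. A stdlib-only sketch of that property (plain `json` standing in for the SDK's JSON payload converter; `GreetParams` is an illustrative name):

```python
import json
from dataclasses import dataclass


@dataclass
class GreetParams:
    name: str
    salutation: str = "Hello"  # field added later, with a default


# A payload serialized before `salutation` existed:
old_payload = json.dumps({"name": "world"})

# Decoding still succeeds; the missing field takes its default.
params = GreetParams(**json.loads(old_payload))
print(params)  # GreetParams(name='world', salutation='Hello')
```

Had the workflow instead taken `name` and `salutation` as two positional parameters, adding the second one later would break already-started workflows.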
Classes with generics may not have the generics properly resolved. The current implementation does not have generic
type resolution. Users should use concrete types.
##### Pydantic Support
To use Pydantic model instances, install Pydantic and set the Pydantic data converter when creating client instances:
```python
from temporalio.contrib.pydantic import pydantic_data_converter
client = Client(data_converter=pydantic_data_converter, ...)
```
This data converter supports conversion of all types supported by Pydantic to and from JSON.
In addition to Pydantic models, these include all `json.dump`-able types, various non-`json.dump`-able standard library
types such as dataclasses, types from the datetime module, sets, UUID, etc, and custom types composed of any of these.
Pydantic v1 is not supported by this data converter. If you are not yet able to upgrade from Pydantic v1, see
https://github.com/temporalio/samples-python/tree/main/pydantic_converter/v1 for limited v1 support.
##### Custom Type Data Conversion
For converting from JSON, the workflow/activity type hint is taken into account to convert to the proper type. Care has
been taken to support all common typings including `Optional`, `Union`, all forms of iterables and mappings, `NewType`,
etc in addition to the regular JSON values mentioned before.
Data converters contain a reference to a payload converter class that is used to convert to/from payloads/values. This
is a class and not an instance because it is instantiated on every workflow run inside the sandbox. The payload
converter is usually a `CompositePayloadConverter` which contains multiple `EncodingPayloadConverter`s it uses to try
to serialize/deserialize payloads. Upon serialization, each `EncodingPayloadConverter` is tried until one succeeds. The
`EncodingPayloadConverter` provides an "encoding" string serialized onto the payload so that, upon deserialization, the
specific `EncodingPayloadConverter` for the given "encoding" is used.
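The try-in-order, tag-with-encoding scheme can be sketched in plain Python (illustrative class names and payload shape, not temporalio's actual ones):

```python
import json
from typing import Any, Optional


class BinaryPlainConverter:
    """Handles raw bytes; tags the payload with its encoding."""

    encoding = "binary/plain"

    def to_payload(self, value: Any) -> Optional[dict]:
        if isinstance(value, bytes):
            return {"encoding": self.encoding, "data": value}
        return None  # not ours -- let the next converter try


class JsonPlainConverter:
    """Fallback: anything JSON-serializable."""

    encoding = "json/plain"

    def to_payload(self, value: Any) -> Optional[dict]:
        return {"encoding": self.encoding, "data": json.dumps(value).encode()}


class CompositeConverter:
    def __init__(self, *converters) -> None:
        self.converters = converters

    def to_payload(self, value: Any) -> dict:
        # Try each converter in order; the first non-None result wins.
        # The "encoding" tag records which converter to use on deserialization.
        for converter in self.converters:
            payload = converter.to_payload(value)
            if payload is not None:
                return payload
        raise TypeError(f"no payload converter for {type(value)}")


composite = CompositeConverter(BinaryPlainConverter(), JsonPlainConverter())
print(composite.to_payload(b"raw")["encoding"])    # binary/plain
print(composite.to_payload({"a": 1})["encoding"])  # json/plain
```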
The default data converter uses the `DefaultPayloadConverter` which is simply a `CompositePayloadConverter` with a known
set of default `EncodingPayloadConverter`s. To implement a custom encoding for a custom type, a new
`EncodingPayloadConverter` can be created for the new type. For example, to support `IPv4Address` types:
```python
class IPv4AddressEncodingPayloadConverter(EncodingPayloadConverter):
    @property
    def encoding(self) -> str:
        return "text/ipv4-address"

    def to_payload(self, value: Any) -> Optional[Payload]:
        if isinstance(value, ipaddress.IPv4Address):
            return Payload(
                metadata={"encoding": self.encoding.encode()},
                data=str(value).encode(),
            )
        else:
            return None

    def from_payload(self, payload: Payload, type_hint: Optional[Type] = None) -> Any:
        assert not type_hint or type_hint is ipaddress.IPv4Address
        return ipaddress.IPv4Address(payload.data.decode())


class IPv4AddressPayloadConverter(CompositePayloadConverter):
    def __init__(self) -> None:
        # Just add ours as first before the defaults
        super().__init__(
            IPv4AddressEncodingPayloadConverter(),
            *DefaultPayloadConverter.default_encoding_payload_converters,
        )


my_data_converter = dataclasses.replace(
    DataConverter.default,
    payload_converter_class=IPv4AddressPayloadConverter,
)
```
Imports are left off for brevity.
This is good for many custom types. However, sometimes you want to override the behavior of just the existing JSON
encoding payload converter to support a new type. It is already the last encoding data converter in the list, so it's
the fall-through behavior for any otherwise unknown type. Customizing the existing JSON converter has the benefit of
making the type work in lists, unions, etc.
The `JSONPlainPayloadConverter` uses the Python [json](https://docs.python.org/3/library/json.html) library with an
advanced JSON encoder by default and a custom value conversion method to turn `json.load`ed values to their type hints.
The conversion can be customized for serialization with a custom `json.JSONEncoder` and deserialization with a custom
`JSONTypeConverter`. For example, to support `IPv4Address` types in existing JSON conversion:
```python
class IPv4AddressJSONEncoder(AdvancedJSONEncoder):
    def default(self, o: Any) -> Any:
        if isinstance(o, ipaddress.IPv4Address):
            return str(o)
        return super().default(o)


class IPv4AddressJSONTypeConverter(JSONTypeConverter):
    def to_typed_value(
        self, hint: Type, value: Any
    ) -> Union[Optional[Any], _JSONTypeConverterUnhandled]:
        if issubclass(hint, ipaddress.IPv4Address):
            return ipaddress.IPv4Address(value)
        return JSONTypeConverter.Unhandled


class IPv4AddressPayloadConverter(CompositePayloadConverter):
    def __init__(self) -> None:
        # Replace default JSON plain with our own that has our encoder and type
        # converter
        json_converter = JSONPlainPayloadConverter(
            encoder=IPv4AddressJSONEncoder,
            custom_type_converters=[IPv4AddressJSONTypeConverter()],
        )
        super().__init__(
            *[
                c if not isinstance(c, JSONPlainPayloadConverter) else json_converter
                for c in DefaultPayloadConverter.default_encoding_payload_converters
            ]
        )


my_data_converter = dataclasses.replace(
    DataConverter.default,
    payload_converter_class=IPv4AddressPayloadConverter,
)
```
Now `IPv4Address` can be used in type hints including collections, optionals, etc.
### Workers
Workers host workflows and/or activities. Here's how to run a worker:
```python
import asyncio
import logging

from temporalio.client import Client
from temporalio.worker import Worker

# Import your own workflows and activities
from my_workflow_package import MyWorkflow, my_activity


async def run_worker(stop_event: asyncio.Event):
    # Create client connected to server at the given address
    client = await Client.connect("localhost:7233", namespace="my-namespace")

    # Run the worker until the event is set
    worker = Worker(client, task_queue="my-task-queue", workflows=[MyWorkflow], activities=[my_activity])
    async with worker:
        await stop_event.wait()
```
Some things to note about the above code:
* This creates/uses the same client that is used for starting workflows
* While this example accepts a stop event and uses `async with`, `run()` and `shutdown()` may be used instead
* Workers can have many more options not shown here (e.g. data converters and interceptors)
### Workflows
#### Definition
Workflows are defined as classes decorated with `@workflow.defn`. The method invoked for the workflow is decorated with
`@workflow.run`. Methods for signals, queries, and updates are decorated with `@workflow.signal`, `@workflow.query`
and `@workflow.update` respectively. Here's an example of a workflow:
```python
import asyncio
from datetime import timedelta

from temporalio import workflow

# Pass the activities through the sandbox
with workflow.unsafe.imports_passed_through():
    from .my_activities import GreetingInfo, create_greeting_activity


@workflow.defn
class GreetingWorkflow:
    def __init__(self) -> None:
        self._current_greeting = "<unset>"
        self._greeting_info = GreetingInfo()
        self._greeting_info_update = asyncio.Event()
        self._complete = asyncio.Event()

    @workflow.run
    async def run(self, name: str) -> str:
        self._greeting_info.name = name
        while True:
            # Store greeting
            self._current_greeting = await workflow.execute_activity(
                create_greeting_activity,
                self._greeting_info,
                start_to_close_timeout=timedelta(seconds=5),
            )
            workflow.logger.debug("Greeting set to %s", self._current_greeting)

            # Wait for salutation update or complete signal (this can be
            # cancelled)
            await asyncio.wait(
                [
                    asyncio.create_task(self._greeting_info_update.wait()),
                    asyncio.create_task(self._complete.wait()),
                ],
                return_when=asyncio.FIRST_COMPLETED,
            )
            if self._complete.is_set():
                return self._current_greeting
            self._greeting_info_update.clear()

    @workflow.signal
    async def update_salutation(self, salutation: str) -> None:
        self._greeting_info.salutation = salutation
        self._greeting_info_update.set()

    @workflow.signal
    async def complete_with_greeting(self) -> None:
        self._complete.set()

    @workflow.query
    def current_greeting(self) -> str:
        return self._current_greeting

    @workflow.update
    def set_and_get_greeting(self, greeting: str) -> str:
        old = self._current_greeting
        self._current_greeting = greeting
        return old
```
This assumes there's an activity in `my_activities.py` like:
```python
from dataclasses import dataclass
from temporalio import activity
@dataclass
class GreetingInfo:
salutation: str = "Hello"
name: str = "<unknown>"
@activity.defn
def create_greeting_activity(info: GreetingInfo) -> str:
return f"{info.salutation}, {info.name}!"
```
Some things to note about the above workflow code:
* Workflows run in a sandbox by default.
* Users are encouraged to define workflows in files free of side effects, complicated module-level code, and
  unnecessary imports of third-party libraries.
* Non-standard-library, non-`temporalio` imports should usually be "passed through" the sandbox. See the
[Workflow Sandbox](#workflow-sandbox) section for more details.
* This workflow continually updates the queryable current greeting when signalled and can complete with the greeting on
a different signal
* Workflows are always classes and must have a single `@workflow.run` which is an `async def` function
* Workflow code must be deterministic. This means no `set` iteration, threading, no randomness, no external calls to
processes, no network IO, and no global state mutation. All code must run in the implicit `asyncio` event loop and be
deterministic. Also see the [Asyncio and Determinism](#asyncio-and-determinism) section later.
* `@activity.defn` is explained in a later section. For normal simple string concatenation, this would just be done in
the workflow. The activity is for demonstration purposes only.
* `workflow.execute_activity(create_greeting_activity, ...` is actually a typed signature, and MyPy will fail if the
`self._greeting_info` parameter is not a `GreetingInfo`
Here are the decorators that can be applied:
* `@workflow.defn` - Defines a workflow class
* Must be defined on the class given to the worker (ignored if present on a base class)
* Can have a `name` param to customize the workflow name, otherwise it defaults to the unqualified class name
* Can have `dynamic=True` which means all otherwise unhandled workflows fall through to this. If present, cannot have
`name` argument, and run method must accept a single parameter of `Sequence[temporalio.common.RawValue]` type. The
payload of the raw value can be converted via `workflow.payload_converter().from_payload`.
* `@workflow.run` - Defines the primary workflow run method
* Must be defined on the same class as `@workflow.defn`, not a base class (but can _also_ be defined on the same
method of a base class)
* Exactly one method name must have this decorator, no more or less
* Must be defined on an `async def` method
* The method's arguments are the workflow's arguments
* The first parameter must be `self`, followed by positional arguments. Best practice is to only take a single
argument that is an object/dataclass of fields that can be added to as needed.
* `@workflow.init` - Specifies that the `__init__` method accepts the workflow's arguments.
* If present, may only be applied to the `__init__` method, the parameters of which must then be identical to those of
the `@workflow.run` method.
* The purpose of this decorator is to allow operations involving workflow arguments to be performed in the `__init__`
method, before any signal or update handler has a chance to execute.
* `@workflow.signal` - Defines a method as a signal
* Can be defined on an `async` or non-`async` method at any point in the class hierarchy, but if the decorated method
is overridden, then the override must also be decorated.
* The method's arguments are the signal's arguments.
* Return value is ignored.
* May mutate workflow state, and make calls to other workflow APIs like starting activities, etc.
* Can have a `name` param to customize the signal name, otherwise it defaults to the unqualified method name.
* Can have `dynamic=True` which means all otherwise unhandled signals fall through to this. If present, cannot have
`name` argument, and method parameters must be `self`, a string signal name, and a
`Sequence[temporalio.common.RawValue]`.
* Non-dynamic methods can only have positional arguments. Best practice is to only take a single argument that is an
  object/dataclass of fields that can be added to as needed.
* See [Signal and update handlers](#signal-and-update-handlers) below
* `@workflow.update` - Defines a method as an update
* Can be defined on an `async` or non-`async` method at any point in the class hierarchy, but if the decorated method
is overridden, then the override must also be decorated.
* May accept input and return a value
* The method's arguments are the update's arguments.
* May be `async` or non-`async`
* May mutate workflow state, and make calls to other workflow APIs like starting activities, etc.
* Also accepts the `name` and `dynamic` parameters like signal, with the same semantics.
* Update handlers may optionally define a validator method by decorating it with `@update_handler_method.validator`.
To reject an update before any events are written to history, throw an exception in a validator. Validators cannot
be `async`, cannot mutate workflow state, and return nothing.
* See [Signal and update handlers](#signal-and-update-handlers) below
* `@workflow.query` - Defines a method as a query
* Should return a value
* Should not be `async`
* Temporal queries should never mutate anything in the workflow or invoke anything that would mutate the workflow
* Also accepts the `name` and `dynamic` parameters like signal and update, with the same semantics.
#### Running
To start a locally-defined workflow from a client, you can simply reference its method like so:
```python
from temporalio.client import Client
from my_workflow_package import GreetingWorkflow
async def create_greeting(client: Client) -> str:
# Start the workflow
handle = await client.start_workflow(GreetingWorkflow.run, "my name", id="my-workflow-id", task_queue="my-task-queue")
# Change the salutation
await handle.signal(GreetingWorkflow.update_salutation, "Aloha")
# Tell it to complete
await handle.signal(GreetingWorkflow.complete_with_greeting)
# Wait and return result
return await handle.result()
```
Some things to note about the above code:
* This uses the `GreetingWorkflow` from the previous section
* The result of calling this function is `"Aloha, my name!"`
* `id` and `task_queue` are required for running a workflow
* `client.start_workflow` is typed, so MyPy would fail if `"my name"` were something besides a string
* `handle.signal` is typed, so MyPy would fail if `"Aloha"` were something besides a string or if we provided a
parameter to the parameterless `complete_with_greeting`
* `handle.result` is typed to the workflow itself, so MyPy would fail if we said this `create_greeting` returned
something besides a string
#### Invoking Activities
* Activities are started with non-async `workflow.start_activity()` which accepts either an activity function reference
or a string name.
* A single argument to the activity is positional. Multiple arguments are not supported in the type-safe form of
start/execute activity and must be supplied via the `args` keyword argument.
* Activity options are set as keyword arguments after the activity arguments. At least one of `start_to_close_timeout`
or `schedule_to_close_timeout` must be provided.
* The result is an activity handle which is an `asyncio.Task` and supports basic task features
* An async `workflow.execute_activity()` helper is provided which takes the same arguments as
`workflow.start_activity()` and `await`s on the result. This should be used in most cases unless advanced task
capabilities are needed.
* Local activities work very similarly except the functions are `workflow.start_local_activity()` and
`workflow.execute_local_activity()`
* ⚠️Local activities are currently experimental
* Activities can be methods of a class. Invokers should use `workflow.start_activity_method()`,
`workflow.execute_activity_method()`, `workflow.start_local_activity_method()`, and
`workflow.execute_local_activity_method()` instead.
* Activities can be callable classes (i.e. that define `__call__`). Invokers should use `workflow.start_activity_class()`,
`workflow.execute_activity_class()`, `workflow.start_local_activity_class()`, and
`workflow.execute_local_activity_class()` instead.
#### Invoking Child Workflows
* Child workflows are started with async `workflow.start_child_workflow()` which accepts either a workflow run method
reference or a string name. The arguments to the workflow are positional.
* A single argument to the child workflow is positional. Multiple arguments are not supported in the type-safe form of
start/execute child workflow and must be supplied via the `args` keyword argument.
* Child workflow options are set as keyword arguments after the arguments. At least `id` must be provided.
* The `await` of the start does not complete until the start has been accepted by the server
* The result is a child workflow handle which is an `asyncio.Task` and supports basic task features. The handle also has
some child info and supports signalling the child workflow
* An async `workflow.execute_child_workflow()` helper is provided which takes the same arguments as
`workflow.start_child_workflow()` and `await`s on the result. This should be used in most cases unless advanced task
capabilities are needed.
#### Timers
* A timer is represented by normal `asyncio.sleep()` or a `workflow.sleep()` call
* Timers are also implicitly started on any `asyncio` calls with timeouts (e.g. `asyncio.wait_for`)
* Timers are Temporal server timers, not local ones, so sub-second resolution rarely has value
* Calls that use a specific point in time, e.g. `call_at` or `timeout_at`, should be based on the current loop time
(i.e. `workflow.time()`) and not an actual point in time. This is because fixed times are translated to relative ones
by subtracting the current loop time which may not be the actual current time.
#### Conditions
* `workflow.wait_condition` is an async function that doesn't return until a provided callback returns true
* A `timeout` can optionally be provided which will raise an `asyncio.TimeoutError` if reached (internally backed by
  `asyncio.wait_for`, which uses a timer)
#### Asyncio and Determinism
Workflows must be deterministic. Workflows are backed by a custom
[asyncio](https://docs.python.org/3/library/asyncio.html) event loop. This means many of the common `asyncio` calls work
as normal. Some asyncio features are disabled such as:
* Thread related calls such as `to_thread()`, `run_coroutine_threadsafe()`, `loop.run_in_executor()`, etc
* Calls that alter the event loop such as `loop.close()`, `loop.stop()`, `loop.run_forever()`,
`loop.set_task_factory()`, etc
* Calls that use anything external such as networking, subprocesses, disk IO, etc
Also, there are some `asyncio` utilities that internally use `set()` which can make them non-deterministic from one
worker to the next. Therefore the following `asyncio` functions have `workflow`-module alternatives that are
deterministic:
* `asyncio.as_completed()` - use `workflow.as_completed()`
* `asyncio.wait()` - use `workflow.wait()`
#### Asyncio Cancellation
Cancellation is done using `asyncio` [task cancellation](https://docs.python.org/3/library/asyncio-task.html#task-cancellation).
This means that tasks are requested to be cancelled but can catch the
[`asyncio.CancelledError`](https://docs.python.org/3/library/asyncio-exceptions.html#asyncio.CancelledError), thus
allowing them to perform some cleanup before allowing the cancellation to proceed (i.e. re-raising the error), or to
deny the cancellation entirely. It also means that
[`asyncio.shield()`](https://docs.python.org/3/library/asyncio-task.html#shielding-from-cancellation) can be used to
protect tasks against cancellation.
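Because these are plain `asyncio` semantics, both behaviors can be demonstrated outside Temporal entirely:

```python
import asyncio

async def cancellable_work() -> None:
    try:
        await asyncio.sleep(60)
    except asyncio.CancelledError:
        # Cleanup could happen here; re-raise to let the cancellation proceed
        raise

async def main() -> tuple[bool, str]:
    # 1. Plain cancellation: the task observes CancelledError and re-raises
    task = asyncio.create_task(cancellable_work())
    await asyncio.sleep(0)  # let the task reach its first await
    task.cancel()
    try:
        await task
        was_cancelled = False
    except asyncio.CancelledError:
        was_cancelled = True

    # 2. Shielding: cancelling the outer await leaves the inner task running
    inner = asyncio.create_task(asyncio.sleep(0.01, result="done"))

    async def wait_shielded() -> str:
        return await asyncio.shield(inner)

    outer = asyncio.create_task(wait_shielded())
    await asyncio.sleep(0)
    outer.cancel()
    try:
        await outer
    except asyncio.CancelledError:
        pass
    return was_cancelled, await inner  # inner still completes normally

result = asyncio.run(main())
```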
The following tasks, when cancelled, perform a Temporal cancellation:
* Activities - when the task executing an activity is cancelled, a cancellation request is sent to the activity
* Child workflows - when the task starting or executing a child workflow is cancelled, a cancellation request is sent to
cancel the child workflow
* Timers - when the task executing a timer is cancelled (whether started via sleep or timeout), the timer is cancelled
When the workflow itself is requested to cancel, `Task.cancel` is called on the main workflow task. Therefore,
`asyncio.CancelledError` can be caught in order to handle the cancel gracefully.
Workflows follow `asyncio` cancellation rules exactly which can cause confusion among Python developers. Cancelling a
task doesn't always cancel the thing it created. For example, given
`task = asyncio.create_task(workflow.start_child_workflow(...`, calling `task.cancel` does not cancel the child
workflow, it only cancels the starting of it, which has no effect if it has already started. However, cancelling the
result of `handle = await workflow.start_child_workflow(...` or
`task = asyncio.create_task(workflow.execute_child_workflow(...` _does_ cancel the child workflow.
Also, due to Temporal rules, a cancellation request is a state, not an event. Repeated cancellation requests are
therefore not delivered, only the first. If the workflow chooses to swallow a cancellation, it cannot be requested again.
#### Workflow Utilities
While running in a workflow, in addition to features documented elsewhere, the following items are available from the
`temporalio.workflow` package:
* `continue_as_new()` - Async function to stop the workflow immediately and continue as new
* `info()` - Returns information about the current workflow
* `logger` - A logger for use in a workflow (properly skips logging on replay)
* `now()` - Returns the "current time" from the workflow's perspective
#### Exceptions
* Workflows/updates can raise exceptions to fail the workflow or the "workflow task" (i.e. suspend the workflow
in a retrying state).
* Exceptions that are instances of `temporalio.exceptions.FailureError` will fail the workflow with that exception
* For failing the workflow explicitly with a user exception, use `temporalio.exceptions.ApplicationError`. This can
be marked non-retryable or include details as needed.
* Other exceptions that come from activity execution, child execution, cancellation, etc are already instances of
`FailureError` and will fail the workflow when uncaught.
* Update handlers are special: an instance of `temporalio.exceptions.FailureError` raised in an update handler will fail
the update instead of failing the workflow.
* All other exceptions fail the "workflow task" which means the workflow will continually retry until the workflow is
fixed. This is helpful for bad code or other non-predictable exceptions. To actually fail the workflow, use an
`ApplicationError` as mentioned above.
This default can be changed by providing a list of exception types to `workflow_failure_exception_types` when creating a
`Worker` or `failure_exception_types` on the `@workflow.defn` decorator. If a workflow-thrown exception is an instance
of any type in either list, it will fail the workflow (or update) instead of the workflow task. This means a value of
`[Exception]` will cause every exception to fail the workflow instead of the workflow task. Also, as a special case, if
`temporalio.workflow.NondeterminismError` (or any superclass of it) is set, non-deterministic exceptions will fail the
workflow. WARNING: These settings are experimental.
#### Signal and update handlers
Signal and update handlers are defined using decorated methods as shown in the example [above](#definition). Client code
sends signals and updates using `workflow_handle.signal`, `workflow_handle.execute_update`, or
`workflow_handle.start_update`. When the workflow receives one of these requests, it starts an `asyncio.Task` executing
the corresponding handler method with the argument(s) from the request.
The handler methods may be `async def` and can do all the async operations described above (e.g. invoking activities and
child workflows, and waiting on timers and conditions). Notice that this means that handler tasks will be executing
concurrently with respect to each other and the main workflow task. Use
[asyncio.Lock](https://docs.python.org/3/library/asyncio-sync.html#lock) and
[asyncio.Semaphore](https://docs.python.org/3/li | text/markdown; charset=UTF-8; variant=GFM | null | Temporal Technologies Inc <sdk@temporal.io> | null | null | null | temporal, workflow | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"nexus-rpc==1.3.0",
"protobuf<7.0.0,>=3.20",
"python-dateutil<3,>=2.8.2; python_full_version < \"3.11\"",
"types-protobuf>=3.20",
"typing-extensions<5,>=4.2.0",
"grpcio<2,>=1.48.2; extra == \"grpc\"",
"openai-agents<0.7,>=0.3; extra == \"openai-agents\"",
"mcp<2,>=1.9.4; extra == \"openai-agents\"",
... | [] | [] | [] | [
"Bug Tracker, https://github.com/temporalio/sdk-python/issues",
"Documentation, https://docs.temporal.io/docs/python",
"Homepage, https://github.com/temporalio/sdk-python",
"Repository, https://github.com/temporalio/sdk-python"
] | uv/0.9.2 | 2026-02-18T17:48:22.353239 | temporalio-1.23.0.tar.gz | 1,933,051 | 67/48/ba7413e2fab8dcd277b9df00bafa572da24e9ca32de2f38d428dc3a2825c/temporalio-1.23.0.tar.gz | source | sdist | null | false | be2d35c490d73dd7d98caf4560df830c | 72750494b00eb73ded9db76195e3a9b53ff548780f73d878ec3f807ee3191410 | 6748ba7413e2fab8dcd277b9df00bafa572da24e9ca32de2f38d428dc3a2825c | MIT | [
"LICENSE"
] | 139,981 |
2.4 | ocdsmerge-rs | 0.1.5 | Merges JSON texts conforming to the Open Contracting Data Standard | # OCDS Merge
[](https://crates.io/crates/ocdsmerge)
[](https://github.com/open-contracting/ocds-merge-rs/actions/workflows/ci.yml)
[](https://codecov.io/github/open-contracting/ocds-merge-rs)
Merge JSON texts conforming to the Open Contracting Data Standard.
If you are viewing this on GitHub, crates.io or similar, open the [full documentation](https://ocds-merge-rs.readthedocs.io/) for additional details.
| text/markdown; charset=UTF-8; variant=GFM | null | Open Contracting Partnership <data@open-contracting.org> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPy... | [] | http://github.com/open-contracting/ocds-merge-rs | null | null | [] | [] | [] | [
"maturin>=1.4; extra == \"dev\"",
"coverage; extra == \"test\"",
"jsonref; extra == \"test\"",
"pytest; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/open-contracting/ocds-merge-rs"
] | maturin/1.12.2 | 2026-02-18T17:47:10.297228 | ocdsmerge_rs-0.1.5-cp314-cp314-win32.whl | 268,048 | 38/e9/ff5246cdafafd996e9d3cea0fc49bb0b04ad0b16986bf776c028c988df26/ocdsmerge_rs-0.1.5-cp314-cp314-win32.whl | cp314 | bdist_wheel | null | false | 0d5f75ff0a4152d3834cf6379c0f92e5 | 86948fecb64b17c30648264d6e47d99a40f0442f535e5bb6afd4cae15479c27f | 38e9ff5246cdafafd996e9d3cea0fc49bb0b04ad0b16986bf776c028c988df26 | null | [] | 5,126 |
2.4 | physrisk-lib | 1.7.1 | OS-Climate Physical Risk Library | <!-- markdownlint-disable -->
<!-- prettier-ignore-start -->
> [!IMPORTANT]
> On June 26 2024, Linux Foundation announced the merger of its financial services umbrella, the Fintech Open Source Foundation ([FINOS](https://finos.org)), with OS-Climate, an open source community dedicated to building data technologies, modeling, and analytic tools that will drive global capital flows into climate change mitigation and resilience; OS-Climate projects are in the process of transitioning to the [FINOS governance framework](https://community.finos.org/docs/governance); read more on [finos.org/press/finos-join-forces-os-open-source-climate-sustainability-esg](https://finos.org/press/finos-join-forces-os-open-source-climate-sustainability-esg)
<!-- prettier-ignore-end -->
<!-- markdownlint-enable -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable-next-line MD013 -->
[](https://os-climate.org/) [](https://os-climate.slack.com) [](https://github.com/os-climate/physrisk) [](https://pypi.org/project/physrisk-lib) [](https://opensource.org/licenses/Apache-2.0)
<!-- markdownlint-disable-next-line MD013 -->
[![pre-commit.ci status badge]][pre-commit.ci results page] [](https://test.pypi.org/project/physrisk-lib) [](https://github.com/os-climate/physrisk/actions/workflows/build-test.yaml) [](https://github.com/os-climate/physrisk/actions/workflows/codeql.yml) [](https://scorecard.dev/viewer/?uri=github.com/os-climate/physrisk)
<!-- prettier-ignore-end -->
# Physrisk
Physical climate risk calculation engine.
<img src="https://raw.githubusercontent.com/os-climate/physrisk/main/docs/images/OS-Climate-Logo.png" alt="drawing" width="150"/>
## About physrisk
An [OS-Climate](https://os-climate.org) project, physrisk is a library for
assessing the physical effects of climate change and thereby the potential
benefit of measures to improve resilience.
An introduction and methodology can be found in the
[online documentation](https://physrisk.readthedocs.io/en/latest/).
Physrisk is primarily designed to run 'bottom-up' calculations that model
the impact of climate hazards on large numbers of individual assets
(including natural) and operations. These calculations can be used to assess
financial risks or socio-economic impacts. To do this physrisk collects:
- hazard indicators and
- models of vulnerability of assets/operations to hazards.
Hazard indicators are on-boarded from public resources or inferred from
climate projections, e.g. from CMIP or CORDEX data sets. Indicators are
created from code in the
[hazard repository](https://github.com/os-climate/hazard) to make
calculations as transparent as possible.
Physrisk is also designed to be hosted, e.g. to provide on-demand
calculations.
[physrisk-api](https://github.com/os-climate/physrisk-api) and
[physrisk-ui](https://github.com/os-climate/physrisk-ui) provide an example
API and user interface. A
[development version of the UI](https://physrisk-ui-physrisk.apps.osc-cl1.apps.os-climate.org)
is hosted by OS-Climate.
## Using the library
The library can be run locally. Install it via:
```bash
pip install physrisk-lib
```
Hazard indicator data is freely available via the [Amazon Sustainability Data Initiative, here](https://registry.opendata.aws/os-climate-physrisk/).
Information about the project is available via the
[community-hub](https://github.com/os-climate/OS-Climate-Community-Hub).
An inventory of the hazard data is maintained in the
[hazard inventory](https://github.com/os-climate/hazard/blob/main/src/inventories/hazard/inventory.json)
(this is used by the physrisk library itself). The
[UI hazard viewer](https://physrisk-ui-physrisk.apps.osc-cl1.apps.os-climate.org)
is a convenient way to browse data sets.
A good place to start is the Getting Started section in the documentation site which has a number of walk-throughs.
[pre-commit.ci results page]: https://results.pre-commit.ci/latest/github/os-climate/physrisk/main
[pre-commit.ci status badge]: https://results.pre-commit.ci/badge/github/os-climate/physrisk/main.svg
| text/markdown | null | Joe Moorhouse <5102656+joemoorhouse@users.noreply.github.com> | null | null | null | Physical, Climate, Risk, Finance | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Pr... | [] | null | null | >=3.10 | [] | [] | [] | [
"affine>=2.4.0",
"aiohttp>=3.13.3",
"dependency-injector>=4.48.0",
"geopandas>=1.1.0",
"h3>=4.3.0",
"lmdbm>=0.0.6",
"numba>=0.61.0",
"numpy>=2.2.0",
"pint>=0.24.0",
"pillow>=11.3.0",
"pydantic>=2.11.0",
"pyproj>=3.7.0",
"python-dotenv>=1.1.0",
"requests>=2.32.0",
"scipy>=1.15.0",
"shap... | [] | [] | [] | [
"Homepage, https://github.com/os-climate/physrisk",
"Repository, https://github.com/os-climate/physrisk",
"Downloads, https://github.com/os-climate/physrisk/releases",
"Bug Tracker, https://github.com/os-climate/physrisk/issues",
"Documentation, https://github.com/os-climate/physrisk/tree/main/docs",
"Sou... | twine/6.2.0 CPython/3.12.12 | 2026-02-18T17:47:02.118748 | physrisk_lib-1.7.1.tar.gz | 5,207,388 | d1/dd/f333a17f0a871cd103176d737184fc7727978aacd198cdda9b4e3942417c/physrisk_lib-1.7.1.tar.gz | source | sdist | null | false | 297cdcef12c1dcb7bc266316f7d806c0 | b133f506f9ddbd979ab4f3a7320dc02a426040d2f8bb2960c9ae8282ad1ce041 | d1ddf333a17f0a871cd103176d737184fc7727978aacd198cdda9b4e3942417c | Apache-2.0 | [
"LICENSE"
] | 276 |
2.4 | code-agnostic | 0.2.0 | Centralized hub for LLM coding config: MCP, skills, rules, and agents. | # code-agnostic
One config, every AI editor. Keep MCP servers, rules, skills, and agents in a single hub and sync them into editor-specific layouts.
## Why
AI coding tools each want config in a different place and format. When you use more than one, you end up copy-pasting MCP servers, duplicating rules, and manually keeping things in sync. `code-agnostic` removes that overhead: define once, sync everywhere.
## How it works
```
~/.config/code-agnostic/ Your single source of truth
├── config/
│ └── mcp.base.json MCP servers (editor-agnostic)
├── rules/
│ └── python-style.md Rules with YAML frontmatter
├── skills/
│ └── code-reviewer/SKILL.md Skills with YAML frontmatter
└── agents/
└── architect.md Agents with YAML frontmatter
↓ plan / apply ↓
~/.config/opencode/ Compiled & synced for OpenCode
~/.cursor/ Compiled & synced for Cursor
~/.codex/ Compiled & synced for Codex
```
Each resource is cross-compiled to the target editor's native format. Rules become `.mdc` files for Cursor, `AGENTS.md` sections for OpenCode/Codex, etc.
## Install
```bash
uv tool install code-agnostic
```
Or run without installing:
```bash
uvx code-agnostic
```
Or run the published Docker image to isolate filesystem access to mounted paths only:
```bash
docker run --rm -it \
-v "$(pwd):/workspace" \
-w /workspace \
ghcr.io/dhvcc/code-agnostic:latest plan
```
By default, config stays inside the container at `/root/.config` unless you mount a host path.
## Quick start
```bash
# Import existing config from an editor you already use
code-agnostic import plan -a codex
code-agnostic import apply -a codex
# Enable target editors
code-agnostic apps enable -a cursor
code-agnostic apps enable -a opencode
# Preview and apply
code-agnostic plan
code-agnostic apply
```
## Editor compatibility
| Feature | OpenCode | Cursor | Codex |
|---------|:--------:|:------:|:-----:|
| MCP sync | yes | yes | yes |
| Rules sync (cross-compiled) | yes | yes | yes |
| Skills sync | yes | yes | yes |
| Agents sync | yes | yes | -- |
| Workspace propagation | yes | yes | yes |
| Import from | yes | yes | yes |
| Interactive import (TUI) | yes | yes | yes |
Codex does not support agents natively.
## Features
### Sync engine
Plan-then-apply workflow. Preview every change before it touches disk.
```bash
code-agnostic plan -a cursor # dry-run for one editor
code-agnostic plan # dry-run for all
code-agnostic apply # apply changes
code-agnostic status # check drift
```
### MCP management
Add, remove, and list MCP servers without editing JSON by hand.
```bash
code-agnostic mcp add github --command npx --args @modelcontextprotocol/server-github --env GITHUB_TOKEN
code-agnostic mcp list
code-agnostic mcp remove github
```
Env vars without a value (`--env GITHUB_TOKEN`) are stored as `${GITHUB_TOKEN}` references.
### Rules with metadata
Rules live in `rules/` as markdown files with optional YAML frontmatter:
```markdown
---
description: "Python coding standards"
globs: ["*.py"]
always_apply: false
---
Always use type hints. Prefer dataclasses over dicts.
```
Cross-compiled per editor: Cursor gets `.mdc` files with native frontmatter, OpenCode/Codex get `AGENTS.md` sections.
```bash
code-agnostic rules list
code-agnostic rules remove --name python-style
```
### Skills and agents
Canonical YAML frontmatter format, cross-compiled per editor.
```bash
code-agnostic skills list
code-agnostic agents list
```
### Workspaces
Register workspace directories. Repos inside them get rules, skills, and agents propagated as symlinks.
```bash
code-agnostic workspaces add --name myproject --path ~/code/myproject
code-agnostic workspaces list
```
### Git exclude
Prevent synced paths from showing up in `git status`. Managed per-workspace with customizable patterns.
```bash
code-agnostic workspaces git-exclude # all workspaces
code-agnostic workspaces git-exclude -w myproject # one workspace
code-agnostic workspaces exclude-add --pattern "*.generated" -w myproject
code-agnostic workspaces exclude-list -w myproject
```
### Import
Migrate existing config from any supported editor into the hub.
```bash
code-agnostic import plan -a codex
code-agnostic import apply -a codex
code-agnostic import apply -a cursor --include mcp --on-conflict overwrite
code-agnostic import plan -a codex -i # interactive TUI picker
```
### CLI conventions
All commands use named flags (`-a`, `-w`, `-v`). Singular aliases work too: `app` = `apps`, `workspace` = `workspaces`.
## Roadmap
- [x] Plan/apply/status sync engine
- [x] MCP server sync across editors
- [x] Skills and agents sync (symlink-based)
- [x] Workspace propagation into git repos
- [x] Import from existing editor configs
- [x] Consistent CLI with named flags and aliases
- [x] MCP add/remove/list commands
- [x] Rules system with YAML frontmatter and per-editor compilation
- [x] Cross-compilation for skills and agents
- [x] Per-workspace git-exclude customization
- [x] Interactive TUI for import selection
- [ ] Claude Code support
- [ ] `rules add` / `skills add` / `agents add` commands (open `$EDITOR` with template)
- [ ] Planner integration for cross-compiled skills and agents
- [ ] Shell auto-complete
- [ ] Full TUI mode (command palette + menus)
## Testing
```bash
uv sync --dev
uv run test
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click",
"rich",
"jsonschema>=4.0",
"tomli",
"pyyaml>=6.0",
"textual>=0.47",
"pre-commit>=4.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"jsonschema>=4.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"mypy>=1.11; extra == \"dev\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:46:54.171036 | code_agnostic-0.2.0.tar.gz | 91,952 | 1b/2d/7dc4916b22eb3c291bb5c7d96a70d121bc873644824b506978fff7924e99/code_agnostic-0.2.0.tar.gz | source | sdist | null | false | 6e44d2f033737c4c9944ff294701d303 | 0e963f14fa9ddb7681dc5c585208113562d0d3fd04f7cd9cc564d59036b3ca4c | 1b2d7dc4916b22eb3c291bb5c7d96a70d121bc873644824b506978fff7924e99 | null | [
"LICENSE"
] | 229 |
2.4 | nf-robot | 3.4.3 | Robot control system for Stringman | # nf_robot
Control code for the Stringman household robotic crane from Neufangled Robotics
## [Build Guides and Documentation](https://neufangled.com/docs)
Purchase assembled robots or kits at [neufangled.com](https://neufangled.com)
## Installation of stringman controller
Linux (Python 3.11 or later):
```bash
sudo apt install python3-dev python3-virtualenv python3-pip ffmpeg
python -m virtualenv venv
source venv/bin/activate
pip install "nf_robot[host]"
```
Start the headless robot controller in LAN-only mode. The particular robot's details will be read from and saved to `bedroom.conf`:
```bash
stringman-headless --config=bedroom.conf
```
## Installation of Robot Control Panel (developers)
```bash
git clone https://github.com/nhnifong/cranebot3-firmware.git
cd cranebot3-firmware
sudo apt install python3-dev python3-virtualenv python3-pip ffmpeg
python -m virtualenv venv
source venv/bin/activate
pip install -e ".[host,dev,pi]"
```
### If you have an RTX 5090
```bash
pip install --force-reinstall torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 torchcodec==0.6.0 --index-url https://download.pytorch.org/whl/cu129
```
### Run tests
```bash
pytest tests
```
### Setting up a component
Robot components that boot from the [`stringman-zero2w.img`](https://storage.googleapis.com/stringman-models/stringman-zero2w.img) (1.6 GB) image should begin looking for Wi-Fi share codes with their camera immediately. You can produce a code with [qifi.org](https://qifi.org).
Once the Pi sees the code, it will connect to the network and remember those settings. It should then be discoverable by the control panel via multicast DNS (Bonjour).
## Starting from a base rpi image
Alternatively, the software can be set up from a fresh Raspberry Pi OS Lite 64-bit image.
After booting any Raspberry Pi from a fresh image, perform an update:
```bash
sudo apt update -y && sudo apt full-upgrade -y -o Dpkg::Options::="--force-confold" && sudo apt install -y git python3-dev python3-virtualenv rpicam-apps i2c-tools
```
Clone the [cranebot3-firmware](https://github.com/nhnifong/cranebot3-firmware) repo:
```bash
git clone https://github.com/nhnifong/cranebot3-firmware.git && cd cranebot3-firmware
```
Set the component type by uncommenting the appropriate line in `server.conf`:
```bash
nano server.conf
```
Install stringman:
```bash
chmod +x install.sh
sudo ./install.sh
```
### Additional settings for anchors
Setup for any Raspberry Pi that will be part of an anchor.
Enable the UART serial hardware interface interactively:
```bash
sudo raspi-config
```
In interface options, select the serial port. Disable the login shell, but enable hardware serial.
Add the following lines to the end of `/boot/firmware/config.txt`. This disables Bluetooth, which would otherwise occupy the UART hardware. Reboot after this change.
```
enable_uart=1
dtoverlay=disable-bt
```
### Additional settings for gripper
Setup for the Raspberry Pi in the gripper with the Inventor HAT Mini.
Enable I2C:
```bash
sudo raspi-config nonint do_i2c 0
```
Add this line to `/boot/firmware/config.txt` just under `dtparam=i2c_arm=on`, then reboot:
```
dtparam=i2c_baudrate=400000
```
## Rebuilding the python module
Within a venv, install the build tools:
```bash
python3 -m pip install --upgrade build twine
```
Bump the version number in `pyproject.toml`, then build the module at this repo's root. Artifacts will be in `dist/`:
```bash
python3 -m build
```
Upload to PyPI:
```bash
python3 -m twine upload dist/*
```
### QA scripts
Note that if you are proceeding to the QA scripts right after doing the steps above, you must reboot and then stop the service before running them:
```bash
sudo reboot now
```
Log back in, then:
```bash
sudo systemctl stop cranebot.service
```
Run the QA scripts for the specific component type:
```bash
/opt/robot/env/bin/qa-anchor anchor|power_anchor
/opt/robot/env/bin/qa-gripper
/opt/robot/env/bin/qa-gripper-arp
```
These scripts check whether everything is connected as it should be and, in the case of anchors, set whether the anchor is a power anchor or not.
To update a component to the latest nf_robot version:
```bash
/opt/robot/env/bin/pip install --upgrade "nf_robot[pi]"
```
## Training models
## Support this project
[Donate on Ko-fi](https://ko-fi.com/neufangled)
| text/markdown | null | Nathaniel Nifong <naavox@gmail.com> | null | null | Apache 2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.0",
"zeroconf",
"asyncio",
"websockets",
"betterproto2",
"opencv-contrib-python-headless>=4.0",
"scipy; extra == \"host\"",
"torch>=2.4.0; extra == \"host\"",
"torchvision; extra == \"host\"",
"pupil-apriltags; extra == \"host\"",
"av; extra == \"host\"",
"huggingface_hub; extra == \... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T17:46:01.724206 | nf_robot-3.4.3.tar.gz | 147,994 | 0c/c4/e79820c8456f02355cc09dffe4ec490163e9581a217aaf73a92cf5f9bf56/nf_robot-3.4.3.tar.gz | source | sdist | null | false | d2a3ea23b5e919ce219812ed412b9203 | 33052ca921977a966b493ff74502ddfcd7aa2534c28b05c512f9cf60b9ddd5b1 | 0cc4e79820c8456f02355cc09dffe4ec490163e9581a217aaf73a92cf5f9bf56 | null | [
"LICENSE"
] | 237 |
2.4 | gamspy | 1.20.0 | Python-based algebraic modeling interface to GAMS | 
-----------------
[](https://gamspy.readthedocs.io/en/latest/)
[](https://pepy.tech/project/gamspy)
[](https://gamspy.readthedocs.io/en/latest/)
# GAMSPy: Algebraic Modeling Interface to GAMS
## Installation
```sh
pip install gamspy
```
## What is it?
**gamspy** is a mathematical optimization package that combines the power of the high-performance GAMS execution system
with the flexibility of the Python language. It includes all GAMS symbols (Set, Alias, Parameter, Variable, and
Equation) for composing mathematical models, a math package, and various utility functions.
## Documentation
The official documentation is hosted on [GAMSPy Readthedocs](https://gamspy.readthedocs.io/en/latest/index.html).
## Design Philosophy
GAMSPy makes extensive use of set-based operations -- there is no explicit looping or indexing in native Python.
These operations still take place, of course, "behind the scenes" in optimized, pre-compiled C code.
The set-based approach has many advantages:
- More concise Python code -- it avoids inefficient, hard-to-read for loops
- Closely resembles standard mathematical notation
- Easier to read
- Fewer lines of code, which generally means fewer bugs
## Main Features
Here are just a few of the things that **gamspy** does well:
- Specify model algebra natively in Python
- Combine the flexibility of Python's flow control with the power of GAMS model specification
- Test a variety of solvers on a model by changing only one line
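To give a flavor of the set-based style, here is a sketch of a small transportation model, adapted from the classic GAMS transport example. It assumes a working `gamspy` installation (which bundles a demo GAMS system via `gamspy_base`); the exact solver names available depend on your installation, so the `solver=` argument shown in the comment is illustrative.

```python
from gamspy import (
    Container, Set, Parameter, Variable, Equation, Model, Sum, Sense,
)

m = Container()

# Sets: supply plants and demand markets
i = Set(m, name="i", records=["seattle", "san-diego"])
j = Set(m, name="j", records=["new-york", "chicago", "topeka"])

# Parameters: plant capacities, market demands, per-unit shipping cost
a = Parameter(m, name="a", domain=i,
              records=[("seattle", 350), ("san-diego", 600)])
b = Parameter(m, name="b", domain=j,
              records=[("new-york", 325), ("chicago", 300), ("topeka", 275)])
c = Parameter(m, name="c", domain=[i, j], records=[
    ("seattle", "new-york", 0.225), ("seattle", "chicago", 0.153),
    ("seattle", "topeka", 0.162), ("san-diego", "new-york", 0.225),
    ("san-diego", "chicago", 0.162), ("san-diego", "topeka", 0.126),
])

# Decision variable: shipment quantities over all (i, j) pairs
x = Variable(m, name="x", domain=[i, j], type="Positive")

# Set-based constraints -- no explicit Python loops over i or j
supply = Equation(m, name="supply", domain=i)
supply[i] = Sum(j, x[i, j]) <= a[i]
demand = Equation(m, name="demand", domain=j)
demand[j] = Sum(i, x[i, j]) >= b[j]

transport = Model(m, name="transport", equations=m.getEquations(),
                  problem="LP", sense=Sense.MIN,
                  objective=Sum((i, j), c[i, j] * x[i, j]))
transport.solve()  # e.g. transport.solve(solver="HIGHS") to swap solvers
print(transport.objective_value)
```

Note how `supply[i] = Sum(j, x[i, j]) <= a[i]` mirrors the mathematical statement of the constraint directly, with the indexing handled by the GAMS execution engine rather than Python loops.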
## Getting Help
For usage questions, the best place to go to is [GAMSPy Documentation](https://gamspy.readthedocs.io/en/latest/index.html).
General questions and discussions can also take place on the [GAMSPy Discourse Platform](https://forum.gams.com).
| text/markdown | null | GAMS Development Corporation <support@gams.com> | null | null | null | Optimization, GAMS | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Programming Language :: Python",
"Topic :: Software Development",
"Topic :: Scientific/Engineering",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Progra... | [] | null | null | >=3.10 | [] | [] | [] | [
"gamsapi<54.0.0,>=53.1.0",
"gamspy_base<54.0.0,>=53.1.0",
"pandas<3.1,>=2.2.2",
"pydantic>=2.0",
"requests>=2.28.0",
"typer>=0.16.0",
"torch>=2.7.0; extra == \"torch\""
] | [] | [] | [] | [
"homepage, https://gams.com/sales/gamspy_facts/",
"documentation, https://gamspy.readthedocs.io/en/latest/user/index.html",
"repository, https://github.com/GAMS-dev/gamspy",
"changelog, https://github.com/GAMS-dev/gamspy/blob/develop/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T17:43:49.197819 | gamspy-1.20.0.tar.gz | 188,421 | 9e/08/07cf17313717772c0da3b2d3ebaff7b9687be0b0532a447419b7a04b3f79/gamspy-1.20.0.tar.gz | source | sdist | null | false | 8c6cb90b779ddd63b8a0cce502af840e | 8c74c0d687b3793f63d3af5bcf726536935d766ecef140cc29f024939ede10d0 | 9e0807cf17313717772c0da3b2d3ebaff7b9687be0b0532a447419b7a04b3f79 | null | [
"LICENSE"
] | 467 |