metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | karaoke-gen | 0.118.0 | Generate karaoke videos with synchronized lyrics. Handles the entire process from downloading audio and lyrics to creating the final video with title screens. | # Karaoke Generator 🎶 🎥 🚀
Generate professional karaoke videos with instrumental audio and synchronized lyrics. Available as a **local CLI** (`karaoke-gen`) or **cloud-based CLI** (`karaoke-gen-remote`) that offloads processing to Google Cloud.
## ✨ Two Ways to Generate Karaoke
### 1. Local CLI (`karaoke-gen`)
Run all processing locally on your machine. Requires GPU for optimal audio separation performance.
```bash
karaoke-gen "ABBA" "Waterloo"
```
### 2. Remote CLI (`karaoke-gen-remote`)
Offload all processing to a cloud backend. No GPU required - just authenticate and submit jobs.
```bash
karaoke-gen-remote ./song.flac "ABBA" "Waterloo"
```
Both CLIs produce identical outputs: 4K karaoke videos, CDG+MP3 packages, audio stems, and more.
---
## 🎯 Features
### Core Pipeline
- **Audio Separation**: AI-powered vocal/instrumental separation using MDX and Demucs models
- **Lyrics Transcription**: Word-level timestamps via AudioShake API
- **Lyrics Correction**: Match transcription against online lyrics (Genius, Spotify, Musixmatch)
- **Human Review**: Interactive UI for correcting lyrics before final render
- **Video Rendering**: High-quality 4K karaoke videos with customizable styles
- **Multiple Outputs**: MP4 (4K lossless/lossy, 720p), MKV, CDG+MP3, TXT+MP3
### Distribution Features
- **YouTube Upload**: Automatic upload to your YouTube channel
- **Dropbox Integration**: Organize output in brand-coded folders
- **Google Drive**: Upload to public share folders
- **Discord Notifications**: Webhook notifications on completion
---
## 📦 Installation
```bash
pip install karaoke-gen
```
This installs both `karaoke-gen` (local) and `karaoke-gen-remote` (cloud) CLIs.
### Requirements
- Python 3.10-3.13
- FFmpeg
- For local processing: a CUDA-capable GPU or an Apple Silicon machine is recommended
### Transcription Provider Setup
**Transcription is required** for creating karaoke videos with synchronized lyrics. The system needs word-level timing data to display lyrics in sync with the music.
#### Option 1: AudioShake (Recommended)
Commercial service with high-quality transcription. Best for production use.
```bash
export AUDIOSHAKE_API_TOKEN="your_audioshake_token"
```
Get an API key at [https://www.audioshake.ai/](https://www.audioshake.ai/). Note that, at the time of writing, AudioShake offers API access to businesses only.
#### Option 2: Local Whisper (No Cloud Required)
Run Whisper directly on your local machine using whisper-timestamped. Works on CPU, NVIDIA GPU (CUDA), or Apple Silicon.
```bash
# Install with local Whisper support
pip install "karaoke-gen[local-whisper]"
# Optional: Configure model size (tiny, base, small, medium, large)
export WHISPER_MODEL_SIZE="medium"
# Optional: Force specific device (cpu, cuda, mps)
export WHISPER_DEVICE="cpu"
```
**Model Size Guide:**
| Model | VRAM | Speed | Quality |
|-------|------|-------|---------|
| tiny | ~1GB | Fast | Lower |
| base | ~1GB | Fast | Basic |
| small | ~2GB | Medium | Good |
| medium | ~5GB | Slower | Better |
| large | ~10GB | Slowest | Best |
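As an illustration of how the table above can guide model choice, here is a small stdlib-only sketch. The `pick_whisper_model` helper is hypothetical (not part of karaoke-gen), and the VRAM figures are the approximate values from the table.

```python
# Illustrative helper: pick the largest Whisper model that fits in the
# available VRAM, using the approximate requirements from the table above.
# pick_whisper_model is a hypothetical name, not a karaoke-gen API.
MODEL_VRAM_GB = {"tiny": 1, "base": 1, "small": 2, "medium": 5, "large": 10}

def pick_whisper_model(available_vram_gb: float) -> str:
    """Return the largest model whose approximate VRAM need fits."""
    fitting = [m for m, need in MODEL_VRAM_GB.items() if need <= available_vram_gb]
    # Dicts preserve insertion order, so the last fitting entry is the largest.
    return fitting[-1] if fitting else "tiny"

print(pick_whisper_model(6))  # a 6 GB GPU fits up to "medium"
```

You could then export the result as `WHISPER_MODEL_SIZE` in your shell before running karaoke-gen.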
**CPU-Only Installation** (no GPU required):
```bash
# Pre-install CPU-only PyTorch first
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install "karaoke-gen[local-whisper]"
```
Local Whisper runs automatically as a fallback when no cloud transcription services are configured.
#### Option 3: Whisper via RunPod
Cloud-based alternative using OpenAI's Whisper model on RunPod infrastructure.
```bash
export RUNPOD_API_KEY="your_runpod_key"
export WHISPER_RUNPOD_ID="your_whisper_endpoint_id"
```
Set up a Whisper endpoint at [https://www.runpod.io/](https://www.runpod.io/)
#### Without Transcription (Instrumental Only)
If you don't need synchronized lyrics, use the `--skip-lyrics` flag:
```bash
karaoke-gen --skip-lyrics "Artist" "Title"
```
This creates an instrumental-only karaoke video without lyrics overlay.
> **Note:** See `lyrics_transcriber_temp/README.md` for detailed transcription provider configuration options.
---
## 🖥️ Local CLI (`karaoke-gen`)
### Basic Usage
```bash
# Generate from local audio file
karaoke-gen ./song.mp3 "Artist Name" "Song Title"
# Search and download audio automatically
karaoke-gen "Rick Astley" "Never Gonna Give You Up"
# Process from YouTube URL
karaoke-gen "https://www.youtube.com/watch?v=dQw4w9WgXcQ" "Rick Astley" "Never Gonna Give You Up"
```
### Remote Audio Separation (Optional)
Offload just the GPU-intensive audio separation to Modal.com while keeping other processing local:
```bash
export AUDIO_SEPARATOR_API_URL="https://USERNAME--audio-separator-api.modal.run"
karaoke-gen "Artist" "Title"
```
### Key Options
```bash
# Custom styling
karaoke-gen --style_params_json="./styles.json" "Artist" "Title"
# Generate CDG and TXT packages
karaoke-gen --enable_cdg --enable_txt "Artist" "Title"
# Skip video encoding (CDG/TXT only, faster)
karaoke-gen --no-video --enable_cdg "Artist" "Title"
# YouTube upload
karaoke-gen --enable_youtube_upload --youtube_description_file="./desc.txt" "Artist" "Title"
# Full production run
karaoke-gen \
  --style_params_json="./branding.json" \
  --enable_cdg \
  --enable_txt \
  --brand_prefix="BRAND" \
  --enable_youtube_upload \
  --youtube_description_file="./description.txt" \
  "Artist" "Title"
```
### Full Options Reference
```bash
karaoke-gen --help
```
---
## ☁️ Remote CLI (`karaoke-gen-remote`)
The remote CLI submits jobs to a Google Cloud backend that handles all processing. You don't need a GPU or any audio processing libraries installed locally.
### Setup
1. **Set the backend URL:**
```bash
export KARAOKE_GEN_URL="https://api.nomadkaraoke.com" # Or your own backend
```
2. **Authenticate with Google Cloud:**
```bash
gcloud auth login
```
### Basic Usage
```bash
# Submit a job
karaoke-gen-remote ./song.flac "ABBA" "Waterloo"
# The CLI will:
# 1. Upload your audio file
# 2. Monitor processing progress
# 3. Open lyrics review UI when ready
# 4. Prompt for instrumental selection
# 5. Download all outputs when complete
```
### Job Management
```bash
# List all jobs
karaoke-gen-remote --list
# Resume monitoring an existing job
karaoke-gen-remote --resume abc12345
# Cancel a running job
karaoke-gen-remote --cancel abc12345
# Delete a job and its files
karaoke-gen-remote --delete abc12345
```
### Full Production Run
```bash
karaoke-gen-remote \
  --style_params_json="./karaoke-styles.json" \
  --enable_cdg \
  --enable_txt \
  --brand_prefix=NOMAD \
  --enable_youtube_upload \
  --youtube_description_file="./youtube-description.txt" \
  ./song.flac "Artist" "Title"
```
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `KARAOKE_GEN_URL` | Backend service URL | Required |
| `KARAOKE_GEN_AUTH_TOKEN` | Admin auth token (for protected endpoints) | Optional |
| `REVIEW_UI_URL` | Lyrics review UI URL | `https://gen.nomadkaraoke.com/lyrics/` |
| `POLL_INTERVAL` | Seconds between status polls | `5` |
**Note:** The `REVIEW_UI_URL` defaults to the hosted lyrics review UI. For local development, set it to `http://localhost:5173` if you're running the frontend dev server.
### Authentication
The backend uses token-based authentication for admin operations (bulk delete, internal worker triggers). For basic job submission and monitoring, authentication is optional.
**For admin access:**
```bash
export KARAOKE_GEN_AUTH_TOKEN="your-admin-token"
```
The token must match one of the tokens configured in the backend's `ADMIN_TOKENS` environment variable.
### Non-Interactive Mode
For automated/CI usage:
```bash
karaoke-gen-remote -y ./song.flac "Artist" "Title"
```
The `-y` flag auto-accepts the default corrections and selects the clean instrumental.
---
## 🎨 Style Configuration
Create a `styles.json` file to customize the karaoke video appearance:
```json
{
  "intro": {
    "video_duration": 5,
    "background_image": "/path/to/title-background.png",
    "font": "/path/to/Font.ttf",
    "artist_color": "#ffdf6b",
    "title_color": "#ffffff"
  },
  "karaoke": {
    "background_image": "/path/to/karaoke-background.png",
    "font_path": "/path/to/Font.ttf"
  },
  "end": {
    "background_image": "/path/to/end-background.png"
  },
  "cdg": {
    "font_path": "/path/to/Font.ttf",
    "instrumental_background": "/path/to/cdg-background.png"
  }
}
```
When using `karaoke-gen-remote`, all referenced files are automatically uploaded with your job.
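Since every path in the style file must resolve to a real file, a quick pre-flight check can catch typos before a job is submitted. The sketch below is illustrative (not part of karaoke-gen); `missing_style_files` is a hypothetical helper, and the set of path-valued keys is taken from the example above.

```python
import json
from pathlib import Path

# Illustrative pre-flight check (not a karaoke-gen API): report any
# path-valued entries in a styles.json that point at missing files,
# so a job isn't submitted with a broken style configuration.
PATH_KEYS = {"background_image", "font", "font_path", "instrumental_background"}

def missing_style_files(styles_path: str) -> list[str]:
    styles = json.loads(Path(styles_path).read_text())
    missing = []
    for section, settings in styles.items():
        for key, value in settings.items():
            if key in PATH_KEYS and not Path(value).is_file():
                missing.append(f"{section}.{key}: {value}")
    return missing
```

For example, `missing_style_files("./styles.json")` returns an empty list when every referenced background and font exists.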
---
## 📤 Output Files
A completed job produces:
```
BRAND-1234 - Artist - Title/
├── Artist - Title (Final Karaoke Lossless 4k).mp4    # ProRes 4K
├── Artist - Title (Final Karaoke Lossless 4k).mkv    # FLAC audio 4K
├── Artist - Title (Final Karaoke Lossy 4k).mp4       # H.264 4K
├── Artist - Title (Final Karaoke Lossy 720p).mp4     # H.264 720p
├── Artist - Title (Final Karaoke CDG).zip            # CDG+MP3 package
├── Artist - Title (Final Karaoke TXT).zip            # TXT+MP3 package
├── Artist - Title (Karaoke).cdg                      # Individual CDG
├── Artist - Title (Karaoke).mp3                      # Karaoke audio
├── Artist - Title (Karaoke).lrc                      # LRC lyrics
├── Artist - Title (Karaoke).ass                      # ASS subtitles
├── Artist - Title (Title).mov                        # Title screen video
├── Artist - Title (End).mov                          # End screen video
├── Artist - Title (Instrumental...).flac             # Clean instrumental
├── Artist - Title (Instrumental +BV...).flac         # With backing vocals
└── stems/                                            # All audio stems
    ├── ...Vocals....flac
    ├── ...Bass....flac
    ├── ...Drums....flac
    └── ...
```
---
## 🏗️ Deploy Your Own Backend
The cloud backend runs on Google Cloud Platform using:
- **Cloud Run**: Serverless API hosting
- **Firestore**: Job state management
- **Cloud Storage**: File uploads and outputs
- **Modal.com**: GPU-accelerated audio separation
- **AudioShake**: Lyrics transcription API
### Prerequisites
- Google Cloud account with billing enabled
- [Pulumi CLI](https://www.pulumi.com/docs/install/)
- Modal.com account (for audio separation)
- AudioShake API key
### Infrastructure Setup
```bash
cd infrastructure
# Install dependencies
pip install -r requirements.txt
# Login to Pulumi
pulumi login
# Create a stack
pulumi stack init prod
# Configure GCP project
pulumi config set gcp:project your-project-id
pulumi config set gcp:region us-central1
# Deploy infrastructure
pulumi up
```
This creates:
- Firestore database
- Cloud Storage bucket
- Artifact Registry
- Service account with IAM roles
- Secret Manager secrets (you add values)
### Add Secret Values
```bash
# AudioShake API key
echo -n "your-audioshake-key" | gcloud secrets versions add audioshake-api-key --data-file=-
# Genius API key
echo -n "your-genius-key" | gcloud secrets versions add genius-api-key --data-file=-
# Modal API URL
echo -n "https://your-modal-url" | gcloud secrets versions add audio-separator-api-url --data-file=-
# YouTube OAuth credentials (JSON)
gcloud secrets versions add youtube-oauth-credentials --data-file=./youtube-creds.json
# Dropbox OAuth credentials (JSON)
gcloud secrets versions add dropbox-oauth-credentials --data-file=./dropbox-creds.json
# Google Drive service account (JSON)
gcloud secrets versions add gdrive-service-account --data-file=./gdrive-sa.json
```
### Deploy Cloud Run
Deployments happen automatically via GitHub Actions CI when pushing to `main`.
See `.github/workflows/ci.yml` for the full deployment workflow.
### Point CLI to Your Backend
```bash
export KARAOKE_GEN_URL="https://your-backend.run.app"
karaoke-gen-remote ./song.flac "Artist" "Title"
```
---
## 🔌 Backend API Reference
The backend exposes a REST API for job management.
### Job Submission
**POST** `/api/jobs/upload`
Submit a new karaoke generation job with audio file and options.
```bash
curl -X POST "https://api.example.com/api/jobs/upload" \
  -F "file=@song.flac" \
  -F "artist=ABBA" \
  -F "title=Waterloo" \
  -F "enable_cdg=true" \
  -F "enable_txt=true" \
  -F "brand_prefix=NOMAD" \
  -F "style_params=@styles.json" \
  -F "style_karaoke_background=@background.png"
```
### Job Status
**GET** `/api/jobs/{job_id}`
Get job status and details.
```bash
curl "https://api.example.com/api/jobs/abc12345"
```
### List Jobs
**GET** `/api/jobs`
List all jobs with optional status filter.
```bash
curl "https://api.example.com/api/jobs?status=complete&limit=10"
```
### Cancel Job
**POST** `/api/jobs/{job_id}/cancel`
Cancel a running job.
```bash
curl -X POST "https://api.example.com/api/jobs/abc12345/cancel" \
  -H "Content-Type: application/json" \
  -d '{"reason": "User cancelled"}'
```
### Delete Job
**DELETE** `/api/jobs/{job_id}`
Delete a job and its files.
```bash
curl -X DELETE "https://api.example.com/api/jobs/abc12345?delete_files=true"
```
### Lyrics Review
**GET** `/api/review/{job_id}/correction-data`
Get correction data for lyrics review.
**POST** `/api/review/{job_id}/complete`
Submit corrected lyrics and trigger video rendering.
### Instrumental Selection
**GET** `/api/jobs/{job_id}/instrumental-options`
Get available instrumental options.
**POST** `/api/jobs/{job_id}/select-instrumental`
Submit instrumental selection (clean or with_backing).
```bash
curl -X POST "https://api.example.com/api/jobs/abc12345/select-instrumental" \
  -H "Content-Type: application/json" \
  -d '{"selection": "clean"}'
```
### Download Files
**GET** `/api/jobs/{job_id}/download-urls`
Get download URLs for all output files.
**GET** `/api/jobs/{job_id}/download/{category}/{file_key}`
Stream download a specific file.
### Health Check
**GET** `/api/health`
Check backend health status.
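A script can drive these endpoints directly. Below is a minimal stdlib polling sketch against the job-status endpoint documented above; note that the `status` field name and the terminal values (`complete`, `error`, `cancelled`) are assumptions about the response schema, not confirmed by this reference.

```python
import json
import time
import urllib.request

def job_status_url(base_url: str, job_id: str) -> str:
    """Build the GET /api/jobs/{job_id} URL from the reference above."""
    return f"{base_url.rstrip('/')}/api/jobs/{job_id}"

def wait_for_job(base_url: str, job_id: str, poll_interval: int = 5) -> dict:
    """Poll job status until a terminal state (field names are assumptions)."""
    while True:
        with urllib.request.urlopen(job_status_url(base_url, job_id), timeout=30) as resp:
            job = json.load(resp)
        if job.get("status") in ("complete", "error", "cancelled"):
            return job
        time.sleep(poll_interval)
```

In practice, `karaoke-gen-remote --resume` already does this for you; a raw client like this is mainly useful for custom automation.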
---
## 🔧 Troubleshooting
### "No suitable files found for processing"
This error occurs during the finalisation step when the `(With Vocals).mkv` file is missing. This file is created during lyrics transcription.
**Most common cause:** No transcription provider configured.
**Quick fix:**
1. Check if transcription providers are configured:
```bash
echo $AUDIOSHAKE_API_TOKEN
echo $RUNPOD_API_KEY
```
2. If both are empty, set up a provider (see [Transcription Provider Setup](#transcription-provider-setup))
3. Or use `--skip-lyrics` for instrumental-only karaoke:
```bash
karaoke-gen --skip-lyrics "Artist" "Title"
```
**Other causes:**
- Invalid API credentials - verify your tokens are correct and active
- API service unavailable - check service status pages
- Network connectivity issues - ensure you can reach the API endpoints
- Transcription timeout - try again or use a different provider
### Transcription Fails Silently
If karaoke-gen runs without errors but produces no synchronized lyrics:
1. **Check logs** - Run with `--log_level debug` for detailed output:
```bash
karaoke-gen --log_level debug "Artist" "Title"
```
2. **Verify environment variables** - Ensure API tokens are exported in your shell:
```bash
# Check if set
printenv | grep -E "(AUDIOSHAKE|RUNPOD|WHISPER)"
# Set in current session
export AUDIOSHAKE_API_TOKEN="your_token"
```
3. **Test API connectivity** - Verify you can reach the transcription service
### "No lyrics found from any source"
This warning means no reference lyrics were fetched from online sources (Genius, Spotify, Musixmatch). The transcription will still work, but auto-correction may be less accurate.
**To fix:**
- Set `GENIUS_API_TOKEN` for Genius lyrics
- Set `SPOTIFY_COOKIE_SP_DC` for Spotify lyrics
- Set `RAPIDAPI_KEY` for Musixmatch lyrics
- Or provide lyrics manually with `--lyrics_file /path/to/lyrics.txt`
### Video Quality Issues
If the output video has quality problems:
- Ensure FFmpeg is properly installed: `ffmpeg -version`
- Check available codecs: `ffmpeg -codecs`
- For 4K output, ensure sufficient disk space (10GB+ per track)
### Local Whisper Issues
#### GPU Out of Memory
If you get CUDA out of memory errors:
```bash
# Use a smaller model
export WHISPER_MODEL_SIZE="small" # or "tiny"
# Or force CPU mode
export WHISPER_DEVICE="cpu"
```
#### Slow Transcription on CPU
CPU transcription is significantly slower than GPU. For faster processing:
- Use a smaller model (`tiny` or `base`)
- Consider using cloud transcription (AudioShake or RunPod)
- On Apple Silicon, the `small` model offers good speed/quality balance
#### Model Download Issues
Whisper models are downloaded on first use (~1-3GB depending on size). If downloads fail:
- Check your internet connection
- Set a custom cache directory: `export WHISPER_CACHE_DIR="/path/with/space"`
- Models are cached in `~/.cache/whisper/` by default
#### whisper-timestamped Not Found
If you get "whisper-timestamped is not installed":
```bash
pip install "karaoke-gen[local-whisper]"
# Or install directly:
pip install whisper-timestamped
```
#### Disabling Local Whisper
If you want to disable local Whisper (e.g., to force cloud transcription):
```bash
export ENABLE_LOCAL_WHISPER="false"
```
---
## 🧪 Development
### Running Tests
```bash
# Run all tests
pytest tests/ backend/tests/ -v
# Run only unit tests
pytest tests/unit/ -v
# Run with coverage
pytest tests/unit/ -v --cov=karaoke_gen --cov-report=term-missing
```
### Project Structure
```
karaoke-gen/
├── karaoke_gen/              # Core CLI package
│   ├── utils/
│   │   ├── gen_cli.py        # Local CLI (karaoke-gen)
│   │   └── remote_cli.py     # Remote CLI (karaoke-gen-remote)
│   ├── karaoke_finalise/     # Video encoding, packaging, distribution
│   └── style_loader.py       # Unified style configuration
├── backend/                  # Cloud backend (FastAPI)
│   ├── api/routes/           # API endpoints
│   ├── workers/              # Background processing workers
│   └── services/             # Business logic services
├── infrastructure/           # Pulumi IaC for GCP
├── docs/                     # Documentation
└── tests/                    # Test suite
```
---
## 📄 License
MIT
---
## 🤝 Contributing
Contributions are welcome! Please see our contributing guidelines.
| text/markdown | Andrew Beveridge | andrew@beveridge.uk | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"aiohttp<4.0.0,>=3.13.2",
"argparse>=1.4.0",
"attrs>=24.2.0",
"audio-separator[cpu]>=0.34.0",
"beautifulsoup4>=4",
"black>=23; extra == \"dev\"",
"cattrs>=24.1.2",
"dropbox>=12",
"email-validator>=2.0.0",
"fastapi>=0.104.0",
"fetch-lyrics-from-genius>=0.1",
"ffmpeg-python<0.3.0,>=0.2.0",
"flacfetch>=0.9.0",
"fonttools>=4.55",
"google-api-python-client",
"google-auth",
"google-auth-httplib2",
"google-auth-oauthlib",
"google-cloud-firestore>=2.14.0",
"google-cloud-run>=0.10.0",
"google-cloud-secret-manager>=2.18.0",
"google-cloud-storage>=2.14.0",
"google-cloud-tasks>=2.16.0",
"httpx>=0.25.0",
"jiwer>=3.0.0",
"karaoke-lyrics-processor>=0.6",
"kbputils<0.0.17,>=0.0.16",
"langchain>=0.3.0",
"langchain-anthropic>=0.2.0",
"langchain-core>=0.3.0",
"langchain-google-genai>=2.0.0",
"langchain-ollama>=0.2.0",
"langchain-openai>=0.2.0",
"langfuse>=3.0.0",
"langgraph>=0.2.0",
"lyrics-converter>=0.2.1",
"lyricsgenius>=3",
"matplotlib>=3",
"metaphone>=0.6",
"mutagen>=1.47",
"nest-asyncio>=1.5",
"nltk>=3.9",
"numpy>=2",
"ollama>=0.4.7",
"openai>=1.63.2",
"opentelemetry-api>=1.20.0",
"opentelemetry-exporter-gcp-trace>=1.6.0",
"opentelemetry-instrumentation-fastapi>=0.41b0",
"opentelemetry-instrumentation-httpx>=0.41b0",
"opentelemetry-instrumentation-logging>=0.41b0",
"opentelemetry-resourcedetector-gcp>=1.6.0a0",
"opentelemetry-sdk>=1.20.0",
"pillow>=10.1",
"psutil<8.0.0,>=7.0.0",
"pydantic>=2.5.0",
"pydantic-settings>=2.1.0",
"pydub>=0.25.1",
"pyinstaller>=6.3",
"pyperclip",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-mock>=3.10; extra == \"dev\"",
"python-dotenv>=1.0.0",
"python-levenshtein>=0.26",
"python-multipart<0.0.21,>=0.0.20",
"python-slugify>=8",
"pywebpush>=2.0.0",
"requests>=2",
"sendgrid>=6.10.0",
"shortuuid>=1.0.13",
"spacy>=3.8.7",
"spacy-syllables>=3",
"srsly>=2.5.1",
"stripe>=7.0.0",
"syllables>=1",
"syrics>=0",
"tenacity<9.0.0,>=8.2.0",
"thefuzz>=0.22",
"toml>=0.10",
"torch>=2.7",
"tqdm>=4.67",
"transformers>=4.47",
"uvicorn[standard]>=0.24.0",
"whisper-timestamped>=1.15.0; extra == \"local-whisper\"",
"yt-dlp>=2024.0.0"
] | [] | [] | [] | [
"Documentation, https://github.com/nomadkaraoke/karaoke-gen/blob/main/README.md",
"Homepage, https://github.com/nomadkaraoke/karaoke-gen",
"Repository, https://github.com/nomadkaraoke/karaoke-gen"
] | poetry/2.3.2 CPython/3.13.11 Linux/6.11.0-1018-azure | 2026-02-20T18:26:05.990481 | karaoke_gen-0.118.0-py3-none-any.whl | 8,855,811 | 5f/5d/a7eaf34bf79340857e4b97dacf48fef77d19ef5add26771bfc39f06a96b7/karaoke_gen-0.118.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 071a8eaad531c7d1310b81689ea7621b | 89feccefd10fd1224375e17e483a7cc32d98d03895fb9cf03c24e2d93c878e15 | 5f5da7eaf34bf79340857e4b97dacf48fef77d19ef5add26771bfc39f06a96b7 | null | [
"LICENSE"
] | 223 |
2.4 | jugs-chassis | 0.1.1 | Shared chassis (logging, etc.) for JUGS services | # jugs-chassis
`jugs-chassis` is a shared JUGS library that provides reusable utilities for multiple services.
It is designed for cross-cutting concerns that should be imported directly by services (for example: logging setup and shared runtime conventions), instead of being exposed as a standalone service.
## Why this package exists
In JUGS, some capabilities are best delivered as code dependencies rather than API calls.
Using `jugs-chassis` as a dependency makes it easier to:
- Reuse the same behavior across services.
- Keep service internals consistent.
- Ship functionality (not inter-service data exchange) in a versioned Python package.
## Current scope
At the moment, the primary implemented utility is logging configuration.
The logging setup is currently tuned for one service context. As the microservices chassis effort progresses, this will be normalized so all JUGS services can use a common logging standard in later versions.
## Installation
### Dockerizing and using in services independently
```bash
pip install jugs-chassis
```
### Installing as a dependency (e.g., inside Dockerized services)
```bash
cd libs/jugs_chassis
pip install .
```
## Usage
```python
from jugs_chassis.logging import configure_logging, set_request_id

configure_logging()
set_request_id('req-123')
```
Optional environment variables commonly used by the logging module include:
- `LOG_LEVEL`
- `WERKZEUG_LEVEL`
- `LOG_SERVICE`
- `LOG_ENV`
- `LOG_DIR_BASE`
## JUGS
JUGS is a sector-based carbon-emission evaluation framework built with a microservices architecture.
Each module runs as an independent service (a "jug"). Current services focus on building life-cycle assessment and city-scale geospatial cleaning/validation workflows. Alongside services, JUGS also includes shared Python libraries such as `jugs-chassis` to provide reusable internal functionality.
## Roadmap note
`jugs-chassis` will continue to grow as a shared library for JUGS services, with future versions expanding beyond logging into additional standardized service utilities.
| text/markdown | Alireza Adli | null | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://demianadli.com",
"Repository, https://github.com/demianAdli/jugs",
"Source, https://github.com/demianAdli/jugs/tree/main/libs/jugs_chassis",
"Issues, https://github.com/demianAdli/jugs/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T18:25:47.337012 | jugs_chassis-0.1.1.tar.gz | 15,618 | 4e/95/8554ff87dca04dd8fd5d31177bccd81b1c6276779c9a1925ca67f0d95741/jugs_chassis-0.1.1.tar.gz | source | sdist | null | false | 063823500d7063abbd218c75a9902201 | aab4861c9be98f9cc56da1330d780d54f4aafa79f38e8f7a9c6e9b1e2e482236 | 4e958554ff87dca04dd8fd5d31177bccd81b1c6276779c9a1925ca67f0d95741 | null | [
"LICENSE"
] | 215 |
2.4 | ndx-rate-maps | 0.1.0 | NWB extension for storing rate maps (spatial firing rate maps, tuning curves) | # ndx-rate-maps Extension for NWB
NWB extension for storing rate maps (spatial firing rate maps, tuning curves).
`RateMapTable` is a `DynamicTable` where each row holds a rate map for one unit. It supports both 1D tuning curves (e.g., head direction) and 2D spatial rate maps (e.g., place cells). All maps in a table share the same binning. For multiple conditions, create separate table instances.
## Installation
```bash
pip install ndx-rate-maps
```
For development:
```bash
git clone https://github.com/catalystneuro/ndx-rate-maps.git
cd ndx-rate-maps
pip install -e .
```
## Usage
### 2D spatial rate maps
```python
import numpy as np
from datetime import datetime
from zoneinfo import ZoneInfo
from pynwb import NWBFile, NWBHDF5IO
from hdmf.common import DynamicTableRegion, VectorData
from ndx_rate_maps import RateMapTable
# Create NWBFile and add units
nwbfile = NWBFile(
session_description="session",
identifier="id",
session_start_time=datetime.now(ZoneInfo("UTC")),
)
nwbfile.add_unit_column("location", "brain region")
for i in range(3):
nwbfile.add_unit(spike_times=np.array([1.0, 2.0, 3.0]) + i, location="CA1")
# Create rate map table
num_units = 3
num_bins_x, num_bins_y = 50, 50
rate_maps = RateMapTable(
name="place_rate_maps",
description="2D spatial rate maps, 2cm bins, gaussian-smoothed",
bin_edges_dim0=np.linspace(0, 100, num_bins_x + 1), # x bin edges in cm
dim0_label="x_position",
dim0_unit="cm",
bin_edges_dim1=np.linspace(0, 100, num_bins_y + 1), # y bin edges in cm
dim1_label="y_position",
dim1_unit="cm",
smoothing_kernel="gaussian",
smoothing_kernel_width=3.0, # in cm (same units as bin edges)
units=DynamicTableRegion(
name="units",
data=list(range(num_units)),
description="units for each rate map",
table=nwbfile.units,
),
rate_map=VectorData(
name="rate_map",
description="Rate map values in Hz",
data=np.random.rand(num_units, num_bins_x, num_bins_y),
),
occupancy_map=VectorData(
name="occupancy_map",
description="Time spent per bin in seconds",
data=np.random.rand(num_units, num_bins_x, num_bins_y),
),
)
# Add to processing module
module = nwbfile.create_processing_module("behavior", "Behavioral data")
module.add(rate_maps)
# Write
with NWBHDF5IO("example.nwb", "w") as io:
io.write(nwbfile)
# Read
with NWBHDF5IO("example.nwb", "r") as io:
read_nwb = io.read()
read_table = read_nwb.processing["behavior"]["place_rate_maps"]
print(read_table.unit_of_measurement) # "Hz"
print(read_table["rate_map"][0].shape) # (50, 50)
```
### 1D head direction tuning curves
```python
num_bins = 60
hd_maps = RateMapTable(
name="hd_tuning_curves",
description="Head direction tuning curves, 6-degree bins",
bin_edges_dim0=np.linspace(0, 2 * np.pi, num_bins + 1), # radians
dim0_label="head_direction",
dim0_unit="radians",
units=DynamicTableRegion(
name="units",
data=list(range(num_units)),
description="units for each tuning curve",
table=nwbfile.units,
),
rate_map=VectorData(
name="rate_map",
description="Rate map values in Hz",
data=np.random.rand(num_units, num_bins),
),
)
```
### Time support
Link the rate maps to the time intervals over which they were computed:
```python
from pynwb.epoch import TimeIntervals
# Create intervals for the run epochs used to compute rate maps
time_support = TimeIntervals(name="time_support", description="run epochs")
time_support.add_interval(start_time=0.0, stop_time=120.0)
time_support.add_interval(start_time=200.0, stop_time=350.0)
module.add(time_support)
rate_maps.time_support = time_support
```
This also works with `nwbfile.trials`, `nwbfile.epochs`, or any `TimeIntervals` table.
### Multiple conditions
Create separate tables for each condition:
```python
left_trials = RateMapTable(
name="rate_maps_left_trials",
description="Rate maps for left-choice trials",
# ... same structure as above
)
right_trials = RateMapTable(
name="rate_maps_right_trials",
description="Rate maps for right-choice trials",
# ... same structure as above
)
```
## Schema
### `RateMapTable` (extends `DynamicTable`)
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `unit_of_measurement` | text (fixed: "Hz") | Yes | Unit for rate values |
| `smoothing_kernel` | text | No | Smoothing kernel type (e.g., "gaussian") |
| `smoothing_kernel_width` | float64 | No | Kernel width, in same units as bin edges |
| `dim0_label` | text | Yes | Label for the first dimension (e.g., "x_position") |
| `dim0_unit` | text | Yes | Unit of measurement for the first dimension (e.g., "cm") |
| `dim1_label` | text | No | Label for the second dimension (2D only) |
| `dim1_unit` | text | No | Unit of measurement for the second dimension (2D only) |
| `bin_edges_dim0` | float64 array | Yes | Bin edges along first dimension |
| `bin_edges_dim1` | float64 array | No | Bin edges along second dimension (2D only) |
| `units` | DynamicTableRegion | Yes | Reference to Units table |
| `rate_map` | VectorData (float64) | Yes | Rate map values in Hz |
| `occupancy_map` | VectorData (float64) | No | Time spent per bin in seconds |
| `spike_count_map` | VectorData (float64) | No | Spike count per bin |
| `source_timeseries` | link to TimeSeries | No | Source behavioral timeseries |
| `time_support` | link to TimeIntervals | No | Time intervals over which rate maps were computed |
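The units in the table imply the usual relationship rate = spikes / time. As a hedged illustration (not code from the extension itself), a 1D rate map in Hz could be derived from the optional `spike_count_map` and `occupancy_map` columns like this:

```python
# Illustrative only: how a rate map in Hz relates to spike counts and
# occupancy (seconds per bin), following the units in the schema table.

def compute_rate_map(spike_count_map, occupancy_map, min_occupancy=1e-9):
    """Divide spike counts by time spent per bin; unvisited bins yield 0.0 Hz."""
    rates = []
    for spikes, seconds in zip(spike_count_map, occupancy_map):
        rates.append(spikes / seconds if seconds > min_occupancy else 0.0)
    return rates

spike_counts = [10.0, 4.0, 0.0]   # spikes per bin
occupancy = [2.0, 2.0, 0.0]       # seconds per bin
print(compute_rate_map(spike_counts, occupancy))  # [5.0, 2.0, 0.0]
```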
---
This extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).
| text/markdown | null | Ben Dichter <ben.dichter@catalystneuro.com> | null | null | null | NWB, NeurodataWithoutBorders, ndx-extension, nwb-extension | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"hdmf>=4.0.0",
"pynwb>=3.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/catalystneuro/ndx-rate-maps",
"Bug Tracker, https://github.com/catalystneuro/ndx-rate-maps/issues"
] | twine/6.2.0 CPython/3.12.9 | 2026-02-20T18:25:26.668892 | ndx_rate_maps-0.1.0.tar.gz | 20,503 | e9/f7/85a8584c90bed15808ec7c8ded6c35f4d5c2f6816e82fa43dc959eca7861/ndx_rate_maps-0.1.0.tar.gz | source | sdist | null | false | 50519dfc5cfc49123c57bcc7e14674cd | 5f08beb4fc30131ec224b750af7dff256438aabb4d0414f6e3f59710b0f24b77 | e9f785a8584c90bed15808ec7c8ded6c35f4d5c2f6816e82fa43dc959eca7861 | BSD-3-Clause | [
"LICENSE.txt"
] | 220 |
2.4 | wrapex | 0.1.0 | Command Dispatch Refactoring Toolkit — programmatic access to skills, rules, schemas, templates, and examples for incremental command-pattern adoption. | # wrapex
A toolkit for incrementally adopting a **command dispatch architecture** in any TypeScript/React codebase. AI-agent-first — the primary interface is an AI coding agent that reads the skill files and follows them step by step.
## What It Does
wrapex helps you wrap existing functions, store actions, and event handlers into a centralized **command registry** — without modifying existing code. Once commands are registered, you can wire them to:
- **Command palette** (Ctrl+K search and execute)
- **Keyboard shortcuts**
- **AI assistant tools** (Vercel AI SDK / Anthropic SDK)
- **MCP server tools** (for Claude Desktop, Cursor, etc.)
- **Telemetry** (Sentry breadcrumbs + analytics events)
- **Test skeletons**
- **Feature flags**
## The Strangler Fig Approach
The toolkit follows the strangler fig migration pattern:
1. **Wrap**: Create new command definition files that delegate to existing code. Zero changes to original files.
2. **Enrich**: Add Zod schemas, descriptions, keybindings, when-clauses.
3. **Wire**: Connect commands to palettes, shortcuts, AI, MCP, telemetry, tests.
The existing codebase continues to work unchanged. Commands are an additional layer, not a replacement.
## Installation
### npm (TypeScript runtime)
```bash
npm install wrapex
```
```typescript
import { defineCommand, createRegistry } from 'wrapex';
import { createPaletteAdapter } from 'wrapex/adapters';
import { CommandCandidate } from 'wrapex/schemas';
```
### pip (Python — data access)
```bash
pip install wrapex
```
```python
import wrapex
# List and read skills, rules, examples, schemas
wrapex.list_skills() # ['01-diagnose.md', '02-plan.md', ...]
wrapex.get_skill('01') # Returns the content of 01-diagnose.md
wrapex.get_rule('naming') # Returns command-naming.md content
wrapex.get_example('zustand') # Returns the zustand example README
```
### GitHub (raw files for AI agents)
Point your AI coding agent directly at the repo. The `SKILL.md` file is the master guide.
## Quick Start
### For AI Agents
Read `SKILL.md` — it's the master guide. Follow the skills in order:
1. `skills/01-diagnose.md` → Scan the codebase, produce a diagnosis report
2. `skills/02-plan.md` → Prioritize candidates into a phased backlog
3. `skills/03-scaffold.md` → Set up the registry infrastructure
4. `skills/04-wrap.md` → Create command wrappers
5. `skills/05-enrich.md` → Add schemas and metadata
6. `skills/06-12` → Wire to palette, shortcuts, AI, MCP, tests, etc.
### For Humans
1. Point your AI coding agent (Claude Code, Cursor, Copilot) at this toolkit.
2. Tell it to read `SKILL.md` and run the diagnosis on your codebase.
3. Review the diagnosis report and refactoring plan.
4. Let the agent scaffold and wrap commands.
5. Choose which wire-up skills you want (palette, AI tools, MCP, etc.).
## Directory Structure
```
wrapex/
├── SKILL.md # Master skill file for AI agents
├── AGENTS.md # AI agent instructions
├── skills/ # 12 step-by-step skill files
├── rules/ # Naming, categories, when-clause conventions
├── examples/ # Worked examples (zustand, event-handler, api-call, redux)
├── templates/ # command-definition.ts.template
├── src/ # TypeScript runtime (npm package source)
│ ├── define-command.ts # defineCommand helper + types
│ ├── command-registry.ts # Core registry with middleware pipeline
│ ├── middleware-pipeline.ts # Pre-built middleware
│ ├── validation-middleware.ts # Zod validation middleware
│ ├── telemetry-middleware.ts # Sentry + analytics middleware
│ ├── adapters/ # palette, MCP, AI tools, test-generator
│ └── schemas/ # Zod schemas for diagnosis, planning
├── python/wrapex/ # Python package source
├── ts-tests/ # TypeScript tests (vitest)
└── py-tests/ # Python tests (pytest)
```
## Design Principles
1. **AI-agent-first**: Skill files are instructions an AI agent reads and follows.
2. **Zero-touch**: Diagnose and Wrap phases never modify existing files.
3. **Progressive value**: Each skill delivers standalone value.
4. **Generic**: Works with Zustand, Redux, MobX, or plain React state.
5. **Small runtime**: Registry + middleware + adapters < 500 lines total.
## Compatibility
Works with (but does not depend on):
- **State**: Zustand, Redux Toolkit, MobX, Jotai, Recoil, plain React
- **Schemas**: Zod v3 or v4
- **Palette**: cmdk, kbar
- **Shortcuts**: tinykeys
- **AI**: Vercel AI SDK, Anthropic SDK
- **MCP**: MCP TypeScript SDK
- **Telemetry**: Sentry, PostHog, Amplitude
- **Testing**: Vitest, Jest, Playwright
- **Feature flags**: PostHog, LaunchDarkly, Statsig
## Architecture Reference
The command dispatch pattern draws from:
- **VS Code's command system**: String IDs, when-clauses, dual registration
- **Redux middleware**: Composable pipeline wrapping dispatch
- **Strangler fig pattern**: Incremental migration without big-bang rewrites
- **Zod as schema bridge**: Single definition → TS types + JSON Schema + validation
| text/markdown | thorwhalen | null | null | null | null | ai-toolkit, command-dispatch, command-pattern, mcp, refactoring | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=7.0; extra == \"testing\""
] | [] | [] | [] | [
"Homepage, https://github.com/thorwhalen/wrapex",
"Repository, https://github.com/thorwhalen/wrapex",
"Issues, https://github.com/thorwhalen/wrapex/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:24:59.264226 | wrapex-0.1.0.tar.gz | 49,070 | 18/a0/7d79a0cd89d83d31c50fab43301ccd5bc4ac617f161261501b955c89f3ac/wrapex-0.1.0.tar.gz | source | sdist | null | false | 69372dcd990d14d70a19722629bb2a65 | 2656ffdfce240561dad6fc7413912212d31ffae909e47217dd5cfe2fbb9cedb4 | 18a07d79a0cd89d83d31c50fab43301ccd5bc4ac617f161261501b955c89f3ac | Apache-2.0 | [
"LICENSE"
] | 215 |
2.4 | mlconfgen | 0.2.3 | Shape-constrained molecule generation via Equivariant Diffusion and GCN | # ML Conformer Generator
[](https://doi.org/10.1039/D5DD00318K)
<img src="https://raw.githubusercontent.com/Membrizard/ml_conformer_generator/main/assets/logo/mlconfgen_logo.png" width="200" style="display: block; margin: 0 10%;">
**ML Conformer Generator**
is a tool for shape-constrained molecule generation using an Equivariant Diffusion Model (EDM)
and a Graph Convolutional Network (GCN). It is designed to generate 3D molecular conformations
that are both chemically valid and spatially similar to a reference shape.
## Supported features
* **Shape-guided molecular generation**
Generate novel molecules that conform to arbitrary 3D shapes—such as protein binding pockets or custom-defined spatial regions.
* **Reference-based conformer similarity**
Generate molecules whose conformations closely resemble a reference structure, supporting scaffold-hopping and ligand-based design workflows.
* **Fragment-based inpainting**
Fix specific substructures or fragments within a molecule and complete or grow the rest in a geometrically consistent manner.
## Citation
If you use **MLConfGen** in your research, please cite:
Denis Sapegin, Fedor Bakharev, Dmitry Krupenya, Azamat Gafurov, Konstantin Pildish, and Joseph C. Bear.
*Moment of inertia as a simple shape descriptor for diffusion-based shape-constrained molecular generation.*
Digital Discovery, 2025.
DOI: [10.1039/D5DD00318K](https://doi.org/10.1039/D5DD00318K)
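The shape descriptor named in the title can be illustrated with a short sketch: compute the inertia tensor of a molecule's atom positions and read off the principal moments. Unit masses and a hand-picked symmetric point set are illustrative assumptions here, not the package's actual implementation.

```python
# Hedged sketch: principal moments of inertia as a simple 3D shape
# descriptor for a point cloud of atom positions (unit masses assumed).

def inertia_tensor(points):
    """3x3 inertia tensor of unit-mass points about their centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in points]
    ixx = sum(y * y + z * z for x, y, z in centered)
    iyy = sum(x * x + z * z for x, y, z in centered)
    izz = sum(x * x + y * y for x, y, z in centered)
    ixy = -sum(x * y for x, y, z in centered)
    ixz = -sum(x * z for x, y, z in centered)
    iyz = -sum(y * z for x, y, z in centered)
    return [[ixx, ixy, ixz], [ixy, iyy, iyz], [ixz, iyz, izz]]

# Four points on the x/y axes: the off-diagonal terms vanish, so the
# diagonal entries are the principal moments directly.
points = [(2, 0, 0), (-2, 0, 0), (0, 1, 0), (0, -1, 0)]
tensor = inertia_tensor(points)
print([tensor[i][i] for i in range(3)])  # [2.0, 8.0, 10.0]
```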
---
## Installation
1. Install the package:
`pip install mlconfgen`
2. Download the weights from Hugging Face:
> https://huggingface.co/Membrizard/ml_conformer_generator
`edm_moi_chembl_15_39.pt`
`adj_mat_seer_chembl_15_39.pt`
---
## 🐍 Python API
See interactive examples: `./python_api_demo.ipynb`
```python
from rdkit import Chem
from mlconfgen import MLConformerGenerator, evaluate_samples
model = MLConformerGenerator(
edm_weights="./edm_moi_chembl_15_39.pt",
adj_mat_seer_weights="./adj_mat_seer_chembl_15_39.pt",
diffusion_steps=100,
)
reference = Chem.MolFromMolFile('./assets/demo_files/ceyyag.mol')
samples = model.generate_conformers(reference_conformer=reference, n_samples=20, variance=2)
aligned_reference, std_samples = evaluate_samples(reference, samples)
```
---
## 🚀 Overview
This solution employs:
- **Equivariant Diffusion Model (EDM) [[1]](https://doi.org/10.48550/arXiv.2203.17003)**: For generating atom coordinates and types under a shape constraint.
- **Graph Convolutional Network (GCN) [[2]](https://doi.org/10.1039/D3DD00178D)**: For predicting atom adjacency matrices.
- **Deterministic Standardization Pipeline**: For refining and validating generated molecules.
---
## 🧠 Model Training
- Trained on **1.6 million** compounds from the **ChEMBL** database.
- Filtered to molecules with **15–39 heavy atoms**.
- Supported elements: `H, C, N, O, F, P, S, Cl, Br`.
---
## 🧪 Standardization Pipeline
The generated molecules are post-processed through the following steps:
- Largest Fragment picker
- Valence check
- Kekulization
- RDKit sanitization
- Constrained Geometry optimization via **MMFF94** Molecular Dynamics
---
## 📏 Evaluation Pipeline
Aligns generated molecules to the reference and evaluates their shape similarity using
**Shape Tanimoto Similarity [[3]](https://doi.org/10.1007/978-94-017-1120-3_5)** via Gaussian Molecular Volume overlap.
> Hydrogens are ignored in both reference and generated samples for this metric.
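The Shape Tanimoto formula itself is the overlap volume divided by the union volume. As a sketch (the volume numbers below are made up for illustration; the package derives them from Gaussian molecular volume overlaps):

```python
# Hedged sketch of the Shape Tanimoto similarity used by the evaluation
# pipeline: T = V_AB / (V_A + V_B - V_AB), where V_AB is the overlap
# volume of the two shapes.

def shape_tanimoto(vol_a, vol_b, vol_overlap):
    """Shape Tanimoto similarity, in [0, 1] for valid volumes."""
    return vol_overlap / (vol_a + vol_b - vol_overlap)

# Identical shapes overlap completely -> similarity 1.0
print(shape_tanimoto(100.0, 100.0, 100.0))  # 1.0
# Partial overlap
print(shape_tanimoto(100.0, 120.0, 60.0))   # 0.375
```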
---
## 📊 Performance (100 Denoising Steps)
*Tested on 100,000 samples using 1,000 CCDC Virtual Screening [[4]](https://www.ccdc.cam.ac.uk/support-and-resources/downloads/) reference compounds.*
### General Overview
- ⏱ **Avg time to generate 50 valid samples**: 11.46 sec (NVIDIA H100)
- ⚡️ **Generation speed**: 4.18 valid molecules/sec
- 💾 **GPU memory (per generation thread)**: Up to 14.0 GB (`float16` 39 atoms 100 samples)
- 📐 **Avg Shape Tanimoto Similarity**: 53.32%
- 🎯 **Max Shape Tanimoto Similarity**: 99.69%
- 🔬 **Avg Chemical Tanimoto Similarity (2-hop 2048-bit Morgan Fingerprints)**: 10.87%
- 🧬 **% Chemically novel (vs. training set)**: 99.84%
- ✔️ **% Valid molecules (post-standardization)**: 48%
- 🔁 **% Unique molecules in generated set**: 99.94%
- 📎 **Fréchet Fingerprint Distance (2-hop 2048-bit Morgan Fingerprints)**:
- To ChEMBL: 4.13
- To PubChem: 2.64
- To ZINC (250k): 4.95
### PoseBusters [[5]](https://doi.org/10.1039/D3SC04185A) validity check results:
**Overall stats**:
- PB-valid molecules: **91.33 %**
**Detailed Problems**:
- position: 0.01 %
- mol_pred_loaded: 0.0 %
- sanitization: 0.01 %
- inchi_convertible: 0.01 %
- all_atoms_connected: 0.0 %
- bond_lengths: 0.24 %
- bond_angles: 0.70 %
- internal_steric_clash: 2.31 %
- aromatic_ring_flatness: 3.34 %
- non-aromatic_ring_non-flatness: 0.27 %
### Synthesizability of the generated compounds
#### SA Score [[6]](https://doi.org/10.1186/1758-2946-1-8)
*1 (easy to make) - 10 (very difficult to make)*
**Average SA Score**: **3.18**
<img src="https://raw.githubusercontent.com/Membrizard/ml_conformer_generator/main/assets/benchmarks/sa_score_dist.png" width="300">
---
## Generation Examples




---
## 💾 Access & Licensing
The **Python package and inference code are available on GitHub** under Apache 2.0 License
> https://github.com/Membrizard/ml_conformer_generator
The trained model **Weights** are available at
> https://huggingface.co/Membrizard/ml_conformer_generator
And are licensed under CC BY-NC-ND 4.0
The usage of the trained weights for any profit-generating activity is restricted.
For commercial licensing and inference-as-a-service, contact:
[Denis Sapegin](https://github.com/Membrizard)
---
## ONNX Inference
For torch-free inference, an ONNX version of the model is provided.
Weights of the model in ONNX format are available at:
> https://huggingface.co/Membrizard/ml_conformer_generator
`egnn_chembl_15_39.onnx`
`adj_mat_seer_chembl_15_39.onnx`
```python
from mlconfgen import MLConformerGeneratorONNX
from rdkit import Chem
model = MLConformerGeneratorONNX(
egnn_onnx="./egnn_chembl_15_39.onnx",
adj_mat_seer_onnx="./adj_mat_seer_chembl_15_39.onnx",
diffusion_steps=100,
)
reference = Chem.MolFromMolFile('./assets/demo_files/yibfeu.mol')
samples = model.generate_conformers(reference_conformer=reference, n_samples=20, variance=2)
```
Install ONNX GPU runtime (if needed):
`pip install onnxruntime-gpu`
---
## Export to ONNX
An option to compile the model to ONNX is provided; it requires `onnxscript==0.2.2`:
`pip install onnxscript`
```python
from mlconfgen import MLConformerGenerator
from onnx_export import export_to_onnx
model = MLConformerGenerator()
export_to_onnx(model)
```
This compiles and saves the ONNX files to: `./`
## Streamlit App

### Running
- Move the trained PyTorch weights into `./streamlit_app`
`./streamlit_app/edm_moi_chembl_15_39.pt`
`./streamlit_app/adj_mat_seer_chembl_15_39.pt`
- Install the dependencies `pip install -r ./streamlit_app/requirements.txt`
- Bring the app UI up:
```commandline
cd ./streamlit_app
streamlit run app.py
```
### Streamlit App Development
1. To enable development mode for the 3D viewer (`stspeck`), set `_RELEASE = False` in `./streamlit_app/stspeck/__init__.py`.
2. Navigate to the 3D viewer frontend and start the development server:
```commandline
cd ./streamlit_app/stspeck/frontend
npm run start
```
This will launch the dev server at `http://localhost:3001`
3. In a separate terminal, run the Streamlit app from the `./streamlit_app` directory:
```commandline
cd ./streamlit_app
streamlit run app.py
```
4. To build the production version of the 3D viewer, run:
```commandline
cd ./streamlit_app/stspeck/frontend
npm run build
```
| text/markdown | null | Denis Sapegin <dasapegin@gmail.com>, Fedor Bakharev <fbakharev@gmail.com>, Azamat Gafurov <azamat.gafurov@gmail.com> | null | Denis Sapegin <dasapegin@gmail.com>, Azamat Gafurov <azamat.gafurov@gmail.com> | null | rdkit, chemistry, diffusion, conformers | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Chemistry"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.0.1",
"rdkit>=2023.9.5",
"numpy>=1.26.4",
"onnx>=1.17.0; extra == \"onnx\"",
"onnxruntime>=1.19.0; extra == \"onnx\"",
"onnxscript>=0.2.0; extra == \"onnx\""
] | [] | [] | [] | [
"Homepage, https://github.com/Membrizard/ml_conformer_generator",
"Issues, https://github.com/Membrizard/ml_conformer_generator/issues"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-20T18:24:20.106966 | mlconfgen-0.2.3.tar.gz | 52,328 | 8f/9b/72e1f1d3b953e3d4d357cdf3bb178d1a3b1a85c4ec8e38345aa2cee7591d/mlconfgen-0.2.3.tar.gz | source | sdist | null | false | 89523a32a3807c898ee2b52b796b0899 | fe6db58097d7292bb9c1176fcd193ef9361dacb92324632b2ab233085fab36f6 | 8f9b72e1f1d3b953e3d4d357cdf3bb178d1a3b1a85c4ec8e38345aa2cee7591d | CC-BY-NC-ND-4.0 AND Apache-2.0 | [
"LICENSE",
"LICENSE-MODEL"
] | 219 |
2.4 | lbt-honeybee | 0.9.250 | Installs a collection of all Honeybee core and extension libraries. |

[](https://github.com/ladybug-tools/lbt-honeybee/actions)
[](https://www.python.org/downloads/release/python-3120/) [](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/) [](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# lbt-honeybee
A collection of Honeybee core Python libraries and extensions.
Note that this Python package and its corresponding repository do not contain any source
code; they simply exist to provide a shortcut for installing all of the Honeybee
extension packages together.
## Installation
```console
pip install lbt-honeybee -U
```
## Included Honeybee Extensions
Running `pip install lbt-honeybee -U` will result in the installation of the following
honeybee Python packages:
* [honeybee-display](https://github.com/ladybug-tools/honeybee-display)
* [honeybee-radiance-postprocess](https://github.com/ladybug-tools/honeybee-radiance-postprocess)
* [honeybee-radiance](https://github.com/ladybug-tools/honeybee-radiance)
* [honeybee-radiance-folder](https://github.com/ladybug-tools/honeybee-radiance-folder)
* [honeybee-radiance-command](https://github.com/ladybug-tools/honeybee-radiance-command)
* [honeybee-energy](https://github.com/ladybug-tools/honeybee-energy)
* [honeybee-energy-standards](https://github.com/ladybug-tools/honeybee-energy-standards)
## Included Honeybee Core Libraries
Since the honeybee extensions use the honeybee-core libraries, the following
dependencies are also included:
* [honeybee-core](https://github.com/ladybug-tools/honeybee-core)
* [honeybee-schema](https://github.com/ladybug-tools/honeybee-schema)
## Also Included (All Ladybug Packages)
Since honeybee uses ladybug, the following are also included (installed through [lbt-ladybug](https://github.com/ladybug-tools/lbt-ladybug)):
* [ladybug-geometry](https://github.com/ladybug-tools/ladybug-geometry)
* [ladybug-core](https://github.com/ladybug-tools/ladybug)
* [ladybug-comfort](https://github.com/ladybug-tools/ladybug-comfort)
* [ladybug-display](https://github.com/ladybug-tools/ladybug-display)
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/lbt-honeybee | null | null | [] | [] | [] | [
"honeybee-radiance-postprocess==0.4.607",
"honeybee-energy[openstudio,standards]==1.116.128",
"honeybee-display==0.6.1",
"lbt-ladybug==0.27.171"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:24:11.366885 | lbt_honeybee-0.9.250.tar.gz | 17,499 | 52/ef/13aa221449a055b9c546dff563fa4853bc25853dd6bf14bb2c605bacdc9f/lbt_honeybee-0.9.250.tar.gz | source | sdist | null | false | 1532d4a1c7e151c52a80d32a9a517a4b | a4637886d35d9bf7fcbd7f9dbc1e7ce4a04554854513eaa5539a5565072fb99f | 52ef13aa221449a055b9c546dff563fa4853bc25853dd6bf14bb2c605bacdc9f | null | [
"LICENSE"
] | 306 |
2.4 | upnext | 0.0.4 | Background jobs and APIs for Python. Like Modal, but self-hostable. | # UpNext Package
Core SDK/runtime package for workers, tasks, APIs, and queue processing.
## Package Validation
From the workspace root:
```bash
./scripts/verify-upnext-package.sh
```
This runs the upnext package test suite with branch coverage and enforces the current minimum coverage gate.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.11.0",
"croniter>=6.0.0",
"fastapi>=0.115.0",
"pydantic-settings>=2.12.0",
"pydantic>=2.10.0",
"redis>=5.2.0",
"rich>=13.9.0",
"typer>=0.15.0",
"upnext-server",
"upnext-shared",
"uvicorn>=0.34.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:24:10.272372 | upnext-0.0.4.tar.gz | 120,499 | 3f/ca/34d9478b5f9d48967bc0762751b40520c6b765a91366c42362e05c41991b/upnext-0.0.4.tar.gz | source | sdist | null | false | ca4c376e85960e94ba878511a54812c5 | 53efc4ac48e22b16c6ce317e70534006b8858015dc9dcae381bd312e07e765bb | 3fca34d9478b5f9d48967bc0762751b40520c6b765a91366c42362e05c41991b | null | [] | 204 |
2.4 | ai-code-review-cli | 2.5.0 | AI-powered code review tool with local Git, remote MR/PR analysis, and CI integration (GitLab, GitHub or Forgejo) | # AI Code Review
AI-powered code review tool with **3 powerful use cases**:
- 🤖 **CI Integration** - Automated reviews in your CI/CD pipeline (GitLab or GitHub)
- 🔍 **Local Reviews** - Review your local changes before committing
- 🌐 **Remote Reviews** - Analyze existing MRs/PRs from the terminal
## 📑 Table of Contents
- [🚀 Primary Use Case: CI/CD Integration](#-primary-use-case-cicd-integration)
- [⚙️ Secondary Use Cases](#️-secondary-use-cases)
- [Local Usage (Container)](#local-usage-container)
- [Local Usage (CLI Tool)](#local-usage-cli-tool)
- [Remote Reviews](#remote-reviews)
- [🔧 Configuration](#-configuration)
- [⚡ Smart Skip Review](#-smart-skip-review)
- [For Developers](#for-developers)
- [🔧 Common Issues](#-common-issues)
- [📖 Documentation](#-documentation)
- [🤖 AI Tools Disclaimer](#-ai-tools-disclaimer)
- [📄 License](#-license)
- [👥 Author](#-author)
## 🚀 Primary Use Case: CI/CD Integration
This is the primary and recommended way to use the AI Code Review tool.
### GitLab CI
Add to `.gitlab-ci.yml`:
```yaml
ai-review:
stage: code-review
image: registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest
variables:
AI_API_KEY: $GEMINI_API_KEY # Set in CI/CD variables
script:
- ai-code-review --post
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
allow_failure: true
```
### GitHub Actions
Add to `.github/workflows/ai-review.yml`:
```yaml
name: AI Code Review
on:
pull_request:
types: [opened, synchronize]
jobs:
ai-review:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
continue-on-error: true
permissions:
contents: read
pull-requests: write
container:
image: registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest
steps:
- name: Run AI Review
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
AI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
run: ai-code-review --pr-number ${{ github.event.pull_request.number }} --post
```
### Forgejo Actions
Add to `.forgejo/workflows/ai-review.yml`:
```yaml
name: AI Code Review
on:
pull_request:
types: [opened, synchronize]
jobs:
ai-review:
runs-on: codeberg-tiny # adjust for non-codeberg instances
continue-on-error: true
permissions:
contents: read
pull-requests: write
container:
image: registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest
steps:
- name: Run AI Review
env:
AI_API_KEY: ${{ secrets.GEMINI_API_KEY }} # set in Forgejo Actions secrets
run: ai-code-review --pr-number ${{ github.event.pull_request.number }} --post
```
## ⚙️ Secondary Use Cases
### Local Usage (Container)
This is the recommended way to use the tool locally, as it doesn't require any installation on your system.
```bash
# Review local changes
podman run -it --rm -v .:/app -w /app \
registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest \
ai-code-review --local
# Review a remote MR
podman run -it --rm -e GITLAB_TOKEN=$GITLAB_TOKEN -e AI_API_KEY=$AI_API_KEY \
registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest \
ai-code-review group/project 123
```
> **Note**: You can use `docker` instead of `podman` and the command should work the same.
### Local Usage (CLI Tool)
This is a good option if you have Python installed and want to use the tool as a CLI command.
> **Note on package vs. command name:** The package is registered on PyPI as `ai-code-review-cli`, but for ease of use, the command to execute remains `ai-code-review`.
We recommend installing with `pipx`, which installs CLI tools into isolated environments and handles the package vs. command name difference automatically.
```bash
# Install pipx
pip install pipx
pipx ensurepath
# Install the package
pipx install ai-code-review-cli
# Run the command
ai-code-review --local
```
### Remote Reviews
You can also analyze existing MRs/PRs from your terminal.
```bash
# GitLab MR
ai-code-review group/project 123
# GitHub PR
ai-code-review --platform-provider github owner/repo 456
# Save to file
ai-code-review group/project 123 -o review.md
# Post the review to the MR/PR
ai-code-review group/project 123 --post
```
## 🔧 Configuration
### Required Setup
#### 1. Platform Token (Not needed for local reviews)
```bash
# For GitLab remote reviews
export GITLAB_TOKEN=glpat_xxxxxxxxxxxxxxxxxxxx
# For GitHub remote reviews
export GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
# For Forgejo remote reviews
export FORGEJO_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Local reviews don't need platform tokens! 🎉
```
#### 2. AI API Key
```bash
# Get key from: https://makersuite.google.com/app/apikey
export AI_API_KEY=your_gemini_api_key_here
```
### Configuration Methods (Priority Order)
The tool supports **4 configuration methods** with the following priority:
1. **🔴 CLI Arguments** (highest priority) - `--ai-provider anthropic --ai-model claude-sonnet-4-5`
2. **🟡 Environment Variables** - `export AI_PROVIDER=anthropic`
3. **🟢 Configuration File** - `.ai_review/config.yml`
4. **⚪ Field Defaults** (lowest priority) - Built-in defaults
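As a mental model, the layering behaves like a "first source that sets the value wins" lookup. The sketch below is illustrative Python, not the tool's actual internals:

```python
import os

def resolve_option(name, cli_args, file_config, default=None):
    """Return the value from the highest-priority source that sets it.

    Illustrative only -- the option names and lookup rules here are
    simplified, not the tool's real implementation.
    """
    if cli_args.get(name) is not None:        # 1. CLI arguments
        return cli_args[name]
    env_value = os.environ.get(name.upper())  # 2. Environment variables
    if env_value is not None:
        return env_value
    if file_config.get(name) is not None:     # 3. .ai_review/config.yml
        return file_config[name]
    return default                            # 4. Built-in default
```

For example, with no `--ai-provider` flag and no `AI_PROVIDER` environment variable set, a `config.yml` value of `anthropic` wins over the built-in default.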
### Configuration File
Create a YAML configuration file for persistent settings:
```bash
# Create from template
cp .ai_review/config.yml.example .ai_review/config.yml
# Edit your project settings
nano .ai_review/config.yml
```
**Key benefits:**
- ✅ **Project-specific settings** - Different configs per repository
- ✅ **Team sharing** - Commit to git for consistent team settings
- ✅ **Reduced typing** - Set common options once
- ✅ **Layered override** - CLI arguments still override everything
**File locations:**
- **Auto-detected**: `.ai_review/config.yml` (loaded automatically if exists)
- **Custom path**: `--config-file path/to/custom.yml`
- **Disable loading**: `--no-config-file` flag
### Environment Variables
For sensitive data and CI/CD environments:
```bash
# Copy template
cp env.example .env
# Edit and set your tokens
GITLAB_TOKEN=glpat_xxxxxxxxxxxxxxxxxxxx
AI_API_KEY=your_gemini_api_key_here
```
### Common Options
```bash
# Different AI providers
ai-code-review group/project 123 --ai-provider anthropic   # Claude
ai-code-review group/project 123 --ai-provider ollama      # Local Ollama
# Custom server URLs
ai-code-review group/project 123 --gitlab-url https://gitlab.company.com
# Output options
ai-code-review group/project 123 -o review.md   # Save to file
ai-code-review group/project 123 2>logs.txt     # Redirect logs (stderr) to a file
```
**For all configuration options, troubleshooting, and advanced usage → see [User Guide](docs/user-guide.md)**
### Team/Organization Context
For teams working on multiple projects, you can specify a **shared team context** that applies organization-wide:
```bash
# Remote team context (recommended - stored in central repo)
export TEAM_CONTEXT_FILE=https://gitlab.com/org/standards/-/raw/main/review.md
ai-code-review --local
# Or use CLI option
ai-code-review group/project 123 --team-context-file https://company.com/standards/review.md --post
# Local team context file
ai-code-review --team-context-file ../team-standards.md --local
```
**Use cases:**
- Organization-wide coding standards
- Security requirements and compliance rules
- Team conventions shared across projects
- Industry-specific guidelines (HIPAA, GDPR, etc.)
**Priority order:** Team context → Project context → Commit history
This allows maintaining org standards while individual projects add specific guidelines.
**See [User Guide - Team Context](docs/user-guide.md#-teamorganization-context) for complete documentation.**
### Intelligent Review Context (Two-Phase Synthesis)
The tool uses a **two-phase approach** to incorporate previous reviews and avoid repeating mistakes:
**Phase 1 - Synthesis (automatic):**
- Fetches **ALL** comments and reviews (including resolved ones)
- Uses a fast model (e.g., `gemini-3-flash-preview`) to synthesize key insights
- Identifies author corrections to previous AI reviews
- Generates concise summary (<500 words)
**Phase 2 - Main Review:**
- Uses synthesis as context to avoid repeating mistakes
- Focuses on code changes with awareness of discussions
**Benefits:**
- ✅ Prevents repeating invalidated suggestions
- ✅ Reduces token usage (synthesis is much shorter than raw comments)
- ✅ Lower costs (fast model for preprocessing)
- ✅ Better quality (focused insights vs raw data)
**Configuration:**
```yaml
# Enable/disable (default: enabled)
enable_review_context: true
enable_review_synthesis: true
# Custom synthesis model (optional)
synthesis_model: "gemini-3-flash-preview" # Default for Gemini
# synthesis_model: "claude-haiku-4-5" # For Anthropic
# synthesis_model: "gpt-4o-mini" # For OpenAI
```
**Skips automatically when:**
- No comments/reviews exist (first review)
- Feature is disabled
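Conceptually, the two phases chain like this (a minimal sketch with hypothetical function names, not the tool's real API):

```python
def review_with_context(diff, comments, fast_model, main_model):
    """Phase 1: cheap synthesis of prior discussion; Phase 2: main review."""
    context = ""
    if comments:  # synthesis is skipped automatically on a first review
        context = fast_model(
            "Summarize key insights and author corrections in under 500 words:\n"
            + "\n".join(comments)
        )
    # The main model sees a short synthesis instead of every raw comment,
    # which cuts token usage and avoids repeating invalidated suggestions.
    return main_model(
        f"Prior discussion summary:\n{context}\n\nReview this diff:\n{diff}"
    )
```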
## ⚡ Smart Skip Review
**AI Code Review automatically skips unnecessary reviews** to reduce noise and costs:
- 🔄 **Dependency updates** (`chore(deps): bump lodash 4.1.0 to 4.2.0`)
- 🤖 **Bot changes** (from `dependabot[bot]`, `renovate[bot]`)
- 📝 **Documentation-only** changes (if enabled)
- 🏷️ **Tagged PRs/MRs** (`[skip review]`, `[automated]`)
- 📝 **Draft/WIP PRs/MRs** (work in progress)
**Result:** Focus on meaningful changes, save API costs, faster CI/CD pipelines.
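The skip rules amount to a few cheap checks on PR/MR metadata before any AI call is made. A rough sketch (the patterns and defaults here are hypothetical; the real tool's rules are configurable and may differ):

```python
import re

# Hypothetical skip heuristics mirroring the bullets above.
SKIP_TITLE_PATTERNS = [
    r"^chore\(deps\)",    # dependency bumps
    r"\[skip review\]",   # explicit opt-out tag
    r"\[automated\]",
]
BOT_AUTHORS = {"dependabot[bot]", "renovate[bot]"}

def should_skip(title: str, author: str, is_draft: bool) -> bool:
    """Return True if the PR/MR should be skipped without calling the AI."""
    if is_draft or author in BOT_AUTHORS:
        return True
    return any(re.search(p, title, re.IGNORECASE) for p in SKIP_TITLE_PATTERNS)
```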
> **📖 Learn more:** Configuration, customization, and CI integration → [User Guide - Skip Review](docs/user-guide.md#smart-skip-review)
## For Developers
### Development Setup
```bash
# Install using uv (recommended)
uv sync --all-extras
# Or with pip
pip install -e .
```
> To install or learn more about `uv`, see the [uv documentation](https://docs.astral.sh/uv).
## 🔧 Common Issues
### gRPC Warnings with Google Gemini (only for `ai-generate-context`)
When using Google Gemini provider, you may see harmless gRPC connection warnings:
```
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1759934851.372144 Other threads are currently calling into gRPC, skipping fork() handlers
```
**These warnings are harmless and don't affect functionality.** To suppress them:
```bash
# Suppress warnings by redirecting stderr
ai-generate-context . 2>/dev/null
# Or use alternative provider (no warnings)
ai-generate-context . --provider ollama
```
## 📖 Documentation
- **[User Guide](docs/user-guide.md)** - Complete usage, configuration, and troubleshooting
- **[Context Generator Guide](docs/context-generator.md)** - AI context generation for better reviews (requires Git repository)
- **[Context7 Integration Guide](docs/context7-integration.md)** - Enhanced reviews with official library documentation (optional)
- **[Developer Guide](docs/developer-guide.md)** - Development setup, architecture, and contributing
## 🤖 AI Tools Disclaimer
<details>
<summary>This project was developed with the assistance of artificial intelligence tools</summary>
**Tools used:**
- **Cursor**: Code editor with AI capabilities
- **Claude-Sonnet-4.5**: Anthropic's latest language model (claude-sonnet-4-5)
**Division of responsibilities:**
**AI (Cursor + Claude-Sonnet-4.5)**:
- 🔧 Initial code prototyping
- 📝 Generation of examples and test cases
- 🐛 Assistance in debugging and error resolution
- 📚 Documentation and comments writing
- 💡 Technical implementation suggestions
**Human (Juanje Ojeda)**:
- 🎯 Specification of objectives and requirements
- 🔍 Critical review of code and documentation
- 💬 Iterative feedback and solution refinement
- ✅ Final validation of concepts and approaches
**Crotchety old human (Adam Williamson)**:
- 👴🏻 Adapted GitHub client and tests for Forgejo using 100% artisanal human brainpower
**Collaboration philosophy**: AI tools served as a highly capable technical assistant, while all design decisions, educational objectives, and project directions were defined and validated by the human.
</details>
## 📄 License
MIT License - see LICENSE file for details.
## 👥 Author
- **Author:** Juanje Ojeda
- **Email:** juanje@redhat.com
- **URL:** <https://gitlab.com/redhat/edge/ci-cd/ai-code-review>
| text/markdown | null | Juanje Ojeda <juanje@redhat.com> | null | null | null | ai, assistant, automation, code quality, code review, coding, developer tools, devops, git, github, github actions, gitlab, gitlab-ci, llm, static code analysis, workflows | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiofiles>=23.2.0",
"aiohttp>=3.9.0",
"anthropic<1.0.0,>=0.40.0",
"click>=8.1.0",
"defusedxml>=0.7.1",
"gitpython>=3.1.40",
"grpcio>=1.75.0",
"httpx>=0.28.1",
"jinja2<4.0.0,>=3.1.0",
"langchain-anthropic<2.0.0,>=1.0.0",
"langchain-google-genai<5.0.0,>=4.0.0",
"langchain-google-vertexai<4.0.0,>=3.0.0",
"langchain-ollama<2.0.0,>=1.0.0",
"langchain-openai<1.0.0,>=0.3.0",
"langchain<3.0.0,>=1.0.0",
"ollama>=0.2.0",
"protobuf>=6.0.0",
"pydantic-core<3.0.0,>=2.33.0",
"pydantic-settings>=2.10.1",
"pydantic<3.0.0,>=2.12.0",
"pyforgejo>=2.0.0",
"pygithub>=2.1.0",
"python-gitlab>=7.0.0",
"pyyaml>=6.0.0",
"structlog>=23.2.0",
"unidiff>=0.7.0",
"mypy>=1.19.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-httpx>=0.30.0; extra == \"dev\"",
"pytest-mock>=3.11.0; extra == \"dev\"",
"pytest>=9.0.0; extra == \"dev\"",
"ruff>=0.14.0; extra == \"dev\"",
"types-pyyaml; extra == \"dev\"",
"types-requests; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/redhat/edge/ci-cd/ai-code-review",
"Repository, https://gitlab.com/redhat/edge/ci-cd/ai-code-review",
"Issues, https://gitlab.com/redhat/edge/ci-cd/ai-code-review/-/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T18:23:56.529415 | ai_code_review_cli-2.5.0.tar.gz | 120,365 | af/4f/426d384528e60a2466054cc1fbf34e3b5a0a6c968a3788d8541497be22f4/ai_code_review_cli-2.5.0.tar.gz | source | sdist | null | false | 81cba38a87b22ef0b345a4481e7222a0 | 5a754c2d7dd34590ab4e35d9b259d94ce4f829c0e2f80200a9bb4083a1a9ade7 | af4f426d384528e60a2466054cc1fbf34e3b5a0a6c968a3788d8541497be22f4 | MIT | [
"LICENSE"
] | 200 |
============
ai_workflows
============
The ``ai_workflows`` package is a toolkit for supporting AI workflows (i.e., workflows that are pre-scripted and
repeatable, but utilize LLMs for various tasks).
The goal is to lower the bar for social scientists and others to
leverage LLMs in repeatable, reliable, and transparent ways. See
`this blog post <https://www.linkedin.com/pulse/repeatable-reliable-transparent-graduating-from-ai-workflows-robert-nb4ge/>`_
for a discussion,
`here for the full documentation <https://ai-workflows.readthedocs.io/>`_, and
`here for a custom GPT <https://chatgpt.com/g/g-67586f2d154081918b6ee65b868e859e-ai-workflows-coding-assistant>`_
that can help you use this package. If you learn best by example, see these:
#. `example-doc-conversion.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-doc-conversion.ipynb>`_:
loading different file formats and converting them into a Markdown syntax that LLMs can understand.
#. `example-doc-extraction.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-doc-extraction.ipynb>`_:
extracting structured data from unstructured documents (edit notebook to customize).
#. `example-doc-extraction-templated.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-doc-extraction-templated.ipynb>`_:
extracting structured data from unstructured documents (supply an Excel template to customize).
#. `example-qual-analysis-1.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-qual-analysis-1.ipynb>`_:
a more realistic workflow example that performs a simple qualitative analysis on a set of interview transcripts.
#. `example-surveyeval-lite.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-surveyeval-lite.ipynb>`_:
another workflow example that critically evaluates surveys question-by-question.
Tip: if you're not completely comfortable working in Python, use
`GitHub Copilot in VS Code <https://code.visualstudio.com/docs/copilot/setup>`_
or Gemini as a copilot in `Google Colab <https://colab.google/>`_. Also do make use of
`this custom GPT coding assistant <https://chatgpt.com/g/g-67586f2d154081918b6ee65b868e859e-ai-workflows-coding-assistant>`_.
Installation
------------
Install the latest version with pip::
pip install py-ai-workflows[docs]
If you don't need anything in the ``document_utilities`` module (relating to reading, parsing, and converting
documents), you can install a slimmed-down version with::
pip install py-ai-workflows
Additional document-parsing dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you installed the full version with document-processing capabilities (``py-ai-workflows[docs]``), you'll also need
to install several other dependencies, which you can do by running the
`initial-setup.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/initial-setup.ipynb>`_ Jupyter
notebook — or by installing them manually as follows.
First, download NLTK data for natural language text processing::
# download NLTK data
import nltk
nltk.download('punkt', force=True)
Then install ``libreoffice`` for converting Office documents to PDF.
On Linux::
# install LibreOffice for document processing
sudo apt-get install -y libreoffice
On macOS::
# install LibreOffice for document processing
brew install libreoffice
On Windows::
# install LibreOffice for document processing
choco install -y libreoffice
AWS Bedrock support
^^^^^^^^^^^^^^^^^^^
Finally, if you're accessing models via AWS Bedrock, the AWS CLI needs to be installed and configured for AWS access.
Jupyter notebooks with Google Colab support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can use `the colab-or-not package <https://github.com/higherbar-ai/colab-or-not>`_ to initialize a Jupyter notebook
for Google Colab or other environments::
%pip install colab-or-not py-ai-workflows
# download NLTK data
import nltk
nltk.download('punkt', force=True)
# set up our notebook environment (including LibreOffice)
from colab_or_not import NotebookBridge
notebook_env = NotebookBridge(
system_packages=["libreoffice"],
config_path="~/.hbai/ai-workflows.env",
config_template={
"openai_api_key": "",
"openai_model": "",
"azure_api_key": "",
"azure_api_base": "",
"azure_api_engine": "",
"azure_api_version": "",
"anthropic_api_key": "",
"anthropic_model": "",
"langsmith_api_key": "",
}
)
notebook_env.setup_environment()
Overview
---------
Here are the basics:
#. `The llm_utilities module <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html>`_ provides
a simple interface for interacting with a large language model (LLM). It
includes the ``LLMInterface`` class that can be used for executing individual workflow steps.
#. `The document_utilities module <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#>`_
provides an interface for extracting Markdown-formatted text and structured data
from various file formats. It includes functions for reading Word, PDF, Excel, CSV, HTML, and other file formats,
and then converting them into Markdown or structured data for use in LLM interactions.
#. The `example-doc-conversion.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-doc-conversion.ipynb>`_
notebook provides a simple example of how to use the ``document_utilities``
module to convert files to Markdown format, in either Google Colab or a local environment.
#. The `example-doc-extraction.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-doc-extraction.ipynb>`_
notebook provides an example of how to extract structured data from unstructured documents.
#. The `example-doc-extraction-templated.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-doc-extraction-templated.ipynb>`_
notebook provides an easier-to-customize version of the above: you supply an Excel template with your data extraction
needs.
#. The `example-qual-analysis-1.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-qual-analysis-1.ipynb>`_
notebook provides a more realistic workflow example that uses both the ``document_utilities`` and the
``llm_utilities`` modules to perform a simple qualitative analysis on a set of interview transcripts. It also works
in either Google Colab or a local environment.
#. The `example-surveyeval-lite.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-surveyeval-lite.ipynb>`_
notebook provides another workflow example that uses the ``document_utilities`` module to convert a survey
file to Markdown format and then to JSON format, and then uses the ``llm_utilities`` module to evaluate survey
questions using an LLM. It also works in either Google Colab or a local environment.
#. The `example-testing.ipynb <https://github.com/higherbar-ai/ai-workflows/blob/main/src/example-testing.ipynb>`_
notebook provides a basic set-up for testing Markdown conversion methods (LLM-assisted
vs. not-LLM-assisted). At the moment, this notebook only works in a local environment.
Example snippets
^^^^^^^^^^^^^^^^
Converting a file to Markdown format (without LLM assistance)::
from ai_workflows.document_utilities import DocumentInterface
doc_interface = DocumentInterface()
markdown = doc_interface.convert_to_markdown(file_path)
Converting a file to Markdown format (*with* LLM assistance)::
from ai_workflows.llm_utilities import LLMInterface
from ai_workflows.document_utilities import DocumentInterface
llm_interface = LLMInterface(openai_api_key=openai_api_key)
doc_interface = DocumentInterface(llm_interface=llm_interface)
markdown = doc_interface.convert_to_markdown(file_path)
Converting a file to JSON format::
from ai_workflows.llm_utilities import LLMInterface
from ai_workflows.document_utilities import DocumentInterface
llm_interface = LLMInterface(openai_api_key=openai_api_key)
doc_interface = DocumentInterface(llm_interface=llm_interface)
dict_list = doc_interface.convert_to_json(
file_path,
json_context = "The file contains a survey instrument with questions to be administered to rural Zimbabwean household heads by a trained enumerator.",
json_job = "Your job is to extract questions and response options from the survey instrument.",
json_output_spec = "Return correctly-formatted JSON with the following fields: ..."
)
Requesting a JSON response from an LLM::
from ai_workflows.llm_utilities import LLMInterface
llm_interface = LLMInterface(openai_api_key=openai_api_key)
json_output_spec = """Return correctly-formatted JSON with the following fields:
* `answer` (string): Your answer to the question."""
full_prompt = f"""Answer the following question:
{question}
{json_output_spec}
Your JSON response precisely following the instructions given:"""
parsed_response, raw_response, error = llm_interface.get_json_response(
prompt = full_prompt,
json_validation_desc = json_output_spec
)
Technical notes
---------------
Working with JSON
^^^^^^^^^^^^^^^^^
The ``ai_workflows`` package helps you to extract structured JSON content from documents and LLM responses. In all such
cases, you have to describe the JSON format that you want with enough clarity and specificity that the system can
reliably generate and validate responses (you typically supply this in a ``json_output_spec`` parameter). When describing
your desired JSON, always include the field names and types, as well as detailed descriptions. For example, if you
wanted a list of questions back::
json_output_spec = """Return correctly-formatted JSON with the following fields:
* `questions` (list of objects): A list of questions, each with the following fields:
* `question` (string): The question text
* `answer` (string): The supplied answer to the question"""
By default, the system will use this informal, human-readable description to automatically generate a formal JSON
schema, which it will use to validate LLM responses (and retry if needed).
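For the ``questions`` description above, the auto-generated schema might look roughly like the following (illustrative only; the schema ``ai_workflows`` actually produces may differ in detail)::

    # Illustrative formal schema for the informal spec above (not the
    # package's actual output).
    questions_schema = {
        "type": "object",
        "properties": {
            "questions": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "question": {"type": "string"},
                        "answer": {"type": "string"},
                    },
                    "required": ["question", "answer"],
                },
            },
        },
        "required": ["questions"],
    }

Responses that parse as JSON but don't match the schema (e.g., a missing ``answer`` field) trigger an automatic retry.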
LLMInterface
^^^^^^^^^^^^
`The LLMInterface class <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface>`_
provides a simple LLM interface with the following features:
#. Support for both OpenAI and Anthropic models, either directly or via Azure or AWS Bedrock
#. Support for both regular and JSON responses (using the LLM provider's "JSON mode" when possible)
#. Optional support for conversation history (tracking and automatic addition to each request)
#. Automatic validation of JSON responses against a formal JSON schema (with automatic retry to correct invalid JSON)
#. Automatic (LLM-based) generation of formal JSON schemas
#. Automatic timeouts for long-running requests
#. Automatic retry for failed requests (OpenAI refusals, timeouts, and other retry-able errors)
#. Support for LangSmith tracing
#. Synchronous and async versions of all functions (async versions begin with ``a_``)
Key methods:
#. `get_llm_response() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface.get_llm_response>`_:
Get a response from an LLM
#. `get_json_response() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface.get_json_response>`_:
Get a JSON response from an LLM
#. `user_message() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface.user_message>`_:
Get a properly-formatted user message to include in an LLM prompt
#. `user_message_with_image() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface.user_message_with_image>`_:
Get a properly-formatted user message to include in an LLM prompt, including an image
attachment
#. `generate_json_schema() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface.generate_json_schema>`_:
Generate a JSON schema from a human-readable description (called automatically when JSON output
description is supplied to ``get_json_response()``)
#. `count_tokens() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface.count_tokens>`_:
Count the number of tokens in a string
#. `enforce_max_tokens() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.LLMInterface.enforce_max_tokens>`_:
Truncate a string as necessary to fit within a maximum number of tokens
If you don't have an API key for an AI provider yet, `see here to learn what that is and how to get one <https://www.linkedin.com/pulse/those-genai-api-keys-christopher-robert-l5rie/>`_.
DocumentInterface
^^^^^^^^^^^^^^^^^
`The DocumentInterface class <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface>`_ provides a simple interface for converting files to Markdown or JSON format.
Key methods:
#. `convert_to_markdown() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.convert_to_markdown>`_:
Convert a file to Markdown format, using an LLM if available and deemed helpful (if you
specify ``use_text=True``, it will include raw text in any LLM prompt, which might improve results)
#. `convert_to_json() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.convert_to_json>`_:
Convert a file to JSON format using an LLM (could convert the document to JSON page-by-page or convert to Markdown
first and then JSON; specify ``markdown_first=True`` if you definitely don't want to go the page-by-page route)
#. `markdown_to_json() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.markdown_to_json>`_:
Convert a Markdown string to JSON format using an LLM
#. `markdown_to_text() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.markdown_to_text>`_:
Convert a Markdown string to plain text
#. `merge_dicts() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.merge_dicts>`_:
Merge a list of dictionaries into a single dictionary (handy for merging the results from ``x_to_json()`` methods)
Markdown conversion
"""""""""""""""""""
The `DocumentInterface.convert_to_markdown() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.convert_to_markdown>`_
method uses one of several methods to convert files to Markdown.
If an ``LLMInterface`` is available:
#. PDF files are converted to Markdown with LLM assistance: we split the PDF into pages (splitting double-page spreads
as needed), convert each page to an image, and then convert to Markdown using the help of a multimodal LLM. This is
the most accurate method, but it's also the most expensive, running at about $0.015 per page as of October 2024. In
the process, we try to keep narrative text that flows across pages together, drop page headers and footers, and
describe images, charts, and figures as if to a blind person. We also do our best to convert tables to proper
Markdown tables. If the ``use_text`` parameter is set to ``True``, we'll extract the raw text from each page (when
possible) and provide that to the LLM to assist it with the conversion.
#. We use LibreOffice to convert ``.docx``, ``.doc``, and ``.pptx`` files to PDF and then convert the PDF to Markdown
using the LLM assistance method described above.
#. For ``.xlsx`` files without charts or images, we use a custom parser to convert worksheets and table ranges to proper
Markdown tables. If there are charts or images, we use LibreOffice to convert to PDF and, if it's 10 pages or fewer,
we convert from the PDF to Markdown using the LLM assistance method described above. If it's more than 10 pages,
we fall back to dropping charts or images and converting without LLM assistance.
#. For other file types, we fall back to converting without LLM assistance, as described below.
Otherwise, we convert files to Markdown using one of the following methods (in order of preference):
#. For ``.xlsx`` files, we use a custom parser and Markdown formatter.
#. For other file types, we use IBM's ``Docling`` package for those file formats that it supports. This method drops
images, charts, and figures, but it does a nice job with tables and automatically uses OCR when needed.
#. If ``Docling`` fails or doesn't support a file format, we next try ``PyMuPDF4LLM``, which supports PDF files and a
range of other formats. This method also drops images, charts, and figures, and it's pretty bad at tables, but it
does a good job extracting text and a better job adding Markdown formatting than most other libraries.
#. Finally, if we haven't managed to convert the file using one of the higher-quality methods described above, we use
the ``Unstructured`` library to parse the file into elements and then add basic Markdown formatting. This method is
fast and cheap, but it's also the least accurate.
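The non-LLM path is essentially a quality-ordered fallback chain. A sketch of the control flow (the converter functions here are placeholders standing in for the libraries above, not actual ``ai_workflows`` internals)::

    def convert_with_fallbacks(file_path, converters):
        """Try each converter in order; return the first successful result."""
        last_error = None
        for converter in converters:
            try:
                return converter(file_path)
            except Exception as error:  # unsupported format or parse failure
                last_error = error
        raise RuntimeError(f"All converters failed: {last_error}")

    def _docling(path):
        raise ValueError("unsupported format")  # placeholder failure

    def _pymupdf4llm(path):
        return "# Extracted Markdown"           # placeholder success

    result = convert_with_fallbacks("report.pdf", [_docling, _pymupdf4llm])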
JSON conversion
"""""""""""""""
You can convert from Markdown to JSON using the
`DocumentInterface.markdown_to_json() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.markdown_to_json>`_
method, or you can convert files directly to JSON using the
`DocumentInterface.convert_to_json() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.document_utilities.html#ai_workflows.document_utilities.DocumentInterface.convert_to_json>`_
method. The latter method will most often convert to Markdown first and then to JSON, but it will convert straight to
JSON with a page-by-page approach if:
#. The ``markdown_first`` parameter is explicitly provided as ``False`` and converting the file to Markdown would
naturally use an LLM with a page-by-page approach (see the section above)
#. Or: the ``markdown_first`` parameter is left at the default (``None``), converting the file to Markdown would
naturally use an LLM with a page-by-page approach, and the file's Markdown content is too large to convert to JSON
in a single LLM call.
The advantage of converting to JSON directly can also be a disadvantage: parsing to JSON is done page-by-page. If
JSON elements don't span page boundaries, this can be great; however, if elements *do* span page boundaries,
it won't work well. For longer documents, Markdown-to-JSON conversion also happens in batches due to LLM token
limits, but efforts are made to split batches by natural boundaries (e.g., between sections). Thus, the
doc->Markdown->JSON path can work better if page boundaries aren't the best way to batch the conversion process.
Whether or not you convert to JSON via Markdown, JSON conversion always uses LLM assistance. The parameters you supply
are:
#. ``json_context``: a description of the file's content, to help the LLM understand what it's looking at
#. ``json_job``: a description of the task you want the LLM to perform (e.g., extracting survey questions)
#. ``json_output_spec``: a description of the output you expect from the LLM (see discussion further above)
#. ``json_output_schema``: optionally, a formal JSON schema to validate the LLM's output; by
default, this will be automatically generated based on your ``json_output_spec``, but you can specify your own
schema or explicitly pass None if you want to disable JSON validation (if JSON validation isn't disabled, the
``LLMInterface`` default is to retry twice if the LLM output doesn't parse or match the schema, but you can change
this behavior by specifying the ``json_retries`` parameter in the ``LLMInterface`` constructor)
The more detail you provide, the better the LLM will do at the JSON conversion. If you find that things aren't working
well, try including some few-shot examples in the ``json_output_spec`` parameter.
Note that the JSON conversion methods return a *list* of ``dict`` objects, one for each batch or LLM call. This is
because, for all but the shortest documents, conversion will take place in multiple batches. One ``dict``, following
your requested format, is returned for each batch. You can process these returned dictionaries separately, merge them
yourself, or use the handy ``DocumentInterface.merge_dicts()`` method to automatically merge them together into a single
dictionary.
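To make the merge concrete, here is a plain-Python illustration of the idea (not the package's actual ``merge_dicts()`` implementation, whose exact merge semantics may differ)::

    def merge_batches(dict_list):
        """Naive merge: concatenate list fields, keep the first scalar value."""
        merged = {}
        for batch in dict_list:
            for key, value in batch.items():
                if isinstance(value, list):
                    merged.setdefault(key, []).extend(value)
                elif key not in merged:
                    merged[key] = value
        return merged

    batches = [
        {"questions": [{"question": "Q1", "answer": "A1"}]},
        {"questions": [{"question": "Q2", "answer": "A2"}]},
    ]
    merged = merge_batches(batches)  # one dict containing both questions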
JSONSchemaCache
^^^^^^^^^^^^^^^
`The JSONSchemaCache class <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.JSONSchemaCache>`_
provides a simple in-memory cache for JSON schemas, so that they don't have to be
regenerated repeatedly. It's used internally by both the ``LLMInterface`` and ``DocumentInterface`` classes, to avoid
repeatedly generating the same schema for the same JSON output specification.
Key methods:
#. `get_json_schema() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.JSONSchemaCache.get_json_schema>`_:
Get a JSON schema from the cache (returns empty string if none found)
#. `put_json_schema() <https://ai-workflows.readthedocs.io/en/latest/ai_workflows.llm_utilities.html#ai_workflows.llm_utilities.JSONSchemaCache.put_json_schema>`_:
Put a JSON schema into the cache
Known issues
^^^^^^^^^^^^
See `bugs logged in GitHub issues <https://github.com/higherbar-ai/ai-workflows/labels/bug>`_
for the most up-to-date list of known issues.
ImportError: libGL.so.1: cannot open shared object file
"""""""""""""""""""""""""""""""""""""""""""""""""""""""
If you use this package in a headless environment (e.g., within a Docker container), you might encounter the following
error::
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
This is caused by a conflict between how the Docling and Unstructured packages depend on opencv. The fix is to install
all of your requirements like normal, and then uninstall and re-install opencv::
pip uninstall -y opencv-python opencv-python-headless && pip install opencv-python-headless
In a Dockerfile (after your ``pip install`` commands)::
RUN pip uninstall -y opencv-python opencv-python-headless && pip install opencv-python-headless
Roadmap
-------
This package is a work-in-progress. See
`the GitHub issues page <https://github.com/higherbar-ai/ai-workflows/issues>`_ for known
`bugs <https://github.com/higherbar-ai/ai-workflows/labels/bug>`_ and
`enhancements being considered <https://github.com/higherbar-ai/ai-workflows/labels/enhancement>`_.
Feel free to react to or comment on existing issues, or to open new issues.
Credits
-------
This toolkit was originally developed by `Higher Bar AI, PBC <https://higherbar.ai>`_, a public benefit corporation. To
contact us, email us at ``info@higherbar.ai``.
Many thanks also to `Laterite <https://www.laterite.com/>`_ for their contributions.
Full documentation
------------------
See the full reference documentation here:
https://ai-workflows.readthedocs.io/
Local development
-----------------
To develop locally:
#. ``git clone https://github.com/higherbar-ai/ai-workflows``
#. ``cd ai-workflows``
#. ``python -m venv .venv``
#. ``source .venv/bin/activate``
#. ``pip install -e .``
#. Execute the ``initial-setup.ipynb`` Jupyter notebook to install system dependencies.
For convenience, the repo includes ``.idea`` project files for PyCharm.
To rebuild the documentation:
#. Update version number in ``/docs/source/conf.py``
#. Update layout or options as needed in ``/docs/source/index.rst``
#. In a terminal window, from the project directory:
a. ``cd docs``
b. ``SPHINX_APIDOC_OPTIONS=members,show-inheritance sphinx-apidoc -o source ../src/ai_workflows --separate --force``
c. ``make clean html``
#. Use the ``assemble-gpt-materials.ipynb`` notebook to update the custom GPT coding assistant
To rebuild the distribution packages:
#. For the PyPI package:
a. Update version number (and any build options) in ``/setup.py``
b. Confirm credentials and settings in ``~/.pypirc``
c. Run ``python -m build``
d. Delete old builds from ``/dist``
e. In a terminal window:
i. ``twine upload dist/* --verbose``
#. For GitHub:
a. Commit everything to GitHub and merge to ``main`` branch
b. Add new release, linking to new tag like ``v#.#.#`` in main branch
#. For readthedocs.io:
a. Go to https://readthedocs.org/projects/ai-workflows/, log in, and click to rebuild from GitHub (only if it
doesn't automatically trigger)
| null | Christopher Robert | crobert@higherbar.ai | null | null | Apache 2.0 | null | [] | [] | https://github.com/higherbar-ai/ai-workflows | null | >=3.10 | [] | [] | [] | [
"openai<3.0,>=1.99.0",
"anthropic[bedrock]~=0.83.0",
"tiktoken",
"langsmith<1.0.0,>=0.3.43",
"tenacity",
"Pillow",
"jsonschema",
"colab-or-not~=0.4.0",
"json_repair==0.*",
"unstructured[all-docs]; extra == \"docs\"",
"PyMuPDF>=1.25.1; extra == \"docs\"",
"pymupdf4llm; extra == \"docs\"",
"openpyxl; extra == \"docs\"",
"nltk==3.9.1; extra == \"docs\"",
"beautifulsoup4>=4.12.0; extra == \"docs\"",
"markdown>=3.5.0; extra == \"docs\"",
"docling<3.0,>=2.8.1; extra == \"docs\""
] | [] | [] | [] | [
"Documentation, https://ai-workflows.readthedocs.io/"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T18:23:54.783184 | py_ai_workflows-0.32.0.tar.gz | 56,448 | 2c/d4/f07b3c20d8efc4c9e15d8561ff1b207a64af0a2d18357f0ead54dd613915/py_ai_workflows-0.32.0.tar.gz | source | sdist | null | false | f1d926e09662c8bc8aab3be1d356f9e0 | 484c352ccdd19a0e8bdcd1dced4846a33322ad336696aa994566126dc09f0513 | 2cd4f07b3c20d8efc4c9e15d8561ff1b207a64af0a2d18357f0ead54dd613915 | null | [
"LICENSE"
] | 227 |
2.4 | vivarium-cluster-tools | 2.3.0 | A set of tools for running simulation using vivarium on cluster. | Vivarium Cluster Tools
=======================
.. image:: https://badge.fury.io/py/vivarium-cluster-tools.svg
:target: https://badge.fury.io/py/vivarium-cluster-tools
.. image:: https://github.com/ihmeuw/vivarium_cluster_tools/actions/workflows/build.yml/badge.svg?branch=main
:target: https://github.com/ihmeuw/vivarium_cluster_tools
:alt: Latest Version
.. image:: https://readthedocs.org/projects/vivarium-cluster-tools/badge/?version=latest
:target: https://vivarium-cluster-tools.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
Vivarium cluster tools is a python package that makes running ``vivarium``
simulations at scale on a Slurm cluster easy.
Installation
------------
You can install this package with
.. code-block:: console
pip install vivarium-cluster-tools
In addition, this tool needs the redis client. This must be installed using conda.
.. code-block:: console
conda install redis
A simple example
----------------
If you have a ``vivarium`` model specification file defining a particular model,
you can use it alongside a **branches file** to launch a run of many
simulations at once, with variations in the input data, random seed, or
different parameter settings.
.. code-block:: console
psimulate run /path/to/model_specification.yaml /path/to/branches_file.yaml
The simplest branches file defines a count of input data draws and random seeds
to launch.
.. code-block:: yaml
input_draw_count: 25
random_seed_count: 10
This branches file defines a set of simulations for all combinations of 25
input draws and 10 random seeds and so would run, in total, 250 simulations.
You can also define a set of parameter variations to run your model over. Say
your original model specification looked something like
.. code-block:: yaml
plugins:
optional: ...
components:
vivarium_public_health:
population:
- BasePopulation()
- Mortality()
disease.models:
- SIS('lower_respiratory_infections')
my_lri_intervention:
components:
- GiveKidsVaccines()
configuration:
population:
population_size: 1000
age_start: 0
age_end: 5
lri_vaccine:
coverage: 0.2
efficacy: 0.8
This defines a simple model of lower respiratory infections and a vaccine
intervention. You could then write a branches file that varies not only over
input data draws and random seeds, but also over different levels of coverage
and efficacy for the vaccine. That file would look like
.. code-block:: yaml
input_draw_count: 25
random_seed_count: 10
branches:
lri_vaccine:
coverage: [0.0, 0.2, 0.4, 0.8, 1.0]
efficacy: [0.4, 0.6, 0.8]
The branches file would override your original ``lri_vaccine`` configuration
with each combination of coverage and efficacy in the branches file and launch
a simulation for each. Moreover, it would run each coverage-efficacy pair for
every combination of input draw and random seed, producing 25 * 10 * 5 * 3 =
3750 unique simulations.
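The expansion itself is easy to sketch outside of the tool (purely illustrative — ``psimulate`` performs this
expansion for you):

.. code-block:: python

    from itertools import product

    input_draws = range(25)
    random_seeds = range(10)
    coverage = [0.0, 0.2, 0.4, 0.8, 1.0]
    efficacy = [0.4, 0.6, 0.8]

    # One simulation per (draw, seed, coverage, efficacy) combination.
    simulations = list(product(input_draws, random_seeds, coverage, efficacy))
    assert len(simulations) == 25 * 10 * 5 * 3  # 3750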
To read about more of the available features and get a better understanding
of how to write your own branches files correctly,
`check out the docs! <https://vivarium-cluster-tools.readthedocs.io/en/latest/>`_
| null | The vivarium developers | vivarium.dev@gmail.com | null | null | null | null | [] | [] | https://github.com/ihmeuw/vivarium_cluster_tools | null | null | [] | [] | [] | [
"vivarium_dependencies[click,loguru,numpy_lt_2,pandas,pyarrow,pyyaml,requests,tables]",
"vivarium_build_utils<3.0.0,>=2.0.1",
"drmaa",
"dill",
"redis",
"rq",
"vivarium>=3.0.0",
"psutil",
"layered_config_tree",
"vivarium_dependencies[ipython,matplotlib,sphinx,sphinx-click]; extra == \"docs\"",
"vivarium_dependencies[interactive]; extra == \"interactive\"",
"vivarium_dependencies[pytest]; extra == \"test\"",
"vivarium_dependencies[ipython,matplotlib,sphinx,sphinx-click]; extra == \"dev\"",
"vivarium_dependencies[interactive]; extra == \"dev\"",
"vivarium_dependencies[pytest]; extra == \"dev\"",
"vivarium_dependencies[lint]; extra == \"dev\"",
"types-setuptools; extra == \"dev\"",
"types-psutil; extra == \"dev\"",
"pyarrow-stubs; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:23:26.030822 | vivarium_cluster_tools-2.3.0.tar.gz | 85,964 | 76/48/6f11b22d85c02805029682d91e37cedabdbc92b1e130b498dd04e068d05f/vivarium_cluster_tools-2.3.0.tar.gz | source | sdist | null | false | 4858be54aa23465cc95b787d92446b06 | 8c452bc1b41e67883d4933746f8b436c9cb3e76626bf303b40ce589573faa377 | 76486f11b22d85c02805029682d91e37cedabdbc92b1e130b498dd04e068d05f | null | [
"LICENSE"
] | 228 |
2.2 | endstone | 0.11.1 | Endstone offers a plugin API for Bedrock Dedicated Servers, supporting both Python and C++. | <div align="center">
<a href="https://github.com/EndstoneMC/endstone/releases">
<img src="https://static.wikia.nocookie.net/minecraft_gamepedia/images/4/43/End_Stone_JE3_BE2.png" alt="Logo" width="80" height="80">
</a>
<h3>Endstone</h3>
<p>
<b>High-performance Minecraft Bedrock server software</b><br>
Extensible with Python and C++ plugins
</p>
[](https://github.com/EndstoneMC/endstone/actions/workflows/ci.yml)
[-black)](https://feedback.minecraft.net/hc/en-us/sections/360001186971-Release-Changelogs)
[](https://pypi.org/project/endstone)
[](https://www.python.org/)
[](LICENSE)
[](https://discord.gg/xxgPuc2XN9)
</div>
## Why Endstone?
Bedrock's official add-on and script APIs let you add content, but offer little control over core gameplay. Custom
servers like PocketMine and Nukkit provide that control, but sacrifice vanilla features. Endstone gives you both:
cancellable events, packet control, and deep gameplay access with full vanilla compatibility. Think of it as Paper
for Bedrock. If you've ever wished Bedrock servers had the same modding power as Java Edition, this is it.
## Quick Start
Get your server running in seconds:
```shell
pip install endstone
endstone
```
Then create your first plugin:
```python
from endstone.plugin import Plugin
from endstone.event import event_handler, PlayerJoinEvent
class MyPlugin(Plugin):
api_version = "0.10"
def on_enable(self):
self.logger.info("MyPlugin enabled!")
self.register_events(self)
@event_handler
def on_player_join(self, event: PlayerJoinEvent):
event.player.send_message(f"Welcome, {event.player.name}!")
```
**Get started faster with our templates:**
[Python](https://github.com/EndstoneMC/python-plugin-template) | [C++](https://github.com/EndstoneMC/cpp-plugin-template)
## Features
- **Cross-platform** - Runs natively on both Windows and Linux without emulation, making deployment flexible and
straightforward.
- **Always up-to-date** - Designed to stay compatible with the latest Minecraft Bedrock releases so you're never left
behind.
- **Python & C++ plugins** - Write plugins in Python for rapid development, or use C++ when you need maximum
performance. The choice is yours.
- **Powerful API** - A comprehensive API with 60+ events covering players, blocks, actors, and more. Includes commands,
forms, scoreboards, inventories, and a full permission system.
- **Drop-in replacement** - Works with your existing Bedrock worlds and configurations. Just install and run.
- **Familiar to Bukkit developers** - If you've developed plugins for Java Edition servers, you'll feel right at home
with Endstone's API design.
## Installation
Requires Python 3.10+ on Windows 10+ or Linux (Ubuntu 22.04+, Debian 12+).
### Using pip (recommended)
```shell
pip install endstone
endstone
```
### Using Docker
```shell
docker pull endstone/endstone
docker run --rm -it -p 19132:19132/udp endstone/endstone
```
### Building from source
```shell
git clone https://github.com/EndstoneMC/endstone.git
cd endstone
pip install .
endstone
```
For detailed installation guides, system requirements, and configuration options, see
our [documentation](https://endstone.dev/).
## Documentation
Visit [endstone.dev](https://endstone.dev/) for comprehensive guides, API reference, and tutorials.
## Contributing
We welcome contributions from the community! Whether it's bug reports, feature requests, or code contributions:
- **Found a bug?** Open an [issue](https://github.com/EndstoneMC/endstone/issues)
- **Want to contribute code?** Submit a [pull request](https://github.com/EndstoneMC/endstone/pulls)
- **Want to support the project?** [Buy me a coffee](https://ko-fi.com/EndstoneMC)
## License
Endstone is licensed under the [Apache-2.0 license](LICENSE).
<div align="center">
**Sponsored by [Bisect Hosting](https://bisecthosting.com/endstone)**
[](https://bisecthosting.com/endstone)
</div>
| text/markdown | null | Vincent Wu <magicdroidx@gmail.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | plugin, python, minecraft, bedrock | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: Apache Software License",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp",
"asyncio",
"click",
"colorlog",
"importlib-metadata",
"importlib-resources",
"lazy-loader",
"numpy",
"packaging",
"pip",
"pkginfo",
"psutil",
"pyyaml",
"requests",
"rich",
"sentry-crashpad==0.7.17.1",
"tomlkit",
"typing-extensions",
"endstone-stubgen; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Changelog, https://endstone.readthedocs.io/en/latest/changelog",
"Discussions, https://github.com/orgs/EndstoneMC/discussions",
"Documentation, https://endstone.readthedocs.io",
"Homepage, https://github.com/EndstoneMC/endstone",
"Issues, https://github.com/EndstoneMC/endstone/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:22:55.129030 | endstone-0.11.1-cp314-cp314-win_amd64.whl | 35,165,961 | 58/c1/ef61bbc2fe0e2684cde291891d6b629c25cea84d27ed0d9184686e17e371/endstone-0.11.1-cp314-cp314-win_amd64.whl | cp314 | bdist_wheel | null | false | 7d87210e41a945c11f97622a1e3ecd41 | 65ddd5bfe003c870b40380f5b602fd2c5b6435aa7a64b0feb190638c141ad985 | 58c1ef61bbc2fe0e2684cde291891d6b629c25cea84d27ed0d9184686e17e371 | null | [] | 936 |
2.4 | serine | 0.0.1 | Serine Labs | # serine
Serine Labs
## Installation
```bash
pip install serine
```
Or using uv:
```bash
uv pip install serine
```
## Usage
```python
import serine
print(serine.info()) # Output: serinelabs
```
| text/markdown | Serine Labs | null | null | null | null | serine | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/serinelabs",
"Repository, https://github.com/serinelabs",
"Issues, https://github.com/serinelabs"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:22:18.455698 | serine-0.0.1.tar.gz | 1,259 | a9/00/22cbd9486d184e5c25f56939295042825e2758093cf855dd476ac76a0f63/serine-0.0.1.tar.gz | source | sdist | null | false | 4bae5d6a9349e7148d80e7dea516c016 | 530ab60eb8bd70f78275f04b2e26b1f1d3c396cb2656b79fdbe7afcaa9528231 | a90022cbd9486d184e5c25f56939295042825e2758093cf855dd476ac76a0f63 | null | [] | 204 |
2.4 | dragonfly-energy | 1.35.103 | Dragonfly extension for energy simulation. | 
[](https://github.com/ladybug-tools/dragonfly-energy/actions)
[](https://coveralls.io/github/ladybug-tools/dragonfly-energy)
[](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/) [](https://www.python.org/downloads/release/python-270/) [](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# dragonfly-energy
Dragonfly extension for energy simulation, including integration with the
[EnergyPlus](https://github.com/NREL/EnergyPlus) simulation engine, the
[OpenStudio](https://github.com/NREL/OpenStudio) SDK, and the
[URBANopt](https://docs.urbanopt.net/) SDK.
## Installation
`pip install dragonfly-energy`
To check if Dragonfly command line interface is installed correctly
use `dragonfly-energy --help`.
## QuickStart
```python
import dragonfly_energy
```
## [API Documentation](http://ladybug-tools.github.io/dragonfly-energy/docs)
## Usage
Since the building geometry in dragonfly is fundamentally 2D, creating a model of
a building and assigning energy model properties can be done with a few lines of
code. Here is an example:
```python
from ladybug_geometry.geometry3d.pointvector import Point3D
from ladybug_geometry.geometry3d.face import Face3D
from dragonfly.model import Model
from dragonfly.building import Building
from dragonfly.story import Story
from dragonfly.room2d import Room2D
from dragonfly.windowparameter import SimpleWindowRatio
from honeybee_energy.lib.programtypes import office_program
# create the Building object
pts_1 = (Point3D(0, 0, 3), Point3D(0, 10, 3), Point3D(10, 10, 3), Point3D(10, 0, 3))
pts_2 = (Point3D(10, 0, 3), Point3D(10, 10, 3), Point3D(20, 10, 3), Point3D(20, 0, 3))
pts_3 = (Point3D(0, 10, 3), Point3D(0, 20, 3), Point3D(10, 20, 3), Point3D(10, 10, 3))
pts_4 = (Point3D(10, 10, 3), Point3D(10, 20, 3), Point3D(20, 20, 3), Point3D(20, 10, 3))
room2d_1 = Room2D('Office1', Face3D(pts_1), 3)
room2d_2 = Room2D('Office2', Face3D(pts_2), 3)
room2d_3 = Room2D('Office3', Face3D(pts_3), 3)
room2d_4 = Room2D('Office4', Face3D(pts_4), 3)
story = Story('OfficeFloor', [room2d_1, room2d_2, room2d_3, room2d_4])
story.solve_room_2d_adjacency(0.01)
story.set_outdoor_window_parameters(SimpleWindowRatio(0.4))
story.multiplier = 4
building = Building('OfficeBuilding', [story])
# assign energy properties
for room in story.room_2ds:
room.properties.energy.program_type = office_program
room.properties.energy.add_default_ideal_air()
# create the Model object
model = Model('NewDevelopment', [building])
```
Once a Dragonfly Model has been created, it can be converted to a honeybee Model,
which can then be converted to IDF format like so:
```python
# create the dragonfly Model object
model = Model('NewDevelopment', [building])
# serialize the dragonfly Model to Honeybee Models and convert them to IDF
hb_models = model.to_honeybee('Building', use_multiplier=False, tolerance=0.01)
idfs = [hb_model.to.idf(hb_model) for hb_model in hb_models]
```
The dragonfly model can also be serialized to a geoJSON to be simulated with URBANopt.
```python
from ladybug.location import Location
# create the dragonfly Model object
model = Model('NewDevelopment', [building])
# create a location for the geoJSON and write it to a folder
location = Location('Boston', 'MA', 'USA', 42.366151, -71.019357)
sim_folder = './tests/urbanopt_model'
geojson, hb_model_jsons, hb_models = model.to.urbanopt(model, location, folder=sim_folder)
```
## Local Development
1. Clone this repo locally
```
git clone git@github.com:ladybug-tools/dragonfly-energy
# or
git clone https://github.com/ladybug-tools/dragonfly-energy
```
2. Install dependencies:
```
cd dragonfly-energy
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```
python -m pytest tests/
```
4. Generate Documentation:
```
sphinx-apidoc -f -e -d 4 -o ./docs ./dragonfly_energy
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/dragonfly-energy | null | null | [] | [] | [] | [
"dragonfly-core==1.75.11",
"honeybee-energy==1.116.128",
"honeybee-energy-standards==2.2.6; extra == \"standards\"",
"honeybee-openstudio>=0.3.14; extra == \"openstudio\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:22:00.812491 | dragonfly_energy-1.35.103.tar.gz | 142,782 | 04/f1/6df39235b0aad3dc294bacd664b838d0b1b1dc1a3751d41287350d532788/dragonfly_energy-1.35.103.tar.gz | source | sdist | null | false | 037538b30c2182c63513afdb284880c4 | 235efc7997d14496e1967bda22b17a0d85f4c24c49d8ed14c3700feac243b7cb | 04f16df39235b0aad3dc294bacd664b838d0b1b1dc1a3751d41287350d532788 | null | [
"LICENSE"
] | 272 |
2.4 | py-stringmatching | 0.4.7 | Python library for string matching. | py_stringmatching
=================
This project seeks to build a Python software package that consists of a comprehensive and scalable set of string tokenizers (such as alphabetical tokenizers, whitespace tokenizers) and string similarity measures (such as edit distance, Jaccard, TF/IDF). The package is free, open-source, and BSD-licensed.
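As a plain-Python illustration of what these similarity measures compute (this sketch mirrors what the package's whitespace tokenizer combined with its Jaccard measure would produce; the `jaccard` helper below is illustrative, not the package's API):

```python
def jaccard(tokens_a, tokens_b):
    """Jaccard similarity between two token sets: |A & B| / |A | B|."""
    a, b = set(tokens_a), set(tokens_b)
    if not a and not b:
        return 1.0  # two empty strings are conventionally identical
    return len(a & b) / len(a | b)

# Whitespace tokenization, as a whitespace tokenizer would produce
s1 = "data integration with magellan".split()
s2 = "data matching with magellan".split()
print(jaccard(s1, s2))  # 3 shared tokens of 5 total -> 0.6
```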
Important links
===============
* Project Homepage: https://sites.google.com/site/anhaidgroup/projects/magellan/py_stringmatching
* Code repository: https://github.com/anhaidgroup/py_stringmatching
* User Manual: https://anhaidgroup.github.io/py_stringmatching/v0.4.2/index.html
* Tutorial: https://anhaidgroup.github.io/py_stringmatching/v0.4.2/Tutorial.html
* How to Contribute: https://anhaidgroup.github.io/py_stringmatching/v0.4.2/Contributing.html
* Developer Manual: http://pages.cs.wisc.edu/~anhai/py_stringmatching/v0.2.0/dev-manual-v0.2.0.pdf
* Issue Tracker: https://github.com/anhaidgroup/py_stringmatching/issues
* Mailing List: https://groups.google.com/forum/#!forum/py_stringmatching
Dependencies
============
py_stringmatching has been tested on each Python version between 3.8 and 3.12, inclusive.
Note: Python 3.7 support was dropped as it reached End of Life (EOL) on June 27, 2023.
Building the package requires NumPy (>= 1.7.0, < 2.0)
and a C or C++ compiler. For the development version, you will also need Cython.
Platforms
=========
py_stringmatching has been tested on Linux, OS X, and Windows. At this time, we have only tested on the x86 architecture.
| text/x-rst | null | UW Magellan Team <uwmagellan@gmail.com> | null | null | BSD | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"License :: OSI Approved :: BSD License",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Topic :: Utilities",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy<2.0,>=1.7.0"
] | [] | [] | [] | [
"Homepage, https://sites.google.com/site/anhaidgroup/projects/magellan/py_stringmatching"
] | twine/6.2.0 CPython/3.12.0 | 2026-02-20T18:21:20.883461 | py_stringmatching-0.4.7-cp312-cp312-win_amd64.whl | 1,347,962 | c8/39/96f7ba4fa6f1c47d531979b1447775f8e0943676a0fa9642a88b610acd9a/py_stringmatching-0.4.7-cp312-cp312-win_amd64.whl | cp312 | bdist_wheel | null | false | ea8cea36cefd2c2334b38425bf08e09b | 28d16eea595b6eb5af92d2ad29e18549f918732d40a81c4c138b929c3fd2bc73 | c83996f7ba4fa6f1c47d531979b1447775f8e0943676a0fa9642a88b610acd9a | null | [
"LICENSE"
] | 3,127 |
2.4 | auto-round-nightly | 0.12.0.dev20260220 | Repository of AutoRound: Advanced Weight-Only Quantization Algorithm for LLMs | <div align="center">
<p align="center">
<img src="docs/imgs/AutoRound.png" alt="AutoRound Overview" width="20%">
</p>
<h3> Advanced Quantization Algorithm for LLMs</h3>
[](https://github.com/intel/auto-round)
[](https://github.com/intel/auto-round)
[](https://github.com/intel/auto-round/blob/main/LICENSE)
<a href="https://huggingface.co/Intel">
<img alt="Model Checkpoints" src="https://img.shields.io/badge/%F0%9F%A4%97%20HF-Models-F57C00">
</a>
English | [简体中文](README_CN.md)
[User Guide](./docs/step_by_step.md) | [用户指南](./docs/step_by_step_CN.md)
---
<div align="left">
## 🚀 What is AutoRound?
AutoRound is an advanced quantization toolkit designed for Large Language Models (LLMs) and Vision-Language Models (VLMs).
It achieves high accuracy at ultra-low bit widths (2–4 bits) with minimal tuning by leveraging **sign-gradient descent** and providing broad hardware compatibility.
See our papers [SignRoundV1](https://arxiv.org/pdf/2309.05516) and [SignRoundV2](http://arxiv.org/abs/2512.04746) for more details. For usage instructions, please refer to the [User Guide](./docs/step_by_step.md).
<p align="center">
<img src="docs/imgs/autoround_overview.png" alt="AutoRound Overview" width="80%">
</p>
## 🆕 What's New
* [2025/12] The **SignRoundV2** paper is available. Turn on `enable_alg_ext` and use the **AutoScheme** API for mixed-precision quantization to reproduce the results: [*Paper*](http://arxiv.org/abs/2512.04746), [*Notes for evaluating LLaMA models*](./docs/alg_202508.md).
* [2025/11] AutoRound has landed in **LLM-Compressor**: [*Usage*](https://github.com/vllm-project/llm-compressor/tree/main/examples/autoround/README.md), [*vLLM blog*](https://blog.vllm.ai/2025/12/09/intel-autoround-llmc.html), [*RedHat blog*](https://developers.redhat.com/articles/2025/12/09/advancing-low-bit-quantization-llms-autoround-x-llm-compressor), [*X post*](https://x.com/vllm_project/status/1998710451312771532), [*Intel blog*](https://community.intel.com/t5/Blogs/Products-and-Solutions/HPC/Advancing-Low-Bit-Quantization-for-LLMs-AutoRound-x-LLM/post/1729336), [*Linkedin*](https://www.linkedin.com/posts/vllm-project_advancing-lowbit-quantization-for-llms-activity-7404478053768441856-ru8f/?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAapNW8BLnAdCAr57GOwSCJXjf76ZvOEOAg), [*微信*](https://mp.weixin.qq.com/s/l5WA-1_4ipffQN6GOH2Iqg), [*知乎*](https://zhuanlan.zhihu.com/p/1982167638315664412).
* [2025/11] An **enhanced GGUF** quantization algorithm is available via `--enable_alg_ext`: [*Accuracy*](./docs/gguf_alg_ext_acc.md).
* [2025/10] AutoRound has been integrated into **SGLang**: [*Usage*](https://docs.sglang.io/advanced_features/quantization.html#using-auto-round), [*LMSYS Blog*](https://lmsys.org/blog/2025-11-13-AutoRound/), [*X post*](https://x.com/lmsysorg/status/1991977019220148650?s=20), [*Intel blog*](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/AutoRound-Meets-SGLang-Enabling-Quantized-Model-Inference-with/post/1727196), [*Linkedin*](https://www.linkedin.com/feed/update/urn:li:activity:7397742859354857472).
* [2025/10] A **mixed precision** algorithm is available to generate schemes in minutes: [*Usage*](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#autoscheme), [*Accuracy*](./docs/auto_scheme_acc.md).
* [2025/09] **MXFP4** and **NVFP4** dtypes are available: [*Accuracy*](./docs/mxnv_acc.md).
* [2025/08] An **improved INT2** algorithm is available via `--enable_alg_ext`: [*Accuracy*](./docs/alg_202508.md).
* [2025/07] **GGUF** format is supported: [*Usage*](./docs/step_by_step.md#gguf-format).
* [2025/05] AutoRound has been integrated into **vLLM**: [*Usage*](https://docs.vllm.ai/en/latest/features/quantization/auto_round/), [*Medium blog*](https://medium.com/@NeuralCompressor/accelerating-vllm-and-sglang-deployment-using-autoround-45fdc0b2683e), [*小红书*](https://www.xiaohongshu.com/explore/69396bc6000000000d03e473?note_flow_source=wechat&xsec_token=CB6G3F_yM99q8XfusvyRlJqm8Db4Es2k0kYIHdIUiSQ9g=).
* [2025/05] AutoRound has been integrated into **Transformers**: [*Blog*](https://huggingface.co/blog/autoround).
* [2025/03] The INT2-mixed **DeepSeek-R1** model (~200GB) retains 97.9% accuracy: [*Model*](https://huggingface.co/OPEA/DeepSeek-R1-int2-mixed-sym-inc).
## ✨ Key Features
✅ **Superior Accuracy**
Delivers strong performance even at 2–3 bits [example models](https://huggingface.co/collections/OPEA/2-3-bits-67a5f0bc6b49d73c01b4753b), with leading results at 4 bits [benchmark](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard).
✅ **Ecosystem Integration**
Seamlessly works with **Transformers, vLLM, SGLang** and more.
✅ **Multiple Export Formats**
Supports **AutoRound, AutoAWQ, AutoGPTQ, and GGUF** for maximum compatibility. Details are shown in [export formats](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#supported-export-formats).
✅ **Fast Mixed Bits/Dtypes Scheme Generation**
Automatically generates a scheme in minutes, with about 1.1x-1.5x the model’s BF16 RAM size as overhead. Accuracy [results](./docs/auto_scheme_acc.md) and [user guide](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#autoscheme).
✅ **Optimized Round-to-Nearest Mode**
Use `--iters 0` for fast quantization with some accuracy drop at 4 bits. Details are shown in [opt_rtn mode](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#opt-rtn-mode).
✅ **Affordable Quantization Cost**
Quantizes 7B models in about 10 minutes on a single GPU. Details are shown in [quantization costs](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#quantization-costs).
✅ **10+ VLMs Support**
Out-of-the-box quantization for 10+ vision-language models: [example models](https://huggingface.co/collections/OPEA/vlms-autoround-675bc712fdd6a55ebaf11bfa), [support matrix](https://github.com/intel/auto-round/tree/main/auto_round/mllm#support-matrix).
✅ **Multiple Recipes**
Choose from `auto-round-best`, `auto-round`, and `auto-round-light` to suit your needs. Details are shown in [quantization recipes](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#recipe-recommendation).
✅ **Advanced Utilities**
Includes [multi-GPU quantization](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#devicemulti-gpu-setting-in-quantization), [multiple calibration datasets](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#default-dataset), and support for [10+ runtime backends](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#specify-inference-backend).
✅ **Beyond Weight-Only Quantization**
We are actively expanding support for additional datatypes such as **MXFP**, NVFP, W8A8, and more.
## Installation
### Install from pypi
```bash
# CPU(Xeon)/GPU(CUDA)
pip install auto-round
# HPU(Gaudi)
# install inside the hpu docker container, e.g. vault.habana.ai/gaudi-docker/1.23.0/ubuntu24.04/habanalabs/pytorch-installer-2.9.0:latest
pip install auto-round-hpu
# XPU(Intel GPU)
pip install torch --index-url https://download.pytorch.org/whl/xpu
pip install auto-round
```
<details>
<summary>Build from Source</summary>
```bash
# CPU(Xeon)/GPU(CUDA)
pip install .
# HPU(Gaudi)
python setup.py install hpu
# XPU(Intel GPU)
pip install torch --index-url https://download.pytorch.org/whl/xpu
pip install .
```
</details>
## Model Quantization (CPU/Intel GPU/Gaudi/CUDA)
> If you encounter issues during quantization, try pure RTN mode with `iters=0, disable_opt_rtn=True`. Additionally, `group_size=32` or mixed bits is recommended for better results.
### CLI Usage
The full list of supported arguments is provided by calling `auto-round -h` on the terminal.
> **ModelScope is supported for model downloads; simply set `AR_USE_MODELSCOPE=1`.**
```bash
auto-round \
--model Qwen/Qwen3-0.6B \
--scheme "W4A16" \
--format "auto_round" \
--output_dir ./tmp_autoround
```
We offer two additional recipes, `auto-round-best` and `auto-round-light`, designed for optimal accuracy and improved speed, respectively. Details are as follows.
<details>
<summary>Other Recipes</summary>
```bash
# Best accuracy, 3X slower, low_gpu_mem_usage could save ~20G but ~30% slower
auto-round-best \
--model Qwen/Qwen3-0.6B \
--scheme "W4A16" \
--low_gpu_mem_usage
```
```bash
# 2-3X speedup, slight accuracy drop at W4 and larger accuracy drop at W2
auto-round-light \
--model Qwen/Qwen3-0.6B \
--scheme "W4A16"
```
<!-- ```bash
auto-round-fast \
# Fast and low memory, 2-3X speedup, slight accuracy drop at W4G128
--model Qwen/Qwen3-0.6B \
--bits 4 \
--group_size 128 \
``` -->
</details>
In conclusion, we recommend using **auto-round for W4A16 and auto-round-best with `enable_alg_ext` for W2A16**. However, you may adjust the
configuration to suit your specific requirements and available resources.
### API Usage
```python
from auto_round import AutoRound
# Load a model (supports FP8/BF16/FP16/FP32)
model_name_or_path = "Qwen/Qwen3-0.6B"
# Available schemes: "W2A16", "W3A16", "W4A16", "W8A16", "NVFP4", "MXFP4" (no real kernels), "GGUF:Q4_K_M", etc.
ar = AutoRound(model_name_or_path, scheme="W4A16")
# Highest accuracy (4–5× slower).
# `low_gpu_mem_usage=True` saves ~20GB VRAM but runs ~30% slower.
# ar = AutoRound(model_name_or_path, nsamples=512, iters=1000, low_gpu_mem_usage=True)
# Faster quantization (2–3× speedup) with slight accuracy drop at W4G128.
# ar = AutoRound(model_name_or_path, nsamples=128, iters=50, lr=5e-3)
# Supported formats: "auto_round" (default), "auto_gptq", "auto_awq", "llm_compressor", "gguf:q4_k_m", etc.
ar.quantize_and_save(output_dir="./qmodel", format="auto_round")
```
<details>
<summary>Important Hyperparameters</summary>
##### Quantization Scheme & Configuration
- **`scheme` (str|dict|AutoScheme)**: The predefined quantization keys, e.g. `W4A16`, `MXFP4`, `NVFP4`, `GGUF:Q4_K_M`. For MXFP4/NVFP4, we recommend exporting to LLM-Compressor format.
- **`bits` (int)**: Number of bits for quantization (default is `None`). If not None, it will override the scheme setting.
- **`group_size` (int)**: Size of the quantization group (default is `None`). If not None, it will override the scheme setting.
- **`sym` (bool)**: Whether to use symmetric quantization (default is `None`). If not None, it will override the scheme setting.
- **`layer_config` (dict)**: Configuration for a layer-wise scheme (default is `None`), mainly for customized mixed schemes.
##### Algorithm Settings
- **`enable_alg_ext` (bool)**: [Experimental Feature] Only for `iters>0`. Enable algorithm variants for specific schemes (e.g., MXFP4/W2A16) that could bring notable improvements. Default is `False`.
- **`disable_opt_rtn` (bool|None)**: Use pure RTN mode for specific schemes (e.g., GGUF and WOQ). Default is `None`; if None, it resolves to `False` in most cases to improve accuracy, but may resolve to `True` for schemes with known issues.
##### Tuning Process Parameters
- **`iters` (int)**: Number of tuning iterations (default is `200`). Common values: 0 (RTN mode), 50 (with lr=5e-3 recommended), 1000. Higher values increase accuracy but slow down tuning.
- **`lr` (float)**: The learning rate for rounding value (default is `None`). When None, it will be set to `1.0/iters` automatically.
- **`batch_size` (int)**: Batch size for training (default is `8`). 4 is also commonly used.
- **`enable_deterministic_algorithms` (bool)**: Whether to enable deterministic algorithms for reproducibility (default is `False`).
##### Calibration Dataset
- **`dataset` (str|list|tuple|torch.utils.data.DataLoader)**: The dataset for tuning (default is `"NeelNanda/pile-10k"`). Supports local JSON files and dataset combinations, e.g. `"./tmp.json,NeelNanda/pile-10k:train,mbpp:train+validation+test"`.
- **`nsamples` (int)**: Number of samples for tuning (default is `128`).
- **`seqlen` (int)**: Data length of the sequence for tuning (default is `2048`).
##### Device/Speed Configuration
- **`enable_torch_compile` (bool)**: If no exception is raised, we typically recommend setting it to `True` for faster quantization with lower resource usage.
- **`low_gpu_mem_usage` (bool)**: Whether to offload intermediate features to CPU at the cost of ~20% more tuning time (default is `False`).
- **`low_cpu_mem_usage` (bool)**: [Experimental Feature] Whether to enable saving immediately to reduce RAM usage (default is `True`).
- **`device_map` (str|dict|int)**: The device to be used for tuning, e.g., `auto`, `cpu`, `cuda`, `0,1,2` (default is `0`). When using `auto`, it will try to use all available GPUs.
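The override rule described above (an explicit `bits`, `group_size`, or `sym` taking precedence over the values implied by `scheme`) can be sketched in plain Python. The scheme table and `resolve` helper below are illustrative, not AutoRound internals:

```python
# Illustrative defaults for one scheme; the real library covers many more.
SCHEMES = {"W4A16": {"bits": 4, "group_size": 128, "sym": True}}

def resolve(scheme="W4A16", bits=None, group_size=None, sym=None):
    """Start from the scheme's defaults, then apply any non-None overrides."""
    cfg = dict(SCHEMES[scheme])
    for key, val in {"bits": bits, "group_size": group_size, "sym": sym}.items():
        if val is not None:  # None keeps the scheme default
            cfg[key] = val
    return cfg

print(resolve(bits=2))  # {'bits': 2, 'group_size': 128, 'sym': True}
```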
</details>
### Supported Schemes
<details>
> Gray indicates the absence of a kernel or the presence of only an inefficient/reference kernel. BF16 is mainly for AutoScheme.
| Format | Supported Schemes |
|:-------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **auto_round** | W4A16(Recommended), W2A16, W3A16, W8A16, W2A16G64, W2A16G32, `MXFP4`, `MXFP8`, `MXFP4_RCEIL`, `MXFP8_RCEIL`, `NVFP4`, `FPW8A16`, `FP8_STATIC`, `BF16` |
| **auto_awq** | W4A16(Recommended), BF16 |
| **auto_gptq**      | W4A16(Recommended), W2A16, W3A16, W8A16, W2A16G64, W2A16G32, BF16 |
| **llm_compressor** | NVFP4(Recommended), `MXFP4`, `MXFP8`, `FPW8A16`, `FP8_STATIC` |
| **gguf**           | GGUF:Q4_K_M(Recommended), GGUF:Q2_K_S, GGUF:Q3_K_S, GGUF:Q3_K_M, GGUF:Q3_K_L, GGUF:Q4_K_S, GGUF:Q5_K_S, GGUF:Q5_K_M, GGUF:Q6_K, GGUF:Q4_0, GGUF:Q4_1, GGUF:Q5_0, GGUF:Q5_1, GGUF:Q8_0 |
| **fake** | `all schemes (only for research)` |
</details>
### Adaptive Schemes (Experimental Feature)
AutoScheme provides an automatic algorithm to generate adaptive mixed bits/data-type quantization recipes.
Please refer to the [user guide](https://github.com/intel/auto-round/blob/main/docs/step_by_step.md#autoscheme) for more details on AutoScheme.
~~~python
from auto_round import AutoRound, AutoScheme
model_name = "Qwen/Qwen3-8B"
avg_bits = 3.0
scheme = AutoScheme(avg_bits=avg_bits, options=("GGUF:Q2_K_S", "GGUF:Q4_K_S"), ignore_scale_zp_bits=True)
layer_config = {"lm_head": "GGUF:Q6_K"}
# Change iters to 200 for non-GGUF schemes
ar = AutoRound(model=model_name, scheme=scheme, layer_config=layer_config, iters=0)
ar.quantize_and_save()
~~~
<details>
<summary>Important Hyperparameters of AutoScheme</summary>
##### AutoScheme Hyperparameters
- **`avg_bits` (float)**: Target average bit-width for the entire model. Only quantized layers are included in the average bit calculation.
- **`options` (str | list[str] | list[QuantizationScheme])**: Candidate quantization schemes to choose from. It can be a single comma-separated string (e.g., `"W4A16,W2A16"`), a list of strings (e.g., `["W4A16", "W2A16"]`), or a list of `QuantizationScheme` objects.
- **`ignore_scale_zp_bits` (bool)**: Only supported in API usage. Determines whether to exclude the bits of scale and zero-point from the average bit-width calculation (default: `False`).
- **`shared_layers` (Iterable[Iterable[str]], optional)**: Only supported in API usage. Defines groups of layers that share quantization settings.
- **`batch_size` (int, optional)**: Only supported in API usage. Can be set to `1` to reduce VRAM usage at the expense of longer tuning time.
</details>
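The `avg_bits` target above is a parameter-weighted average over quantized layers only. The helper below is a sketch of that accounting under stated assumptions (layers at 16 bits are treated as unquantized, and scale/zero-point bits are ignored, as with `ignore_scale_zp_bits=True`); AutoScheme's actual bookkeeping may differ:

```python
def average_bits(layers):
    """Parameter-weighted average bit-width over quantized layers.

    `layers` maps layer name -> (num_params, bits). Entries with
    bits >= 16 are treated as unquantized and excluded from the average.
    """
    quantized = [(n, b) for n, b in layers.values() if b < 16]
    total = sum(n for n, _ in quantized)
    if not total:
        return 0.0  # nothing quantized
    return sum(n * b for n, b in quantized) / total

layers = {"q_proj": (1000, 2), "k_proj": (1000, 4), "lm_head": (500, 16)}
print(average_bits(layers))  # (1000*2 + 1000*4) / 2000 = 3.0
```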
### API Usage for VLMs
<details>
<summary>Click to expand</summary>
**This feature is experimental and may be subject to changes**.
By default, AutoRound only quantizes the text module of VLMs and uses `NeelNanda/pile-10k` for calibration. To
quantize the entire model, you can enable `quant_nontext_module` by setting it to `True`, though support for this feature
is limited. For more information, please refer to the AutoRound [readme](./auto_round/mllm/README.md).
```python
from auto_round import AutoRound
# Load the model
model_name_or_path = "Qwen/Qwen2.5-VL-7B-Instruct"
# Quantize the model
ar = AutoRound(model_name_or_path, scheme="W4A16")
output_dir = "./qmodel"
ar.quantize_and_save(output_dir)
```
</details>
## Model Inference
### vLLM (CPU/Intel GPU/CUDA)
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
]
sampling_params = SamplingParams(temperature=0.6, top_p=0.95)
model_name = "Intel/DeepSeek-R1-0528-Qwen3-8B-int4-AutoRound"
llm = LLM(model=model_name)
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
### SGLang (Intel GPU/CUDA)
**Please note that support for the MoE models and visual language models is currently limited.**
```python
import sglang as sgl
llm = sgl.Engine(model_path="Intel/DeepSeek-R1-0528-Qwen3-8B-int4-AutoRound")
prompts = [
"Hello, my name is",
]
sampling_params = {"temperature": 0.6, "top_p": 0.95}
outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
print(f"Prompt: {prompt}\nGenerated text: {output['text']}")
```
### Transformers (CPU/Intel GPU/Gaudi/CUDA)
AutoRound supports 10+ backends and automatically selects the best available backend based on the installed libraries and prompts the user to
install additional libraries when a better backend is found.
**Please avoid manually moving the quantized model to a different device** (e.g., model.to('cpu')) during inference, as
this may cause unexpected exceptions.
Support for the Gaudi device is limited.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Intel/DeepSeek-R1-0528-Qwen3-8B-int4-AutoRound"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```
## Publications & Events
[SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs](https://arxiv.org/abs/2512.04746) (2025.12 paper)
[Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLM](https://aclanthology.org/2024.findings-emnlp.662/) (2023.09 paper)
[TEQ: Trainable Equivalent Transformation for Quantization of LLMs](https://arxiv.org/abs/2310.10944) (2023.10 paper)
[Effective Post-Training Quantization for Large Language Models](https://medium.com/intel-analytics-software/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98) (2023.04 blog)
Check out [Full Publication List](./docs/publication_list.md).
## Acknowledgement
Special thanks to open-source low-precision libraries such as AutoGPTQ, AutoAWQ, GPTQModel, Triton, Marlin, and ExLLaMAV2, whose low-precision CUDA kernels are leveraged in AutoRound.
## 🌟 Support Us
If you find AutoRound helpful, please ⭐ star the repo and share it with your community!
| text/markdown | Intel AIPT Team | wenhua.cheng@intel.com, weiwei1.zhang@intel.com, heng.guo@intel.com | null | null | Apache 2.0 | quantization, auto-around, LLM, SignRound | [
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: Apache Software License"
] | [] | https://github.com/intel/auto-round | null | >=3.10.0 | [] | [] | [] | [
"accelerate",
"datasets",
"numpy",
"py-cpuinfo",
"threadpoolctl",
"torch",
"tqdm",
"transformers>=4.38",
"numba; extra == \"cpu\"",
"tbb; extra == \"cpu\"",
"auto-round-lib; extra == \"cpu\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:21:13.100044 | auto_round_nightly-0.12.0.dev20260220.tar.gz | 431,328 | ab/77/4477421866af1575955c5f0b24550032e44cd6fa2ee6bf4b63bf7c5e70e3/auto_round_nightly-0.12.0.dev20260220.tar.gz | source | sdist | null | false | 5d0516389e1dcadf9669356c6b47a53d | 831aca7dfbf1e40c63cad087148bfa28fac74f1236a3b298f3a0169e37625fdb | ab774477421866af1575955c5f0b24550032e44cd6fa2ee6bf4b63bf7c5e70e3 | null | [
"LICENSE",
"third-party-programs.txt"
] | 172 |
2.4 | bizsupply-sdk | 0.1.3 | SDK for developing plugins for the bizSupply platform (requires a bizSupply account) | # bizSupply Plugin SDK
The official SDK for developing plugins and benchmarks for the [bizSupply](https://www.bizsupply.ai/) document processing platform.
> **This SDK is designed exclusively for the [bizSupply](https://www.bizsupply.ai/) platform.** It provides base classes, testing utilities, and CLI tools for building plugins that run inside the bizSupply engine. The SDK has no standalone functionality — a bizSupply account is required. [Sign up at bizsupply.ai](https://www.bizsupply.ai/) to get started.
## Quick Start
```bash
# Install the SDK
pip install bizsupply-sdk
# Create a new plugin project
bizsupply init classification --name my-classifier
cd my-classifier
# Create a new benchmark
bizsupply init benchmark --name energy-price
# Validate your code
bizsupply validate src/plugin.py
# Run tests
pytest tests/
```
## Plugin Types
| Type | Purpose | Base Class | Required Method |
|------|---------|------------|-----------------|
| **Source** | Ingest documents from external sources | `SourcePlugin` | `fetch()`, `has_new_data()` |
| **Classification** | Categorize documents with labels | `ClassificationPlugin` | `classify()` |
| **Extraction** | Extract structured data from documents | `ExtractionPlugin` | `extract()` |
| **Benchmark** | Score and compare documents | `BaseBenchmark` | `score()`, `compute()`, `compare()` |
## Example: Classification Plugin
```python
from typing import Any
from bizsupply_sdk import ClassificationPlugin, Document
class MyClassificationPlugin(ClassificationPlugin):
"""Classify documents using LLM analysis."""
async def classify(
self,
document: Document,
file_data: bytes | None,
mime_type: str | None,
available_labels: list[str],
current_path: list[str],
configs: dict[str, Any],
) -> str | None:
"""Classify a document by selecting from available_labels.
The engine handles ontology tree traversal - just pick one label.
Return None if no label matches.
"""
# Use LLM to classify
path_str = " > ".join(current_path) if current_path else "Root"
prompt = f"Path: {path_str}\nOptions: {available_labels}\nSelect the best match."
response = await self.prompt_llm(
prompt=prompt,
file_data=file_data, # Gemini reads PDFs directly
mime_type=mime_type,
)
# prompt_llm() returns dict | list | None
category = response.get("category") if isinstance(response, dict) else None
if category in available_labels:
return category
return None
```
## Example: Benchmark
```python
from typing import Any
from bizsupply_sdk import BaseBenchmark, ExtendedDocument, ScoredDocument, MatchCondition, MatchRule
class EnergyPriceBenchmark(BaseBenchmark):
"""Find the best energy contract price from invoice data."""
name = "energy_price"
target_labels = ["contract", "energy"]
metric_unit = "EUR/kWh"
MATCH_RULES = [
MatchRule(
name="contract_invoice_match",
left_group=["contract", "energy"],
right_group=["invoice", "energy"],
conditions=[
MatchCondition(
left_field="client_tax_id",
right_field="client_tax_id",
match_type="==",
),
],
),
]
def score(self, document: ExtendedDocument) -> float | None:
prices = [inv.get("price_per_kwh") for inv in document.aggregations]
prices = [p for p in prices if p is not None]
return sum(prices) / len(prices) if prices else None
def compute(self, results: list[ScoredDocument]) -> float:
return min(r.score for r in results) # Best (lowest) market price
def compare(self, document_score: float, benchmark_score: float) -> bool:
return document_score > benchmark_score # Paying more than market
```
## Testing Your Plugin
```python
import pytest
from bizsupply_sdk.testing import MockPluginServices, create_test_document
@pytest.fixture
def mock_services():
return MockPluginServices(
llm_responses={"default": "Contract"}
)
async def test_classification(mock_services):
plugin = MyClassificationPlugin()
mock_services.configure_plugin(plugin)
document = create_test_document(content="Contract agreement...")
result = await plugin.classify(
document=document,
file_data=b"test content",
mime_type="application/pdf",
available_labels=["Contract", "Invoice", "Report"],
current_path=[],
configs={},
)
assert result == "Contract"
assert mock_services.llm_call_count == 1
```
## CLI Commands
| Command | Description |
|---------|-------------|
| `bizsupply init <type>` | Scaffold a new plugin or benchmark (`source`, `classification`, `extraction`, `benchmark`, `ontology`) |
| `bizsupply validate <file>` | Validate plugin or benchmark code structure |
| `bizsupply tutorial` | Interactive development workflow tutorial |
## Documentation
- [Getting Started](https://bizsupply.readme.io/docs/welcome)
- [Create a Classification Plugin](https://bizsupply.readme.io/docs/create-classification-plugin)
- [Create an Extraction Plugin](https://bizsupply.readme.io/docs/create-extraction-plugin)
- [Create a Source Plugin](https://bizsupply.readme.io/docs/create-source-plugin)
## Links
- [bizSupply Platform](https://www.bizsupply.ai/) - Sign up and manage your account
- [Documentation](https://bizsupply.readme.io/docs/welcome) - Full developer documentation
## License
MIT License - see LICENSE file for details.
| text/markdown | null | Infosistema <support@bizsupply.ai> | null | null | null | ai, bizsupply, classification, document-processing, extraction, plugin, sdk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"jinja2>=3.0.0",
"pydantic<3.0.0,>=2.0.0",
"pyyaml>=6.0.0",
"rich>=13.0.0",
"typer>=0.9.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://www.bizsupply.ai/",
"Documentation, https://bizsupply.readme.io/docs/plugin-interface"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T18:21:04.365795 | bizsupply_sdk-0.1.3.tar.gz | 49,507 | d5/70/1abc9dac942dec85931021dd646afb425c7582ee31492df3a2ca7cfe210d/bizsupply_sdk-0.1.3.tar.gz | source | sdist | null | false | e621848a886908d1f97fa68949df463c | 0664de65c9c5a450bdda6698bcf8ba2811e74af5654ec7a9e02b90a776ffa7d6 | d5701abc9dac942dec85931021dd646afb425c7582ee31492df3a2ca7cfe210d | MIT | [
"LICENSE"
] | 235 |
2.3 | dinox | 0.5.6 | Derivative Informed Neural Operators in JAX and Equinox | # dinox
`dinox` is a `JAX` implementation of Reduced Basis Derivative Informed Neural Operators, built for high performance in single-GPU environments where all training data fits in GPU memory.
The library is designed primarily for PDE learning workflows based on:
- `FEniCS 2019.1`
- Jacobians computed via `hippylib`
- Subspace methods provided by `bayesflux`
---
## Overview
`dinox` provides:
- Reduced basis neural operator architectures
- Derivative-informed training using PDE Jacobians
- GPU-accelerated implementations in `JAX` and `Equinox`
- Integration with FEniCS-discretized PDEs
---
## Getting Started
See the [hyperelasticity tutorial](https://github.com/dinoSciML/dinox/blob/main/examples/DINO_Tutorial.ipynb) for a complete walkthrough of the RB-DINO pipeline: problem setup, data generation, training with L2 vs H1 loss, and surrogate evaluation.
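The L2 vs H1 training losses that the tutorial contrasts can be sketched as follows. This is an illustration, not dinox's implementation: the H1 (derivative-informed) objective adds a Frobenius-norm misfit on reduced Jacobians to the plain L2 output misfit, and the `jac_weight` knob is a hypothetical name. NumPy is used here so the sketch stands alone without a JAX install:

```python
import numpy as np

def h1_loss(u_pred, u_true, J_pred, J_true, jac_weight=1.0):
    """Derivative-informed (H1) loss sketch.

    u_*: (batch, output_dim) reduced outputs.
    J_*: (batch, output_dim, input_dim) reduced Jacobians.
    Setting jac_weight=0 recovers the plain L2 output loss.
    """
    l2 = np.mean(np.sum((u_pred - u_true) ** 2, axis=-1))
    jac = np.mean(np.sum((J_pred - J_true) ** 2, axis=(-2, -1)))
    return l2 + jac_weight * jac
```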
---
## Important: FEniCS & Hippylib Environment Required
`dinox` depends on `bayesflux[hippylib]`, which requires:
- hippylib
- FEniCS 2019.1
`FEniCS` has system-level dependencies and cannot be installed via pip alone.
You must first create a conda environment with FEniCS 2019.1 before installing dinox.
---
## Installation
### Prerequisites
- NVIDIA driver >= 525 (check with `nvidia-smi`)
- conda or mamba
> **Note on CUDA libraries:** You do **not** need to install CUDA Toolkit, cuDNN, or cuSPARSE via conda or your system package manager. The pip wheels for JAX and CuPy bundle their own CUDA 12 runtime libraries. Installing system CUDA alongside pip-bundled CUDA is the most common source of GPU detection failures.
### Step 1 — Create a FEniCS 2019.1 environment
```bash
conda create -n fenics-2019.1_env -c conda-forge fenics==2019.1.0 python=3.11
conda activate fenics-2019.1_env
```
### Step 2 — Fix `LD_LIBRARY_PATH` (critical for GPU)
A system-level or conda-set `LD_LIBRARY_PATH` pointing to a CUDA installation will conflict with the CUDA libraries bundled in the JAX and CuPy pip wheels, causing errors like `Unable to load cuSPARSE`.
```bash
unset LD_LIBRARY_PATH
```
### Step 3 — Install GPU-enabled JAX
```bash
pip install "jax[cuda12]" cupy-cuda12x nvidia-curand-cu12
```
### Step 4 — Install dinox
```bash
# With CuPy GPU support (recommended) — quote the extras so zsh doesn't expand the brackets
pip install "dinox[cupy]"
# Without CuPy
pip install dinox
```
### Step 5 — Verify GPU
```bash
python -c "import jax; print('JAX devices:', jax.devices())"
python -c "import cupy; print('CuPy GPU count:', cupy.cuda.runtime.getDeviceCount())"
```
You should see your NVIDIA GPU listed. If JAX shows only `CpuDevice`, check that `LD_LIBRARY_PATH` is unset (see Step 2).
---
## GPU Support
- Designed for single-GPU workflows where all data fits in GPU memory
- Requires CUDA 12-enabled JAX (`pip install "jax[cuda12]"`) — the pip wheel bundles its own CUDA runtime
- Optional CuPy arrays for GPU operations via `dinox[cupy]`
- Without GPU, CPU fallback is automatic
---
## Development
```bash
conda create -n fenics-2019.1_env -c conda-forge fenics==2019.1.0 python=3.11
conda activate fenics-2019.1_env
pip install "jax[cuda12]" cupy-cuda12x nvidia-curand-cu12
unset LD_LIBRARY_PATH  # critical for GPU; see Step 2 of the installation guide
pip install -e ".[dev]"
```
This installs:
- dinox (editable)
- development tools (pytest, black, flake8, isort)
- bayesflux[hippylib]
- hippylib
- all required JAX dependencies
---
## Requirements
- Python >= 3.10
- FEniCS 2019.1 (via conda)
- JAX >= 0.7.0 (for GPU: `pip install "jax[cuda12]"`)
- NVIDIA driver >= 525 (for GPU)
- Optional: CuPy for GPU array operations (`pip install dinox[cupy]`)
---
## Troubleshooting
| Problem | Solution |
|---|---|
| `Unable to load cuSPARSE` | `unset LD_LIBRARY_PATH` before running Python |
| `No such file: libcurand.so` | `pip install nvidia-curand-cu12` |
| JAX shows only `CpuDevice` | Ensure `jax[cuda12]` was installed (not just `jax`) and `LD_LIBRARY_PATH` is unset |
| `nvidia-smi` not found | Install or update NVIDIA driver (>= 525) |
| JAX/CuPy CUDA version conflict | Do **not** `conda install cudatoolkit` — let pip wheels provide CUDA |
---
## Repository
- Homepage: https://github.com/dinoSciML/dinox
- Repository: https://github.com/dinoSciML/dinox
| text/markdown | Joshua Chen, Michael Brennan, Lianghao Cao, Thomas O'Leary-Roseberry | Joshua Chen <joshuawchen@icloud.com>, Michael Brennan <mcbrenn@mit.edu>, Lianghao Cao <lianghao@caltech.edu>, Thomas O'Leary-Roseberry <tom.olearyroseberry@utexas.edu> | null | null | MIT License
Copyright (c) 2025 Joshua Chen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"jax>=0.7.0",
"jaxlib>=0.7.0",
"jaxtyping",
"optax>=0.2.3",
"bayesflux[hippylib]>=0.8",
"equinox",
"hickle>=5.0.3",
"pydantic",
"cupy-cuda12x; extra == \"cupy\"",
"nvidia-curand-cu12; extra == \"cupy\"",
"pytest; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"flake8; extra == \"dev\"",
"flake8-pyproject; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/dinoSciML/dinox",
"Repository, https://github.com/dinoSciML/dinox"
] | twine/6.1.0 CPython/3.9.19 | 2026-02-20T18:21:01.016468 | dinox-0.5.6.tar.gz | 17,517 | 00/0e/fff54a820bb5f8eb4c688de995849864b0f7f1653f1732f10cc69811f897/dinox-0.5.6.tar.gz | source | sdist | null | false | e191fd425f6292f6e2bdaed325151e60 | 5616d478c19b811ddfdb01eb0d97ea6d6ab793df3aa7df12cd5147dd88f4598e | 000efff54a820bb5f8eb4c688de995849864b0f7f1653f1732f10cc69811f897 | null | [] | 196 |
2.4 | q3rcon-tui | 0.5.1 | A terminal user interface for managing Q3 compatible servers using RCON. | # q3rcon tui
[](https://github.com/pypa/hatch)
[](https://github.com/astral-sh/ruff)
[](https://pypi.org/project/q3rcon-tui)
[](https://pypi.org/project/q3rcon-tui)
-----

## Table of Contents
- [Installation](#installation)
- [License](#license)
## Installation
*with uv*
```console
uv tool install q3rcon-tui
```
*with pipx*
```console
pipx install q3rcon-tui
```
The TUI should now be discoverable as `q3rcon-tui`.
## Configuration
#### Flags
Pass `--host`, `--port` and `--password` as flags:
```console
q3rcon-tui --host=localhost --port=28960 --password=rconpassword
```
Additional flags:
- `--raw`: Boolean flag; if set, the RichLog prints raw responses without rendering tables.
- `--append`: Boolean flag; if set, each response is appended continuously to the RichLog output.
- `--version`: Print the version of the TUI.
- `--help`: Print the help message.
#### Environment Variables
Store and load settings from dotenv files located at:
- `.env` in the current working directory
- `~/.config/q3rcon-tui/config.env`
example .env:
```env
Q3RCON_TUI_HOST=localhost
Q3RCON_TUI_PORT=28960
Q3RCON_TUI_PASSWORD=rconpassword
Q3RCON_TUI_RAW=false
Q3RCON_TUI_APPEND=false
```
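Flags take precedence over environment variables. A stdlib-only sketch of that resolution order (illustrative — not the q3rcon-tui source, which uses `pydantic-settings`):

```python
import argparse
import os

def resolve_config(argv):
    # Flags win; fall back to Q3RCON_TUI_* env vars, then defaults
    parser = argparse.ArgumentParser()
    parser.add_argument("--host")
    parser.add_argument("--port", type=int)
    parser.add_argument("--password")
    args = parser.parse_args(argv)

    def pick(flag_value, env_name, default=None):
        return flag_value if flag_value is not None else os.environ.get(env_name, default)

    return {
        "host": pick(args.host, "Q3RCON_TUI_HOST", "localhost"),
        "port": int(pick(args.port, "Q3RCON_TUI_PORT", 28960)),
        "password": pick(args.password, "Q3RCON_TUI_PASSWORD"),
    }

os.environ["Q3RCON_TUI_HOST"] = "example.net"
print(resolve_config(["--port", "28961"]))
# -> {'host': 'example.net', 'port': 28961, 'password': None}
```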
## Special Thanks
- [Iapetus-11](https://github.com/Iapetus-11) for writing the [aio-q3-rcon](https://github.com/Iapetus-11/aio-q3-rcon) package.
- The developers at [Textualize](https://github.com/Textualize) for writing the [textual](https://github.com/Textualize/textual) package.
## License
`q3rcon-tui` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
| text/markdown | null | onyx-and-iris <code@onyxandiris.online> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aio-q3-rcon>=1.0.0",
"loguru>=0.7.3",
"pydantic-settings>=2.13.1",
"textual>=8.0.0"
] | [] | [] | [] | [
"Documentation, https://github.com/onyx-and-iris/q3rcon-tui#readme",
"Issues, https://github.com/onyx-and-iris/q3rcon-tui/issues",
"Source, https://github.com/onyx-and-iris/q3rcon-tui"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:20:57.773065 | q3rcon_tui-0.5.1.tar.gz | 25,020 | c2/cc/7dfa8e50e5466c5509bea8852c30d87ada4dbbde93ad3012b4b3bea3f028/q3rcon_tui-0.5.1.tar.gz | source | sdist | null | false | 12d70d247c5c0e0a86901ac452678b4d | f790b9ee25a53cc870e117ce4f2c52013cd264b7844f6c01f5bc09e87adda88b | c2cc7dfa8e50e5466c5509bea8852c30d87ada4dbbde93ad3012b4b3bea3f028 | MIT | [
"LICENSE.txt"
] | 200 |
2.4 | dragonfly-radiance | 0.4.147 | Dragonfly extension for radiance simulation. | 
[](https://github.com/ladybug-tools/dragonfly-radiance/actions)
[](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/) [](https://www.python.org/downloads/release/python-270/) [](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# dragonfly-radiance
Dragonfly extension for Radiance simulation.
## Installation
`pip install dragonfly-radiance`
To check that the command line interface is installed correctly,
use `dragonfly-radiance --help`.
## QuickStart
```python
import dragonfly_radiance
```
## [API Documentation](http://ladybug-tools.github.io/dragonfly-radiance/docs)
## Local Development
1. Clone this repo locally
```
git clone git@github.com:ladybug-tools/dragonfly-radiance
# or
git clone https://github.com/ladybug-tools/dragonfly-radiance
```
2. Install dependencies:
```
cd dragonfly-radiance
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```
python -m pytest tests/
```
4. Generate Documentation:
```
sphinx-apidoc -f -e -d 4 -o ./docs ./dragonfly_radiance
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/dragonfly-radiance | null | null | [] | [] | [] | [
"dragonfly-core==1.75.11",
"honeybee-radiance==1.66.237"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:20:55.330497 | dragonfly_radiance-0.4.147.tar.gz | 31,860 | e8/ba/68ad7a05dc34bf323efebdf62ca8f34150ee37f79c31f72dd8f7c03eadd7/dragonfly_radiance-0.4.147.tar.gz | source | sdist | null | false | 2fc21f59aaedf30c738149d63eaa6b9f | cfbb1e82a4b512e1fa30bd593fb64099a0f12eb8b68a4aa045e0e23df95b64b5 | e8ba68ad7a05dc34bf323efebdf62ca8f34150ee37f79c31f72dd8f7c03eadd7 | null | [
"LICENSE"
] | 263 |
2.4 | mkdocs-exclude-unused | 1.0.5 | Simple plugin to exclude notused .md files from docs folder | [](https://opensource.org/licenses/MIT) 

# mkdocs-exclude-unused
A simple MkDocs plugin that automatically excludes from the build any `.md` file not included in the `nav` section of your `mkdocs.yml`.
## How to use
Simply add the plugin to your `mkdocs.yml` file:
```yaml
plugins:
- mkdocs-exclude-unused
```
## Example
In your `mkdocs.yml` the nav section looks like this:

```yaml
nav:
  - Main: index.md
  - Page1: Page1.md
```

But in your docs folder you have:

```
docs/
  index.md
  Page1.md
  Page2.md
```
**MkDocs will create an .html file for each .md file, even if it's not mentioned in the `nav` section!**
Page2.md can be used in other mkdocs.yml files, or it may be required in the docs folder for other reasons.
After running `mkdocs build` or `mkdocs serve`, warnings will be generated if `Page2.md` is missing from the navigation, which blocks the build in `--strict` mode. This plugin solves the problem by excluding unused pages from the build.
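The core idea can be sketched in a few lines: collect every `.md` path referenced in the nav, then drop any docs file that isn't in that set. This is an illustrative stdlib-only sketch, not the plugin's actual `on_files` implementation:

```python
def nav_paths(nav):
    # nav is the parsed YAML: a list of {title: path-or-sublist} mappings
    paths = set()
    for item in nav:
        for value in item.values():
            if isinstance(value, list):
                paths |= nav_paths(value)  # nested section
            else:
                paths.add(value)
    return paths

nav = [{"Main": "index.md"}, {"Page1": "Page1.md"}]
docs = ["index.md", "Page1.md", "Page2.md"]
unused = [p for p in docs if p not in nav_paths(nav)]
print(unused)  # -> ['Page2.md']
```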
| text/markdown | Michal Domanski | null | null | null | MIT | null | [] | [] | https://github.com/michal2612/mkdocs-exclude-unused | null | null | [] | [] | [] | [
"mkdocs"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:20:31.557007 | mkdocs_exclude_unused-1.0.5.tar.gz | 3,430 | 5b/4d/0bf1dc77c002b5c8803a12ce8c02fe40c5999ca2c92bf224a9463ad9a7d0/mkdocs_exclude_unused-1.0.5.tar.gz | source | sdist | null | false | 4e449e8626bc9fe4f1670eda4bbc409a | f847c736ff83a0f532f3a0d761dc97c379270d024e9f24603798b89c1fa7ceb0 | 5b4d0bf1dc77c002b5c8803a12ce8c02fe40c5999ca2c92bf224a9463ad9a7d0 | null | [
"LICENSE"
] | 190 |
2.4 | justice-apex-skills | 1.0.0 | 20 production-ready AI consciousness skills for autonomous systems | # Justice Apex Skills ⚡
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/JusticeApex/justice-apex-skills)
> **20 Production-Ready AI Consciousness Skills** — Advanced autonomous systems library for intelligent task orchestration, pattern detection, trading automation, and enterprise operations.
---
## 🎯 Overview
Justice Apex Skills is a comprehensive library of 20 advanced AI skills designed to enable autonomous agents to make intelligent, safe, and effective decisions across multiple domains including:
- **Governance & Control**: Confidence gates, consensus mechanisms, agent orchestration
- **Intelligence & Learning**: Pattern detection, memory systems, strategy evaluation
- **Reliability & Safety**: Self-healing, disaster recovery, audit logging, failover management
- **Trading & Finance**: Whale detection, copy trading, compliance, portfolio optimization
- **Operations**: Workflow orchestration, multi-tenancy, phase evolution
---
## 📦 Installation
Install via pip:
```bash
pip install justice-apex-skills
```
### Optional Dependencies
For Firestore integration (Memory System):
```bash
pip install justice-apex-skills[firebase]
```
For development:
```bash
pip install justice-apex-skills[dev]
```
---
## 🚀 Quick Start
### Basic Usage
```python
from justice_apex_skills import ConfidenceGate, WhaleDetector
# Initialize confidence gate
gate = ConfidenceGate()
# Evaluate action safety before execution
decision = gate.evaluate_action(
action="execute_trade",
context={"amount": 1000, "risk_level": "medium"}
)
if decision.confident:
print(f"Execute with {decision.confidence}% confidence")
else:
print(f"Action blocked: {decision.reason}")
# Detect whale transactions
detector = WhaleDetector()
whales = detector.get_active_whales(chain="ethereum")
```
### All 20 Skills
```python
from justice_apex_skills import (
# Governance
ConfidenceGate, LLMRouter, EvolutionEngine, SwarmConsensus, AgentOrchestrator,
# Intelligence
MemorySystem, PatternDetector, TelemetryCapture, StrategyLibrary,
# Reliability
SelfHealing, AuditLogger, FailoverManager, DisasterRecovery,
# Trading & Finance
WhaleDetector, CopyTradingEngine, ComplianceEngine, PortfolioOptimizer,
# Operations
PhaseEvolution, MultiTenancy, WorkflowOrchestrator
)
```
---
## 📚 Skill Library
### Core Governance (5 skills)
| Skill | Purpose | Key Features |
|-------|---------|--------------|
| **ConfidenceGate** | Quality control system | Risk assessment, confidence levels, decision history |
| **LLMRouter** | Multi-provider LLM orchestration | Cost optimization, health monitoring, failover |
| **EvolutionEngine** | Genetic algorithm optimization | Population management, fitness evaluation, selection |
| **SwarmConsensus** | Distributed consensus | Vote aggregation, Byzantine tolerance, conflict resolution |
| **AgentOrchestrator** | Centralized coordination | Task allocation, dependency management, result aggregation |
### Intelligence & Learning (4 skills)
| Skill | Purpose | Key Features |
|-------|---------|--------------|
| **MemorySystem** | Dual-layer learning | Short-term (RAM), long-term (Firestore), importance scoring |
| **PatternDetector** | ML pattern recognition | Trend detection, anomaly detection, clustering |
| **TelemetryCapture** | System metrics | Performance monitoring, event tracking, analytics |
| **StrategyLibrary** | Strategic framework | Decision templates, context matching, outcome tracking |
### Reliability & Safety (4 skills)
| Skill | Purpose | Key Features |
|-------|---------|--------------|
| **SelfHealing** | Automatic recovery | Fault detection, healing strategies, state restoration |
| **AuditLogger** | Operation auditing | Complete logging, compliance tracking, forensics |
| **FailoverManager** | Redundancy handling | Automatic failover, health checks, recovery protocols |
| **DisasterRecovery** | Business continuity | Backup management, state recovery, RTO/RPO management |
### Trading & Finance (4 skills)
| Skill | Purpose | Key Features |
|-------|---------|--------------|
| **WhaleDetector** | Blockchain whale tracking | Multi-chain support (11 networks), behavior analysis |
| **CopyTradingEngine** | Follow-the-leader trading | Signal replication, risk limits, performance tracking |
| **ComplianceEngine** | Regulatory enforcement | Policy templates, violation detection, reporting |
| **PortfolioOptimizer** | Asset allocation | Rebalancing, risk optimization, performance analytics |
### Operations (3 skills)
| Skill | Purpose | Key Features |
|-------|---------|--------------|
| **PhaseEvolution** | Multi-phase evolution | Phase transitions, state management, rollback support |
| **MultiTenancy** | Multi-tenant support | Resource isolation, billing tracking, data segregation |
| **WorkflowOrchestrator** | Complex workflows | DAG execution, dependency resolution, parallel processing |
---
## 🔌 API Reference
### ConfidenceGate Example
```python
from justice_apex_skills import ConfidenceGate
gate = ConfidenceGate(high_confidence_threshold=0.85)
decision = gate.evaluate_action(
action="transfer_funds",
context={
"amount": 10000,
"recipient": "trusted_address",
"risk_level": "low"
}
)
# decision.confident: bool
# decision.confidence: float (0-1)
# decision.reason: str
# decision.risk_factors: Dict[str, float]
```
### WhaleDetector Example
```python
from justice_apex_skills import WhaleDetector
detector = WhaleDetector()
# Get active whales on Ethereum
whales = detector.get_active_whales(
chain="ethereum",
min_balance_usd=1000000,
lookback_hours=24
)
for whale in whales:
print(f"Whale: {whale.address}")
print(f" Holdings: ${whale.total_balance_usd}")
print(f" Behavior: {whale.behavior_pattern}")
print(f" Risk Level: {whale.risk_level}")
```
---
## 🏗️ Project Structure
```
justice-apex-skills/
├── setup.py # Package configuration
├── pyproject.toml # Modern Python packaging
├── MANIFEST.in # Package manifest
├── requirements.txt # Dependencies
├── LICENSE # MIT License
├── README_PyPI.md # PyPI documentation
├── PYPI_SUBMISSION_GUIDE.md # Submission instructions
│
├── justice_apex_skills/ # Main package
│ └── __init__.py # Exports all 20 skills
│
├── 01_confidence_gate/ # Skill 1
├── 02_llm_router/ # Skill 2
├── 03_evolution_engine/ # Skill 3
├── ... (all 20 skills)
└── 20_workflow_orchestrator/ # Skill 20
```
---
## 💡 Use Cases
### Autonomous Trading
Use WhaleDetector + CopyTradingEngine + PortfolioOptimizer to automatically follow whale trades with risk management.
### AI Agent Governance
Use ConfidenceGate + AuditLogger + SelfHealing to safely deploy autonomous agents with built-in safety checks.
### Multi-Tenant SaaS
Use MultiTenancy + DisasterRecovery + FailoverManager to run production SaaS with reliability guarantees.
### Intelligent Workflow Automation
Use WorkflowOrchestrator + PatternDetector + StrategyLibrary to orchestrate complex business processes.
---
## 🔧 Development
### Running Tests
```bash
pytest tests/
```
### Building the Package
```bash
python -m pip install build
python -m build
```
### Local Installation
```bash
pip install -e .
```
---
## 📋 Requirements
- **Python**: 3.8 or higher
- **Dependencies**: None required (all skills use Python standard library)
- **Optional**: firebase-admin (for Firestore support in MemorySystem)
---
## 📖 Documentation
Full documentation is available at:
- **GitHub**: https://github.com/JusticeApex/justice-apex-skills
- **ReadTheDocs**: https://justice-apex-skills.readthedocs.io
- **API Reference**: See individual skill README files in skill directories
---
## 🤝 Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Submit a pull request
See [CONTRIBUTING.md](https://github.com/JusticeApex/justice-apex-skills/blob/main/CONTRIBUTING.md) for details.
---
## 📜 License
MIT License — See [LICENSE](LICENSE) file for details.
---
## 🙋 Support
- **Issues**: https://github.com/JusticeApex/justice-apex-skills/issues
- **Discussions**: https://github.com/JusticeApex/justice-apex-skills/discussions
- **Email**: team@justiceapex.com
---
**Justice Apex LLC** — Building the future of autonomous AI systems. ⚡
| text/markdown | Justice Apex LLC | Justice Apex LLC <team@justiceapex.com> | null | Justice Apex LLC <team@justiceapex.com> | MIT | ai, autonomous-systems, machine-learning, python, consciousness, trading, blockchain, orchestration, agent, swarm | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Office/Business :: Financial",
"Typing :: Typed"
] | [] | https://github.com/JusticeApex/justice-apex-skills | null | >=3.8 | [] | [] | [] | [
"firebase-admin>=6.0.0; extra == \"firebase\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.20.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"sphinx>=6.0.0; extra == \"dev\"",
"sphinx>=6.0.0; extra == \"docs\"",
"sphinx-rtd-theme>=1.0.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=1.0.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/JusticeApex/justice-apex-skills",
"Documentation, https://justice-apex-skills.readthedocs.io",
"Bug Tracker, https://github.com/JusticeApex/justice-apex-skills/issues",
"Source Code, https://github.com/JusticeApex/justice-apex-skills"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T18:18:52.953527 | justice_apex_skills-1.0.0.tar.gz | 326,639 | 90/b1/82a0df49b640e516cef812675d40824b613dd12664378d1d0c70eca29f2e/justice_apex_skills-1.0.0.tar.gz | source | sdist | null | false | a2edcbbf85936f6ed72648b572b5ab52 | b1fd2512d1707ff9f62467cf55043f5b6c619c4b2327ab6d6580d26228f9a5e9 | 90b182a0df49b640e516cef812675d40824b613dd12664378d1d0c70eca29f2e | null | [
"LICENSE"
] | 204 |
2.2 | ai-atlasforge | 1.9.1 | Autonomous AI research and development platform powered by Claude | # AI-AtlasForge
An autonomous AI research and development platform with multi-provider LLM support (Claude, Codex, Gemini). Run long-duration missions, accumulate cross-session knowledge, and build software autonomously.
## What is AI-AtlasForge?
AI-AtlasForge is not a chatbot wrapper. It's an **autonomous research engine** that:
- Runs multi-day missions without human intervention
- Maintains mission continuity across context windows
- Accumulates knowledge that persists across sessions
- Self-corrects when drifting from objectives
- Adversarially tests its own outputs
- **Multi-provider**: Supports Claude, OpenAI Codex, and Google Gemini as LLM backends
## Quick Start
### Prerequisites
- Python 3.10+
- Anthropic API key (get one at https://console.anthropic.com/)
- Linux environment (tested on Ubuntu 22.04+, Debian 12+)
> **Platform Notes:**
> - **Windows:** Use WSL2 (Windows Subsystem for Linux)
> - **macOS:** Should work but is untested. Please report issues.
### Option 1: Standard Installation
```bash
# Clone the repository
git clone https://github.com/DragonShadows1978/AI-AtlasForge.git
cd AI-AtlasForge
# Run the installer
./install.sh
# Configure your API key
export ANTHROPIC_API_KEY='your-key-here'
# Or edit config.yaml / .env
# Verify installation
./verify.sh
```
### Option 2: One-Liner Install
```bash
curl -sSL https://raw.githubusercontent.com/DragonShadows1978/AI-AtlasForge/main/quick_install.sh | bash
```
### Option 3: Docker Installation
```bash
git clone https://github.com/DragonShadows1978/AI-AtlasForge.git
cd AI-AtlasForge
docker compose up -d
# Dashboard at http://localhost:5050
```
For detailed installation options, see [INSTALL.md](INSTALL.md) or [QUICKSTART.md](QUICKSTART.md).
### Running Your First Mission
1. **Start the Dashboard** (optional, for monitoring):
```bash
make dashboard
# Or: python3 dashboard_v2.py
# Access at http://localhost:5050
```
2. **Create a Mission**:
- Via Dashboard: Click "Create Mission" and enter your objectives
- Via Sample: Run `make sample-mission` to load a hello-world mission
- Via JSON: Create `state/mission.json` manually
3. **Start the Engine**:
```bash
make run
# Or: python3 atlasforge_conductor.py --mode=rd
```
### Development Commands
Run `make help` to see all available commands:
```bash
make install # Full installation
make verify # Verify installation
make dashboard # Start dashboard
make run # Start autonomous agent
make docker # Start with Docker
make sample-mission # Load sample mission
```
## What's New in v1.8.4
- **Handoff System Overhaul** - Complete rework of the conductor handoff system for improved reliability across mission cycles
- **Widget Visibility Toggles** - Dashboard widgets can now be hidden/shown without disabling backend services
- **Dashboard Drag & Drop** - Drag-and-drop widget reordering with layout presets, undo/redo, and touch support
- **Context Watcher Improvements** - Enhanced token tracking and handoff logic
- **Systemd Auto-Start** - Fixed the graphical-session.target dependency on Linux Mint; Dashboard and Tray services now auto-start on boot via default.target
## What's New in v1.8.3
- **Test Harness Improvements** - Refactored subprocess mocking in conductor timeout tests, improved phase-aware drift validation, provider-aware ground rules caching
- **Stability Fixes** - Enhanced test coverage for timeout scenarios, improved error handling in stage handlers, Gemini provider integration tests
## What's New in v1.8.2
- **Bug Fixes** - Fixed null handling in suggestion analyzer, improved storage fallback in dashboard similarity analysis
## What's New in v1.8.1
- **Dashboard Services Config** - Added Atlas Lab service configuration to services registry
## What's New in v1.8.0
- **Google Gemini Support** - Full provider integration with subscription-based API access. Gemini missions validated on complex codebases (custom autograd implementations). Code generation, testing, and iteration loops proven functional
- **Provider-Agnostic Architecture** - Three LLM backends (Claude, Codex, Gemini) running through unified orchestration with provider-specific hardening
- **Enhanced Gemini Integration** - Defensive API invocation, clear error parsing, subscription auth support (API key or OAuth)
- **Mission Validation** - Tested Gemini on Project Tensor (custom autograd) - improved code robustness and performance through multi-cycle iteration
## What's New in v1.7.0
- **OpenAI Codex Support** - Full multi-provider support: run missions and investigations with Claude or Codex as the LLM backend. Provider-aware ground rules, prompt templates, and transcript handling
- **Ground Rules Loader** - Provider-aware ground rules system with overlay support for Claude/Codex/investigation modes
- **Enhanced Context Watcher** - Major overhaul with improved token tracking, time-based handoff, and Haiku-powered summaries
- **Experiment Framework** - Expanded scientific experiment orchestration with multi-hypothesis testing
- **Investigation Engine** - Enhanced multi-subagent investigation system with provider selection
- **Dashboard Improvements** - New widgets system, improved chat interface, better WebSocket handling
- **Transcript Archival** - New integration for automatic transcript archival
- 110 files changed, 3500+ lines added across the platform
## Architecture
```
+-------------------+
| Mission State |
| (mission.json) |
+--------+----------+
|
+--------------+--------------+
| |
+---------v---------+ +--------v--------+
| AtlasForge | | Dashboard |
| (Execution Engine)| | (Monitoring) |
+---------+---------+ +-----------------+
|
+---------v---------+ +-------------------+
| Modular Engine |<------->| Context Watcher |
| (StageOrchestrator)| | (Token + Time) |
+---------+---------+ +-------------------+
|
+---------v-------------------+
| Stage Handlers |
| |
| PLANNING -> BUILDING -> |
| TESTING -> ANALYZING -> |
| CYCLE_END -> COMPLETE |
+-----------------------------+
|
+---------v-------------------+
| Integration Manager |
| (Event-Driven Hooks) |
+-----------------------------+
```
## Mission Lifecycle
1. **PLANNING** - Understand objectives, research codebase, create implementation plan
2. **BUILDING** - Implement the solution
3. **TESTING** - Validate implementation
4. **ANALYZING** - Evaluate results, identify issues
5. **CYCLE_END** - Generate reports, prepare continuation
6. **COMPLETE** - Mission finished
Missions can iterate through multiple cycles until success criteria are met.
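The stage progression above can be sketched as a small state machine: stages advance linearly, and CYCLE_END either loops back to PLANNING for another cycle or finishes. Stage names come from the README; the transition logic is illustrative, not the StageOrchestrator source.

```python
STAGES = ["PLANNING", "BUILDING", "TESTING", "ANALYZING", "CYCLE_END", "COMPLETE"]

def next_stage(current, criteria_met):
    if current == "CYCLE_END":
        # Loop back for another cycle unless success criteria are met
        return "COMPLETE" if criteria_met else "PLANNING"
    return STAGES[STAGES.index(current) + 1]

stage = "PLANNING"
for _ in range(4):
    stage = next_stage(stage, criteria_met=False)
print(stage)  # -> CYCLE_END
```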
## Core Components
### atlasforge.py
Main execution loop. Spawns Claude instances, manages state, handles graceful shutdown.
### af_engine/ (Modular Engine)
Plugin-based mission execution system:
- **StageOrchestrator** - Core workflow orchestrator (~300 lines)
- **Stage Handlers** - Pluggable handlers for each stage (Planning, Building, Testing, Analyzing, CycleEnd, Complete)
- **IntegrationManager** - Event-driven integration coordination
- **PromptFactory** - Template-based prompt generation
### Mission Queue
Queue multiple missions to run sequentially:
- Auto-start next mission when current completes
- Set cycle budgets per mission
- Priority ordering
- Dashboard integration for queue management
### Context Watcher
Real-time context monitoring to prevent timeout waste:
- **Token-based detection**: Monitors JSONL transcripts for context exhaustion (130K/140K thresholds)
- **Time-based detection**: Proactive handoff at 55 minutes before 1-hour timeout
- **Haiku-powered summaries**: Generates intelligent HANDOFF.md via Claude Haiku
- **Automatic recovery**: Sessions continue from HANDOFF.md on restart
See [context_watcher/README.md](context_watcher/README.md) for detailed documentation.
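The two triggers combine into a simple decision: hand off when the transcript crosses a token threshold (130K soft, 140K hard) or when wall-clock time nears the 1-hour timeout. An illustrative sketch (thresholds from the README; the code is not the Context Watcher source):

```python
SOFT_TOKEN_LIMIT = 130_000
HARD_TOKEN_LIMIT = 140_000
HANDOFF_AFTER_SECONDS = 55 * 60  # proactive handoff before the 1-hour timeout

def should_hand_off(tokens_used, elapsed_seconds):
    # Returns the trigger name, or None if the session can continue
    if tokens_used >= HARD_TOKEN_LIMIT:
        return "hard-token"
    if tokens_used >= SOFT_TOKEN_LIMIT:
        return "soft-token"
    if elapsed_seconds >= HANDOFF_AFTER_SECONDS:
        return "time"
    return None

print(should_hand_off(135_000, 10 * 60))  # -> soft-token
print(should_hand_off(50_000, 56 * 60))   # -> time
```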
### dashboard_v2.py
Web-based monitoring interface showing mission status, knowledge base, and analytics.
### Knowledge Base
SQLite database accumulating learnings across all missions:
- Techniques discovered
- Insights gained
- Gotchas encountered
- Reusable code patterns
### Adversarial Testing
Separate Claude instances that test implementations:
- RedTeam agents with no implementation knowledge
- Mutation testing
- Property-based testing
### GlassBox
Post-mission introspection system:
- Transcript parsing
- Agent hierarchy reconstruction
- Stage timeline visualization
## Key Features
### Display Layer (Windows)
Visual environment for graphical application testing:
- Screenshot capture from virtual display
- Web-accessible display via noVNC (localhost:6080)
- Web terminal via ttyd (localhost:7681)
- Browser support for OAuth flows and web testing
- Automatic GPU detection with software fallback
See [docs/DISPLAY_LAYER.md](workspace/docs/DISPLAY_LAYER.md) for the user guide.
### Mission Continuity
Missions survive context window limits through:
- Persistent mission.json state
- Cycle-based iteration
- Continuation prompts that preserve context
### Knowledge Accumulation
Every mission adds to the knowledge base. The system improves over time as it learns patterns, gotchas, and techniques.
### Autonomous Operation
Designed for unattended execution:
- Graceful crash recovery
- Stage checkpointing
- Automatic cycle progression
## Directory Structure
```
AI-AtlasForge/
+-- atlasforge_conductor.py # Main orchestrator
+-- af_engine/ # Modular engine package
| +-- orchestrator.py # StageOrchestrator
| +-- stages/ # Stage handlers
| +-- integrations/ # Event-driven integrations
+-- .af_archived/ # Archived legacy files (pre-modular engine backups)
+-- context_watcher/ # Context monitoring module
| +-- context_watcher.py # Token + time-based handoff
| +-- tests/ # Context watcher tests
+-- dashboard_v2.py # Web dashboard
+-- adversarial_testing/ # Testing framework
+-- atlasforge_enhancements/ # Enhancement modules
+-- workspace/ # Active workspace
| +-- glassbox/ # Introspection tools
| +-- artifacts/ # Plans, reports
| +-- research/ # Notes, findings
| +-- tests/ # Test scripts
+-- state/ # Runtime state
| +-- mission.json # Current mission
| +-- claude_state.json # Execution state
+-- missions/ # Mission workspaces
+-- atlasforge_data/
| +-- knowledge_base/ # Accumulated learnings
+-- logs/ # Execution logs
```
## Configuration
AI-AtlasForge uses environment variables for configuration:
| Variable | Default | Description |
|----------|---------|-------------|
| `ATLASFORGE_PORT` | `5050` | Dashboard port |
| `ATLASFORGE_ROOT` | (script directory) | Base directory |
| `ATLASFORGE_DEBUG` | `false` | Enable debug logging |
| `USE_MODULAR_ENGINE` | `true` | Use new modular engine (set to `false` for legacy) |
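Inside the conductor these variables would be read with the documented defaults; a minimal Python sketch of that pattern (illustrative only — not the actual AtlasForge startup code):

```python
import os

# Read AtlasForge-style configuration with the defaults from the table above.
port = int(os.environ.get("ATLASFORGE_PORT", "5050"))
root = os.environ.get("ATLASFORGE_ROOT", os.path.dirname(os.path.abspath(__file__)))
debug = os.environ.get("ATLASFORGE_DEBUG", "false").lower() in ("true", "1", "yes")

print(port, debug)
```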
## Dashboard Features
The web dashboard provides real-time monitoring:
- **Mission Status** - Current stage, progress, timing
- **Activity Feed** - Live log of agent actions
- **Knowledge Base** - Search and browse learnings
- **Analytics** - Token usage, cost tracking
- **Mission Queue** - Queue and schedule missions
- **GlassBox** - Post-mission analysis
## Philosophy
**First principles only.** No frameworks hiding integration failures. Every component built from scratch for full visibility.
**Speed of machine, not human.** Designed for autonomous operation. Check in when convenient, not when required.
**Knowledge accumulates.** Every mission adds to the knowledge base. The system gets better over time.
**Trust but verify.** Adversarial testing catches what regular testing misses. The same agent that writes code doesn't validate it.
## Requirements
- Python 3.10+
- Node.js 18+ (optional, for dashboard JS modifications)
- Anthropic API key
- Linux environment (Ubuntu 22.04+, Debian 12+)
### Python Dependencies
See `requirements.txt` or `pyproject.toml` for full list.
## Documentation
- [QUICKSTART.md](QUICKSTART.md) - Get started in 5 minutes
- [INSTALL.md](INSTALL.md) - Detailed installation guide
- [USAGE.md](USAGE.md) - How to use AI-AtlasForge
- [ARCHITECTURE.md](ARCHITECTURE.md) - System architecture
- [DISPLAY_LAYER.md](workspace/docs/DISPLAY_LAYER.md) - Display Layer user guide (Windows)
- [TROUBLESHOOTING.md](workspace/docs/TROUBLESHOOTING.md) - Display Layer troubleshooting
## Recent Changes
### v1.9.1 (2026-02-20)
- **Dashboard Filter Persistence** - All dashboard filters, sorts, and search state now persist across page reloads via versioned localStorage schema
- **Mission Suggestion Sort/Filter Persistence** - Sort field, sort direction, tag filter, and health filter all persist (schema v2 with migration from legacy flat-map)
- **Analytics Period Persistence** - Selected analytics time period persists across sessions
- **Glassbox UI Persistence** - Search query, date range, and selected mission persist in Glassbox viewer
- **Global Preference Registry** - Centralized `ALL_PREFERENCE_KEYS` list and `clearAllPreferences()` for one-click reset
- **Stage Gate Lock File Fix** - Hook now bypasses all enforcement when no active Conductor process is detected via lock file; fixes normal Claude Code terminal usage being blocked post-mission
- **Stage Normalization** - Stage names normalized to uppercase when read from lock file; prevents silent bypass on lowercase stage values
### v1.9.0 (2026-02-20)
- **Modular Engine Only** - Retired legacy monolithic `af_engine.py` (3,688 lines); modular `af_engine/` package is now the sole engine implementation
- **Archival Module** - Migrated transcript archival functions to `af_engine/core/archival.py`; removed `importlib.util` dynamic loading hack
- **Engine Init Simplified** - `af_engine/__init__.py` reduced from ~150 lines to ~50; `USE_MODULAR_ENGINE` feature flag removed entirely
- **Dashboard WebSocket Push** - Live stage updates pushed to connected clients when af_engine stage changes; no polling required
- **Analytics Integration** - Dashboard analytics endpoints enriched with engine-native metrics (success rate, execution time, task counts)
- **Stage Gate Enforcement** - Two-layer stage enforcement: CLI `--disallowedTools` per stage + hook-level path restrictions
### v1.8.7 (2026-02-19)
- **Widget Settings Popup** - Mobile panel reordering via widget settings buttons
- **Collapsed Card Improvements** - Stage indicator and health summary remain visible when widgets are collapsed
- **Dashboard CSS** - Refined collapsed card styling and status card layout
### v1.8.6 (2026-02-19)
- **Widget Control Mechanism** - Overhauled widget visibility toggle system; widgets can be hidden/shown independently of backend services
- **Token Sanity Check** - New integration that validates token counts before handoff to prevent corrupt context windows
- **Transcript Archival** - Improved automatic transcript archival integration
- **Orchestrator Updates** - Enhanced stage orchestration reliability
- **Dashboard Queue Scheduler** - Improved mission queue scheduling and priority handling
- **Dashboard Drag-Drop** - Refined drag-and-drop widget reordering with better touch support
### v1.8.5 (2026-02-18)
- **CLAUDECODE env fix** - Conductor now strips `CLAUDECODE` env var before spawning Claude subprocesses, preventing "nested session" crash when launched from an active Claude Code session
- **Multiple mission completions** - AtlasLab fork mission, StoryForge missions, and several R&D cycles completed autonomously
- **Widget visibility toggles** - Dashboard widgets can now be hidden without disabling backend
- **Handoff system overhaul** - Major rework of session handoff and continuity system
### v1.8.4 (2026-02-15)
- Drag-and-drop widget reordering in dashboard
- Handoff system overhaul with improved continuity
- Widget visibility toggles
### v1.7.0 (2026-02-06)
- **OpenAI Codex Support** - Multi-provider LLM backend: run missions and investigations with Claude or Codex. Provider-aware ground rules, prompts, and transcript handling
- **Ground Rules Loader** - Provider-aware ground rules system with overlay support for Claude/Codex/investigation modes
- **Enhanced Context Watcher** - Major overhaul with improved token tracking, time-based handoff, and Haiku-powered summaries
- **Experiment Framework** - Expanded scientific experiment orchestration with multi-hypothesis testing
- **Investigation Engine** - Enhanced multi-subagent investigation system with provider selection
- **Dashboard Improvements** - New widgets system, improved chat interface, better WebSocket handling
- **PromptFactory Enhancements** - Provider-aware caching, AfterImage integration with fallback paths
- **Conductor Hardening** - Improved session management, singleton protocol, crash recovery
- **Transcript Archival** - New integration for automatic transcript archival
- **Research Agent** - Improved web researcher and knowledge synthesizer
- 110 files changed, 3500+ lines added across the platform
### v1.6.9 (2026-02-02)
- Fixed GlassBox visualization issues
### v1.6.8 (2026-02-01)
- Fixed zombie timer bug - stale session cleanup now stops timer threads
- Fixed continuation prompt bug - cycle progression now updates problem_statement
- Added conductor singleton with takeover protocol (prevents multiple instances)
### v1.6.7 (2026-02-01)
- Fixed JSON response parsing bug in conductor (handles markdown code blocks)
- ContextWatcher stability improvements
### v1.6.5 (2026-01-31)
- Build checkpoint improvements
- Mission state persistence fixes
## License
MIT License - see [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
## Related Projects
- **[AI-AfterImage](https://github.com/DragonShadows1978/AI-AfterImage)** - Episodic memory for AI coding agents. Gives Claude Code persistent memory of code it has written across sessions. Works great with AtlasForge for cross-mission code recall.
## Acknowledgments
Built on Claude by Anthropic. Special thanks to the Claude Code team for making autonomous AI development possible.
| text/markdown | null | null | null | null | MIT | ai, claude, autonomous, research, development | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Code Generators"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"flask>=2.0.0",
"flask-socketio>=5.0.0",
"simple-websocket>=0.5.0",
"anthropic>=0.18.0",
"watchdog>=3.0.0",
"psutil>=5.9.0",
"numpy>=1.21.0",
"scikit-learn>=1.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"PyGObject>=3.42.0; extra == \"tray\"",
"ai-atlasforge[dev,tray]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/DragonShadows1978/AI-AtlasForge",
"Documentation, https://github.com/DragonShadows1978/AI-AtlasForge#readme",
"Repository, https://github.com/DragonShadows1978/AI-AtlasForge.git",
"Issues, https://github.com/DragonShadows1978/AI-AtlasForge/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T18:18:02.639812 | ai_atlasforge-1.9.1.tar.gz | 231,724 | f3/2a/6aaac607ecd80adbe7c727a61098ff42921218bd8f83ef656071ae2d4971/ai_atlasforge-1.9.1.tar.gz | source | sdist | null | false | 1591ffea5e9149e3db2bac6e51bb33f6 | 19e28c516915591a1692855b9e2e6b3ee5429396e6befcd301ad22cba91ee308 | f32a6aaac607ecd80adbe7c727a61098ff42921218bd8f83ef656071ae2d4971 | null | [] | 219 |
2.4 | pympp | 0.2.0 | HTTP 402 Payment Authentication for Python | # pympp
Python SDK for the [**Machine Payments Protocol**](https://machinepayments.dev)
[](https://pypi.org/project/pympp/)
[](LICENSE)
## Documentation
Full documentation, API reference, and guides are available at **[machinepayments.dev/sdk/python](https://machinepayments.dev/sdk/python)**.
## Install
```bash
pip install pympp
```
## Quick Start
### Server
```python
from mpp import Credential, Receipt
from mpp.server import Mpp
from mpp.methods.tempo import tempo, ChargeIntent
server = Mpp.create(
method=tempo(
intents={"charge": ChargeIntent()},
currency="0x20c0000000000000000000000000000000000000",
recipient="0x742d35Cc6634c0532925a3b844bC9e7595F8fE00",
),
)
# `app` is assumed to be an existing web framework instance (e.g. FastAPI)
@app.get("/paid")
@server.pay(amount="0.50")
async def handler(request, credential: Credential, receipt: Receipt):
return {"data": "...", "payer": credential.source}
```
### Client
```python
from mpp.client import Client
from mpp.methods.tempo import tempo, TempoAccount, ChargeIntent
account = TempoAccount.from_key("0x...")
async with Client(methods=[tempo(account=account, intents={"charge": ChargeIntent()})]) as client:
response = await client.get("https://api.example.com/resource")
```
## Examples
| Example | Description |
|---------|-------------|
| [api-server](./examples/api-server/) | Payment-gated API server |
| [fetch](./examples/fetch/) | CLI tool for fetching URLs with automatic payment handling |
| [mcp-server](./examples/mcp-server/) | MCP server with payment-protected tools |
## Protocol
Built on the ["Payment" HTTP Authentication Scheme](https://datatracker.ietf.org/doc/draft-ietf-httpauth-payment/). See [mpp-specs](https://tempoxyz.github.io/mpp-specs/) for the full specification.
## License
MIT OR Apache-2.0
| text/markdown | Tempo | null | null | null | MIT OR Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.27",
"hypothesis>=6.0; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"mcp>=1.1.0; extra == \"mcp\"",
"pydantic>=2.0; extra == \"server\"",
"python-dotenv>=1.0; extra == \"server\"",
"pydantic>=2.0; extra == \"tempo\"",
"pytempo>=0.2.1; extra == \"tempo\""
] | [] | [] | [] | [
"Homepage, https://github.com/tempoxyz/pympp",
"Documentation, https://github.com/tempoxyz/pympp#readme"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:18:01.599597 | pympp-0.2.0.tar.gz | 78,376 | 32/af/642f6d4ec810a2e89d1b15e88533d870008a345d0f5164533671482448d6/pympp-0.2.0.tar.gz | source | sdist | null | false | 42db0c85e3829715a565be15f41eaf95 | c69862b4b1a69ae278b626214e0ac73f391bb655c741b1d98c5bdcca53ca95b7 | 32af642f6d4ec810a2e89d1b15e88533d870008a345d0f5164533671482448d6 | null | [
"LICENSE-APACHE",
"LICENSE-MIT"
] | 200 |
2.4 | rxfoundry.clients.swifty-receiver-api | 0.1.1076 | Swifty Receiver API | API for the Swifty Receiver
| text/markdown | RxFoundry Team | paul.tindall@rxfoundry.com | null | null | null | OpenAPI, OpenAPI-Generator, Swifty Receiver API | [] | [] | null | null | null | [] | [] | [] | [
"urllib3<3.0.0,>=2.1.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.11.14 Linux/5.10.0-32-cloud-amd64 | 2026-02-20T18:17:46.243098 | rxfoundry_clients_swifty_receiver_api-0.1.1076-py3-none-any.whl | 28,413 | 33/c7/6f03d2ee4e2af190f9d215253b2715f3b175df30fc8bb4cea0661ee09143/rxfoundry_clients_swifty_receiver_api-0.1.1076-py3-none-any.whl | py3 | bdist_wheel | null | false | 8cb9b172c3abb52fb35de4b98cf5ff23 | 0a00267e89dc837ae2fc570c3b2e61c084a5fd709ddec3e31d769ec13cc2d14c | 33c76f03d2ee4e2af190f9d215253b2715f3b175df30fc8bb4cea0661ee09143 | null | [] | 0 |
2.4 | rxfoundry.clients.swifty-api | 0.1.1076 | SwiftyRX API | API for the SwiftyRX Backend
| text/markdown | RxFoundry Team | paul.tindall@rxfoundry.com | null | null | null | OpenAPI, OpenAPI-Generator, SwiftyRX API | [] | [] | null | null | null | [] | [] | [] | [
"urllib3<3.0.0,>=2.1.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.11.14 Linux/5.10.0-32-cloud-amd64 | 2026-02-20T18:17:44.426420 | rxfoundry_clients_swifty_api-0.1.1076-py3-none-any.whl | 190,792 | 53/40/abd6b3103a842ccb2524c0e0352e7fb1989d9136caa562969c1381f69eb9/rxfoundry_clients_swifty_api-0.1.1076-py3-none-any.whl | py3 | bdist_wheel | null | false | 954d95ffb1b5cbaa5ca5583a779d3df7 | 15c37fa4f96885ce1dd9619dde7569fb96f27d8194f6eaf92d877135e02587d6 | 5340abd6b3103a842ccb2524c0e0352e7fb1989d9136caa562969c1381f69eb9 | null | [] | 0 |
2.4 | rxfoundry.clients.swifty-oauth-api | 0.1.1076 | Swifty OAuth API | API for the Swifty OAuth Backend
| text/markdown | RxFoundry Team | paul.tindall@rxfoundry.com | null | null | null | OpenAPI, OpenAPI-Generator, Swifty OAuth API | [] | [] | null | null | null | [] | [] | [] | [
"urllib3<3.0.0,>=2.1.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.11.14 Linux/5.10.0-32-cloud-amd64 | 2026-02-20T18:17:41.055173 | rxfoundry_clients_swifty_oauth_api-0.1.1076-py3-none-any.whl | 27,822 | 3a/7d/5cc9d6319429fa531cc12d9fc622074f93990e48c562feca25f10418cc07/rxfoundry_clients_swifty_oauth_api-0.1.1076-py3-none-any.whl | py3 | bdist_wheel | null | false | 3cc240653e514e48acbfa8cfba2fefb6 | a475c82b47660830614d33ecf609a0b3dcf1875d6556e0dec51088da4a7a659e | 3a7d5cc9d6319429fa531cc12d9fc622074f93990e48c562feca25f10418cc07 | null | [] | 0 |
2.4 | neuro-san | 0.6.33 | NeuroAI data-driven System for multi-Agent Networks - client, library and server | # Neuro SAN Data-Driven Agents
[](https://deepwiki.com/cognizant-ai-lab/neuro-san)
**Neuro AI system of agent networks (Neuro SAN)** is a library for building data-driven multi-agent networks
which can be run as a library, or served up via an HTTP server.
Motivation: People come with all their hopes and dreams to lay them at the altar
of a single LLM/agent expecting it to do the most complex tasks. This often fails
because the scope is too big for a single LLM to handle. People expect the
equivalent of an adult PhD to be at their disposal, but what you really get is a high-school intern.
Solution: Allow these problems to be broken up into smaller pieces so that multiple LLM-enabled
agents can communicate with each other to solve a single problem.
Neuro SAN agent networks can be entirely specified in a data-only
[HOCON](https://github.com/lightbend/config/blob/main/HOCON.md)
file format (think: JSON with comments, among other things), enabling subject matter experts
to be the authors of complex agent networks, not just programmers.
Neuro SAN agent networks can also call CodedTools (langchain or our own interface) which do things
that LLMs can't on their own like: Query a web service, effectuate change via a web API, handle
private data correctly, do complex math operations, copy large bits of data without error.
While this aspect _does_ require programming skills, what the savvy gain with Neuro SAN is a new way
to think about your problems that involves a weave between natural language tasks that LLMs are good at
and traditional computing tasks which deterministic Python code gives you.
Neuro SAN also offers:
* channels for private data (aka sly_data) that should be kept out of LLM chat streams
* LLM-provider agnosticism and extensibility of data-only-configured LLMs when new hotness arrives.
* agent-specific LLM specifications - use the right LLM for the cost/latency/context-window/data-privacy each agent needs.
* fallback LLM specifications for when your fave goes down.
* powerful debugging information for gaining insight into your multi-agent systems.
* cloud-agnostic server-readiness at scale - run where you want
* enabling distributed agent webs that call each other to work together, wherever they are hosted.
* security-by-default - you set what private data is to be shared downstream/upstream
* Out-of-the-box support for Observability/tracing data feeds for apps like LangSmith, Arize Phoenix and HoneyHive.
* test infrastructure for your agent networks, including:
* data-driven test cases
* the ability for LLMs to test your agent networks
* an Assessor app which classifies the modes of failure for your agents, given a data-driven test case
* MCP protocol API - Every Neuro SAN server can be an MCP Server.
* per-user authorization for Agent Networks - optional implementations include: OpenFGA
## Quick Start
**🚀 For the easiest way to get started, use our automated quick start scripts!**
See the [quick-start/README.md](quick-start/README.md) for simple one-command scripts that handle all setup automatically:
- **macOS/Linux:** `./quick-start/start-server.sh`
- **Windows:** `quick-start\start-server.bat`
### Prerequisites
Before running the quick start scripts, ensure you have:
- Python 3.12 or better installed on your machine
- Virtual environment support for Python (typically included with Python 3.12+)
These scripts automatically:
- Create and activate virtual environment
- Install all dependencies
- Set up environment variables
- Enable CORS for web applications
- Launch the server
For manual setup, continue with the instructions below.
## Running client and server
### Prep
#### Setup your virtual environment
##### Install Python dependencies
Set the `PYTHONPATH` environment variable:

```bash
export PYTHONPATH=$(pwd)
```

Create and activate a new virtual environment, then install neuro-san:

```bash
python3 -m venv venv
. ./venv/bin/activate
pip install neuro-san
```

OR, from the neuro-san project top-level, install the packages specified in the requirements file:

```bash
pip install -r requirements.txt
```
##### Set necessary environment variables
In a terminal window, set at least these environment variables:

```bash
export OPENAI_API_KEY="XXX_YOUR_OPENAI_API_KEY_HERE"
```
Any other API key environment variables for other LLM provider(s) also need to be set if you are using them.
### Using as a library (Direct)
From the top-level of this repo:

```bash
python -m neuro_san.client.agent_cli --agent hello_world
```

Type this input into the chat client:

> From earth, I approach a new planet and wish to send a short 2-word greeting to the new orb.

What should come back is something like:

> Hello, world.

...but you are dealing with LLMs. Your results will vary!
### Client/Server Setup
#### Server
In the same terminal window, be sure the environment variable(s) listed above
are set before proceeding.
Option 1: Run the service directly (most useful for development):

```bash
python -m neuro_san.service.main_loop.server_main_loop
```

Option 2: Build and run the docker container for the hosting agent service:

```bash
./neuro_san/deploy/build.sh ; ./neuro_san/deploy/run.sh
```

These build.sh / Dockerfile / run.sh scripts are intended to be portable so they can be used with your own projects' registries and coded_tools work.

ℹ️ Ensure the required environment variables (`OPENAI_API_KEY`, `AGENT_TOOL_PATH`, `AGENT_MANIFEST_FILE`, and `PYTHONPATH`) are passed into the container — either by exporting them before running run.sh, or by configuring them inside the script.
#### Client
In another terminal, start the chat client:

```bash
python -m neuro_san.client.agent_cli --http --agent hello_world
```
### Extra info about agent_cli.py
There is help to be had with --help.
By design, you cannot see all agents registered with the service from the client.
When the chat client is given a newline as input, that implies "send the message".
This isn't great when you are copy/pasting multi-line input. For that there is a
--first_prompt_file argument where you can specify a file to send as the first
message.
You can send private data that does not go into the chat stream as a single escaped
string of a JSON dictionary. For example:

```bash
--sly_data "{ \"login\": \"your_login\" }"
```
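Hand-escaping that JSON string is error-prone; a small sketch (not part of agent_cli itself) that builds the same argument with `json.dumps` and shell-quotes it:

```python
import json
import shlex

sly_data = {"login": "your_login"}

# json.dumps renders the dictionary as a JSON string;
# shlex.quote makes it safe to paste onto a shell command line.
arg = json.dumps(sly_data)
command = (
    "python -m neuro_san.client.agent_cli --agent hello_world "
    f"--sly_data {shlex.quote(arg)}"
)
print(command)
```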
## Running Python unit/integration tests
To run Python unit/integration tests, follow the [instructions](docs/tests.md) here.
## Creating a new agent network
### Agent example files
Look at the hocon files in ./neuro_san/registries for examples of specific agent networks.
The natural question to ask is: What is a hocon file?
The simplest answer is that you can think of a hocon file as a JSON file that allows for comments.
Here are some descriptions of the example hocon files provided in this repo.
To play with them, specify their stem as the argument for --agent on the agent_cli.py chat client.
In some order of complexity, they are:
* hello_world

  This is the initial example used above and demonstrates a front-man agent talking to another agent downstream.

* esp_decision_assistant

  Very abstract, but also very powerful. A front man agent gathers information about a decision to make in ESP terms. It then calls a prescriptor, which in turn calls one or more predictors, in order to help make the decision in an LLM-based ESP manner.
When coming up with new hocon files in that same directory, also add an entry for it
in the manifest.hocon file.
build.sh / run.sh the service like you did above to re-load the server,
and interact with it via the agent_cli.py chat client, making sure
you specify your agent correctly (per the hocon file stem).
### More agent example files
Note that the .hocon files in this repo are more spartan for testing and simple
demonstration purposes.
For more examples of agent networks, documentation and tutorials,
see the [neuro-san-studio repo.](https://github.com/cognizant-ai-lab/neuro-san-studio)
For a complete list of agent networks keys, see the [agent hocon file reference](docs/agent_hocon_reference.md)
### Manifest file
All agents used need to have an entry in a single manifest hocon file.
For the neuro-san repo, this is: neuro_san/registries/manifest.hocon.
When you create your own repo for your own agents, that will be different
and you will need to create your own manifest file. To point the system
at your own manifest file, set a new environment variable:

```bash
export AGENT_MANIFEST_FILE=<your_repo>/registries/manifest.hocon
```
## Infrastructure
The agent infrastructure is run as a library, or as an HTTP service.
Access to agents is implemented (client and server) using the
[AgentSession](https://github.com/cognizant-ai-lab/neuro-san/blob/main/neuro_san/interfaces/agent_session.py)
interface:
It has 2 main methods:
* function()

  This tells the client what the top-level agent will do for it.

* streaming_chat()

  This is the main entry point. Send some text and it starts a conversation with a "front man" agent. If the front man needs more information, it will ask you, and you return your answer via another call to the same interface. ChatMessage results from this method are streamed; when the conversation is over, the stream closes after the last message has been received. ChatMessages of various types will come back over the stream. Anything of type AI is the front man answering you on behalf of the rest of its agent posse, so this is the kind you want to pay the most attention to.
Implementations of the AgentSession interface:
* DirectAgentSession class. Use this if you want to call neuro-san as a library
* HttpServiceAgentSession class. Use this if you want to call neuro-san as a client to a HTTP service
Note that agent_cli uses all of these. You can look at the source code there for examples.
There are also some asynchronous implementations available of the
[AsyncAgentSession](https://github.com/cognizant-ai-lab/neuro-san/blob/main/neuro_san/interfaces/async_agent_session.py)
interface:
## Advanced concepts
### Coded Tools
Most of the examples provided here show how no-code agents are put together,
but neuro-san agent networks support the notion of coded tools for
low-code solutions.
These are most often used when an agent needs to call out to a specific
web service, but they can be any kind of Python code as long as it
derives from the CodedTool interface defined in neuro_san/interfaces/coded_tool.py.
The main interface for this class looks like this:

```python
async def async_invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:
```
Note that while a synchronous version of this method is available for tire-kicking convenience,
this asynchronous interface is the preferred entry point because neuro-san itself is designed
to operate in an asynchronous server environment to enhance agent parallelism.
The args are an argument dictionary passed in by the calling LLM, whose keys
are defined in the agent's hocon entry for the CodedTool.
The intent with sly_data is that the data in this dictionary is never supposed to enter the chat stream.
Most often this is private data, but sly_data can also be used as a bulletin board
where CodedTools cooperate on their results.
Sly data has many potential originations:
* sent explicitly by a client (usernames, tokens, session ids, etc),
* generated by other CodedTools
* generated by other agent networks.
See the class and method comments in neuro_san/interfaces/coded_tool.py for more information.
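As an illustration only (the real base class lives in neuro_san/interfaces/coded_tool.py), a hypothetical tool that reads an argument from `args` and stashes its result on the `sly_data` bulletin board might look like:

```python
import asyncio
from typing import Any, Dict


class CurrencyConverter:  # would derive from CodedTool in a real project
    """Hypothetical tool: converts an amount using a fixed rate."""

    async def async_invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:
        # "amount" would be declared in the agent's hocon entry for this tool.
        amount = float(args["amount"])
        converted = round(amount * 0.92, 2)
        # Leave the result on sly_data so downstream CodedTools can use it
        # without it ever entering the chat stream.
        sly_data["converted_amount"] = converted
        return f"{converted} EUR"


sly: Dict[str, Any] = {}
result = asyncio.run(CurrencyConverter().async_invoke({"amount": "100"}, sly))
print(result)  # 92.0 EUR
```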
When you develop your own coded tools, there is another environment variable
that comes into play:

```bash
export AGENT_TOOL_PATH=<your_repo>/coded_tools
```

Beneath this, classes are dynamically resolved based on their agent name.
That is, if you added a new coded tool to your agent, its file path would
look like this:

```
<your_repo>/coded_tools/<your_agent_name>/<your_coded_tool>.py
```
## Creating Clients
To create clients, follow the [instructions](docs/clients.md) here.
## Using neuro-san MCP protocol API
To use neuro-san as an MCP server, see details in [mcp](docs/mcp_service.md)
| text/markdown | null | Dan Fink <Daniel.Fink@cognizant.com> | null | null | null | LLM, langchain, agent, multi-agent | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"leaf-common>=1.2.34",
"leaf-server-common>=0.1.23",
"grpcio>=1.62.0",
"grpcio-tools>=1.62.0",
"protobuf>=4.25.3",
"pyhocon>=0.3.60",
"pyOpenSSL>=24.0.0",
"boto3>=1.34.51",
"botocore>=1.34.51",
"idna>=3.6",
"urllib3>=1.26.18",
"aiohttp<4.0,>=3.13.0",
"aiofiles>=25.1.0",
"ruamel.yaml>=0.18.6",
"langchain<2.0,>=1.2.0",
"langchain-core<2.0,>=1.2.5",
"langchain-classic<2.0,>=1.0.0",
"langchain-community<1.0,>=0.4",
"openai<2.0,>=1.54.1",
"bs4<0.1,>=0.0.2",
"pydantic<3.0,>=2.9.2",
"langchain-openai<2.0,>=1.0.0",
"httpx>=0.28.1",
"psutil>=7.0.0",
"tornado>=6.4.2",
"jsonschema>=4.19.0",
"janus>=2.0.0",
"watchdog>=6.0.0",
"validators>=0.22.0",
"timedinput>=0.1.1",
"json-repair<1.0,>=0.47.3",
"langchain-mcp-adapters<1.0,>=0.1.7"
] | [] | [] | [] | [
"Homepage, https://github.com/cognizant-ai-lab/neuro-san",
"Repository, https://github.com/cognizant-ai-lab/neuro-san",
"Documentation, https://github.com/cognizant-ai-lab/neuro-san#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:17:21.823331 | neuro_san-0.6.33.tar.gz | 1,051,860 | 7e/fb/63143d3e2234e2fe86c8ada393054668cc7c24f0439a71a48a53d2848b25/neuro_san-0.6.33.tar.gz | source | sdist | null | false | 6c5a3dc9d08e0a573748cedafdfe6e21 | cd3c441d354d397d432cdea691c5d175ab04a693ed5a8b826b85523c44845e54 | 7efb63143d3e2234e2fe86c8ada393054668cc7c24f0439a71a48a53d2848b25 | Apache-2.0 | [
"LICENSE.txt"
] | 214 |
2.4 | influx-rust | 0.2.0 | High-performance InfluxDB query interface for Python | # influx-rust
High-performance InfluxDB query library for Python, powered by Rust.
[](https://pypi.org/project/influx-rust/)
[](https://pypi.org/project/influx-rust/)
[](https://opensource.org/licenses/MIT)
## Why influx-rust?
**10x faster** InfluxDB queries compared to native Python implementations. By leveraging Rust's performance with PyO3 bindings, `influx-rust` eliminates JSON serialization overhead and interpreter bottlenecks while maintaining a familiar Python API.
### Performance Comparison
| Implementation | Query Time (40+ sources, 24h range) | Improvement |
|----------------|-------------------------------------|-------------|
| **Python (influxdb-client)** | ~38-40 seconds | baseline |
| **influx-rust** | ~3-4 seconds | **10x faster** ✅ |
Real-world performance measured on production queries with 40+ data sources, aggregations, and time-windowed data.
## Features
- 🚀 **10x faster** than Python native InfluxDB clients
- 🔄 **Drop-in replacement** for existing `influxdb-client` async queries
- 🦀 **Rust-powered** performance with zero-copy deserialization
- 🐍 **Python-friendly** interface - both async/await and synchronous
- 📦 **Pre-built wheels** for Linux, macOS, and Windows
- 🔒 **Type-safe** Rust implementation with comprehensive error handling
- ⚡ **Tokio async runtime** for concurrent query execution
- 🔓 **GIL-free I/O** - releases Python's Global Interpreter Lock during queries
## Installation
```bash
pip install influx-rust
```
Requires Python 3.8 or higher.
## Quick Start
### Async Version (for async/await code)
```python
from influx_rust import get_influx_data_async
# For use in async functions
results = await get_influx_data_async(
url="https://your-influxdb.com",
token="your-token",
org="your-org",
query='''
from(bucket: "sensors")
|> range(start: -24h)
|> filter(fn: (r) => r._measurement == "temperature")
|> aggregateWindow(every: 30m, fn: mean)
'''
)
# Returns list of dictionaries
for record in results:
print(record) # {'_time': '2024-01-29T10:00:00Z', '_value': 23.5, ...}
```
### Synchronous Version (for non-async code)
```python
from influx_rust import get_influx_data
# For use in regular synchronous functions
results = get_influx_data(
url="https://your-influxdb.com",
token="your-token",
org="your-org",
query='''
from(bucket: "sensors")
|> range(start: -24h)
|> filter(fn: (r) => r._measurement == "temperature")
|> aggregateWindow(every: 30m, fn: mean)
'''
)
# Returns list of dictionaries (same as async version)
for record in results:
print(record) # {'_time': '2024-01-29T10:00:00Z', '_value': 23.5, ...}
```
## Usage Examples
### Basic Async Query
```python
import asyncio
from influx_rust import get_influx_data_async
async def fetch_temperature_data():
data = await get_influx_data_async(
url="https://influx.example.com",
token="your_token_here",
org="my_org",
query='from(bucket: "sensors") |> range(start: -1h)'
)
return data
# Run the async function
results = asyncio.run(fetch_temperature_data())
print(f"Retrieved {len(results)} records")
```
### Basic Synchronous Query
For non-async contexts (Flask, Django, Dramatiq workers, simple scripts):
```python
from influx_rust import get_influx_data

# No async/await needed - just call the function directly
def fetch_temperature_data():
    data = get_influx_data(
        url="https://influx.example.com",
        token="your_token_here",
        org="my_org",
        query='from(bucket: "sensors") |> range(start: -1h)',
    )
    return data

# Direct function call - no asyncio.run() needed
results = fetch_temperature_data()
print(f"Retrieved {len(results)} records")
```
**Note:** The synchronous version (`get_influx_data`) still benefits from Rust's performance and properly releases Python's GIL during I/O operations, allowing other threads to run.
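Because the GIL is released during the query, several synchronous calls can genuinely overlap when issued from a thread pool. A minimal sketch of that pattern (the `fetch` helper below is a hypothetical stand-in for a real `get_influx_data` call, which is the only package-specific part):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(bucket):
    # Stand-in for get_influx_data(url, token, org, query): any blocking
    # call that releases the GIL overlaps with the other worker threads.
    return [{"_measurement": bucket, "_value": 1.0}]

buckets = ["sensors", "energy", "water"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch, buckets))  # one result list per bucket

print(len(results))
```

Each worker blocks in native code rather than in the interpreter, so the pool scales with I/O latency instead of serializing on the GIL.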
### Complex Query with Aggregations
```python
# Multi-source query with joins and aggregations
query = '''
from(bucket: "agua_mar")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "oxygen")
  |> filter(fn: (r) => contains(value: r.source, set: ["sensor1", "sensor2", "sensor3"]))
  |> aggregateWindow(every: 30m, fn: mean)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''
data = await get_influx_data_async(
    url=INFLUXDB_URL,
    token=INFLUXDB_TOKEN,
    org=INFLUXDB_ORG,
    query=query,
)
```
### Integration with Existing Python Code
Drop-in replacement for `influxdb-client`:
```python
# BEFORE: Using influxdb-client (slow)
from influxdb_client import InfluxDBClient

def get_data_old_way():
    client = InfluxDBClient(url=url, token=token, org=org)
    query_api = client.query_api()
    tables = query_api.query(query)
    # ... process tables ...

# AFTER: Using influx-rust (fast)
from influx_rust import get_influx_data_async

async def get_data_new_way():
    # Same result, 10x faster
    results = await get_influx_data_async(
        url=url,
        token=token,
        org=org,
        query=query,
    )
    return results
```
### Error Handling
```python
from influx_rust import get_influx_data_async

try:
    data = await get_influx_data_async(
        url="https://influx.example.com",
        token="invalid_token",
        org="my_org",
        query='from(bucket: "test") |> range(start: -1h)',
    )
except Exception as e:
    print(f"Query failed: {e}")
    # Handle authentication errors, network issues, etc.
```
## How It Works
`influx-rust` uses [PyO3](https://github.com/PyO3/pyo3) to create Python bindings for a high-performance Rust implementation:
1. **Rust Core**: Uses the official [influxdb2](https://crates.io/crates/influxdb2) Rust client
2. **Zero-Copy Deserialization**: Directly processes InfluxDB responses without intermediate JSON strings
3. **Async Runtime**: Powered by [Tokio](https://tokio.rs/) for efficient concurrent operations
4. **Python Bindings**: PyO3 exposes Rust functions as native Python async functions
### Why It's Faster
| Bottleneck | Python (influxdb-client) | influx-rust |
|------------|-------------------------|-------------|
| **JSON serialization** | ❌ Double serialization (InfluxDB → JSON → Python) | ✅ Zero-copy deserialization |
| **Interpreter overhead** | ❌ Python GIL and interpreter | ✅ Compiled Rust (no GIL) |
| **Memory allocations** | ❌ Intermediate string buffers | ✅ Direct struct mapping |
| **Async runtime** | ❌ Python asyncio | ✅ Tokio (native threads) |
## Development
### Building from Source
```bash
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Clone repository
git clone https://github.com/your-org/influx-rust.git
cd influx-rust
# Create virtual environment
python -m venv .venv
source .venv/bin/activate
# Install maturin
pip install maturin
# Build and install in development mode
maturin develop --release
```
### Running Tests
```bash
# Build release version
cargo build --release
# Run Rust tests
cargo test
# Test Python integration
python -c "from influx_rust import get_influx_data_async; print('✅ Import successful')"
```
### Performance Testing
Compare performance with Python implementation:
```bash
# Set environment variables
export INFLUXDB_URL="https://your-influxdb.com"
export INFLUXDB_TOKEN="your-token"
export INFLUXDB_ORG="your-org"
# Run comparison script
./compare_performance.sh
```
## API Reference
### `get_influx_data_async`
```python
async def get_influx_data_async(
    url: str,
    token: str,
    org: str,
    query: str,
) -> list[dict[str, str]]
```
Execute an InfluxDB Flux query asynchronously.
**Parameters:**
- `url` (str): InfluxDB server URL (e.g., `https://influx.example.com`)
- `token` (str): Authentication token
- `org` (str): Organization name
- `query` (str): Flux query string
**Returns:**
- `list[dict[str, str]]`: List of records as dictionaries. Each dictionary contains InfluxDB fields like `_time`, `_value`, `_measurement`, etc.
**Raises:**
- `Exception`: On authentication errors, network failures, or invalid queries
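### `get_influx_data`

The synchronous counterpart shown in Quick Start mirrors the signature above:

```python
def get_influx_data(
    url: str,
    token: str,
    org: str,
    query: str,
) -> list[dict[str, str]]
```

Parameters, return shape, and error behavior match `get_influx_data_async`; the call simply blocks until the query completes instead of returning an awaitable.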
## Requirements
- **Python**: 3.8 or higher
- **Operating Systems**: Linux (x86_64, aarch64), macOS (x86_64, arm64), Windows (x86_64)
Pre-built wheels are available for all supported platforms. No Rust toolchain required for installation.
## Deployment
### Docker
When using in Docker, simply install from PyPI:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# influx-rust installs from pre-built wheel (no Rust needed!)
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```
No need to install Rust in your Docker container!
### Production Considerations
- **Connection Pooling**: influx-rust reuses connections internally
- **Timeout**: Default timeout is 30 seconds (configurable in future versions)
- **Memory**: Significantly lower memory usage vs Python client (no intermediate buffers)
- **Logging**: Enable debug logs with `RUST_LOG=debug` environment variable
## Roadmap
- [ ] Configurable timeouts
- [ ] Connection pooling configuration
- [ ] Write API support (currently read-only)
- [ ] Streaming queries for large datasets
- [ ] Custom error types instead of generic exceptions
- [x] Sync version of the API (shipped as `get_influx_data`)
## Benchmarks
Real-world production query (AquaChile agua_mar monitoring):
```bash
Query: 40+ sensors, 24h range, aggregations, joins
Records: ~12,450 records
Python (influxdb-client):
⏱️ Query execution: 38.2s
📊 Memory usage: ~450MB
influx-rust:
⏱️ Query execution: 3.8s (10x faster)
📊 Memory usage: ~180MB (2.5x less)
```
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
MIT License - see [LICENSE](LICENSE) for details.
## Credits
Built with:
- [PyO3](https://github.com/PyO3/pyo3) - Rust bindings for Python
- [influxdb2](https://crates.io/crates/influxdb2) - Official InfluxDB Rust client
- [Tokio](https://tokio.rs/) - Async runtime for Rust
- [Maturin](https://github.com/PyO3/maturin) - Build and publish Rust-based Python packages
Developed by AquaChile DevOps for high-performance aquaculture monitoring.
## Support
- 📫 Issues: [GitHub Issues](https://github.com/your-org/influx-rust/issues)
- 📖 Documentation: [GitHub Wiki](https://github.com/your-org/influx-rust/wiki)
- 💬 Discussions: [GitHub Discussions](https://github.com/your-org/influx-rust/discussions)
| text/markdown; charset=UTF-8; variant=GFM | null | AquaChile <devops@devops.com> | null | null | MIT | influxdb, rust, performance, async, database | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Rust",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Database",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/famoralesc/influx-rust/blob/main/README.md",
"Homepage, https://github.com/famoralesc/influx-rust",
"Repository, https://github.com/famoralesc/influx-rust"
] | maturin/1.11.5 | 2026-02-20T18:17:09.476357 | influx_rust-0.2.0.tar.gz | 984,124 | 46/c0/f9c43335f178051fdb3ae5e089468ef659e681706275521613f6917cbd81/influx_rust-0.2.0.tar.gz | source | sdist | null | false | 9dedca99e15e10ac4b5536dba80eae7f | 14f7c7ea1f63d801637555745ea429fb2fdacb2331ee5933e90a05e7907f10b1 | 46c0f9c43335f178051fdb3ae5e089468ef659e681706275521613f6917cbd81 | null | [] | 134 |
2.4 | honeybee-radiance-postprocess | 0.4.607 | Postprocessing of Radiance results and matrices | [](https://github.com/ladybug-tools/honeybee-radiance-postprocess/actions)
[](https://coveralls.io/github/ladybug-tools/honeybee-radiance-postprocess)
[](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/)
# honeybee-radiance-postprocess
Library and CLI for postprocessing of Radiance results and matrices.
## Installation
```console
pip install honeybee-radiance-postprocess
```
## QuickStart
```python
import honeybee_radiance_postprocess
```
## [API Documentation](http://ladybug-tools.github.io/honeybee-radiance-postprocess/docs)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/honeybee-radiance-postprocess
# or
git clone https://github.com/ladybug-tools/honeybee-radiance-postprocess
```
2. Install dependencies:
```console
cd honeybee-radiance-postprocess
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```console
python -m pytest tests/
```
4. Generate Documentation:
```console
sphinx-apidoc -f -e -d 4 -o ./docs ./honeybee_radiance_postprocess
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | GPLv3 | null | [
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/honeybee-radiance-postprocess | null | null | [] | [] | [] | [
"honeybee-radiance==1.66.237",
"numpy<2.0.0",
"cupy-cuda12x==13.6.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:16:11.790360 | honeybee_radiance_postprocess-0.4.607.tar.gz | 91,169 | b4/63/2bc9cd4d099d8319bb7e408f4c49a3d296ef803842672c04372c94cae9c8/honeybee_radiance_postprocess-0.4.607.tar.gz | source | sdist | null | false | b036609154aff912b05b1eb4f188c6ed | 0cf1b0f0f96fac7a4db1f279cfe90aa7f510db8e4433f0d4ed710af6f4cb5341 | b4632bc9cd4d099d8319bb7e408f4c49a3d296ef803842672c04372c94cae9c8 | null | [
"LICENSE"
] | 286 |
2.4 | djicons | 0.3.0 | Multi-library SVG icon system for Django - like react-icons, but backend-driven | # djicons
Multi-library SVG icon system for Django. Like [react-icons](https://react-icons.github.io/react-icons/), but 100% backend-driven.
**Zero-config development. Minimal production builds.**
## Features
- **Multi-library support**: Ionicons, Heroicons, Material Symbols, Tabler, Lucide, Font Awesome
- **CDN mode for development**: Access all ~177,000 icons without downloading anything
- **Smart collection for production**: Only download the icons you actually use
- **SVG inline rendering**: Full CSS control, no font loading
- **Namespace system**: `{% icon "ion:home" %}`, `{% icon "hero:pencil" %}`
- **LRU caching**: Fast rendering with memory + optional Django cache
- **Django 4.2+ & 5.x**: Fully compatible with modern Django
## Installation
```bash
pip install djicons
```
Add to `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
    'djicons',
    # ...
]
```
**That's it!** Start using icons immediately - no download required in development.
## How It Works
### Development (default)
Icons are fetched from CDN on demand. Zero setup, access to all ~177,000 icons.
### Production
Run `djicons_collect` to download only the icons used in your templates:
```bash
python manage.py djicons_collect
```
This scans your templates, finds all `{% icon %}` usages, and downloads only those icons (~KBs instead of ~700MB).
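The core of that scan can be pictured as a regex pass over template source that collects the first quoted argument of each `{% icon %}` tag (an illustration of the idea only, not djicons' actual implementation):

```python
import re

TEMPLATE = '''
{% icon "ion:home" %}
{% icon "hero:pencil" css_class="w-5" %}
{% icon "ion:home" size=24 %}
'''

# Match the first quoted argument of each {% icon %} tag.
ICON_RE = re.compile(r'{%\s*icon\s+"([^"]+)"')

used = sorted(set(ICON_RE.findall(TEMPLATE)))
print(used)  # ['hero:pencil', 'ion:home']
```

Deduplicating the matches is what keeps the production download to only the icons actually referenced.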
## Quick Start
```django
{% load djicons %}
{# Basic usage #}
{% icon "home" %}
{# With namespace (explicit library) #}
{% icon "ion:cart-outline" %}
{% icon "hero:pencil-square" %}
{% icon "material:shopping_cart" %}
{% icon "tabler:home" %}
{% icon "lucide:settings" %}
{% icon "fa:house" %}
{% icon "fa:github-brands" %}
{# With size #}
{% icon "ion:home" size=24 %}
{# With CSS classes #}
{% icon "hero:pencil" css_class="w-5 h-5 text-blue-500" %}
{# With color #}
{% icon "ion:heart" color="#ff0000" %}
{% icon "ion:heart" fill="currentColor" %}
{# With ARIA accessibility #}
{% icon "ion:menu" aria_label="Open menu" %}
{# With data attributes #}
{% icon "ion:close" data_action="dismiss" data_target="#modal" %}
{# Store in variable #}
{% icon "ion:home" as home_icon %}
{{ home_icon }}
```
## Available Icon Packs
| Pack | Namespace | Icons | License |
|------|-----------|-------|---------|
| [Ionicons](https://ionicons.com) | `ion:` | ~1,400 | MIT |
| [Heroicons](https://heroicons.com) | `hero:` | ~300 | MIT |
| [Material Symbols](https://fonts.google.com/icons) | `material:` | ~2,500 | Apache 2.0 |
| [Tabler Icons](https://tabler.io/icons) | `tabler:` | ~5,000 | MIT |
| [Lucide](https://lucide.dev) | `lucide:` | ~1,500 | ISC |
| [Font Awesome Free](https://fontawesome.com) | `fa:` | ~2,000 | CC BY 4.0 / MIT |
**Total: ~12,700 icons**
### Font Awesome Styles
Font Awesome icons come in three styles:
```django
{% icon "fa:house" %} {# solid (default) #}
{% icon "fa:heart-regular" %} {# regular/outlined #}
{% icon "fa:github-brands" %} {# brand logos #}
```
**Note:** Font Awesome Free requires attribution. See [fontawesome.com/license](https://fontawesome.com/license/free)
## Configuration
### Development (CDN mode - default)
```python
# settings.py - Development
DJICONS = {
    'MODE': 'cdn',  # Fetch from CDN (default)
}
```
### Production (Local mode)
```python
# settings.py - Production
DJICONS = {
    'MODE': 'local',
    'COLLECT_DIR': BASE_DIR / 'static' / 'icons',  # Where collected icons are stored
}
```
### Full Configuration Options
```python
DJICONS = {
    # Mode: 'cdn' (development) or 'local' (production)
    'MODE': 'cdn',
    # Default namespace for unqualified names
    'DEFAULT_NAMESPACE': 'ion',
    # Directory for collected icons (production)
    'COLLECT_DIR': BASE_DIR / 'static' / 'icons',
    # Icon packs to enable
    'PACKS': ['ionicons', 'heroicons', 'material', 'tabler', 'lucide', 'fontawesome'],
    # Custom icon directories by namespace
    'ICON_DIRS': {
        'custom': BASE_DIR / 'static' / 'my-icons',
    },
    # Return empty string for missing icons (vs raising error)
    'MISSING_ICON_SILENT': True,
    # Default CSS class for all icons
    'DEFAULT_CLASS': '',
    # Default icon size
    'DEFAULT_SIZE': None,
    # Add aria-hidden by default
    'ARIA_HIDDEN': True,
    # Semantic aliases
    'ALIASES': {
        'edit': 'hero:pencil',
        'delete': 'hero:trash',
        'add': 'ion:add-outline',
    },
}
```
## Collecting Icons for Production
The `djicons_collect` command scans your templates and downloads only the icons you use:
```bash
# Scan templates and download used icons
python manage.py djicons_collect
# Specify custom output directory
python manage.py djicons_collect --output ./static/icons
# Preview what would be downloaded (dry run)
python manage.py djicons_collect --dry-run
# Verbose output
python manage.py djicons_collect -v
```
### Custom Icon Directories
Use `ICON_DIRS` to load icons from your project's static directory:
```python
from pathlib import Path
DJICONS = {
    'ICON_DIRS': {
        # Load ionicons from your static folder
        'ion': BASE_DIR / 'static' / 'ionicons' / 'dist' / 'svg',
        # Add your own custom icons
        'app': BASE_DIR / 'static' / 'icons',
    },
    # Disable bundled packs if you don't need them
    'PACKS': [],
}
```
Icons in `ICON_DIRS` take priority over bundled packs, so you can override specific icons.
## Programmatic Usage
```python
from djicons import icons, Icon, get, register
from djicons.loaders import DirectoryIconLoader
# Get an icon
icon = icons.get("ion:home")
html = icon.render(size=24, css_class="text-primary")
# Shortcut function
html = get("ion:home", size=24)
# Register custom icon
icons.register("my-icon", "<svg>...</svg>", namespace="myapp")
# Register a directory of icons
loader = DirectoryIconLoader("/path/to/icons")
icons.register_loader(loader, namespace="custom")
# Create aliases
icons.register_alias("edit", "hero:pencil")
# List icons
all_icons = icons.list_icons()
ion_icons = icons.list_icons("ion")
namespaces = icons.list_namespaces()
```
## Icon Class API
```python
from djicons import Icon
icon = Icon(
    name="home",
    svg_content="<svg>...</svg>",
    namespace="myapp",
    category="navigation",
    tags=["house", "main"],
)

# Render with options
html = icon.render(
    size=24,              # width & height
    width=24,             # or separate
    height=24,
    css_class="icon",     # CSS classes
    color="#000",         # CSS color
    fill="currentColor",  # SVG fill
    stroke="#000",        # SVG stroke
    aria_label="Home",    # Accessibility
    aria_hidden=True,     # Hide from screen readers
    data_action="click",  # data-* attributes
)
```
## ERPlora Integration
For [ERPlora](https://github.com/ERPlora/erplora) modules:
```python
# In your module's apps.py
from django.apps import AppConfig
class InventoryConfig(AppConfig):
    name = 'inventory'

    def ready(self):
        from djicons.contrib.erplora import register_module_icons
        register_module_icons(self.name, self.path)
```
Or auto-discover all modules:
```python
# In Django settings or ready()
from djicons.contrib.erplora import discover_module_icons
discover_module_icons("/path/to/modules")
```
Then use in templates:
```django
{% icon "inventory:box" %}
{% icon "sales:receipt" %}
```
## Template Tags Reference
### `{% icon %}`
Render an SVG icon inline.
```django
{% icon name [size=N] [width=N] [height=N] [css_class="..."] [color="..."] [fill="..."] [stroke="..."] [aria_label="..."] [aria_hidden=True|False] [**attrs] %}
```
### `{% icon_exists %}`
Check if an icon exists.
```django
{% icon_exists "ion:home" as has_home %}
{% if has_home %}...{% endif %}
```
### `{% icon_list %}`
List available icons.
```django
{% icon_list "ion" as ionicons %}
{% for name in ionicons %}
{% icon name size=24 %}
{% endfor %}
```
### `{% icon_sprite %}`
Render SVG sprite sheet (for advanced use).
```django
{% icon_sprite "ion" %}
```
## Development
```bash
# Clone repository
git clone https://github.com/djicons/djicons.git
cd djicons
# Install dependencies
pip install -e ".[dev]"
# Download icon packs
python scripts/download_icons.py
# Run tests
pytest
# Run linting
ruff check .
ruff format .
```
## License
MIT License - see [LICENSE](LICENSE) for details.
Icon packs are distributed under their respective licenses:
- [Ionicons](https://ionicons.com): MIT License
- [Heroicons](https://heroicons.com): MIT License
- [Material Symbols](https://fonts.google.com/icons): [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- [Tabler Icons](https://tabler.io/icons): MIT License
- [Lucide](https://lucide.dev): ISC License
- [Font Awesome Free](https://fontawesome.com): CC BY 4.0 (icons) / MIT (code)
**Note:** This project includes Material Icons by Google, licensed under the Apache License, Version 2.0.
## Credits
- Inspired by [react-icons](https://react-icons.github.io/react-icons/)
- Built for [ERPlora](https://github.com/ERPlora/erplora) modular ERP system
- Created by [Ioan Beilic](https://github.com/ioanbeilic)
| text/markdown | null | Ioan Beilic <ioanbeilic@gmail.com> | null | null | null | django, heroicons, icons, ionicons, lucide, material, svg, tabler, template-tags | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"django>=4.2",
"django-stubs>=4.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/djicons/djicons",
"Documentation, https://github.com/djicons/djicons#readme",
"Repository, https://github.com/djicons/djicons",
"Issues, https://github.com/djicons/djicons/issues",
"Changelog, https://github.com/djicons/djicons/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:15:54.675546 | djicons-0.3.0.tar.gz | 30,710 | 32/90/8011cbfc4fb6c83c5dcc0a670c0132a6ab1b7fc68024576a1a70da406e37/djicons-0.3.0.tar.gz | source | sdist | null | false | 582d053797ee0537b6a35a514865501c | f133d0f1a627d66776d10ed97d5f8ae892b0d4ceea9d70757bc4b611868dd4d9 | 32908011cbfc4fb6c83c5dcc0a670c0132a6ab1b7fc68024576a1a70da406e37 | MIT | [
"LICENSE"
] | 233 |
2.4 | tesserax | 0.10.0 | A pure-Python library for rendering professional CS graphics. | # Tesserax: A Lightweight SVG Rendering Library
[](https://pypi.org/project/tesserax/)
[](https://pypi.org/project/tesserax/)
[](https://github.com/apiad/tesserax/actions/workflows/tests.yaml)
[](#)
[](https://pepy.tech/project/tesserax)
[](https://opensource.org/licenses/MIT)
Tesserax is a modern Python 3.12 library designed for programmatic SVG generation with a focus on ease of use, layout management, and flexible geometric primitives. It is particularly well-suited for visualizing data structures, algorithms, and technical diagrams.
Beyond static diagrams, Tesserax now includes a **deterministic physics engine** and a **cinematic animation system**, making it a complete toolkit for scientific communication.
## Key Features
* **Rich Primitives**: Includes standard shapes (`Rect`, `Circle`) plus advanced procedural geometry like `Polyline` with smoothing and subdivision support.
* **Declarative Layouts**: Effortlessly arrange shapes in `Row`, `Column`, or `Grid` containers, or use algorithmic layouts like `Tree` and `Force`.
* **Smart Canvas**: Automatically fit the canvas viewport to the content with adjustable padding.
* **Anchor System**: Connect shapes using semantic anchors like `top`, `bottom`, `left`, `right`, and `center`.
* **Cinematic Animations**: Create complex motion graphics using a declarative, code-first API that supports keyframes, morphing, and warping.
* **Physics Simulation**: Bake high-precision rigid body simulations directly into your animations using the built-in `World` and `Body` primitives.
* **Reactive Statistical Visualization**: Build data-driven graphics with an Altair-inspired API that supports automatic axis generation and seamless **Enter-Update-Exit** animations.
* **Rock-Solid Reliability**: 90% test coverage ensuring predictable behavior across all geometric, layout, and animation systems.
## Installation
Tesserax has zero dependencies (literally). It's 100% pure Python, and can be easily installed with `pip`:
```bash
pip install tesserax
```
Or if you're one of the cool kids, using `uv`:
```bash
uv add tesserax
```
If you want support for saving PNG files, install with the `export` extra:
```bash
pip install tesserax[export]
```
## Quick Start
The following example demonstrates how to create a simple logo and highlights the most basic functionality of **Tesserax**.
```python
import math
from tesserax import Canvas, Square, Circle, Text, Polyline, Point, Group
from tesserax.layout import RowLayout

with Canvas() as canvas:
    # We use a top-level row layout
    with RowLayout(align="end") as logo:
        # Left Block
        r = Square(30, fill="green", stroke="none")
        # Center Text
        t = Text(
            "tesserax",
            size=48,
            font="sans-serif",
            fill="navy",
            anchor="middle",
            baseline="bottom",
        )
        # Right Circle
        c = Circle(20, fill="red", stroke="none")
        # Create the "Squiggly" Underline
        Polyline(
            [
                r.anchor("bottom").dy(10),
                c.anchor("bottom").dy(10),
            ],
            smoothness=1.0,
            stroke="black",
            marker_end="arrow",
        ).subdivide(5).apply(
            lambda p: p.dy(math.sin((p.x / logo.bounds().width * 20 + 5)) * 5)
        )

# Use fit() to frame the logo perfectly
canvas.fit(padding=10).display()
```

The `display()` method in the `Canvas` class is an IPython/Jupyter/Quarto compatible shortcut to automatically include the rendered SVG (in all its beautiful vectorial glory) directly in a notebook. But you can also use `Canvas.save()` to generate a plain old, boring SVG file on disk, and `str(canvas)` to get the actual SVG code as a plain string.
## Deep Dive: Beyond the Basics
Tesserax scales from simple scripts to complex simulations. Here is an overview of the advanced capabilities available.
### Geometric Primitives & Procedural Shapes
Tesserax provides a robust suite of atoms like `Rect`, `Circle`, `Ellipse`, and `Arrow`.
* **Polyline API**: The `Polyline` class supports `smoothing` (Bezier interpolation), `subdivision` (increasing resolution), and `simplification` (reducing vertices).
* **Path API**: For low-level control, use the `Path` class with standard SVG commands (`move_to`, `cubic_to`, `arc`).
### The Layout Engine
Forget manual pixel pushing. Tesserax offers a hierarchy of layout engines:
* **Standard Layouts**: `Row`, `Column`, and `Grid` automatically position elements based on gaps and alignment.
* **Hierarchical Layout**: Automatically draws Trees and Directed Acyclic Graphs (DAGs).
* **Force-Directed Layout**: Simulates physical forces to arrange arbitrary network graphs.
### Cinematic Animation
The animation system is designed for **storytelling**, not just movement.
* **Declarative API**: Compose animations using `parallel (|)` and `sequential (+)` operators.
* **Keyframes**: Define complex multi-stage timelines for any property (position, rotation, color).
* **Morphing & Warping**: Smoothly transform one shape into another or apply wave functions to geometry.
### Physics Engine
Tesserax includes a **baked physics engine** for high-precision rigid body simulations.
* **Deterministic**: Define a `World`, add `Body` objects, and apply `Field`s like Gravity or Drag.
* **Baked Playback**: The simulation is calculated upfront and converted into standard keyframes, allowing high-resolution physics (e.g., 1000 steps/sec) to play back smoothly at any framerate.
* **Interoperable**: Physics animations can be mixed and matched with standard tweens.
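The bake-then-resample idea is independent of any library and can be sketched in a few lines of plain Python (illustrative only; tesserax's `World`/`Body` API is not shown here):

```python
# Bake a 1D falling body at 1000 steps/second, then resample keyframes.
DT, STEPS, G = 0.001, 1000, -9.81

y, vy = 100.0, 0.0
trajectory = []
for _ in range(STEPS):
    vy += G * DT           # gravity field
    y += vy * DT           # semi-implicit Euler step
    trajectory.append(y)

# Downsample the dense bake into 10 evenly spaced keyframes; a player
# can then interpolate between them at any framerate.
keyframes = trajectory[:: STEPS // 10]
print(len(keyframes), round(keyframes[-1], 2))
```

Because all the physics happens up front, playback is just ordinary keyframe interpolation, which is why baked simulations mix freely with standard tweens.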
### Statistical Visualization
Bridging the gap between diagrams and plots, Tesserax offers a grammar-of-graphics charting API.
* **Altair-lite API**: Define a `Chart`, select a `Mark` (bar, point), and `encode` visual channels like `x`, `y`, and `color`.
* **Automated Scales**: Includes built-in `Linear`, `Band`, and `Color` scales that automatically map data values to pixels and palettes.
* **Integrated Axes**: Effortlessly add titles, ticks, and gridlines with smart coordinate management.
## Why Tesserax?
In the Python ecosystem, there is a clear divide between **data visualization** (plotting numbers) and **diagrammatic representation** (drawing concepts).
Tesserax is for **Scientific Drawing**, providing the low-level primitives needed for total layout authority.
Libraries like **Matplotlib** map data to charts. Tesserax maps concepts to geometry. Use Tesserax for the schematics, geometric proofs, and algorithmic walkthroughs in your papers.
**TikZ** is the industry standard for academic figures but uses a cryptic macro language. Tesserax brings that same "total-control" philosophy to **Python 3.12**, giving you coordinate-invariant precision with the power of Python's loops and types.
## Contribution
Tesserax is free as in both free beer and free speech. License is MIT.
Contributions are always welcomed! Fork, clone, and submit a pull request.
| text/markdown | null | Alejandro Piad <apiad@apiad.net> | null | null | null | animation, charts, diagrams, graphics, physics, svg, visualization | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Graphics :: Presentation",
"Topic :: Scientific/Engineering :: Visualization"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"cairosvg>=2.8.2; extra == \"export\"",
"imageio[ffmpeg]>=2.37.2; extra == \"export\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:15:41.648757 | tesserax-0.10.0.tar.gz | 168,926 | d8/95/a25947564b1163bf8583af4c2b5988a70b5d50fc322e166737d06a1f9302/tesserax-0.10.0.tar.gz | source | sdist | null | false | 8049d575d3b5d18a6f5721de61e4997e | 0b05e7a48f0b0fa9e6c1debcfbba74b5d1b7d10b81b61618c25667efe43e11be | d895a25947564b1163bf8583af4c2b5988a70b5d50fc322e166737d06a1f9302 | null | [
"LICENSE"
] | 201 |
2.4 | atrace | 0.8.0 | Generate trace tables for simple programs | # Atrace

Automatically prints a trace table of **simple programs**.
This module is intended for beginner programmers.
## Usage
Just import the module:
```
import atrace # noqa
x, y = 1, 3
...
```
For instance, running `test/programs/small_example.py` will print this table at the end:
```
╭────────┬─────┬─────┬────────┬─────────────┬───────────────────┬──────────────╮
│ line │ x │ y │ t │ (greet) n │ (greet) message │ output │
├────────┼─────┼─────┼────────┼─────────────┼───────────────────┼──────────────┤
│ 3 │ 1 │ 3 │ │ │ │ │
│ 6 │ 2 │ │ │ │ │ │
│ 6 │ 3 │ │ │ │ │ │
│ 8 │ │ │ │ │ │ x: 3 │
│ 10 │ │ │ (1, 2) │ │ │ │
│ 13 │ │ │ │ bob │ │ │
│ 14 │ │ │ │ │ bonjour bob! │ │
│ 18 │ │ │ │ │ │ bonjour bob! │
╰────────┴─────┴─────┴────────┴─────────────┴───────────────────┴──────────────╯
```
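Line-by-line tracing of this kind is typically built on CPython's `sys.settrace` hook, which fires a callback for each executed line (and is also why debuggers conflict with it). A heavily simplified illustration of the mechanism, not atrace's actual code:

```python
import sys

def make_tracer(rows):
    prev = {}
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "program":
            # A "line" event fires *before* a line runs, so f_locals shows
            # the effect of the previously executed line.
            changed = {k: v for k, v in frame.f_locals.items() if prev.get(k) != v}
            if changed:
                rows.append(changed)
            prev.update(frame.f_locals)
        return tracer
    return tracer

def program():
    x = 1
    y = x + 2
    x = x + y

rows = []
sys.settrace(make_tracer(rows))
program()
sys.settrace(None)
# The last line's effect would only appear on the "return" event,
# which this sketch skips for brevity.
print(rows)  # [{'x': 1}, {'y': 3}]
```

A real tool additionally records line numbers, function scopes, and captured output, which is how the columns of the table above are produced.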
## Compatibility
Requires python version 3.10 or higher.
Tested with:
- cpython
- pypy
- Thonny
## Does not work well with
- Multithreaded programs
- Multi-module programs
- Debuggers
- Classes
- Variables containing functions
- Context managers
- Generators
| text/markdown | null | Nicholas Wolff <nwolff@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"tabulate>=0.9.0"
] | [] | [] | [] | [
"Repository, https://github.com/nwolff/atrace.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:15:39.164502 | atrace-0.8.0-py3-none-any.whl | 7,522 | c5/67/fda43b0c3c2550f2c76b7fd1adf834587c60cc85e5a30e43496cbac1fbb8/atrace-0.8.0-py3-none-any.whl | py3 | bdist_wheel | null | false | cb843fabed9c77c068faa29c0e1022b4 | d24c4c9e0cfa16a77b5c7aff7655cf03510c525bbf82602d5d6eb24b9d380481 | c567fda43b0c3c2550f2c76b7fd1adf834587c60cc85e5a30e43496cbac1fbb8 | MIT | [
"LICENSE"
] | 236 |
2.4 | pl-run-program | 0.0.22 | A simple interface for running non-python programs in python. | # pl-run-program
A simple interface for running non-python programs in python.
## Project Status
Alpha. Expect breaking changes.
## Installation
```
uv add pl-run-program
```
## Usage
```python
from pl_run_program import run_program, run_simple_program, program_at_path
from pathlib import Path
# Paths are validated to ensure they're absolute, executable, exist, etc.
echo_program = program_at_path(Path("/bin/echo"))
# run_program returns a ProgramResult object with stdout, stderr, and return code.
result = run_program(echo_program, ["Hello, World!"])
print(f"run_program result: {result}")
# run_simple_program is a convenience function that returns only stdout and raises an exception if the program returns a non-zero return code.
result = run_simple_program(echo_program, ["Hello, World!"])
print(f"run_simple_program result: {result}")
```
## Releasing
Run `./release.sh`.
## License
Licensed under the Apache License 2.0. See [LICENSE](./LICENSE).
| text/markdown | Peter Lavigne | null | null | null | null | null | [
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.14 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Peter-Lavigne/pl-run-program",
"Repository, https://github.com/Peter-Lavigne/pl-run-program"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:15:26.160966 | pl_run_program-0.0.22.tar.gz | 6,639 | d0/2f/9e079f1ed8c3c55b5f194fce034441efd1469e4dea20d53958323056e302/pl_run_program-0.0.22.tar.gz | source | sdist | null | false | 66b77f75d5f41abae75d26cbc39a2308 | 1cc3d28da871480255d6e1b6d18b0e0d6e6e9df7d9fa0534e005fda538de60bf | d02f9e079f1ed8c3c55b5f194fce034441efd1469e4dea20d53958323056e302 | Apache-2.0 | [
"LICENSE"
] | 189 |
2.4 | tenant-rpa-sdk | 0.1.0 | SDK de logging resiliente para RPAs no backend rpa_logs | # tenant-rpa-sdk
Python SDK for RPAs focused on resilient logging to the `rpa_logs` backend, with no coupling to `tenant-system`.
## Goal
- Standardize RPA logging in `rpa_logs`
- Avoid event loss during temporary DB unavailability
- Provide a simple API for use in any Python automation
## Scope
- Logging only (no `start/heartbeat/finish` lifecycle)
- Postgres-only (`psycopg`)
- Dedupe by `idempotency_key`
- Local JSONL fallback + reconciliation via `flush_pending()`
## Installation
### Local development
```bash
cd tenant-rpa-sdk
python -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -e .[dev]
```
### Use in RPAs (production)
Use a pinned version:
```txt
tenant-rpa-sdk==0.1.0
```
Channel 1 (official) - public PyPI:
```bash
pip install tenant-rpa-sdk==0.1.0
```
Channel 2 - GitHub package (Release asset):
```bash
pip install "https://github.com/tech-ops-ai/tenant-rpa-sdk/releases/download/tenant-rpa-sdk-v0.1.0/tenant_rpa_sdk-0.1.0-py3-none-any.whl"
```
If your environment uses a private mirror, adjust `--index-url`/`--extra-index-url` for `pip`.
## Environment variables
- `MAESTRO_PROD` or `MAESTRO`
- `RPA_LOGS_TABLE` (default `rpa_logs`)
- `RPA_LOG_SOURCE` (default `rpa.app`)
- `RPA_LOG_FALLBACK_PATH` (default `/var/log/tenant-rpa/pending_rpa_logs.jsonl`)
- `RPA_LOG_FLUSH_ON_START` (default `true`)
- `RPA_LOG_DB_TIMEOUT_SECONDS` (default `5`)
- `RPA_LOG_MAX_MESSAGE_CHARS` (default `2000`)
- `RPA_LOG_MAX_METADATA_BYTES` (default `32768`)
## Maestro connection
Expected format:
```txt
postgres,<host>,<port>,<database>,<user>,<password>
```
A chained alias is also accepted:
```txt
MAESTRO_PROD=MAESTRO
MAESTRO=postgres,db.host,5432,maestro,rpa,***
```
If `MAESTRO_PROD` is not set, the SDK automatically falls back to `MAESTRO`.
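The connection string above is plain comma-separated text. A minimal stdlib-only sketch of parsing it — illustrative only; `parse_maestro` is a hypothetical helper and the SDK's actual parser may differ:

```python
def parse_maestro(value: str) -> dict:
    """Parse a Maestro connection string of the documented shape:
    postgres,<host>,<port>,<database>,<user>,<password>

    maxsplit=5 keeps any commas inside the password intact.
    """
    scheme, host, port, database, user, password = value.split(",", 5)
    if scheme != "postgres":
        raise ValueError(f"unsupported scheme: {scheme!r}")
    return {
        "host": host,
        "port": int(port),
        "dbname": database,
        "user": user,
        "password": password,
    }

# Example with made-up credentials
conn = parse_maestro("postgres,db.host,5432,maestro,rpa,secret")
```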
## Quick start
```python
from tenant_rpa_sdk import TenantBackendLogger
logger = TenantBackendLogger(
project_name="c6-egv",
source="rpa.cart-abandonment",
execution_id=123,
db_alias_env="MAESTRO_PROD",
fallback_path="/tmp/pending_rpa_logs.jsonl",
echo_stdout=True,
flush_on_start=True,
)
logger.info("RPA iniciado", metadata={"step": "bootstrap"})
logger.warning("API lenta", metadata={"latency_ms": 1200})
logger.error("Falha em consulta", metadata={"query_id": "q-09", "token": "secret"})
result = logger.flush_pending(limit=200)
print(result)
```
## Public API
Main class: `tenant_rpa_sdk.logger.TenantBackendLogger`
Methods:
- `debug(message, *, event_type="APP", metadata=None, execution_id=None)`
- `info(...)`
- `warning(...)`
- `error(...)`
- `critical(...)`
- `log(level, message, *, event_type="APP", metadata=None, execution_id=None)`
- `bind_execution(execution_id)`
- `flush_pending(limit=200) -> dict`
## JSONL fallback contract
Each line contains:
- `idempotency_key`
- `created_at`
- `payload`
- `attempt`
- `last_error`
- `next_retry_at`
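For illustration, one pending line might look like this — every value below is made up, and the `payload` shape shown is hypothetical; only the top-level field names come from the contract above:

```json
{"idempotency_key": "a1b2c3d4", "created_at": "2026-01-01T12:00:00Z", "payload": {"level": "INFO", "message": "example"}, "attempt": 2, "last_error": "connection timeout", "next_retry_at": "2026-01-01T12:05:00Z"}
```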
## Versioning and releases
- Policy: strict SemVer (`MAJOR.MINOR.PATCH`)
- Release tags: `tenant-rpa-sdk-vX.Y.Z`
- Automatic PyPI publication via GitHub Actions when a valid tag is created
- The `.whl` and `.tar.gz` artifacts are also attached to the GitHub Release for the same tag
See:
- `CHANGELOG.md`
- `CONTRIBUTING.md`
- `docs/versionamento.md`
- `docs/release.md`
- `docs/uso-em-rpas.md`
- `docs/faq.md`
## Tests
Unit tests:
```bash
pytest tests/unit
```
Integration tests (real Postgres, opt-in):
```bash
export TENANT_RPA_SDK_INTEGRATION=true
pytest tests/integration
```
| text/markdown | Tenant RPA Team | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"psycopg[binary]<4.0,>=3.2",
"pytest<10.0,>=8.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:15:19.527287 | tenant_rpa_sdk-0.1.0.tar.gz | 10,787 | 18/ef/fc1859ee284de8d4c65e15a4ce07e9dafecfc5bda6470331edd2b5d03a6a/tenant_rpa_sdk-0.1.0.tar.gz | source | sdist | null | false | 4c7d220fbbedf9e3870e5e4c948d5850 | 6c9788ff7e2877c6025526f0abbdecfc6d8b4b1b5c2f66619a1c2d51543006d1 | 18effc1859ee284de8d4c65e15a4ce07e9dafecfc5bda6470331edd2b5d03a6a | LicenseRef-Proprietary | [] | 218 |
2.4 | netron | 8.9.1 | Viewer for neural network, deep learning and machine learning models. | Netron is a viewer for neural network, deep learning and machine learning models.
Netron supports ONNX, TensorFlow Lite, Core ML, Keras, Caffe, Darknet, PyTorch, TensorFlow.js, Safetensors and NumPy.
Netron has experimental support for TorchScript, torch.export, ExecuTorch, TensorFlow, OpenVINO, RKNN, ncnn, MNN, PaddlePaddle, GGUF and scikit-learn.
| text/markdown | null | Lutz Roeder <lutzroeder@users.noreply.github.com> | null | null | MIT | onnx, keras, tensorflow, tflite, coreml, mxnet, caffe, caffe2, torchscript, pytorch, ncnn, mnn, openvino, darknet, paddlepaddle, chainer, artificial intelligence, machine learning, deep learning, neural network, visualizer, viewer | [
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Visualization"
] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://github.com/lutzroeder/netron"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T18:15:10.370902 | netron-8.9.1-py3-none-any.whl | 3,352,260 | 58/00/c6b966b9d9753b8304e96bd32d60311e0171048194c833b0bcef08b6ff99/netron-8.9.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 9f4d0a9d8574e2e7f94378f5d5f3c0b9 | 5016c5499c25a74b321ea5676dc02581707beff899e295d04ec9033ac01050ec | 5800c6b966b9d9753b8304e96bd32d60311e0171048194c833b0bcef08b6ff99 | null | [] | 920 |
2.4 | renderflow | 2026.7 | Generic workflow app runtime and Streamlit renderer with provider plugins | # renderflow
Workflow runtime and rendering API for:
- Streamlit UI
- CLI execution
- HTML report export
- Individual figure export
## Core Idea
`renderflow` owns the interface contract and rendering behavior.
Provider packages should mostly define workflows (`run_workflow` + params), not custom UI/CLI plumbing.
## Demo App
Live Streamlit demo:
https://demo-for-renderflow.streamlit.app/
## CLI
List installed providers:
```bash
renderflow list-providers
```
List provider workflows:
```bash
renderflow list-workflows --provider crsd-inspector
```
Show interpreted workflow parameters:
```bash
renderflow show-params --provider crsd-inspector --workflow signal_analysis
```
Provider-scoped CLI (no provider `cli.py` needed):
```toml
[project.scripts]
crsd-inspector = "renderflow.cli:main"
```
With that entrypoint:
- `crsd-inspector -h` shows available workflows.
- `crsd-inspector range_doppler_processing -h` shows only that workflow's parameters/defaults/help.
Execute a workflow in terminal mode:
```bash
renderflow execute \
--provider crsd-inspector \
--workflow signal_analysis \
--param crsd_directory=examples \
--param prf_hz=1000 \
--output terminal
```
Execute and export both:
- one combined report file (`--html`)
- per-figure files (`--save-figures-dir` + `--figure-format`)
```bash
renderflow execute \
--provider crsd-inspector \
--workflow signal_analysis \
--param crsd_directory=examples \
--html output/report.html \
--save-figures-dir output/figures \
--figure-format html
```
Add per-figure JSON in the same run:
```bash
renderflow execute \
--provider crsd-inspector \
--workflow signal_analysis \
--param crsd_directory=examples \
--html output/report.html \
--save-figures-dir output/figures \
--figure-format html \
--figure-format json
```
Export multiple figure formats in a single run:
```bash
renderflow execute \
--provider crsd-inspector \
--workflow signal_analysis \
--param crsd_directory=examples \
--save-figures-dir output/figures \
--figure-format html \
--figure-format json
```
Comma-separated format lists are also accepted:
```bash
renderflow execute \
--provider crsd-inspector \
--workflow signal_analysis \
--param crsd_directory=examples \
--html output/report.html \
--save-figures-dir output/figures \
--figure-format html,json
```
If `--figure-format` is omitted, per-figure export defaults to `html`.
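The accumulation and comma-splitting described above can be sketched as follows — `resolve_figure_formats` is an invented name, not renderflow's actual implementation:

```python
def resolve_figure_formats(raw_values):
    """Combine repeated --figure-format flags, allowing comma-separated lists.

    raw_values: the collected value of every --figure-format occurrence.
    Returns the formats to export, defaulting to ["html"] when none given.
    """
    formats = []
    for value in raw_values:
        for fmt in value.split(","):
            fmt = fmt.strip()
            if fmt and fmt not in formats:  # de-duplicate, keep order
                formats.append(fmt)
    return formats or ["html"]
```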
Image formats (`png`, `jpg`, `jpeg`, `svg`, `pdf`) require Kaleido. `renderflow` includes `kaleido` as a dependency.
Launch Streamlit:
```bash
renderflow run --provider crsd-inspector
```
## Shell Completion
Tab completion is supported via `argcomplete` for both:
- `renderflow ...`
- provider-scoped commands (for example `crsd-inspector ...`)
Activate in Bash for current shell:
```bash
eval "$(register-python-argcomplete renderflow)"
eval "$(register-python-argcomplete crsd-inspector)"
```
After activation, workflow subcommands and options complete with `Tab`.
## Workflow Result Contract
Use `renderflow.workflow.Workflow` inside provider workflows:
```python
from renderflow.workflow import Workflow
workflow = Workflow(name="My Workflow", description="...")
workflow.params = {
"threshold": {
"type": "number",
"default": 0.5,
"label": "Threshold",
"description": "Minimum score to keep",
},
}
workflow.add_text("Summary text")
workflow.add_table("Metrics", {"name": ["a"], "value": [1]})
workflow.add_plot(fig, title="Spectrum", figure_id="spectrum", save=True)
workflow.add_code("print('debug')", language="python")
return workflow.build()
```
`add_plot(..., save=False)` marks a plot as not exportable when using figure-save operations.
Minimum return contract from `run_workflow(...)`:
- must return a `dict`
- either:
- modern shape: `{"results": [ ... ]}`
- legacy shape: `{"text": [...], "tables": [...], "plots": [...]}`
- for modern shape, each item in `results` must be a dict with:
- `type` in `text | table | plot | code`
- if `type == "plot"`, item must include `figure`
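To make the contract concrete, here's a small stdlib-only checker — illustrative only; `check_workflow_result` is a hypothetical helper, not part of renderflow's API, and renderflow's own validation may be stricter:

```python
def check_workflow_result(result):
    """Check the minimum run_workflow(...) return contract described above."""
    if not isinstance(result, dict):
        return False
    if "results" in result:  # modern shape
        items = result["results"]
        if not isinstance(items, list):
            return False
        for item in items:
            if not isinstance(item, dict):
                return False
            if item.get("type") not in ("text", "table", "plot", "code"):
                return False
            if item["type"] == "plot" and "figure" not in item:
                return False
        return True
    # legacy shape: at least one known key, and each present key is a list
    legacy = ("text", "tables", "plots")
    if not any(key in result for key in legacy):
        return False
    return all(isinstance(result.get(key, []), list) for key in legacy)
```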
## Provider Contract Options
### 1) Explicit `AppSpec` (fully explicit)
Entry point:
```toml
[project.entry-points."renderflow.providers"]
my-provider = "my_provider.app_definition:get_app_spec"
```
`get_app_spec()` returns `renderflow.contracts.AppSpec`.
### 2) Auto-Defined Provider (minimal)
If no `app_definition` exists, `renderflow` auto-builds from:
- `<provider>.workflows.*` modules with `run_workflow(...)`
- optional `<provider>.renderflow` module:
- `APP_NAME = "..."`
- `WORKFLOWS_PACKAGE = "provider.custom_workflows"` (optional)
- optional custom metadata constants for provider setup
Workflow parameters are pulled from:
1. `workflow.params` if a `workflow` object exists in the module
2. `PARAMS` module global
3. inferred function signature defaults
This lets packages like `crsd-inspector` keep only workflow definitions and optional init logic, while `renderflow` handles CLI + Streamlit parameter interpretation and rendering.
| text/markdown | Brian Day | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"streamlit>=1.31.0",
"pandas>=2.0.0",
"plotly>=5.18.0",
"kaleido>=0.2.1",
"argcomplete>=3.5.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:14:59.509415 | renderflow-2026.7.tar.gz | 20,138 | 72/84/6c79e811843817ac357a45b33fc2f6ff22509539dfd5313c26acc8b016bd/renderflow-2026.7.tar.gz | source | sdist | null | false | 6c1b2734171e671a11b18720e258a330 | f997127879c7b71bab80a9f431577796c73e80891579f7016ebe7803bdaed78f | 72846c79e811843817ac357a45b33fc2f6ff22509539dfd5313c26acc8b016bd | null | [] | 198 |
2.4 | cyclecore-pq | 0.2.0 | Python SDK for CycleCore PQ — Post-Quantum Cryptography as a Service | # CycleCore PQ — Python SDK
Post-quantum cryptography as a service. Dilithium3 signing, Kyber768 encryption, AES-256-GCM — one API call.
## Install
```bash
pip install cyclecore-pq
```
## Quick Start
```python
from cyclecore_pq import CycleCoreClient
client = CycleCoreClient("pq_live_YOUR_KEY")
# Sign
result = client.sign(b"hello world")
print(result.signature)
# Verify
verified = client.verify(b"hello world", result.signature_bytes)
print(verified.valid) # True
# Encrypt / Decrypt
encrypted = client.encrypt(b"sensitive data")
decrypted = client.decrypt(encrypted.ciphertext_bytes)
print(decrypted.plaintext_bytes) # b"sensitive data"
```
## Async
```python
from cyclecore_pq import AsyncCycleCoreClient
async with AsyncCycleCoreClient("pq_live_YOUR_KEY") as client:
result = await client.sign(b"hello world")
```
## Methods
| Method | Description |
|--------|-------------|
| `sign(message)` | Sign with Dilithium3 |
| `verify(message, signature)` | Verify a signature |
| `encrypt(plaintext)` | Encrypt with Kyber768 + AES-256-GCM |
| `decrypt(ciphertext)` | Decrypt a ciphertext blob |
| `sign_batch(messages)` | Batch sign (up to 1,000) |
| `encrypt_batch(plaintexts)` | Batch encrypt (up to 1,000) |
| `handshake_init()` | Start PQ key exchange |
| `handshake_respond(...)` | Respond to key exchange |
| `handshake_finish(...)` | Complete key exchange |
| `attest(data)` | Add to attestation chain |
| `attest_verify(chain_id)` | Verify chain integrity |
| `attest_export(chain_id)` | Export chain for audit |
| `keys()` | Get your public keys |
| `rotate_keys()` | Rotate key pairs |
| `usage_stats()` | Usage statistics |
| `health()` | API health check |
## Errors
```python
from cyclecore_pq import AuthenticationError, RateLimitError, ValidationError
try:
result = client.sign(b"hello")
except AuthenticationError:
print("Invalid API key")
except RateLimitError:
print("Rate limit exceeded")
except ValidationError:
print("Bad request")
```
## Docs
Full API documentation: [cyclecore.ai/pq/docs](https://cyclecore.ai/pq/docs)
| text/markdown | null | CycleCore Technologies <hi@cyclecore.ai> | null | null | null | post-quantum, cryptography, dilithium, kyber, pqc | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Security :: Cryptography",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.24.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://cyclecore.ai/pq",
"Documentation, https://cyclecore.ai/pq/docs"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T18:14:52.122355 | cyclecore_pq-0.2.0.tar.gz | 8,996 | 03/75/0038c22653c1a309666861f430cc740429afbf19eb6d22463f00a3e1238b/cyclecore_pq-0.2.0.tar.gz | source | sdist | null | false | 326a0af992cd2210c23c24569a83dc62 | 005619357466a132c6ea348f5ca6ff9f0773383a89563a3df322f1f8413075cb | 03750038c22653c1a309666861f430cc740429afbf19eb6d22463f00a3e1238b | MIT | [
"LICENSE"
] | 206 |
2.4 | omniverse-asset-validator | 1.11.0 | NVIDIA Asset Validator | # Omniverse Asset Validator
An extensible framework to validate [OpenUSD](https://openusd.org) assets.
Inspired by Pixar's [usdchecker](https://graphics.pixar.com/usd/release/toolset.html#usdchecker),
this library extends validation capabilities with additional rules and provides automatic issue fixing.
## Features
- Rule-based validation — Modular rule interface with registration mechanism for custom validators
- Flexible engine — Run validation on a `Usd.Stage`, individual layers, or recursively search folders
- Auto-fixing — Issue fixing interface for applying automated corrections when rules provide suggestions
- Command line interface — Standalone CLI for validation outside of GUI applications
- Lightweight — Pure Python implementation requiring only OpenUSD
## Installation
Install from PyPI:
```bash
pip install omniverse-asset-validator
```
## Optional Dependencies
For full functionality, install with optional dependencies:
```bash
# Include usd-core
pip install omniverse-asset-validator[usd]
# Include NumPy (for optimizations)
pip install omniverse-asset-validator[numpy]
# Install all optional dependencies
pip install omniverse-asset-validator[usd,numpy]
```
## Basic usage
```python
from omni.asset_validator import ValidationEngine, IssueFixer
engine = ValidationEngine()
results = engine.validate("path/to/asset.usda")
for issue in results.issues():
print(f"{issue.severity}: {issue.message}")
fixer = IssueFixer()
fix_results = fixer.fix(results.issues())
for result in fix_results:
print(f"{result.status}: {result.issue.message}")
```
The `ValidationEngine` also supports selecting specific rules with `enableRule()` / `disableRule()`
and filtering by category with `enableCategory()`.
## Command Line Interface
The `omni_asset_validate` command provides validation from the terminal:
```bash
# Validate a single file
omni_asset_validate asset.usda
# Validate a directory recursively
omni_asset_validate ./assets/
# Apply automatic fixes
omni_asset_validate --fix asset.usda
# Validate specific categories
omni_asset_validate --category Material --category Geometry asset.usda
# Enable specific rules only
omni_asset_validate --no-init-rules --rule StageMetadataChecker asset.usda
# Export results to CSV
omni_asset_validate --csv-output results.csv asset.usda
# Show help
omni_asset_validate --help
```
## Documentation
- [Full Documentation](https://docs.omniverse.nvidia.com/kit/docs/asset-validator)
- [API Reference](https://docs.omniverse.nvidia.com/kit/docs/asset-validator/latest/source/python/docs/api.html)
- [Available Rules](https://docs.omniverse.nvidia.com/kit/docs/asset-validator/latest/source/python/docs/rules.html)
## Requirements
- Python 3.10 - 3.12
- OpenUSD 22.11 or later
## License
Apache-2.0
| text/markdown | NVIDIA | null | null | null | Apache-2.0 | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"numpy<2,>=1; extra == \"numpy\"",
"usd-core>=22.11; extra == \"usd\""
] | [] | [] | [] | [
"Homepage, https://docs.omniverse.nvidia.com/kit/docs/asset-validator",
"Documentation, https://docs.omniverse.nvidia.com/kit/docs/asset-validator"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T18:14:49.820677 | omniverse_asset_validator-1.11.0-py3-none-any.whl | 236,671 | d3/ac/e336b19a4d1dc77ef6b23455cde5416695f9de10cb8afddc2af58b1562bb/omniverse_asset_validator-1.11.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 65fa62ba6e2751c20a3afb51aace7ef0 | ea6c75e7594d439695ad1139d070966fd0f8ef126d719be774f2b95365919a54 | d3ace336b19a4d1dc77ef6b23455cde5416695f9de10cb8afddc2af58b1562bb | null | [] | 161 |
2.3 | mhctyper | 0.1.8 | MHC Class I and II typer based on polysolver algorithm. | # mhctyper
[](https://pypi.org/project/mhctyper/)


Polars-accelerated MHC class I and II typing based on the Polysolver algorithm.
## Features
- Supports both class I and II typing with good
[accuracy](https://github.com/svm-zhang/hla_benchmark?tab=readme-ov-file)
- Runtime speedup powered by polars
- Minimal I/O operations
- Easy integration into workflows/pipelines via a proper CLI and packaging
## Installation
mhctyper can be installed from PyPI:
```bash
pip install mhctyper
```
## Quick start
`mhctyper` simply requires 2 inputs:
- Alignment to HLA alleles in BAM format: `$bam`.
- Population frequency from the original `polysolver`: `HLA_FREQ.txt`.
```bash
mhctyper --bam "$bam" \
--freq "HLA_FREQ.txt" \
--outdir "$outdir" \
--nproc 8
```
Please refer to [documentation](https://svm-zhang.github.io/mhctyper) for more details.
| text/markdown | null | Simo Zhang <svm.zhang@gmail.com> | null | Simo Zhang <svm.zhang@gmail.com> | MIT | HLA typing, bioinformatics, genomics, sequencing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.1.1",
"polars>=1.7.0",
"pysam>=0.22.1",
"tinyscibio>=0.4.1",
"tqdm>=4.66.5"
] | [] | [] | [] | [
"Source, https://github.com/svm-zhang/mhctyper",
"Documentation, https://svm-zhang.github.io/mhctyper/",
"Issues, https://github.com/svm-zhang/mhctyper/issues"
] | twine/5.1.1 CPython/3.12.12 | 2026-02-20T18:12:09.182363 | mhctyper-0.1.8.tar.gz | 10,532 | 2f/76/fad00242a082b828a43a2829f8e446cc26f5a7c23ac40eff68e9684d961a/mhctyper-0.1.8.tar.gz | source | sdist | null | false | db8d6ad36fb1c038a8b93c572a22aee8 | 351f32c25026cfe23cb4e878c13a4ce29cfa3d60313d5cb99b9334cd21240c58 | 2f76fad00242a082b828a43a2829f8e446cc26f5a7c23ac40eff68e9684d961a | null | [] | 202 |
2.3 | broccoli-ml | 16.1.0 | Some useful Pytorch models, circa 2025 | # broccoli
Some useful PyTorch models, circa 2025.

# Getting started
You can install broccoli with
```
pip install broccoli-ml
```
PyTorch is a peer dependency of `broccoli`, which means
* You will need to make sure you have PyTorch installed in order to use `broccoli`
* PyTorch will **not** be installed automatically when you install `broccoli`
We take this approach because PyTorch versioning is environment-specific and we don't know where you will want to use `broccoli`. If we automatically install PyTorch for you, there's a good chance we would get it wrong!
Therefore, please also make sure you install PyTorch.
# Usage examples
... | text/markdown | Nicholas Bailey | null | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"einops<0.9.0,>=0.8.1"
] | [] | [] | [] | [] | poetry/2.1.3 CPython/3.9.5 Darwin/25.2.0 | 2026-02-20T18:11:51.176024 | broccoli_ml-16.1.0.tar.gz | 20,015 | 14/98/fd56ae4ed67480635803140d292da4da2b2fe5073aa7c139a14f514174d1/broccoli_ml-16.1.0.tar.gz | source | sdist | null | false | 415f8cd4e5306cdb6a22be88daa92a27 | f4d387fcb074302fab99d568feb418b3de7bacf597a9298bf6d40c13384f19fb | 1498fd56ae4ed67480635803140d292da4da2b2fe5073aa7c139a14f514174d1 | null | [] | 223 |
2.4 | pml-lang | 0.1.0 | A domain-specific programming language for Prompt Engineers | PromptLang (.pml) allows you to write prompts like code and run them like programs with strict JSON validation, multi-step chaining, and dynamic variables.
| text/markdown | RSN Narasimha | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/yourusername/promptlang | null | >=3.8 | [] | [] | [] | [
"click",
"python-dotenv",
"pydantic",
"google-genai",
"openai"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T18:11:08.999295 | pml_lang-0.1.0.tar.gz | 5,154 | 56/22/b257e374f94413ff3024d9bf50823478aacd039adeed7540a5bc4687b1b1/pml_lang-0.1.0.tar.gz | source | sdist | null | false | 90f7f1f8ab3c1a45d7fddeb0e60b7769 | b76eb5448ac3c9de39982551b8db6e174a928a9ea6df81e527e2a0483a0a8772 | 5622b257e374f94413ff3024d9bf50823478aacd039adeed7540a5bc4687b1b1 | null | [] | 211 |
2.1 | guv-calcs | 0.6.2 | A library for carrying out fluence and irradiance calculations for germicidal UV (GUV) applications. | GUV Calcs
======================
A library for fluence and irradiance calculations for germicidal UV (GUV) applications. Simulate UV light propagation in rooms, calculate disinfection outcomes, and verify safety compliance.
## Installation
Install with pip:
```bash
pip install guv-calcs
```
Or install from source:
```bash
git clone https://github.com/jvbelenky/guv-calcs.git
cd guv-calcs
pip install .
```
## Quick Start
```python
from guv_calcs import Room, Lamp
# Create a room (6m x 4m x 2.7m)
room = Room(x=6, y=4, z=2.7, units="meters")
# Add a lamp at the ceiling, aimed downward
lamp = Lamp(filedata="my_lamp.ies").move(3, 2, 2.7).aim(3, 2, 0)
room.add_lamp(lamp)
# Add standard calculation zones (fluence volume + safety planes)
room.add_standard_zones()
# Run the calculation
room.calculate()
# Access results
fluence = room.calc_zones["WholeRoomFluence"]
print(f"Mean fluence rate: {fluence.values.mean():.2f} µW/cm²")
```
## Examples
### Method Chaining
```python
room = (
Room(x=6, y=4, z=2.7)
.place_lamp("my_lamp.ies") # Auto-positions lamp
.add_standard_zones()
.calculate()
)
```
### Multiple Lamps
```python
room = Room(x=6, y=4, z=2.7)
lamp1 = Lamp(filedata="my_lamp.ies").move(2, 2, 2.7).aim(2, 2, 0)
lamp2 = Lamp(filedata="my_other_lamp.ies").move(4, 2, 2.7).aim(4, 2, 0)
room.add_lamp(lamp1).add_lamp(lamp2)
room.add_standard_zones()
room.calculate()
```
### Custom Calculation Planes
```python
from guv_calcs import Room, Lamp, CalcPlane
room = Room(x=6, y=4, z=2.7)
lamp = Lamp(filedata="my_lamp.ies").move(3, 2, 2.7).aim(3, 2, 0)
room.add_lamp(lamp)
# Add a plane at desk height (0.75m)
workplane = CalcPlane(
zone_id="WorkPlane",
x1=0, x2=6,
y1=0, y2=4,
height=0.75,
x_spacing=0.25,
y_spacing=0.25,
)
room.add_calc_zone(workplane)
room.calculate()
```
### Non-Rectangular Rooms
```python
from guv_calcs import Room, Polygon2D
# L-shaped room
floor = Polygon2D(vertices=[
(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)
])
room = Room(floor_polygon=floor, z=2.7)
```
### Safety Compliance Check
```python
room = (
Room(x=6, y=4, z=2.7)
.place_lamp("aerolamp")
.add_standard_zones()
.calculate()
)
result = room.check_lamps()
print(f"Compliant: {result.compliant}")
```
### Save and Load
```python
# Save
room.save("my_room.guv")
# Load
loaded = Room.load("my_room.guv")
loaded.calculate()
```
## Lamp Keywords
```
lamp = Lamp.from_keyword('ushio_b1')
```
Currently, only 222nm lamps are available, with data downloaded from reports.osluv.org. We welcome collaboration to expand the availability of lamp data.
- `aerolamp` - Aerolamp DevKit
- `beacon` - Beacon
- `lumenizer_zone` - LumenLabs Lumenizer Zone
- `nukit_lantern` - NuKit Lantern
- `nukit_torch` - NuKit Torch
- `sterilray` - SterilRay Germbuster Sabre
- `ushio_b1` - Ushio B1
- `ushio_b15` - Ushio B1.5
- `uvpro222_b1` - Bioabundance UVPro222 B1
- `uvpro222_b2` - Bioabundance UVPro222 B2
## License
Distributed under the MIT License. See `LICENSE.txt` for more information.
## Contact
Vivian Belenky - jvb@osluv.org
Project Link: [https://github.com/jvbelenky/guv-calcs/](https://github.com/jvbelenky/guv-calcs/)
| text/markdown | J. Vivian Belenky | j.vivian.belenky@outlook.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License"
] | [] | https://github.com/jvbelenky/guv-calcs | null | >=3.11 | [] | [] | [] | [
"matplotlib>=3.8.4",
"numpy>=1.26.4",
"openpyxl>=3.1.0",
"pandas>=2.2.2",
"photompy==0.2.0",
"plotly>=6.1.1",
"scipy>=1.13.1",
"kaleido>=1.0.0",
"seaborn>=0.13.2"
] | [] | [] | [] | [] | twine/5.1.1 CPython/3.10.12 | 2026-02-20T18:11:00.618914 | guv_calcs-0.6.2.tar.gz | 519,272 | 14/3b/7a6e5fffa47117c00f61b8e7ac98bf5ee543f73b414404887338ba2c6a78/guv_calcs-0.6.2.tar.gz | source | sdist | null | false | c23508be4fa0735b7bab8d35679c42c4 | 5e65daecf705c92db5cb87c6b7be73b365051fb8eb1d67fb712ccbedb621392a | 143b7a6e5fffa47117c00f61b8e7ac98bf5ee543f73b414404887338ba2c6a78 | null | [] | 208 |
2.4 | aiobp | 1.2.0 | Boilerplate for asyncio service | Asyncio Service Boilerplate
===========================
This module provides a foundation for building microservices using Python's `asyncio` library. Key features include:
* A runner with graceful shutdown
* Task reference management
* A flexible configuration provider
* A logger with colorized output
No dependencies are enforced by default, so you only install what you need.
For basic usage, no additional Python modules are required.
The table below summarizes which optional dependencies to install based on the features you want to use:
| aiobp Feature | Required Module(s) |
|-------------------------|--------------------|
| config (.conf or .json) | msgspec |
| config (.yaml) | msgspec, pyyaml |
| OpenTelemetry logging | opentelemetry-sdk, opentelemetry-exporter-otlp-proto-grpc |
To install with OpenTelemetry support:
```bash
pip install aiobp[otel]
```
Basic example
-------------
```python
import asyncio
from aiobp import runner
async def main():
try:
await asyncio.sleep(60)
except asyncio.CancelledError:
print('Saving data...')
runner(main())
```
OpenTelemetry Logging
---------------------
aiobp supports exporting logs to OpenTelemetry collectors (SigNoz, Jaeger, etc.).
### Configuration
Add OTEL settings to your `LoggingConfig`:
```ini
[log]
level = DEBUG
filename = service.log
otel_endpoint = http://localhost:4317
otel_export_interval = 5
```
| Option | Default | Description |
|----------------------|---------|--------------------------------------------------|
| otel_endpoint | None | OTLP gRPC endpoint (e.g. http://localhost:4317) |
| otel_export_interval | 5 | Export interval in seconds (0 = instant export) |
### Usage
```python
from dataclasses import dataclass
from aiobp.logging import LoggingConfig, setup_logging, log
@dataclass
class Config:
log: LoggingConfig = None
# ... load config ...
setup_logging("my-service-name", config.log)
log.info("This message goes to console, file, and OTEL collector")
```
### Resource Attributes
To add custom resource attributes (like location, environment, etc.), set the standard OTEL environment variable before calling `setup_logging`:
```python
import os
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "location=datacenter1,environment=production"
setup_logging("my-service-name", config.log)
```
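`OTEL_RESOURCE_ATTRIBUTES` follows the standard OpenTelemetry convention of comma-separated `key=value` pairs. The OpenTelemetry SDK does this parsing itself; the sketch below only illustrates how such a string decomposes:

```python
def parse_resource_attributes(raw: str) -> dict:
    """Split an OTEL_RESOURCE_ATTRIBUTES-style string into a dict."""
    attrs = {}
    for pair in raw.split(","):
        if "=" in pair:
            key, _, value = pair.partition("=")
            attrs[key.strip()] = value.strip()
    return attrs

attrs = parse_resource_attributes("location=datacenter1,environment=production")
```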
### Graceful Fallback
If `otel_endpoint` is configured but OpenTelemetry packages are not installed, a warning is logged and the application continues with console/file logging only.
More complex example
--------------------
```python
import asyncio
import sys
from dataclasses import dataclass

import aiohttp

from aiobp import create_task, on_shutdown, runner
from aiobp.config import InvalidConfigFile, sys_argv_or_filenames
from aiobp.config.conf import loader
from aiobp.logging import LoggingConfig, add_devel_log_level, log, setup_logging


@dataclass
class WorkerConfig:
    """Your microservice worker configuration"""

    sleep: int = 5


@dataclass
class Config:
    """Put configurations together"""

    worker: WorkerConfig = None
    log: LoggingConfig = None


async def worker(config: WorkerConfig, client_session: aiohttp.ClientSession) -> int:
    """Perform service work"""
    attempts = 0
    try:
        async with client_session.get('http://python.org') as resp:
            assert resp.status == 200
            log.debug('Page length %d', len(await resp.text()))
        attempts += 1
        await asyncio.sleep(config.sleep)
    except asyncio.CancelledError:
        log.info('Doing some shutdown work')
        await client_session.post('http://localhost/service/attempts', data={'attempts': attempts})
    return attempts


async def service(config: Config):
    """Your microservice"""
    client_session = aiohttp.ClientSession()
    on_shutdown(client_session.close, after_tasks_cancel=True)
    create_task(worker(config.worker, client_session), 'PythonFetcher')
    # you can do some monitoring, statistics collection, etc.
    # or just let the method finish and the runner will wait for Ctrl+C or kill


def main():
    """Example microservice"""
    add_devel_log_level()
    try:
        config_filename = sys_argv_or_filenames('service.local.conf', 'service.conf')
        config = loader(Config, config_filename)
    except InvalidConfigFile as error:
        print(f'Invalid configuration: {error}')
        sys.exit(1)
    setup_logging('my-service-name', config.log)
    log.info('Using config file: %s', config_filename)
    runner(service(config))


if __name__ == '__main__':
    main()
```
| text/markdown | null | "INSOFT s.r.o." <helpdesk@insoft.cz> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"msgspec>=0.19.0; extra == \"logging\"",
"pyyaml>=6.0.2; extra == \"logging-yaml\"",
"opentelemetry-sdk>=1.20.0; extra == \"logging-otel\"",
"opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == \"logging-otel\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:10:03.303637 | aiobp-1.2.0.tar.gz | 12,403 | f7/b8/fa44daacf450d2bea6878ae42e04018a28388f4c3cf8701897a73cf9efe7/aiobp-1.2.0.tar.gz | source | sdist | null | false | 0c3a527d7e843c17c3c836024a8e7276 | 3c6c427157a197ba08a7d2ffe7066a09a28d7a2633cfdcf674b28558763c4695 | f7b8fa44daacf450d2bea6878ae42e04018a28388f4c3cf8701897a73cf9efe7 | MIT | [] | 222 |
2.4 | ferret-scan | 1.5.2 | Sensitive data detection tool with pre-commit hook support | # Ferret Scan Python Package
<div align="center">
<img src="https://raw.githubusercontent.com/awslabs/ferret-scan/main/docs/images/ferret-scan-logo.png" alt="Ferret Scan Logo" width="200"/>
</div>
A Python wrapper for [Ferret Scan](https://github.com/awslabs/ferret-scan), a sensitive data detection tool. This package provides easy installation and seamless pre-commit hook integration.
## Installation
```bash
pip install ferret-scan
```
## Usage
### Command Line
After installation, use `ferret-scan` exactly like the native binary:
```bash
# Basic scan
ferret-scan --file document.txt

# JSON output
ferret-scan --file document.txt --format json

# Quiet mode for scripts
ferret-scan --file document.txt --quiet

# Pre-commit mode with optimizations
ferret-scan --pre-commit-mode --confidence high,medium --checks all
```
### Pre-commit Hook
Ferret Scan provides multiple pre-commit hook configurations for different security requirements. Add to your `.pre-commit-config.yaml`:
#### Default Configuration (Recommended)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan
```
#### Strict Security (Blocks on high confidence findings)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan-strict
```
#### Advisory Mode (Shows findings but never blocks)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan-advisory
```
#### Secrets Only (Focus on API keys and tokens)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan-secrets
```
#### Financial Data (Credit cards and financial info)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan-financial
```
#### PII Detection (SSN, passport, email)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan-pii
```
#### Metadata Check (Document metadata scanning)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan-metadata
```
#### CI/CD Optimized (Structured output for pipelines)
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan-ci
```
#### Custom Configuration
You can also customize any hook with additional arguments:
```yaml
repos:
  - repo: https://github.com/awslabs/ferret-scan
    rev: v1.0.0
    hooks:
      - id: ferret-scan
        args: ['--confidence', 'high', '--checks', 'CREDIT_CARD,SECRETS', '--verbose']
```
#### Local Installation
For local installations, use the `ferret-scan` command directly:
```yaml
repos:
  - repo: local
    hooks:
      - id: ferret-scan
        name: Ferret Scan - Sensitive Data Detection
        entry: ferret-scan
        language: system
        files: '\.(txt|py|js|ts|go|java|json|yaml|yml|md|csv|log|conf|config|ini|env)$'
        args: ['--pre-commit-mode', '--confidence', 'high,medium']
```
## How It Works
This Python package:
1. **Automatic Binary Download**: Downloads the appropriate ferret-scan binary for your platform (Linux/macOS/Windows, x86_64/ARM64)
2. **Transparent Execution**: Passes all arguments directly to the native binary
3. **Cross-Platform**: Works on all platforms supported by ferret-scan
4. **Pre-commit Ready**: Integrates seamlessly with pre-commit hooks with automatic optimizations
## Supported Platforms
- **Linux**: x86_64, ARM64
- **macOS**: x86_64 (Intel), ARM64 (Apple Silicon)
- **Windows**: x86_64, ARM64
## Features
All features of the native ferret-scan binary are available:
- **Sensitive Data Detection**: Credit cards, passports, SSNs, API keys, etc.
- **Multiple Formats**: Text, JSON, CSV, YAML, JUnit, GitLab SAST output
- **Document Processing**: PDF, Office documents, images
- **Pre-commit Optimizations**: Automatic quiet mode, no colors, appropriate exit codes
- **Suppression Rules**: Manage false positives
- **Configuration**: YAML config files and profiles
- **Redaction**: Remove sensitive data from documents
## Command Line Options
The Python package supports all command-line options of the native binary:
- `--file`: Input file, directory, or glob pattern
- `--format`: Output format (text, json, csv, yaml, junit, gitlab-sast)
- `--confidence`: Confidence levels (high, medium, low, combinations)
- `--checks`: Specific checks to run (CREDIT_CARD, SECRETS, SSN, etc.)
- `--pre-commit-mode`: Enable pre-commit optimizations
- `--verbose`: Detailed information for findings
- `--quiet`: Suppress progress output
- `--no-color`: Disable colored output
- `--recursive`: Recursively scan directories
- `--enable-preprocessors`: Enable document text extraction
- `--config`: Configuration file path
- `--profile`: Configuration profile name
## Requirements
- Python 3.7+
- Internet connection (for initial binary download)
## License
Apache License 2.0 - see the [LICENSE](https://github.com/awslabs/ferret-scan/blob/main/LICENSE) file for details.
## Contributing
See the main [Ferret Scan repository](https://github.com/awslabs/ferret-scan) for contribution guidelines.
| text/markdown | AWS | AWS <ferret-scan@amazon.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security",
"Topic :: Software Development :: Quality Assurance"
] | [] | https://github.com/awslabs/ferret-scan | null | >=3.7 | [] | [] | [] | [
"requests>=2.25.0",
"requests>=2.25.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/awslabs/ferret-scan",
"Repository, https://github.com/awslabs/ferret-scan",
"Issues, https://github.com/awslabs/ferret-scan/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:09:59.516753 | ferret_scan-1.5.2.tar.gz | 52,349,460 | 52/42/2551207a61051142378f628fe1d7fdf81a9b8a5e0967be2e05c45f63d96b/ferret_scan-1.5.2.tar.gz | source | sdist | null | false | 8ce747360f5a5792a999c706dd1d63d3 | 2b2209cc854f37caf32b60cf3596ae93fa9cd01726f17d8e53b8b0239012cc05 | 52422551207a61051142378f628fe1d7fdf81a9b8a5e0967be2e05c45f63d96b | Apache-2.0 | [
"LICENSE.txt"
] | 282 |
2.4 | appinfra | 0.4.0 | Infrastructure framework for Python applications | # appinfra



[](https://peps.python.org/pep-0561/)
[](https://github.com/astral-sh/ruff)
[](https://github.com/serendip-ml/appinfra/actions/workflows/test-docker.yml)
[](https://pypi.org/project/appinfra/)

Production-grade Python infrastructure framework for building reliable CLI tools and services.
## Scope
**Best for:** Production CLI tools, background services, systems-level Python applications.
**Not for:** Web APIs (use FastAPI), async-heavy applications, ORMs.
See [docs/README.md](appinfra/docs/README.md) for full scope and philosophy.
## Features
- **Logging** - Structured logging with custom levels, rotation, JSON output, and database handlers
- **Database** - PostgreSQL interface with connection pooling and query monitoring
- **App Framework** - Fluent builder API for CLI tools with lifecycle management
- **Configuration** - YAML config with environment variable overrides and path resolution
- **Time Utilities** - Scheduling, periodic execution, and duration formatting
## Requirements
- Python 3.11+
- PostgreSQL 16 (optional, for database features)
## Installation
```bash
pip install appinfra
```
Optional features:
```bash
pip install appinfra[sql] # Database support (PostgreSQL, SQLite)
pip install appinfra[ui] # Rich console, interactive prompts
pip install appinfra[fastapi] # FastAPI integration
pip install appinfra[validation] # Pydantic config validation
pip install appinfra[hotreload] # Config file watching
pip install appinfra[all] # All extras (dev, docs, and all above)
```
## Documentation
Full documentation is available in [docs/README.md](appinfra/docs/README.md), or via CLI:
```bash
appinfra docs # Overview
appinfra docs list # List all guides and examples
appinfra docs show <topic> # Read a specific guide
```
## Highlights
### App Framework
**AppBuilder for CLI tools** - Build production CLI applications with lifecycle management, config,
logging, and tools. Focused configurers provide clean separation of concerns. Config files are
resolved from `--etc-dir` (default: `./etc`):
```python
from appinfra.app import AppBuilder

app = (
    AppBuilder("myapp")
    .with_description("Data processing tool")
    .with_config_file("config.yaml")  # Resolved from --etc-dir
    .logging.with_level("info").with_location(1).done()
    .tools.with_tool(ProcessorTool()).with_main(MainTool()).done()
    .advanced.with_hook("startup", init_database).done()
    .build()
)
app.run()
```
**Fluent builder APIs** - All components use chainable builder patterns for clean, readable
configuration. No more scattered setup code or complex constructor arguments:
```python
from appinfra.log import LoggingBuilder

logger = (
    LoggingBuilder("my_app")
    .with_level("info")
    .with_format("%(asctime)s [%(levelname)s] %(message)s")
    .console_handler(colors=True)
    .file_handler("logs/app.log", rotate_mb=10)
    .build()
)
```
**Decorator-based CLI tools** - Build command-line tools with minimal boilerplate. Tools
automatically get logging, config access, and argument parsing:
```python
from appinfra.app import AppBuilder

app = AppBuilder("mytool").build()


@app.tool(name="sync", help="Synchronize data")
@app.argument("--force", action="store_true", help="Force sync")
@app.argument("--limit", type=int, default=100)
def sync_tool(self):
    self.lg.info(f"Syncing {self.args.limit} items")
    if self.args.force:
        self.lg.warning("Force mode enabled")
    return 0
```
**Nested subcommands** - Organize complex CLIs with hierarchical command structures using the
`@subtool` decorator:
```python
app = AppBuilder("myapp").build()


@app.tool(name="db", help="Database operations")
def db_tool(self):
    return self.run_subtool()


@db_tool.subtool(name="migrate", help="Run migrations")
@app.argument("--step", type=int, default=1)
def db_migrate(self):
    self.lg.info(f"Migrating {self.args.step} steps...")


@db_tool.subtool(name="status")
def db_status(self):
    self.lg.info("Database is healthy")


# Usage: myapp db migrate --step 3
#        myapp db status
```
**Multi-source version tracking** - Automatically detect version and git commit from PEP 610
metadata, build-time info, or git runtime. Integrates with AppBuilder for --version flag and
startup logging:
```python
app = (
    AppBuilder("myapp")
    .version
    .with_semver("1.0.0")
    .with_build_info()         # App's own commit from _build_info.py
    .with_package("appinfra")  # Track framework version
    .done()
    .build()
)
# --version shows: myapp 1.0.0 (abc123f) + tracked packages
# Startup logs commit hash, warns if repo has uncommitted changes
```
### Configuration
**YAML includes with security** - Build modular configurations with file includes, environment
variable validation, and automatic path resolution. Includes are protected against path traversal
and circular dependencies:
```yaml
# config.yaml
!include "./base.yaml" # Document-level merge
database:
  primary: !include "./db/primary.yaml"  # Nested includes
  credentials:
    password: !secret ${DB_PASSWORD}  # Validated env var reference
paths:
  models: !path ../models      # Resolved relative to this file
  cache: !path ~/.cache/myapp  # Expands ~
```
**DotDict config access** - Access nested configuration with attribute syntax or dot-notation paths.
Automatic conversion of nested dicts, with safe traversal methods:
```python
from appinfra.dot_dict import DotDict
config = DotDict({
    "database": {"host": "localhost", "port": 5432},
    "features": {"beta": True}
})

# Attribute-style access
print(config.database.host)  # "localhost"
print(config.features.beta)  # True

# Dot-notation path queries
if config.has("database.ssl.enabled"):
    setup_ssl(config.get("database.ssl.cert"))
```
**Hot-reload configuration** - Change log levels, feature flags, or any config value without
restarting your application. Uses content-based change detection to avoid spurious reloads:
```python
from appinfra.config import ConfigWatcher
def on_config_change(new_config):
    logger.info("Config updated, applying changes...")
    apply_feature_flags(new_config.features)
watcher = ConfigWatcher(lg=logger, etc_dir="./etc")
watcher.configure("config.yaml", debounce_ms=500)
watcher.add_section_callback("features", on_config_change)
watcher.start()
```
### Logging & Security
**Topic-based log levels** - Control logging granularity with glob patterns. Set debug logging for
database queries while keeping network calls at warning level, all without touching application
code:
```python
from appinfra.log import LogLevelManager
manager = LogLevelManager.get_instance()
manager.add_rule("/app/db/*", "debug") # All database loggers
manager.add_rule("/app/db/queries", "trace") # Even more detail for queries
manager.add_rule("/app/net/**", "warning") # Network and all children
manager.add_rule("/app/cache", "error") # Only errors from cache
```
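The rule matching above can be approximated with the stdlib `fnmatch` module (a hypothetical sketch with first-match-wins semantics; note that `fnmatch`'s `*` also crosses `/` boundaries, unlike a strict glob hierarchy, so the `*` vs `**` distinction collapses here):

```python
from fnmatch import fnmatch

# Most-specific rule first; first match wins (assumed semantics)
RULES = [
    ("/app/db/queries", "trace"),
    ("/app/db/*", "debug"),
    ("/app/net/**", "warning"),
    ("/app/cache", "error"),
]

def level_for(topic: str, default: str = "info") -> str:
    for pattern, level in RULES:
        if fnmatch(topic, pattern):
            return level
    return default
```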
**Automatic secret masking** - Protect sensitive data in logs with pattern-based detection. Covers
20+ secret formats including AWS keys, GitHub tokens, JWTs, and database URLs:
```python
import os

from appinfra.security import SecretMasker, SecretMaskingFilter
masker = SecretMasker()
masker.add_known_secret(os.environ["API_KEY"]) # Track known secrets
# Patterns auto-detect common formats
text = masker.mask("token=ghp_abc123secret") # "token=[MASKED]"
text = masker.mask("aws_secret=AKIA...") # "aws_secret=[MASKED]"
# Integrate with logging
handler.addFilter(SecretMaskingFilter(masker))
```
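Pattern-based masking of this kind can be sketched with plain `re` (illustrative patterns only; the library's actual rule set covers 20+ formats and is more precise):

```python
import re

# Illustrative token shapes, not the library's real pattern list
PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]+"),   # GitHub personal access token prefix
    re.compile(r"AKIA[0-9A-Z]{16}"),   # AWS access key ID (AKIA + 16 chars)
]

def mask(text: str) -> str:
    for pattern in PATTERNS:
        text = pattern.sub("[MASKED]", text)
    return text
```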
**Lightweight observability hooks** - Event-based callbacks without heavy frameworks. Register
handlers for specific events or globally, with automatic timing in context:
```python
from appinfra.observability import ObservabilityHooks, HookEvent, HookContext
hooks = ObservabilityHooks()
@hooks.on(HookEvent.QUERY_START)
def on_query(ctx: HookContext):
    logger.debug(f"Query: {ctx.data.get('sql')}")

@hooks.on(HookEvent.QUERY_END)
def on_complete(ctx: HookContext):
    logger.info(f"Completed in {ctx.duration:.3f}s")
# Trigger events with arbitrary data
hooks.trigger(HookEvent.QUERY_START, sql="SELECT * FROM users")
```
### Time & Scheduling
**Dual-mode ticker** - Run periodic tasks with scheduled intervals or continuous execution. Context
manager handles signals for graceful shutdown:
```python
from appinfra.time import Ticker
# Scheduled mode: run every 30 seconds
with Ticker(logger, secs=30) as ticker:
    for tick_count in ticker:  # Stops on SIGTERM/SIGINT
        run_health_check()
        if tick_count >= 100:
            break

# Continuous mode: run as fast as possible
for tick in Ticker(logger):  # No secs = continuous
    process_queue_item()
```
**Human-readable durations** - Format seconds to readable strings and parse them back. Supports
microseconds to days, with precise mode for sub-millisecond accuracy:
```python
from appinfra.time import delta_str, delta_to_secs
# Formatting
delta_str(3661.5) # "1h1m1s"
delta_str(0.000042) # "42μs"
delta_str(90061) # "1d1h1m1s"
# Parsing
delta_to_secs("2h30m") # 9000.0
delta_to_secs("1d12h") # 129600.0
delta_to_secs("500ms") # 0.5
```
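The whole-second formatting rules can be reproduced in a short sketch (a hypothetical reimplementation matching the documented outputs; sub-second units such as `42μs` are omitted):

```python
def format_delta(secs: float) -> str:
    # Decompose into days/hours/minutes/seconds, skipping zero components,
    # matching outputs like delta_str(3661.5) == "1h1m1s"
    remaining = int(secs)
    parts = []
    for suffix, size in (("d", 86400), ("h", 3600), ("m", 60), ("s", 1)):
        value, remaining = divmod(remaining, size)
        if value:
            parts.append(f"{value}{suffix}")
    return "".join(parts) or "0s"
```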
**Time-based task scheduler** - Execute tasks at specific times with daily, weekly, monthly, or
hourly periods. Generator-based iteration with signal handling for graceful shutdown:
```python
from appinfra.time import Sched, Period
# Daily at 14:30
sched = Sched(logger, Period.DAILY, "14:30")
# Weekly on Monday at 09:00
sched = Sched(logger, Period.WEEKLY, "09:00", weekday=0)
for timestamp in sched.run(): # Yields after each scheduled time
    generate_report()
```
**ETA progress tracking** - Accurate time-to-completion estimates using EWMA-smoothed processing
rates. Handles variable update intervals without spike errors:
```python
from appinfra.time import ETA, delta_str
eta = ETA(total=1000)
for i, item in enumerate(items):
    process(item)
    eta.update(i + 1)
    remaining = eta.remaining_secs()
    print(f"{eta.percent():.1f}% - {delta_str(remaining)} remaining")
```
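The EWMA smoothing behind these estimates fits in a few lines (a generic sketch, not the library's implementation; `alpha` controls how strongly recent samples dominate):

```python
def ewma_rate(samples, alpha=0.3):
    # Exponentially weighted moving average: recent samples weigh more,
    # so a single slow batch does not spike the ETA
    avg = None
    for value in samples:
        avg = value if avg is None else alpha * value + (1 - alpha) * avg
    return avg

def eta_secs(items_left, rates, alpha=0.3):
    # Remaining time = remaining work divided by the smoothed rate
    rate = ewma_rate(rates, alpha)
    return items_left / rate if rate else float("inf")
```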
**Business day iteration** - Memory-efficient date range processing with weekend filtering. Iterates
from start date to today without materializing the full range:
```python
from appinfra.time import iter_dates
import datetime
start = datetime.date(2025, 12, 1)
for date in iter_dates(start, skip_weekends=True):
    process_business_day(date)  # Mon-Fri only, up to today
```
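The lazy iteration pattern behind this is straightforward; a stdlib sketch (hypothetical, with an explicit `end` parameter added for testability) might look like:

```python
import datetime

def iter_dates_sketch(start, end=None, skip_weekends=False):
    """Yield dates lazily from start through end (default: today)."""
    day = start
    end = end or datetime.date.today()
    one_day = datetime.timedelta(days=1)
    while day <= end:
        if not (skip_weekends and day.weekday() >= 5):  # 5=Sat, 6=Sun
            yield day
        day += one_day
```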
### CLI & UI
**Testable CLI output** - Write testable CLI tools without mocking stdout. Swap output
implementations for production, testing, or silent operation:
```python
from appinfra.cli.output import ConsoleOutput, BufferedOutput, NullOutput
def run_command(output=None):
    output = output or ConsoleOutput()
    output.write("Processing...")
    output.write("Done!")
# In tests: capture output
buf = BufferedOutput()
run_command(output=buf)
assert "Done!" in buf.text
assert buf.lines == ["Processing...", "Done!"]
```
**Interactive CLI prompts** - Smart prompts that work in TTY, non-interactive, and CI environments.
Auto-detects available libraries with graceful fallbacks:
```python
from appinfra.ui import confirm, select, text
env = select("Environment:", ["dev", "staging", "prod"])
name = text("Project name:", validate=lambda x: len(x) > 0)
if confirm(f"Deploy {name} to {env}?"):
    deploy()
```
**Progress with logging coordination** - Rich spinner or progress bar that pauses for log output.
Falls back to plain logging on non-TTY:
```python
from appinfra.ui import ProgressLogger
with ProgressLogger(lg, "Processing...", total=100) as pl:
    for item in items:
        result = process(item)
        pl.log(f"Processed {item.name}")  # Pauses spinner, logs, resumes
        pl.update(advance=1)
```
### Database
**Database auto-reconnection** - Automatic retry with exponential backoff on transient failures.
Configured via YAML, transparent to application code:
```yaml
# etc/config.yaml
database:
  url: postgresql://...
  auto_reconnect: true
  max_retries: 3    # Attempts before raising
  retry_delay: 0.5  # Initial delay, doubles each retry
```
**Read-only database mode** - Transaction-level enforcement preventing accidental writes. Validates
configuration to catch conflicts early:
```python
pg = PG(config, readonly=True)
with pg.session() as session:
    # SELECT queries work normally
    # INSERT/UPDATE/DELETE raise errors at transaction level
```
### Server
**FastAPI subprocess isolation** - Run FastAPI in a subprocess with queue-based IPC. Main process
stays responsive while workers handle requests, with automatic restart on failure:
```python
from appinfra.app.fastapi import FastAPIBuilder
server = (
    FastAPIBuilder("api")
    .with_config(config)
    .with_port(8000)
    .with_subprocess_mode(
        request_queue=request_q,
        response_queue=response_q,
        auto_restart=True
    )
    .build()
)
server.start() # Non-blocking, runs in subprocess
```
## Completeness
Built for production with comprehensive validation:
- **4,000+ tests** across unit, integration, e2e, security, and performance categories
- **95% code coverage** on 11,000+ statements
- **100% type hints** verified by mypy strict mode
- **Security tests** for YAML injection, path traversal, ReDoS, and secret exposure
## Contributing
See the [Contributing Guide](appinfra/docs/guides/contributing.md) for development setup and
guidelines.
## Links
- [Changelog](CHANGELOG.md)
- [Security Policy](appinfra/docs/SECURITY.md)
- [API Stability](appinfra/docs/guides/api-stability.md)
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
| text/markdown | serendip-ml (github.com/serendip-ml) | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"PyYAML<7.0,>=6.0",
"requests<3.0.0,>=2.32.0",
"urllib3<3.0.0,>=2.6.0",
"coverage<8.0.0,>=7.0.0; extra == \"dev\"",
"ruff<1.0.0,>=0.1.0; extra == \"dev\"",
"mypy<2.0.0,>=1.0.0; extra == \"dev\"",
"types-PyYAML<7.0.0,>=6.0.0; extra == \"dev\"",
"pytest<10.0.0,>=7.0.0; extra == \"dev\"",
"pytest-asyncio<2.0.0,>=0.21.0; extra == \"dev\"",
"pytest-cov<8.0.0,>=4.0.0; extra == \"dev\"",
"pytest-xdist<4.0.0,>=3.0.0; extra == \"dev\"",
"hypothesis<7.0.0,>=6.0.0; extra == \"dev\"",
"pip-audit<3.0.0,>=2.0.0; extra == \"dev\"",
"interrogate<2.0.0,>=1.5.0; extra == \"dev\"",
"pydantic<3.0.0,>=2.0.0; extra == \"validation\"",
"mkdocs<2.0.0,>=1.5.3; extra == \"docs\"",
"mkdocs-material<10.0.0,>=9.5.0; extra == \"docs\"",
"mkdocstrings[python]<2.0.0,>=1.0.0; extra == \"docs\"",
"mkdocs-autorefs<2.0.0,>=1.4.0; extra == \"docs\"",
"fastapi<1.0.0,>=0.100.0; extra == \"fastapi\"",
"uvicorn[standard]<1.0.0,>=0.23.0; extra == \"fastapi\"",
"watchdog<5.0.0,>=3.0.0; extra == \"hotreload\"",
"rich<14.0.0,>=13.0.0; extra == \"ui\"",
"questionary<3.0.0,>=2.0.0; extra == \"ui\"",
"InquirerPy<1.0.0,>=0.3.0; extra == \"ui\"",
"sqlalchemy<3.0.0,>=2.0.0; extra == \"sql\"",
"sqlalchemy-utils<1.0.0,>=0.42.0; extra == \"sql\"",
"psycopg2-binary<3.0.0,>=2.9.0; extra == \"sql\"",
"appinfra[dev]; extra == \"all\"",
"appinfra[docs]; extra == \"all\"",
"appinfra[sql]; extra == \"all\"",
"appinfra[ui]; extra == \"all\"",
"appinfra[fastapi]; extra == \"all\"",
"appinfra[validation]; extra == \"all\"",
"appinfra[hotreload]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/serendip-ml/appinfra",
"Repository, https://github.com/serendip-ml/appinfra",
"Issues, https://github.com/serendip-ml/appinfra/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:09:23.091261 | appinfra-0.4.0.tar.gz | 926,290 | 1b/d8/c4b5a6da5b188b22ec6e65cc41b3c2a7dd50740998978a6ba078c12e892e/appinfra-0.4.0.tar.gz | source | sdist | null | false | 397e9394c0a51de9a821aa99faac0a3c | 2be9dae788b539c2543e10fc8443f8ec6f8e0482ab37b7a3d0f747e8ac8cca6d | 1bd8c4b5a6da5b188b22ec6e65cc41b3c2a7dd50740998978a6ba078c12e892e | null | [
"LICENSE"
] | 272 |
2.4 | nasa-polynomials | 0.1.0 | Python package for fitting NASA polynomials to thermodynamic data | Yo ptit projet sympa
| text/markdown | null | Joseph El-Forzli <jelforzli.nasapol@gmail.com> | null | null | MIT | NASA, polynomial, thermodynamics, fitting | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:09:09.341554 | nasa_polynomials-0.1.0.tar.gz | 5,346 | 44/6c/7b7998f244e12b4b433436fbd90ee686f4aa13d01e9e0d2dfcd8b4f0bdac/nasa_polynomials-0.1.0.tar.gz | source | sdist | null | false | c48f1e9a07b22b423363fca0bb8fcf37 | f496db6e87a03a06b9dd4a3e5fc353647b164cff7e35ce04deb41bc8195d333e | 446c7b7998f244e12b4b433436fbd90ee686f4aa13d01e9e0d2dfcd8b4f0bdac | null | [
"LICENSE"
] | 117 |
2.4 | sqlbench | 0.1.60 | A multi-database SQL workbench with support for IBM i, MySQL, and PostgreSQL | # SQLBench
A multi-database SQL workbench with support for IBM i (AS/400), MySQL, and PostgreSQL.
## Features
- Connect to multiple databases simultaneously
- Browse schemas, tables, and columns
- Execute SQL queries with syntax highlighting
- Export results to CSV, Excel, and PDF
- Save and manage database connections
- Right-click context menus for quick actions
## Supported Databases
- **IBM i (AS/400)** - via ODBC
- **MySQL** - via mysql-connector-python
- **PostgreSQL** - via psycopg2
## Installation
```bash
# Clone the repository
git clone https://github.com/jsteil/sqlbench.git
cd sqlbench
# Install dependencies
make install
```
## Usage
```bash
make run
```
## Requirements
- Python 3.8+
- tkinter (usually included with Python)
- For IBM i: IBM i Access ODBC Driver
## License
MIT
| text/markdown | Jim | null | null | null | null | database, gui, ibmi, mysql, postgresql, sql, workbench | [
"Development Status :: 3 - Alpha",
"Environment :: X11 Applications",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Database :: Front-Ends"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"openpyxl>=3.1.0",
"pyqt6>=6.4.0",
"sqlparse>=0.5.0",
"mysql-connector-python>=8.0.0; extra == \"all\"",
"openpyxl>=3.1.0; extra == \"all\"",
"psycopg2-binary>=2.9.0; extra == \"all\"",
"pyodbc>=4.0.0; extra == \"all\"",
"pyqt6>=6.4.0; extra == \"all\"",
"black>=23.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"openpyxl>=3.1.0; extra == \"export\"",
"pyodbc>=4.0.0; extra == \"ibmi\"",
"mysql-connector-python>=8.0.0; extra == \"mysql\"",
"psycopg2-binary>=2.9.0; extra == \"postgresql\"",
"pyqt6>=6.4.0; extra == \"qt\""
] | [] | [] | [] | [
"Homepage, https://github.com/jpsteil/sqlbench",
"Repository, https://github.com/jpsteil/sqlbench",
"Issues, https://github.com/jpsteil/sqlbench/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:09:08.243560 | sqlbench-0.1.60.tar.gz | 1,454,382 | 9b/ca/6cb3efaef39b87171746d57c84a44899f181e675ddf5ffeb75f2c34048bb/sqlbench-0.1.60.tar.gz | source | sdist | null | false | be41832a8a5bdab8f548f18129713683 | 26700594febdea7e6b121984358520876ca0e7d8f57c475854abfdf0f48c6ce1 | 9bca6cb3efaef39b87171746d57c84a44899f181e675ddf5ffeb75f2c34048bb | MIT | [] | 198 |
2.4 | surrealengine | 0.9.5 | An async Object-Document Mapper (ODM) for SurrealDB | # SurrealEngine
**Replace Your Multi-Database Stack with One Pythonic ORM**
Documents, graphs, vectors, real-time, and analytics—without the operational nightmare.
Powered by SurrealDB. Built for Python developers.
> **Vector Search, Graph Queries, Live Updates, and Zero-Copy Analytics — All in One ORM.**
SurrealEngine is the comprehensive Python ORM for SurrealDB, delivering capabilities
that previously required multiple databases and complex distributed systems. You get
familiar object mapping plus native support for graphs, vectors, real-time queries,
and advanced analytics.
Whether you're building a **Real-Time Recommendation Engine**, an **AI-Powered Search Service**,
or a **High-Frequency Data Pipeline**, SurrealEngine provides a unified Pythonic API
without the operational complexity of managing multiple databases.
[](https://iristech-systems.github.io/SurrealEngine-Docs/)
[](https://pypi.org/project/surrealengine/)
[](https://github.com/iristech-systems/surrealengine/blob/main/LICENSE)
[](https://surrealdb.com)
---
## 🚀 The "Magic" Example
Why choose SurrealEngine? Because you can do **this** in a single query:
```python
# The "Why SurrealEngine?" Example:
# Vector Search + Graph Traversal — in one query.
# Try doing THIS with PostgreSQL + Neo4j + Pinecone.
similar_users = await User.objects \
    .filter(embedding__knn=(user_vector, 10)) \
    .out("friends") \
    .out(Person) \
    .all()
# Result: Top 10 similar users' friends (Deep traversal)
```
---
## 🤔 Why SurrealEngine?
**Before SurrealEngine:**
```python
from pymongo import MongoClient # Documents
from neo4j import GraphDatabase # Graphs
from pinecone import Index # Vectors
from redis import Redis # Real-time
import pandas as pd # Analytics
```
**With SurrealEngine:**
```python
from surrealengine import Document, create_connection
# Everything you need
```
**One database. One connection. One API. Zero operational complexity.**
---
## 🆚 How It Compares
| Need | Traditional Approach | With SurrealEngine |
|------|---------------------|-------------------|
| **Documents** | MongoDB + PyMongo | ✅ Built-in |
| **Graphs** | Neo4j + Driver | ✅ Built-in |
| **Vectors** | Pinecone/Weaviate | ✅ Built-in |
| **Real-Time** | Redis/Firebase | ✅ Built-in |
| **Analytics** | ClickHouse/Pandas | ✅ Built-in |
| **Databases to Manage** | 3-5 | **1** |
| **APIs to Learn** | 3-5 | **1** |
| **Connection Pools** | 3-5 | **1** |
| **Failure Modes** | Many | Few |
---
## 📦 Installation
We strongly recommend `uv` for 10-100x faster package installation and resolution, but `pip` and `poetry` work too.
### Using uv (Recommended)
```bash
uv add surrealengine
# Optional extras:
# uv add "surrealengine[signals]" # For pre/post save hooks
# uv add "surrealengine[data]" # For PyArrow/Polars support
# uv add "surrealengine[jupyter]" # For Jupyter Notebook support
```
### Using pip
```bash
pip install surrealengine
# Optional: pip install "surrealengine[signals, data]"
```
---
## ⚡ Quick Start
### 1. Connect (Sync or Async)
SurrealEngine auto-detects your context. Use `async_mode=True` for async apps (FastAPI), or defaults to sync for scripts.
```python
from surrealengine import create_connection
# Async connection (e.g. FastAPI)
await create_connection(
    url="ws://localhost:8000/rpc",
    namespace="test", database="test",
    username="root", password="root",
    async_mode=True
).connect()

# OR Sync connection (e.g. Scripts)
create_connection(
    url="ws://localhost:8000/rpc",
    namespace="test", database="test",
    username="root", password="root",
    async_mode=False
)
```
### 2. Define Your Model
```python
from surrealengine import Document, StringField, IntField
class Person(Document):
    name = StringField(required=True)
    age = IntField()

    class Meta:
        collection = "person"
        indexes = [
            # HNSW Vector Index
            {"name": "idx_vector", "fields": ["embedding"], "dimension": 1536, "dist": "COSINE", "m": 16},
        ]
```
### 3. Polyglot Usage (Same API!)
```python
# Create
# In an async function: await Person(name="Jane", age=30).save()
# In a sync script:     Person(name="Jane", age=30).save()
jane = await Person(name="Jane", age=30).save()

# Query with Pythonic syntax (overloaded operators)
# Or use Django-style: Person.objects.filter(age__gt=25)
people = await Person.objects.filter(Person.age > 25).all()

# Graph relations & traversal
await jane.relate_to("knows", other_person)

# Traversal, three ways:

# 1. Edge only (lazy/dict result) -> returns a list of dicts {out: ..., in: ...}
#    Equivalent to: SELECT ->knows->? FROM person:jane
relations = await Person.objects.filter(id=jane.id).out("knows").all()

# 2. Edge + node (hydrated documents) -> returns a list of Person objects
#    Equivalent to: SELECT ->knows->person.* FROM person:jane
friends = await Person.objects.filter(id=jane.id).out("knows").out(Person).all()

# Chain traversals freely: friends of friends (hydrated)
fof = await Person.objects.filter(id=jane.id).out("knows").out("knows").out(Person).all()

# 3. Magic accessor (.rel): simple access to relationships from a document instance
friends = await jane.rel.knows(Person).all()
```
### 4. Advanced Performance
```python
# Zero-copy data export (10-50x faster):
# export directly to Arrow/Polars without Python object overhead
df = await Person.objects.all().to_polars()
arrow_table = await Person.objects.all().to_arrow()

# Advanced aggregation pipeline:
# "Find high-revenue categories with VIP activity"
from surrealengine import Sum, Mean, CountIf, DistinctCount

stats = await Transaction.objects.aggregate() \
    .match(status="success", created_at__gt="2024-01-01") \
    .group(
        by_fields=["category", "region"],
        total_revenue=Sum("amount"),
        avg_ticket=Mean("amount"),
        vip_transactions=CountIf("amount > 1000"),
        unique_customers=DistinctCount("user_id")
    ) \
    .having(total_revenue__gt=50000, vip_transactions__gte=10) \
    .sort(total_revenue="DESC") \
    .limit(5) \
    .execute()
.execute()
```
---
## 🎬 See It In Action
### Real-World Examples
**Recommendation Engine (Vector + Graph)**
```python
# Find products similar to what the user liked (vector)
# that were purchased by friends (graph)
friend_ids = [u.id for u in await user.rel.friends(User)]

recommendations = await Product.objects \
    .filter(embedding__knn=(user_preference_vector, 20)) \
    .in_("purchased") \
    .filter(id__inside=friend_ids) \
    .all()
```
**Analytics Dashboard**
```python
# MongoDB-style aggregation pipeline
stats = await Order.objects.aggregate() \
    .group("category", revenue=Sum("amount")) \
    .sort(revenue="DESC") \
    .limit(5) \
    .execute()
```
**AI-Powered Search**
```python
# Semantic search + full-text filters,
# using an HNSW vector index and BM25 full-text search
from surrealengine import Q

results = await Article.objects \
    .filter(embedding__knn=(query_embedding, 10)) \
    .filter(Q.raw("content @1@ 'machine learning'")) \
    .filter(published=True) \
    .all()
```
👉 **[See 20+ more examples in the docs](https://iristech-systems.github.io/SurrealEngine-Docs/)**
---
## 📚 Documentation
**Full documentation is available at [https://iristech-systems.github.io/SurrealEngine-Docs/](https://iristech-systems.github.io/SurrealEngine-Docs/)**
👉 **Read the [State of SurrealEngine](https://github.com/iristech-systems/surrealengine/discussions/4) announcement!**
---
## ✨ Features
| Feature | Status | Notes |
|---------|--------|-------|
| **Polyglot API** | ✅ Supported | (New in v0.7.0) Write code once, run in both **Sync** (WSGI/Scripts) and **Async** (ASGI/FastAPI) contexts naturally. |
| **Aggregations** | ✅ Supported | MongoDB-style aggregation pipelines (`.aggregate().group(...).execute()`) for complex analytics. |
| **Materialized Views** | ✅ Supported | Define pre-computed views using `Document.create_materialized_view()` for high-performance analytics. |
| **Connection Pooling** | ✅ Supported | Full support for async pooling (auto-reconnect, health checks). Sync pooling available via `Queue`. |
| **Live Queries** | ⚠️ Partial | Supported on **WebSocket** connections (`ws://`, `wss://`). **NOT** supported on embedded (`mem://`, `file://`). |
| **Graph Traversal** | ✅ Supported | Fluent API (`.out().in_()`) and Magic `.rel` accessor (`user.rel.friends(User)`). |
| **Change Tracking** | ✅ Supported | Objects track dirty fields (`.is_dirty`, `.get_changes()`) to optimize UPDATE queries. |
| **Schema Generation** | ✅ Supported | Can generate `DEFINE TABLE/FIELD` statements from Python classes. |
| **Embedded Docs** | ✅ Supported | Define structured nested objects with `EmbeddedDocument`. Generates schema automatically. |
| **Stored Functions** | ✅ Supported | Define SurrealDB functions directly in Python using `@surreal_func`. |
| **Vector Search** | ✅ Supported | Support for HNSW indexes via `Meta.indexes` (dimension, dist, etc.). |
| **Full Text Search** | ✅ Supported | Support for BM25 and Highlights via `Meta.indexes`. |
| **Events** | ✅ Supported | Define triggers via `Meta.events` using the `Event` class. |
| **Data Science** | ✅ Supported | Zero-copy export to PyArrow (`.to_arrow()`) and Polars (`.to_polars()`). Requires `surrealengine[data]`. |
| **Pydantic** | ✅ Compatible | `RecordID` objects (SDK v1.0.8+) are Pydantic-compatible. |
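The change-tracking row above can be made concrete with a small, library-free sketch. This is not SurrealEngine's actual implementation; only the `is_dirty`/`get_changes()` names come from the table, while the `TrackedDoc` class and `save_statement` helper are illustrative inventions:

```python
# Minimal sketch of dirty-field tracking: only changed fields are
# sent in an UPDATE, mirroring the .is_dirty / .get_changes()
# behavior described above (not SurrealEngine's actual code).

class TrackedDoc:
    def __init__(self, **fields):
        # Bypass __setattr__ so initial load does not mark fields dirty.
        self.__dict__["_data"] = dict(fields)
        self.__dict__["_changes"] = {}

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if self._data.get(name) != value:
            self._changes[name] = value
        self._data[name] = value

    @property
    def is_dirty(self):
        return bool(self._changes)

    def get_changes(self):
        return dict(self._changes)

    def save_statement(self, table, rid):
        # Emit an UPDATE touching only the dirty fields.
        sets = ", ".join(f"{k} = ${k}" for k in self._changes)
        self._changes.clear()
        return f"UPDATE {table}:{rid} SET {sets}"

p = TrackedDoc(name="Jane", age=30)
p.age = 31
print(p.get_changes())                     # {'age': 31}
print(p.save_statement("person", "jane"))  # UPDATE person:jane SET age = $age
```

Sending only dirty fields keeps the generated UPDATE minimal, which is the optimization the table describes.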
---
## ⚠️ Sharp Edges & Limitations
SurrealEngine is designed to be a robust, high-level abstraction. However, be aware of these known limitations:
1. **Embedded Connections & Live Queries**:
Attempting to use `.live()` on a `mem://` or `file://` connection will raise a `NotImplementedError`. The underlying SDK's embedded connector does not currently support the event loop mechanism required for live subscriptions.
2. **RecordID Fields**:
The `RecordID` object from the SDK has `table_name` and `id` attributes.
*Gotcha*: Do not assume it has a `.table` attribute (it does not). Always use `.table_name`.
*Gotcha*: When parsing strings manually, prefer the SDK's `RecordID.parse("table:id")` over manual string splitting to handle escaped characters correctly.
3. **Strict Mode**:
By default, `Document` classes have `strict=True`. Initializing a document with unknown keyword arguments will raise an `AttributeError`. Set `strict=False` in `Meta` if you need to handle dynamic unstructured data.
4. **Auto-Connect**:
When using `create_connection(..., auto_connect=True)`, the connection is established immediately. For **async** connections, ensure this is called within a running event loop.
5. **Transactions**:
SurrealDB supports transactions (`BEGIN`, `COMMIT`, `CANCEL`). **SurrealEngine** provides a helper `await connection.transaction([coro1, coro2])` that ensures atomicity via connection pinning when using pools. A high-level `async with` context manager is planned for v0.8.0.
---
## 🎯 Get Started
```bash
uv add surrealengine
```
**Next Steps:**
- 📖 [Read the Docs](https://iristech-systems.github.io/SurrealEngine-Docs/)
- 💬 [Join Discussions](https://github.com/iristech-systems/surrealengine/discussions)
- 🐛 [Report Issues](https://github.com/iristech-systems/surrealengine/issues)
- ⭐ [Star on GitHub](https://github.com/iristech-systems/surrealengine) (if you find it useful!)
---
<p align="center">
Built with ❤️ by Iristech Systems<br>
Powered by <a href="https://surrealdb.com">SurrealDB</a>
</p>
| text/markdown; charset=UTF-8; variant=GFM | Iristech Systems | null | null | null | MIT License | surrealdb, odm, database, async, orm | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Software Development :: Libraries :: Python Modules",
"Framework :: AsyncIO"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"surrealdb>=1.0.8",
"typing-extensions>=4.12.0",
"cbor2>=5.6.0",
"websockets>=12.0",
"surrealengine[data,jupyter,signals]; extra == \"all\"",
"pyarrow>=14.0.0; extra == \"data\"",
"polars>=0.20.0; extra == \"data\"",
"maturin>=1.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"notebook>=7.0.0; extra == \"jupyter\"",
"ipykernel>=6.0.0; extra == \"jupyter\"",
"blinker>=1.6.2; extra == \"signals\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/iristech-systems/surrealengine/issues",
"Documentation, https://github.com/iristech-systems/surrealengine#readme",
"Homepage, https://github.com/iristech-systems/surrealengine"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:08:45.243677 | surrealengine-0.9.5.tar.gz | 402,144 | 54/f6/0ddfe9c3167b24be9d2ccaa5152b77f35687028eaa42e3f0e0f98b6493d1/surrealengine-0.9.5.tar.gz | source | sdist | null | false | ee14260751590ffeb668768269786e13 | ce8e2b65c71dffffc641ee2536e3bdf7a24ccfdd05daba1f71bd52ad6ba85c77 | 54f60ddfe9c3167b24be9d2ccaa5152b77f35687028eaa42e3f0e0f98b6493d1 | null | [
"LICENSE"
] | 203 |
2.4 | dragonfly-core | 1.75.11 | :dragon: dragonfly core library | 
[](https://github.com/ladybug-tools/dragonfly-core/actions)
[](https://coveralls.io/github/ladybug-tools/dragonfly-core)
[](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/) [](https://www.python.org/downloads/release/python-270/) [](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# dragonfly-core

Dragonfly is a collection of Python libraries to create representations of buildings
following [dragonfly-schema](https://github.com/ladybug-tools/dragonfly-schema/wiki).
It abstracts the capabilities of [honeybee-core](https://github.com/ladybug-tools/honeybee-core/)
to make it easier to construct and edit large models.
This is the core repository, providing dragonfly's common functionalities. To extend these functionalities, install one of the available Dragonfly extensions or write your own.
Here are a number of frequently used extensions for Dragonfly:
- [dragonfly-energy](https://github.com/ladybug-tools/dragonfly-energy): Adds Energy simulation to Dragonfly.
## Installation
`pip install -U dragonfly-core`
To check that the Dragonfly command line interface is installed correctly, run `dragonfly viz` and you
should get a `viiiiiiiiiiiiizzzzzzzzz!` back in response!
## [API Documentation](https://www.ladybug.tools/dragonfly-core/docs/)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/dragonfly-core.git
# or
git clone https://github.com/ladybug-tools/dragonfly-core.git
```
2. Install dependencies:
```console
cd dragonfly-core
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```console
python -m pytest tests/
```
4. Generate Documentation:
```console
sphinx-apidoc -f -e -d 4 -o ./docs ./dragonfly
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/dragonfly-core | null | null | [] | [] | [] | [
"honeybee-core==1.64.21",
"dragonfly-schema==2.0.2; python_version >= \"3.7\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:08:20.305318 | dragonfly_core-1.75.11.tar.gz | 247,994 | a0/c7/4341431b7c7a3c9be02d50c15ebbe0a1d3848a3a273f3090f1545b3b057c/dragonfly_core-1.75.11.tar.gz | source | sdist | null | false | 83ccea1656e18667a165521bc31e2c9b | 9cf65ace1a347db19d49b85a256218b566c76a0a7124b5092c5774d04f4b1ace | a0c74341431b7c7a3c9be02d50c15ebbe0a1d3848a3a273f3090f1545b3b057c | null | [
"LICENSE"
] | 476 |
2.4 | solace-agent-mesh | 1.17.1 | Solace Agent Mesh is an open-source framework for building event-driven, multi-agent AI systems where specialized agents collaborate on complex tasks. | <p align="center">
<img src="./docs/static/img/logo.png" alt="Solace Agent Mesh Logo" width="100"/>
</p>
<h2 align="center">
Solace Agent Mesh
</h2>
<h3 align="center">Open-source framework for building event-driven multi-agent AI systems</h3>
<h5 align="center">Star ⭐️ this repo to stay updated as we ship new features and improvements.</h5>
<p align="center">
<a href="https://github.com/SolaceLabs/solace-agent-mesh/blob/main/LICENSE">
<img src="https://img.shields.io/github/license/SolaceLabs/solace-agent-mesh" alt="License">
</a>
<a href="https://pypi.org/project/solace-agent-mesh">
<img src="https://img.shields.io/pypi/v/solace-agent-mesh.svg" alt="PyPI - Version">
</a>
<a href="https://pypi.org/project/solace-agent-mesh">
<img src="https://img.shields.io/pypi/pyversions/solace-agent-mesh.svg" alt="PyPI - Python Version">
</a>
<a href="https://pypi.org/project/solace-agent-mesh">
<img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/solace-agent-mesh?color=00C895">
</a>
</p>
<p align="center">
<a href="#-key-features">Key Features</a> •
<a href="#-quick-start-5-minutes">Quickstart</a> •
<a href="#️-next-steps">Next Steps</a> •
<a href="https://solacelabs.github.io/solace-agent-mesh/">Docs</a>
</p>
---
**Solace Agent Mesh** is a framework that supports building AI applications where multiple specialized AI agents work together to solve complex problems. It uses the event messaging of [Solace Platform](https://solace.com) for true scalability and reliability.
With Solace Agent Mesh (SAM), you can create teams of AI agents, each having distinct skills and access to specific tools. For example, you could have a Database Agent that can make SQL queries to fetch data or a MultiModal Agent that can help create images, audio files and reports.
The framework handles the communication between agents automatically, so you can focus on building great AI experiences.
SAM creates a standardized communication layer where AI agents can:
* Delegate tasks to peer agents
* Share data and artifacts
* Connect with diverse user interfaces and external systems
* Execute multi-step workflows with minimal coupling
SAM is built on top of the Solace AI Connector (SAC) which allows Solace Platform Event Brokers to connect to AI models and services and Google's Agent Development Kit (ADK) for AI logic and tool integrations.
<p align="center">
<img src="docs/static/img/Solace_AI_Framework_With_Broker.png" width="640" alt="SAM Architecture Diagram" />
</p>
The result? A fully asynchronous, event-driven and decoupled AI agent architecture ready for production deployment. It is robust, reliable and easy to maintain.
---
## 🔑 Key Features
- **[Multi-Agent Event-Driven Architecture](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/getting-started/architecture)** – Agents communicate via the Solace Event Mesh for true scalability
- **[Agent Orchestration](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/components/agents)** – Complex tasks are automatically broken down and delegated by the [Orchestrator](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/components/orchestrator) agent
- **[Flexible Interfaces](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/components/gateways)** – Integrate with REST API, web UI, [Slack](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/developing/tutorials/slack-integration), or build your own integration
- **[Extensible](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/components/plugins)** – Add your own agents, gateways, or services with minimal code
- **[Agent-to-Agent Communication](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/getting-started/architecture)** – Agents can discover and delegate tasks to each other seamlessly using the Agent2Agent (A2A) Protocol
- **[Dynamic Embeds](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/components/builtin-tools/embeds)** – Embed dynamic content like real-time data, calculations and file contents in responses
📚 **Want to know more?** Check out the full Solace Agent Mesh [documentation](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/getting-started/introduction/).
---
## 🚀 Quick Start (5 minutes)
Set up Solace Agent Mesh in just a few steps.
### ⚙️ System Requirements
To run Solace Agent Mesh locally, you'll need:
- **Python 3.10.16+**
- **pip** (comes with Python)
- **OS**: MacOS, Linux, or Windows (with [WSL](https://learn.microsoft.com/en-us/windows/wsl/))
- **LLM API key** (any major provider or custom endpoint)
### 🎸 Vibe Coding
To quickly set up and customize your Agent Mesh, check out the [Vibe Coding Quickstart Guide](docs/docs/documentation/vibe_coding.md). This guide walks you through the essential steps to get Solace Agent Mesh up and running with minimal effort.
### 💻 Setup Steps
#### 1. Create a directory for a new project
```bash
mkdir my-sam && cd my-sam
```
#### 2. Create and activate a Python virtual environment
```bash
python3 -m venv .venv && source .venv/bin/activate
```
#### 3. Install Solace Agent Mesh (SAM)
Check if you have a version of SAM already installed.
```bash
sam -v
```
If you have an earlier version, uninstall it and **start from scratch**:
```bash
pip3 uninstall solace-agent-mesh
```
Note: You can optionally try an in-place upgrade (`pip3 install --upgrade solace-agent-mesh`), but this is not officially supported at this time.
If no previous version exists, install the latest version with:
```bash
pip3 install solace-agent-mesh
```
#### 4. Initialize the new project via a GUI tool
```bash
sam init --gui
```
Note: This initialization UI runs on port 5002.
#### 5. Run the project
```bash
sam run
```
#### 6. Verify SAM is running
Open the Web UI at [http://localhost:8000](http://localhost:8000) for the chat interface and ask a question.
### 🔧 Customize SAM
#### New agents can be added via a GUI interface
```bash
sam add agent --gui
```
#### Existing plugins can be installed
```bash
sam plugin add <your-component-name> --plugin <plugin-name>
```
---
## 🏗️ Architecture Overview
Solace Agent Mesh provides a "Universal A2A Agent Host," a flexible and configurable runtime environment built by integrating Google's Agent Development Kit (ADK) with the Solace AI Connector (SAC) framework.
The system allows you to:
- Host AI agents developed with Google ADK within the SAC framework
- Define agent capabilities (LLM model, instructions, tools) primarily through SAC YAML configuration
- Use Solace Platform as the transport for standard Agent-to-Agent (A2A) protocol communication
- Enable dynamic discovery of peer agents running within the same ecosystem
- Allow agents to delegate tasks to discovered peers via the A2A protocol over Solace
- Manage file artifacts using built-in tools with automatic metadata injection
- Perform data analysis using built-in SQL, JQ, and visualization tools
- Use dynamic embeds for context-dependent information resolution
### Key Components
- **SAC** handles broker connections, configuration loading, and component lifecycle
- **ADK** provides the agent runtime, LLM interaction, tool execution, and state management
- **A2A Protocol** enables communication between clients and agents, and between peer agents
- **Dynamic Embeds** allow placeholders in responses that are resolved with context-dependent information
- **File Management** provides built-in tools for artifact creation, listing, loading, and metadata handling
---
## ➡️ Next Steps
Want to go further? Here are some hands-on tutorials to help you get started:
| 🔧 Integration | ⏱️ Est. Time | 📘 Tutorial |
|----------------|--------------|-------------|
| 🌤️ **Weather Agent**<br>Learn how to build an agent that gives Solace Agent Mesh the ability to access real-time weather information. | **~15 min** | [Weather Agent Plugin](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/developing/tutorials/custom-agent) |
| 🗃️ **SQL Database Integration**<br>Enable Solace Agent Mesh to answer company-specific questions using a sample coffee company database.| **~10–15 min** | [SQL Database Tutorial](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/developing/tutorials/sql-database) |
| 🧠 **MCP Integration**<br>Integrate Model Context Protocol (MCP) servers into Solace Agent Mesh. | **~10–15 min** | [MCP Integration Tutorial](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/developing/tutorials/mcp-integration) |
| 💬 **Slack Integration**<br>Chat with Solace Agent Mesh directly from Slack. | **~20–30 min** | [Slack Integration Tutorial](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/developing/tutorials/slack-integration) |
| 👔 **Microsoft Teams Integration (Enterprise)**<br>Connect Solace Agent Mesh Enterprise to Microsoft Teams with Azure AD authentication. | **~30–40 min** | [Teams Integration Tutorial](https://solacelabs.github.io/solace-agent-mesh/docs/documentation/developing/tutorials/teams-integration) |
---
## 👥 Contributors
Solace Agent Mesh is built with the help of our amazing community. Thanks to everyone who has contributed ideas, code and time to make this project better!
View the full list of contributors here: [GitHub Contributors](https://github.com/SolaceLabs/solace-agent-mesh/graphs/contributors) 💚
**Looking to contribute?** Check out [CONTRIBUTING.md](CONTRIBUTING.md) to get started and see how you can help!
---
## 📄 License
This project is licensed under the **Apache 2.0 License**. See the full license text in the [LICENSE](LICENSE) file.
---
## 🧪 Running Tests
This project uses `pytest` for testing. You can run tests using either `hatch` or `pytest` directly.
### Using Hatch
The recommended way to run tests is through the `hatch` environment, which ensures all dependencies are managed correctly.
```bash
# Run all tests
hatch test
# Run tests with tags
hatch test -m "<tag>"
```
### Using Pytest Directly
If you prefer to use `pytest` directly, you must first install the project with its test dependencies.
```bash
# Install the project in editable mode with the 'test' extras
pip install -e .[test]
# Run all tests
pytest
```
---
<h3 align="center">
<img src="./docs/static/img/solace-logo-text.svg" alt="Solace Agent Mesh Logo" width="100"/>
</h3>
| text/markdown | null | SolaceLabs <solacelabs@solace.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks"
] | [] | null | null | >=3.10.16 | [] | [] | [] | [
"a2a-sdk[http-server]==0.3.7",
"alembic==1.16.5",
"asteval==1.0.6",
"authlib==1.6.5",
"azure-cognitiveservices-speech==1.41.1",
"beautifulsoup4==4.13.5",
"bm25s==0.2.14",
"boto3==1.40.37",
"click==8.1.8",
"cryptography==46.0.5",
"fastapi==0.120.1",
"flask-cors==6.0.1",
"flask==3.0.3",
"gitpython==3.1.45",
"google-adk==1.18.0",
"google-genai==1.49.0",
"holidays==0.81.0",
"httpx==0.28.1",
"itsdangerous==2.2.0",
"jaraco-context==6.1.0",
"jmespath==1.0.1",
"jsonpath-ng==1.7.0",
"jwcrypto==1.5.6",
"kaleido==0.2.1",
"litellm==1.76.3",
"markdownify==1.2.0",
"markitdown[all]==0.1.4",
"mermaid-cli==0.1.2",
"numpy==2.2.6",
"openai==1.99.9",
"pandas==2.3.2",
"pillow==12.1.1",
"playwright==1.58.0",
"plotly==6.3.0",
"protobuf==6.33.5",
"psycopg2-binary==2.9.10",
"pydantic==2.11.9",
"pydub==0.25.1",
"pypdf==6.6.2",
"pystache==0.6.8",
"python-docx==1.2.0",
"python-dotenv==1.1.1",
"python-jwt==4.1.0",
"python-liquid==2.1.0",
"python-multipart==0.0.22",
"python-pptx==1.0.2",
"pyyaml==6.0.2",
"rich==13.9.4",
"rouge==1.0.1",
"setuptools==80.10.2",
"solace-ai-connector==3.3.1",
"sqlalchemy==2.0.40",
"sse-starlette==3.0.2",
"starlette==0.49.1",
"toml==0.10.2",
"uvicorn[standard]==0.37.0",
"wheel==0.46.2",
"holidays==0.81.0; extra == \"employee-tools\"",
"google-cloud-storage==3.5.0; extra == \"gcs\"",
"aiosqlite; extra == \"test\"",
"asyncpg; extra == \"test\"",
"fastmcp==2.11.2; extra == \"test\"",
"httpx>=0.25; extra == \"test\"",
"psycopg2-binary; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"pytest-httpx>=0.35.0; extra == \"test\"",
"pytest-mock>=3.0.0; extra == \"test\"",
"pytest-xdist>=3.5.0; extra == \"test\"",
"pytest>=8.0.0; extra == \"test\"",
"respx; extra == \"test\"",
"ruff; extra == \"test\"",
"testcontainers; extra == \"test\"",
"google-cloud-aiplatform==1.126.1; extra == \"vertex\""
] | [] | [] | [] | [
"Homepage, https://github.com/SolaceLabs/solace-agent-mesh",
"Repository, https://github.com/SolaceLabs/solace-agent-mesh"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:07:53.405863 | solace_agent_mesh-1.17.1.tar.gz | 11,211,170 | e5/6f/1313e1d87e0626b86600498fbcf06ea52853f37c40f0153515528283f0e0/solace_agent_mesh-1.17.1.tar.gz | source | sdist | null | false | a57330fb575ac939a4668915a7e14cb6 | b3bbbd33291a73cd1b120ae689b8fe8ae23983b241751184ba825a2adfc9280c | e56f1313e1d87e0626b86600498fbcf06ea52853f37c40f0153515528283f0e0 | null | [
"LICENSE"
] | 255 |
2.1 | terradev-cli | 2.9.8 | Cross-cloud GPU provisioning with GitOps automation and HuggingFace Spaces deployment | # Terradev CLI v2.9.8
BYOAPI: Cross-cloud GPU provisioning and cost optimization platform with GitOps automation.
**GitHub Repository**: https://github.com/theoddden/terradev
**License Details**: https://github.com/theoddden/terradev?tab=License-1-ov-file
---
## Why Terradev?
Developers overpay when they are limited to single-cloud workflows or sequential provisioning, with inefficient egress and rate limiting on top.
Terradev is a cross-cloud compute-provisioning CLI that compresses + stages datasets, provisions optimal instances + nodes, and deploys **3-5x faster** than sequential provisioning.
---
## GitOps Automation
Production-ready GitOps workflows based on real-world Kubernetes experience:
```bash
# Initialize GitOps repository
terradev gitops init --provider github --repo my-org/infra --tool argocd --cluster production
# Bootstrap GitOps tool on cluster
terradev gitops bootstrap --tool argocd --cluster production
# Sync cluster with Git repository
terradev gitops sync --cluster production --environment prod
# Validate configuration
terradev gitops validate --dry-run --cluster production
```
### GitOps Features
- **Multi-Provider Support**: GitHub, GitLab, Bitbucket, Azure DevOps
- **Tool Integration**: ArgoCD and Flux CD support
- **Repository Structure**: Automated GitOps repository setup
- **Policy as Code**: Gatekeeper/Kyverno policy templates
- **Multi-Environment**: Dev, staging, production environments
- **Resource Management**: Automated quotas and network policies
- **Validation**: Dry-run and apply validation
- **Security**: Best practices and compliance policies
### GitOps Repository Structure
```
my-infra/
├── clusters/
│ ├── dev/
│ ├── staging/
│ └── prod/
├── apps/
├── infra/
├── policies/
└── monitoring/
```
---
## HuggingFace Spaces Integration
Deploy any HuggingFace model to Spaces with one command:
```bash
# Install HF Spaces support
pip install terradev-cli[hf]
# Set your HF token
export HF_TOKEN=your_huggingface_token
# Deploy Llama 2 with one click
terradev hf-space my-llama --model-id meta-llama/Llama-2-7b-hf --template llm
# Deploy custom model with GPU
terradev hf-space my-model --model-id microsoft/DialoGPT-medium \
--hardware a10g-large --sdk gradio
# Result:
# Space URL: https://huggingface.co/spaces/username/my-llama
# 100k+ researchers can now access your model!
```
### HF Spaces Features
- **One-Click Deployment**: No manual configuration required
- **Template-Based**: LLM, embedding, and image model templates
- **Multi-Hardware**: CPU-basic to A100-large GPU tiers
- **Auto-Generated Apps**: Gradio, Streamlit, and Docker support
- **Revenue Streams**: Hardware upgrades, private spaces, template licensing
### Available Templates
```bash
# LLM Template (A10G GPU)
terradev hf-space my-llama --model-id meta-llama/Llama-2-7b-hf --template llm
# Embedding Template (CPU-upgrade)
terradev hf-space my-embeddings --model-id sentence-transformers/all-MiniLM-L6-v2 --template embedding
# Image Model Template (T4 GPU)
terradev hf-space my-image --model-id runwayml/stable-diffusion-v1-5 --template image
```
---
## Installation
```bash
pip install terradev-cli
```
With HF Spaces support:
```bash
pip install terradev-cli[hf] # HuggingFace Spaces deployment
pip install terradev-cli[all] # All cloud providers + ML services + HF Spaces
```
---
## Quick Start
```bash
# 1. Get setup instructions for any provider
terradev setup runpod --quick
terradev setup aws --quick
# 2. Configure your cloud credentials (BYOAPI — you own your keys)
terradev configure --provider runpod
terradev configure --provider aws
terradev configure --provider vastai
# 3. Deploy to HuggingFace Spaces (NEW!)
terradev hf-space my-llama --model-id meta-llama/Llama-2-7b-hf --template llm
terradev hf-space my-embeddings --model-id sentence-transformers/all-MiniLM-L6-v2 --template embedding
terradev hf-space my-image --model-id runwayml/stable-diffusion-v1-5 --template image
# 4. Get enhanced quotes with conversion prompts
terradev quote -g A100
terradev quote -g A100 --quick # Quick provision best quote
# 5. Provision the cheapest instance (real API call)
terradev provision -g A100
# 6. Configure ML services
terradev configure --provider wandb --dashboard-enabled true
terradev configure --provider langchain --tracing-enabled true
# 7. Use ML services
terradev ml wandb --test
terradev ml langchain --create-workflow my-workflow
# 8. View analytics
python user_analytics.py
# 9. Provision 4x H100s in parallel across multiple clouds
terradev provision -g H100 -n 4 --parallel 6
# 10. Dry-run to see the allocation plan without launching
terradev provision -g A100 -n 2 --dry-run
# 11. Manage running instances
terradev status --live
terradev manage -i <instance-id> -a stop
terradev manage -i <instance-id> -a start
terradev manage -i <instance-id> -a terminate
# 12. Execute commands on provisioned instances
terradev execute -i <instance-id> -c "python train.py"
# 13. Stage datasets near compute (compress + chunk + upload)
terradev stage -d ./my-dataset --target-regions us-east-1,eu-west-1
# 14. View cost analytics from the tracking database
terradev analytics --days 30
# 15. Find cheaper alternatives for running instances
terradev optimize
# 16. One-command Docker workload (provision + deploy + run)
terradev run --gpu A100 --image pytorch/pytorch:latest -c "python train.py"
# 17. Keep an inference server alive
terradev run --gpu H100 --image vllm/vllm-openai:latest --keep-alive --port 8000
```
---
## BYOAuth — Bring Your Own Authentication
Terradev never touches, stores, or proxies your cloud credentials through a third party. Your API keys stay on your machine in `~/.terradev/credentials.json` — encrypted at rest, never transmitted.
**How it works:**
1. You run `terradev configure --provider <name>` and enter your API key
2. Credentials are stored locally in your home directory — never sent to Terradev servers
3. Every API call goes directly from your machine to the cloud provider
4. No middleman account, no shared credentials, no markup on provider pricing
**Why this matters:**
- **Zero trust exposure** — No third party holds your AWS/GCP/Azure keys
- **No vendor lock-in** — If you stop using Terradev, your cloud accounts are untouched
- **Enterprise-ready** — Compliant with SOC2, HIPAA, and internal security policies that prohibit sharing credentials with SaaS vendors
- **Full audit trail** — Every provision is logged locally with provider, cost, and timestamp
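The local-storage pattern described above can be sketched in a few lines of Python. This is illustrative only: the real CLI writes `~/.terradev/credentials.json`, and the JSON layout, key names, and use of a temporary directory here are assumptions for demonstration, not the actual terradev format.

```python
import json
import stat
import tempfile
from pathlib import Path

# Illustrative sketch of locally stored credentials. The real CLI uses
# ~/.terradev/credentials.json; a temp dir is used here for safety, and
# the JSON layout below is an assumption, not the actual format.
config_dir = Path(tempfile.mkdtemp()) / ".terradev"
config_dir.mkdir(parents=True, exist_ok=True)

cred_path = config_dir / "credentials.json"
cred_path.write_text(json.dumps({"runpod": {"api_key": "example-key"}}))

# Restrict the file to owner-only read/write, a common practice for
# secrets kept on the local filesystem.
cred_path.chmod(stat.S_IRUSR | stat.S_IWUSR)

creds = json.loads(cred_path.read_text())
```

Because the file never leaves the machine, every provider call can read the key locally and talk to the cloud API directly, which is the "no middleman" property the section describes.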
---
## CLI Commands
| Command | Description |
|---------|-------------|
| `terradev configure` | Set up API credentials for any provider |
| `terradev quote` | Get real-time GPU pricing across all clouds |
| `terradev provision` | Provision instances with parallel multi-cloud arbitrage |
| `terradev manage` | Stop, start, terminate, or check instance status |
| `terradev status` | View all instances and cost summary |
| `terradev execute` | Run commands on provisioned instances |
| `terradev stage` | Compress, chunk, and stage datasets near compute |
| `terradev analytics` | Cost analytics with daily spend trends |
| `terradev optimize` | Find cheaper alternatives for running instances |
| `terradev run` | Provision + deploy Docker container + execute in one command |
| `terradev hf-space` | **NEW:** One-click HuggingFace Spaces deployment |
| `terradev up` | **NEW:** Manifest cache + drift detection |
| `terradev rollback` | **NEW:** Versioned rollback to any deployment |
| `terradev manifests` | **NEW:** List cached deployment manifests |
| `terradev integrations` | Show status of W&B, Prometheus, and infra hooks |
### HF Spaces Commands (NEW!)
```bash
# Deploy Llama 2 to HF Spaces
terradev hf-space my-llama --model-id meta-llama/Llama-2-7b-hf --template llm
# Deploy with custom hardware
terradev hf-space my-model --model-id microsoft/DialoGPT-medium \
--hardware a10g-large --sdk gradio --private
# Deploy embedding model
terradev hf-space my-embeddings --model-id sentence-transformers/all-MiniLM-L6-v2 \
--template embedding --env BATCH_SIZE=64
```
### Manifest Cache Commands (NEW!)
```bash
# Provision with manifest cache
terradev up --job my-training --gpu-type A100 --gpu-count 4
# Fix drift automatically
terradev up --job my-training --fix-drift
# Rollback to previous version
terradev rollback my-training@v2
# List all cached manifests
terradev manifests --job my-training
```
---
## Observability & ML Integrations
Terradev facilitates connections to your existing tools via BYOAPI — your keys stay local, all data flows directly from your instances to your services.
| Integration | What Terradev Does | Setup |
|-------------|-------------------|-------|
| **Weights & Biases** | Auto-injects WANDB_* env vars into provisioned containers | `terradev configure --provider wandb --api-key YOUR_KEY` |
| **Prometheus** | Pushes provision/terminate metrics to your Pushgateway | `terradev configure --provider prometheus --api-key PUSHGATEWAY_URL` |
| **Grafana** | Exports a ready-to-import dashboard JSON | `terradev integrations --export-grafana` |
> Prices queried in real-time from all 10+ providers. Actual savings vary by availability.
---
## Pricing Tiers
| Feature | Research (Free) | Research+ ($49.99/mo) | Enterprise ($299.99/mo) |
|----------|------------------|------------------------|------------------------|
| Max concurrent instances | 1 | 8 | 32 |
| Provisions/month | 10 | 100 | Unlimited |
| Providers | All 11 | All 11 | All 11 + priority |
| Cost tracking | Yes | Yes | Yes |
| Dataset staging | Yes | Yes | Yes |
| Egress optimization | Basic | Full | Full + custom routes |
---
## Integrations
### Jupyter / Colab / VS Code Notebooks
```bash
pip install terradev-jupyter
```

Then, inside a notebook cell:

```python
%load_ext terradev_jupyter
%terradev quote -g A100
%terradev provision -g H100 --dry-run
%terradev run --gpu A100 --image pytorch/pytorch:latest --dry-run
```
### GitHub Actions
```yaml
- uses: theodden/terradev-action@v1
with:
gpu-type: A100
max-price: "1.50"
env:
TERRADEV_RUNPOD_KEY: ${{ secrets.RUNPOD_API_KEY }}
```
### Docker (One-Command Workloads)
```bash
terradev run --gpu A100 --image pytorch/pytorch:latest -c "python train.py"
terradev run --gpu H100 --image vllm/vllm-openai:latest --keep-alive --port 8000
```
---
## Requirements
- Python >= 3.9
- Cloud provider API keys (configured via `terradev configure`)
---
## License
Business Source License 1.1 (BUSL-1.1) - see LICENSE file for details
| text/markdown | Terradev Team | team@terradev.com | null | null | null | cloud, compute, gpu, provisioning, optimization, multi-cloud, parallel, cost-savings, aws, gcp, azure, machine-learning, ai, infrastructure | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: System :: Distributed Computing",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Systems Administration",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content"
] | [] | https://github.com/theoddden/terradev | null | >=3.9 | [] | [] | [] | [
"click>=8.0.0",
"aiohttp>=3.9.0",
"boto3>=1.34.0",
"pyyaml>=6.0",
"google-cloud-compute>=1.8.0",
"azure-mgmt-compute>=29.0.0",
"azure-identity>=1.12.0",
"oci>=2.118.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"flake8>=5.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=2.20.0; extra == \"dev\"",
"sphinx>=5.0.0; extra == \"docs\"",
"sphinx-rtd-theme>=1.0.0; extra == \"docs\"",
"myst-parser>=0.18.0; extra == \"docs\""
] | [] | [] | [] | [
"Bug Reports, https://github.com/terradev/terradev-cli/issues",
"Source, https://github.com/terradev/terradev-cli",
"Documentation, https://docs.terradev.com",
"Changelog, https://github.com/terradev/terradev-cli/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T18:07:07.897000 | terradev_cli-2.9.8.tar.gz | 194,412 | 05/bf/c3a52e69114fa6a672c97a168c6d11bce9df094c6db6b1f9877704a8cbcf/terradev_cli-2.9.8.tar.gz | source | sdist | null | false | fb015fcbb4b50cf04698e627c6bd0e10 | 07515c9c5306e137be4d2ff13fd10975d62ee75e980d751f0755125be0d67520 | 05bfc3a52e69114fa6a672c97a168c6d11bce9df094c6db6b1f9877704a8cbcf | null | [] | 215 |
2.4 | honeybee-energy | 1.116.128 | Energy simulation library for honeybee. | 
[](https://github.com/ladybug-tools/honeybee-energy/actions)
[](https://coveralls.io/github/ladybug-tools/honeybee-energy)
[](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/) [](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# honeybee-energy
Honeybee extension for energy simulation.
This extension is intended to be generic such that it can represent energy properties
across multiple energy simulation engines.
However, there are many parts of the extension that directly leverage
the [EnergyPlus](https://github.com/NREL/EnergyPlus) simulation engine in order to make
[honeybee-core](https://github.com/ladybug-tools/honeybee-core) models immediately
simulatable.
The package can also leverage the [OpenStudio](https://github.com/NREL/OpenStudio) SDK
via the [honeybee-openstudio](https://github.com/ladybug-tools/honeybee-openstudio)
Python package to translate honeybee Models to OpenStudio format.
All of these dependencies are contained within the [honeybee-energy Docker image](https://hub.docker.com/r/ladybugtools/honeybee-energy).
Honeybee-energy is also used by other honeybee extensions that translate honeybee
models to building energy simulation engines, including
[honeybee-doe2](https://github.com/ladybug-tools/honeybee-doe2) and
[honeybee_ph](https://github.com/PH-Tools/honeybee_ph).
## Installation
`pip install -U honeybee-energy`
If you want to also include the standards library of typical ProgramTypes and
ConstructionSets use:
`pip install -U honeybee-energy[standards]`
If you want to also include the honeybee-openstudio library to perform translations
to OpenStudio use:
`pip install -U honeybee-energy[openstudio]`
To check if the command line interface is installed correctly use `honeybee-energy --help`.
## [API Documentation](http://ladybug-tools.github.io/honeybee-energy/docs)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/honeybee-energy
# or
git clone https://github.com/ladybug-tools/honeybee-energy
```
2. Install dependencies:
```console
cd honeybee-energy
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```console
python -m pytest tests/
```
4. Generate Documentation:
```console
sphinx-apidoc -f -e -d 4 -o ./docs ./honeybee_energy
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/honeybee-energy | null | null | [] | [] | [] | [
"honeybee-core==1.64.21",
"honeybee-standards==2.0.7",
"honeybee-energy-standards==2.3.0; extra == \"standards\"",
"honeybee-openstudio==0.4.9; extra == \"openstudio\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:06:56.458427 | honeybee_energy-1.116.128.tar.gz | 426,470 | b6/91/84cbf5545bc2190da20788965e49e77a37e63da05c6d40ec54aa96045627/honeybee_energy-1.116.128.tar.gz | source | sdist | null | false | d1ff9248210a8cad4777ffc8f57cf766 | 98536d89120cc44537cf15fd2a834a821eff7c5831f6b3c45ef0ba306f2fb5f5 | b69184cbf5545bc2190da20788965e49e77a37e63da05c6d40ec54aa96045627 | null | [
"LICENSE"
] | 1,065 |
2.4 | airbyte-agent-granola | 0.1.4 | Airbyte Granola Connector for AI platforms | # Granola
The Granola agent connector is a Python package that equips AI agents to interact with Granola through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
The Granola API connector provides read access to meeting notes from Granola,
an AI-powered meeting notes platform. This connector integrates with the Granola
Enterprise API to list and retrieve notes, including summaries, transcripts,
attendees, and calendar event details. Requires an Enterprise plan API key.
## Example questions
The Granola connector is optimized to handle prompts like these.
- List all meeting notes from Granola
- Show me recent meeting notes
- Get the details of a specific note
- List notes created in the last week
- Find meeting notes from last month
- Which meetings had the most attendees?
- Show me notes that mention budget reviews
- What meetings happened this quarter?
## Unsupported questions
The Granola connector isn't currently able to handle prompts like these.
- Create a new meeting note
- Delete a meeting note
- Update an existing note
- Share a note with someone
## Installation
```bash
uv pip install airbyte-agent-granola
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_granola import GranolaConnector
from airbyte_agent_granola.models import GranolaAuthConfig
connector = GranolaConnector(
auth_config=GranolaAuthConfig(
api_key="<Granola Enterprise API key generated from Settings > Workspaces > API tab>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GranolaConnector.tool_utils
async def granola_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_granola import GranolaConnector, AirbyteAuthConfig
connector = GranolaConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GranolaConnector.tool_utils
async def granola_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Notes | [List](./REFERENCE.md#notes-list), [Get](./REFERENCE.md#notes-get), [Search](./REFERENCE.md#notes-search) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Granola API docs
See the official [Granola API reference](https://docs.granola.ai/introduction).
## Version information
- **Package version:** 0.1.4
- **Connector version:** 1.0.2
- **Generated with Connector SDK commit SHA:** fc238ee4d89f35d5df587905e546890c0537377a
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/granola/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, granola, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T18:06:54.372507 | airbyte_agent_granola-0.1.4.tar.gz | 126,792 | d8/64/a021a13ec5b4504e2aa003dfeaca59ec73ef31f422e01afa791f53506cd8/airbyte_agent_granola-0.1.4.tar.gz | source | sdist | null | false | 792bd87aa5e10b491a42f51bebf17f8e | cc4f459e69af650793c2f6532c4e1f032cb48d0d417fa78fd22d3914000891a7 | d864a021a13ec5b4504e2aa003dfeaca59ec73ef31f422e01afa791f53506cd8 | null | [] | 254 |
2.4 | honeybee-radiance | 1.66.237 | Daylight and light simulation extension for honeybee. | # honeybee-radiance

[](https://github.com/ladybug-tools/honeybee-radiance/actions)
[](https://coveralls.io/github/ladybug-tools/honeybee-radiance)
[](https://www.python.org/downloads/release/python-3100/)
[](https://www.python.org/downloads/release/python-370/)
[](https://www.python.org/downloads/release/python-270/)
[](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
Radiance extension for honeybee.
Honeybee-radiance adds Radiance functionality to honeybee for daylight and radiation simulation.
## Installation
`pip install -U honeybee-radiance`
To check if the command line interface is installed correctly use `honeybee-radiance --help`.
## Documentation
[API documentation](https://www.ladybug.tools/honeybee-radiance/docs/)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/honeybee-radiance
# or
git clone https://github.com/ladybug-tools/honeybee-radiance
```
2. Install dependencies:
```console
cd honeybee-radiance
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```console
python -m pytest tests/
```
4. Generate Documentation:
```console
sphinx-apidoc -f -e -d 4 -o ./docs ./honeybee_radiance
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/honeybee-radiance | null | null | [] | [] | [] | [
"honeybee-core==1.64.21",
"honeybee-radiance-folder==2.11.17",
"honeybee-radiance-command==1.23.0",
"honeybee-standards==2.0.7"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:06:40.625624 | honeybee_radiance-1.66.237.tar.gz | 256,598 | a9/82/7ea7d35de283d1710201c5bc47a3c0cfebc68fce664ffee3ff6b915e80f5/honeybee_radiance-1.66.237.tar.gz | source | sdist | null | false | 8551ad4ee6224aabf2d64af3b7639fff | 13671221c6039e88447f402f18698c589d1f5cf3e25ce12faf857e876c7c407a | a9827ea7d35de283d1710201c5bc47a3c0cfebc68fce664ffee3ff6b915e80f5 | null | [
"LICENSE"
] | 442 |
2.1 | pidcalib2 | 1.5.0 | A set of tools for estimating LHCb PID efficiencies | # PIDCalib2
A set of software tools for estimating LHCb PID efficiencies.
The package includes several user-callable modules:
- [`make_eff_hists`](#make_eff_hists) creates histograms that can be used to estimate the PID efficiency of a user's sample
- [`ref_calib`](#ref_calib) calculates the LHCb PID efficiency of a user reference sample
- `merge_trees` merges two ROOT files with compatible `TTree`s
- [`plot_calib_distributions`](#plot_calib_distributions) allows you to plot distributions of variables in the calibration datasets
- [`pklhisto2root`](#pklhisto2root) converts [Pickled](https://docs.python.org/3/library/pickle.html) [boost-histogram](https://github.com/scikit-hep/boost-histogram)s to ROOT histograms
The term "reference dataset/sample" refers to the user's dataset to which they want to assign PID efficiencies. The "calibration datasets/samples" are the special, centrally managed samples used internally by PIDCalib for PID efficiency estimation. The `--sample` argument always concerns these calibration samples.
Slides with additional information, example output, and plots are available on [Indico](https://indico.cern.ch/event/1055804/contributions/4440878/attachments/2277451/3869206/Run12_210707_v2.pdf).
For a summary of the selections and coverage of the Run 3 samples see this [talk](https://indico.cern.ch/event/1501195/#25-coverage-of-the-calibration).
## Setup
When working on a computer where the LHCb software stack is available (LXPLUS, university cluster, etc.), one can setup PIDCalib2 by running
```sh
lb-conda pidcalib bash
```
After this, the following commands will be available
```sh
pidcalib2.make_eff_hists
pidcalib2.ref_calib
pidcalib2.merge_trees
pidcalib2.plot_calib_distributions
pidcalib2.pklhisto2root
```
You can skip the bash invocation and join the setup and run phases into a single command
```sh
lb-conda pidcalib pidcalib2.make_eff_hists
```
To run `make_eff_hists`, you will need access to CERN EOS. You don't need to do anything special on LXPLUS. On other machines, you will usually need to obtain a Kerberos ticket by running
```sh
kinit [username]@CERN.CH
```
### Installing from PyPI
The PIDCalib2 package is available on [PyPI](https://pypi.org/project/pidcalib2/). It can be installed on any computer via `pip` simply by running (preferably in a virtual environment; see [`venv`](https://docs.python.org/3/library/venv.html))
```sh
pip install pidcalib2
```
Note that this will install the [`xrootd`](https://pypi.org/project/xrootd/) *Python bindings*. One also has to install XRootD itself for the bindings to work. See [this page](https://xrootd.slac.stanford.edu/index.html) for XRootD releases and instructions.
## `make_eff_hists`
This module creates histograms that can be used to estimate the PID efficiency of a user's sample.
Reading all the relevant calibration files can take a long time. When running a configuration for the first time, we recommend using the `--max-files 1` option. This will limit PIDCalib2 to reading just a single calibration file. Such a test will reveal any problems with, e.g., missing variables quickly. Keep in mind that you might get a warning about empty bins in the total histogram as you are reading a small subset of the calibration data. For the purposes of a quick test, this warning can be safely ignored.
### Options
To get a usage message listing all the options, their descriptions, and default values, type
```sh
pidcalib2.make_eff_hists --help
```
The calibration files to be processed are determined by the `sample`, `magnet`, and `particle` options. All the valid combinations can be listed by running
```sh
pidcalib2.make_eff_hists --list configs
```
Aliases for standard variables are defined to simplify the commands, and we recommend using them when specifying variables. When you use a name that isn't an alias, a warning like the following will show up in the log.
```
'probe_PIDK' is not a known PID variable alias, using raw variable
```
All aliases can be listed by running
```sh
pidcalib2.make_eff_hists --list aliases
```
Note that there are many more variables than there are aliases. If you want to find a variable for which no alias exists, you can check one of the calibration files yourself. The paths to the calibration files are printed when the `--verbose` option is specified. Alternatively, you can simply guess the name - if it doesn't exist, PIDCalib2 will let you know and might provide a list of similar names that do exist.
A file with alternative binnings can be specified using `--binning-file`. The file must contain valid JSON specifying bin edges. For example, two-bin binnings for particle `Pi`, variables `P` and `PT` can be defined as
```json
{"Pi": {"P": [10000, 15000, 30000], "PT": [6000, 10000, 20000]}}
```
An arbitrary number of binnings can be defined in a single file.
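Such a binning file can also be written programmatically with the Python standard library; a minimal sketch (the file name `my_binning.json` and the edge values are just the illustration from above):

```python
import json

# Two-bin binnings for particle "Pi" in variables P and PT, as above.
binnings = {"Pi": {"P": [10000, 15000, 30000], "PT": [6000, 10000, 20000]}}

with open("my_binning.json", "w") as f:
    json.dump(binnings, f, indent=2)

# Sanity check: bin edges must be strictly increasing.
for particle, variables in binnings.items():
    for var, edges in variables.items():
        assert all(a < b for a, b in zip(edges, edges[1:])), f"{particle}/{var}"
```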
Complex cut expressions can be created by chaining simpler expressions using `&` (logical and) and `|` (logical or). One can also use standard mathematical symbols, like `*`, `/`, `+`, `-`, `(`, `)`. Whitespace does not matter.
### Examples
- Create a single 3D efficiency histogram for a single PID cut
```sh
pidcalib2.make_eff_hists --sample Turbo18 --magnet up --particle Pi --pid-cut "DLLK > 4" --bin-var P --bin-var ETA --bin-var nSPDhits --output-dir pidcalib_output
```
- Create multiple histograms in one run (most of the time is spent reading in data, so specifying multiple cuts is much faster than running `make_eff_hists` sequentially)
```sh
pidcalib2.make_eff_hists --sample Turbo16 --magnet up --particle Pi --pid-cut "DLLK > 0" --pid-cut "DLLK > 4" --pid-cut "DLLK > 6" --bin-var P --bin-var ETA --bin-var nSPDhits --output-dir pidcalib_output
```
- Create a single efficiency histogram for complex cuts using only negatively charged tracks
```sh
pidcalib2.make_eff_hists --sample Turbo18 --magnet up --particle Pi --pid-cut "MC15TuneV1_ProbNNp*(1-MC15TuneV1_ProbNNpi)*(1-MC15TuneV1_ProbNNk) < 0.5 & DLLK < 3" --cut "IsMuon==0 & Brunel_PT>250 & trackcharge==-1" --bin-var P --bin-var ETA --bin-var nSPDhits --output-dir pidcalib_output
```
### Caveats
Not all datasets have all the variables, and in some cases, the same variable is named differently (e.g., `probe_Brunel_IPCHI2` is named `probe_Brunel_MINIPCHI2` in certain electron samples). The aliases correspond to the most common names, but you might need to check the calibration files if PIDCalib2 can't find the variable you need.
## `ref_calib`
This module uses the histograms created by `make_eff_hists` to assign efficiency to events in a reference sample supplied by the user. Adding efficiency to the user-supplied file requires PyROOT and is optional.
The module works in two steps:
1. Calculate the efficiency and save it as a TTree in a separate file.
2. Optionally copy the efficiency TTree to the reference file and make it a friend of the user's TTree. The user must request the step by specifying `--merge` on the command line.
Be aware that `--merge` will modify your file. Use with caution.
### Options
The `sample` and `magnet` options are used solely to select the correct PID efficiency histograms. They should therefore mirror the options used when running `make_eff_hists`.
`bin-vars` must be a dictionary that relates the binning variables (or aliases) used to make the efficiency histograms to the variables in the reference sample. We assume that the reference sample branch names have the format `[ParticleName]_[VariableName]`. For example, `D0_K_calcETA` corresponds to a particle named `D0_K` and the variable `calcETA`. If the user wants to estimate the PID efficiency of their sample using 1D binning, where `calcETA` corresponds to the `ETA` binning variable alias of the calibration sample, they should specify `--bin-vars '{"ETA": "calcETA"}'`.
`ref-file` is the user's reference file to which they want to assign PID efficiencies. The parameter can be a local file or a remote file, e.g., on EOS (`--ref-file root://eoslhcb.cern.ch//eos/lhcb/user/a/anonymous/tuple.root`).
`ref-pars` must be a dictionary of particles from the reference sample to apply cuts to. The keys represent the particle branch name prefix (`D0_K` in the previous example), and the values passed are a list containing particle type and PID cut, e.g. `'{"D0_K" : ["K", "DLLK > 4"], "D0_Pi" : ["Pi", "DLLK < 4"]}'`.
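Quoting these JSON dictionaries by hand is error-prone. One way to assemble the command programmatically (a sketch, not part of PIDCalib2; the option values are taken from the examples below):

```python
import json
import shlex

# Mapping from calibration binning variables to reference-branch suffixes,
# and the per-particle PID cuts, as in the examples below.
bin_vars = {"P": "mom", "ETA": "Eta", "nSPDHits": "nSPDhits"}
ref_pars = {"Bach": ["K", "DLLK > 4"]}

cmd = [
    "pidcalib2.ref_calib",
    "--sample", "Turbo18",
    "--magnet", "up",
    "--bin-vars", json.dumps(bin_vars),
    "--ref-pars", json.dumps(ref_pars),
]

# shlex.join produces a correctly quoted shell command line.
print(shlex.join(cmd))
```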
The `--merge` option will copy the PID efficiency tree to your input file and make the PID efficiency tree a "Friend" of your input tree. Then you can treat your input tree as if it had the PID efficiency branches itself. E.g., `input_tree->Draw("PIDCalibEff")` should work. ROOT's "Friend" mechanism is an efficient way to add branches from one tree to another. Take a look [here](https://root.cern.ch/root/htmldoc/guides/users-guide/Trees.html#example-3-adding-friends-to-trees) if you would like to know more.
### Examples
- Evaluate efficiency of a single PID cut and save it to `user_ntuple_PID_eff.root` without adding it to `user_ntuple.root`
```sh
pidcalib2.ref_calib --sample Turbo18 --magnet up --ref-file data/user_ntuple.root --histo-dir pidcalib_output --bin-vars '{"P": "mom", "ETA": "Eta", "nSPDHits": "nSPDhits"}' --ref-pars '{"Bach": ["K", "DLLK > 4"]}' --output-file user_ntuple_PID_eff.root
```
- Evaluate efficiency of a single PID cut and add it to the reference file `user_ntuple.root`
```sh
pidcalib2.ref_calib --sample Turbo18 --magnet up --ref-file data/user_ntuple.root --histo-dir pidcalib_output --bin-vars '{"P": "mom", "ETA": "Eta", "nSPDHits": "nSPDhits"}' --ref-pars '{"Bach": ["K", "DLLK > 4"]}' --output-file user_ntuple_PID_eff.root --merge
```
- Evaluate efficiency of multiple PID cuts and add them to the reference file
```sh
pidcalib2.ref_calib --sample Turbo18 --magnet up --ref-file data/user_ntuple.root --histo-dir pidcalib_output --bin-vars '{"P": "P", "ETA": "ETA", "nSPDHits": "nSPDHits"}' --ref-pars '{"Bach": ["K", "DLLK > 4"], "SPi": ["Pi", "DLLK < 0"]}' --output-file user_ntuple_PID_eff.root --merge
```
### Caveats
You might notice that some of the events in your reference sample are assigned a `PIDCalibEff` of -999, a `PIDCalibErr` of -999, or both.
- `PIDCalibEff` is -999 when, for at least one track:
  - The event is out of binning range
  - The relevant bin in the efficiency histogram has no events whatsoever
  - The efficiency is negative
- `PIDCalibErr` is -999 when, for at least one track:
  - The event is out of binning range
  - The relevant bin in the efficiency histogram has no events whatsoever
  - The relevant bin in the efficiency histogram has no events passing PID cuts
  - The efficiency is negative
Because of `double` → `float` conversion in the original PIDCalib, tiny discrepancies (<1e−3 relative difference) in the efficiencies and/or uncertainties are to be expected.
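To see why such tiny differences arise, round-trip a double through a 32-bit float (a self-contained illustration using only the Python standard library, not PIDCalib code):

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (64-bit) through a 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

# An arbitrary efficiency-like value; float32 keeps only ~7 decimal digits.
eff = 0.8765432109876543
rel_diff = abs(eff - to_float32(eff)) / eff
print(rel_diff)  # well below the 1e-3 level quoted above
```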
A bug in the original PIDCalib caused the electron calibration datasets to be read twice, resulting in incorrect efficiency map uncertainties.
The original PIDCalib didn't apply the correct cuts to Omega samples (`K_Omega` and `K_DD`), leading to nonsensical efficiency maps.
### Electrons
To use the efficiency tables for electrons in 2024 and 2025, one should run a command similar to:
```sh
pidcalib2.ref_calib --sample 2024_WithUT_block1_Tables_with_brem --magnet up --ref-file "data/user_ntuple.root" --histo-dir /eos/lhcb/wg/rta/WP4/PIDCalib2_ElectronTables --bin-vars "{'PT' : 'PT', 'ETA': 'ETA'}" --ref-pars '{"eprobe": ["e", "ProbNNe > 0.2"]}' --output-file user_ntuple_PID_eff.root -v
```
The important differences here are:
- The `--sample` must end with the bremsstrahlung category, `with_brem` or `without_brem`. The data blocks available are: `block1-8`, `MC_W3134`, `MC_W3537`, `MC_W3739`, `MC_W4042`, `s25c1MagDown`, `s25c1MagUp`, `s25c2MagDown`, `s25c3MagUp`, `s25c4MagDown`, and `s24c4MagUp`. (The four original brem categories `0brem`, `1brem_tag`, `1brem_probe`, and `2brem` are available for Blocks 1, 5, and 7, in case someone was already using those, but we recommend the new naming scheme.)
- The `--magnet` is labeled `up`, but the tables actually include both MagUp and MagDown.
- The `--histo-dir` must always be `/eos/lhcb/wg/rta/WP4/PIDCalib2_ElectronTables`, as that is where the efficiency tables are stored.
- The `--bin-vars` must always be `"{'PT' : 'PT', 'ETA': 'ETA'}"`. The binning used to compute the efficiencies is `{"e": {"PT": [500,700,900,1150,1500,2000,2750,4200,20000], "ETA": [1.5,2.5,2.85,3.2,3.6,5.25]}}`. (For the original brem categories, use `"{'P' : 'P'}"`, with the binning `{"e": {"P": [0,4375,8750,13125,17500,20625,23750,26875,30000,35000,40000,45000,50000,62500,75000,87500,100000]}}`.)
- The `--ref-pars` cuts have to match the values for which the tables were computed, that is: `"{DLLe: 0, 2, 3, 5}"` and `"{ProbNNe: 0.2, 0.8}"`.
## `plot_calib_distributions`
This tool allows you to plot distributions of variables in the calibration datasets. You can supply the same cuts and custom binnings that you would use for `make_eff_hists`. If you wish to plot a variable for which no binning exists, a uniform binning with 50 bins will be used. You can change the number of bins using `--bins` and force a uniform binning even if another binning is defined via `--force-uniform`.
A plot for every requested variable will be created in the `--output-dir` directory. The format of the plots can be controlled by `--format`. Furthermore, `plot_calib_distributions.pkl` will be saved in the same directory, containing all the histograms, should the user want to make the plots manually.
### Examples
- Create plots of the variables DLLK and P using 1 calibration file
```sh
pidcalib2.plot_calib_distributions --sample Turbo18 --magnet up --particle Pi --bin-var DLLK --bin-var P --output-dir pidcalib_output --max-files 1
```
- Create PDF plots of variable P with 95 uniform bins
```sh
pidcalib2.plot_calib_distributions --sample Turbo18 --magnet up --particle Pi --bin-var P --output-dir pidcalib_output --max-files 1 --format pdf --force-uniform --bins 95
```
- Create plots of variable P using custom binning
```sh
pidcalib2.plot_calib_distributions --sample Turbo18 --magnet up --particle Pi --bin-var P --output-dir pidcalib_output --max-files 1 --format png --binning-file my_binning.json
```
## `pklhisto2root`
This tool converts pickled PIDCalib2 histograms to `TH*D` and saves them in a ROOT file. It can be used on histograms produced by `make_eff_hists` or `plot_calib_distributions`. Note that ROOT supports only 1-, 2-, and 3-dimensional histograms; attempting to convert higher-dimensional histograms will fail.
### Example
- Convert pickled boost_histograms from `make_eff_hists` to ROOT
```sh
pidcalib2.pklhisto2root "pidcalib_output/effhists-Turbo18-up-Pi-DLLK>4-P.ETA.nSPDhits.pkl"
```
This will translate the histograms and save them to `pidcalib_output/effhists-Turbo18-up-Pi-DLLK>4-P.ETA.nSPDhits.root`.
## Development
### With lb-conda
On machines where `lb-conda` is available, you may use the `pidcalib` environment for PIDCalib2 development. This is mainly useful for small modifications and only if you don't need to add any new dependencies.
1. Clone the repository from [GitLab](https://gitlab.cern.ch/lhcb-rta/pidcalib2)
2. Enter the PIDCalib2 directory
```sh
cd pidcalib2
```
3. Start a new BASH shell within the `pidcalib` environment
```sh
lb-conda pidcalib bash
```
4. Run your *local* PIDCalib2 code
```sh
cd src
python -m pidcalib2.make_eff_hists -h
```
### Without lb-conda
This is a more versatile (if convoluted) method. It gives you full control of the dev environment and the ability to use IDEs, etc.
1. Clone the repository from [GitLab](https://gitlab.cern.ch/lhcb-rta/pidcalib2)
2. Enter the PIDCalib2 directory
```sh
cd pidcalib2
```
3. (Optional) Set up a virtual environment
```sh
python3 -m venv .venv
source .venv/bin/activate
```
4. Install pinned dependencies
```sh
pip install -r requirements-dev.txt
```
5. Install `xrootd` (possibly manually; see this [issue](https://github.com/xrootd/xrootd/issues/1397))
6. Run the tests
```sh
pytest
```
7. Run the modules
```sh
cd src
python3 -m pidcalib2.make_eff_hists -h
```
### Tips
Certain tests can be excluded using markers, like this:
```sh
pytest -m "not xrootd"
```
See the available markers by running `pytest --markers` (the list starts with the PIDCalib2 custom markers, followed by the pytest built-in markers).
## Links
- [PIDGen2](https://gitlab.cern.ch/lhcb-rta/pidgen2) - a tool to resample MC PID variables based on distributions from data calibration samples
| text/markdown | Daniel Cervenkov | daniel.cervenkov@cern.ch | null | null | GNU General Public License v3 (GPLv3) | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://gitlab.cern.ch/lhcb-rta/pidcalib2 | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Tracker, https://gitlab.cern.ch/lhcb-rta/pidcalib2/issues"
] | twine/4.0.1 CPython/3.7.15 | 2026-02-20T18:04:26.313298 | pidcalib2-1.5.0.tar.gz | 1,920,970 | 30/e6/e5d844736f6087048e312de2e7b5a9e3ca01835c9a1fbfa69bef641f85ab/pidcalib2-1.5.0.tar.gz | source | sdist | null | false | ebfc2c218cbde9ffada138ef1dda6ae8 | 86148135231d9a15e40e4bce87aaacfe596e509d4ca0046691bf2af08cde1f17 | 30e6e5d844736f6087048e312de2e7b5a9e3ca01835c9a1fbfa69bef641f85ab | null | [] | 219 |
2.4 | catsu | 0.1.8 | High-performance embeddings client for multiple providers | <div align="center">

# 🐱 catsu
[](https://pypi.org/project/catsu/)
[](https://pypi.org/project/catsu/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://docs.catsu.dev)
[](https://discord.gg/vH3SkRqmUz)
_A unified, batteries-included client for embedding APIs that actually works._
</div>
**The world of embedding API clients is broken.**
- Everyone defaults to OpenAI's client for embeddings, even though it wasn't designed for that purpose
- Provider-specific libraries (VoyageAI, Cohere, etc.) are inconsistent, poorly maintained, or outright broken
- Universal clients like LiteLLM don't focus on embeddings—they rely on native client libraries, inheriting all their problems
- Every provider has different capabilities—some support dimension changes, others don't—with no standardized way to discover what's available
- Most clients lack basic features like retry logic, proper error handling, and usage tracking
**Catsu fixes this.** It's a high-performance, unified client built specifically for embeddings with:
🎯 A clean, consistent API across all providers <br>
🔄 Built-in retry logic with exponential backoff <br>
💰 Automatic usage and cost tracking <br>
📚 Rich model metadata and capability discovery <br>
⚡ Rust core with Python bindings for maximum performance
## Installation
```bash
pip install catsu
```
## Quick Start
```python
from catsu import Client
# Create client (reads API keys from environment)
client = Client()
# Generate embeddings
response = client.embed(
    "openai:text-embedding-3-small",
    ["Hello, world!", "How are you?"]
)
print(f"Dimensions: {response.dimensions}")
print(f"Tokens used: {response.usage.tokens}")
print(f"Embedding: {response.embeddings[0][:5]}")
```
## Async Support
```python
import asyncio
from catsu import Client
async def main():
    client = Client()
    response = await client.aembed(
        "openai:text-embedding-3-small",
        "Hello, async world!"
    )
    print(response.embeddings[0][:5])

asyncio.run(main())
```
## With Options
```python
response = client.embed(
    "openai:text-embedding-3-small",
    ["Search query"],
    input_type="query",  # "query" or "document"
    dimensions=256,      # output dimensions (if supported)
)
```
## Model Catalog
```python
# List all available models
models = client.list_models()
# Filter by provider
openai_models = client.list_models("openai")
for m in openai_models:
    print(f"{m.name}: {m.dimensions} dims, ${m.cost_per_million_tokens}/M tokens")
```
## Configuration
```python
client = Client(
    max_retries=5,  # Default: 3
    timeout=60,     # Default: 30 seconds
)
```
## NumPy Integration
```python
# Convert embeddings to numpy array
arr = response.to_numpy()
print(arr.shape) # (2, 1536)
```
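Once in NumPy form, the embeddings can be used directly; for instance, ranking documents against a query by cosine similarity (a generic sketch independent of catsu's API — the name `cosine_rank` and the toy vectors are illustrative):

```python
import numpy as np

def cosine_rank(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Return document indices sorted by descending cosine similarity."""
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    return np.argsort(-sims)

# Toy data: document 1 points in the same direction as the query.
query = np.array([1.0, 0.0])
docs = np.array([[0.0, 1.0], [2.0, 0.0], [1.0, 1.0]])
print(cosine_rank(query, docs))  # [1 2 0]
```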
## Context Manager
```python
# Sync
with Client() as client:
    response = client.embed("openai:text-embedding-3-small", "Hello!")
# Async
async with Client() as client:
    response = await client.aembed("openai:text-embedding-3-small", "Hello!")
```
---
<div align="center">
If you found this helpful, consider giving it a ⭐!
made with ❤️ by [chonkie, inc.](https://chonkie.ai)
</div>
| text/markdown; charset=UTF-8; variant=GFM | null | Bhavnick Minhas <bhavnick@chonkie.ai> | null | null | Apache-2.0 | embeddings, openai, voyageai, async, rust | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/chonkie-inc/catsu",
"Repository, https://github.com/chonkie-inc/catsu"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:03:38.908484 | catsu-0.1.8.tar.gz | 49,998 | 85/54/84b62d54afe2695a9170f6f434dc8912976a57bf6680aa99557eadb3676f/catsu-0.1.8.tar.gz | source | sdist | null | false | 38bec7982aec6bb619b093c9d718db3a | 961cbc239e77b32ee08e5498a7ddcb83a3c08b025fb3fc504f97c14569e9aa74 | 855484b62d54afe2695a9170f6f434dc8912976a57bf6680aa99557eadb3676f | null | [] | 1,178 |
2.4 | zaxy | 0.3.0 | Lightweight Python utility library: one-liner helpers for beginners (smart input, print shortcuts, files, random) | # 🛠 zaxy-Framework (v0.3.0)
zaxy is a powerful Python utility library that simplifies standard boilerplate code into clean, one-line commands.
zaxy — это производительная библиотека для Python, превращающая громоздкий стандартный код в чистые однострочные команды.
[EN] Developed by maks39P. Created on February 8, 2026, by a 14-year-old developer with minimal use of AI.
[RU] Разработчик: maks39P. Создано 8 февраля 2026 г. 14-летним разработчиком с минимальным использованием ИИ.
-----------------------------------------------------------
EN | Installation
-----------------------------------------------------------
pip install zaxy
EN | Core Documentation
0. Framework initialization inside the code (required)
- connect()
1. Console Output & Control
- tx(*args): Shortcut for print().
- clr() / clear(): Clears terminal screen (supports Windows/Linux).
- pau(text) / pause(text): Execution pause (replaces input() for waiting).
2. Smart Input & Global Variables
*Important: These functions inject variables directly into the global scope (globals).*
- ir("var_name", "prompt"): Replaces [ var_name = input("prompt") ]
- irn("var_name", "prompt"): Replaces [ var_name = int(input("prompt")) ] (includes protection against non-integer input).
- irf("var_name", "prompt"): Replaces [ var_name = float(input("prompt")) ] (supports both dots and commas).
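For intuition, a hypothetical sketch of how such an input helper can work (not the actual zaxy source; the real functions inject into `globals()`, while this version takes an explicit namespace so it is easy to test):

```python
def irn(name, prompt, namespace, read=input):
    """Keep asking until the user enters a valid integer, then store it
    in `namespace` under `name` (zaxy injects into the caller's globals)."""
    while True:
        raw = read(prompt)
        try:
            namespace[name] = int(raw)
            return
        except ValueError:
            print("Please enter a number!")

# Simulated session: the first answer is invalid, the second is accepted.
answers = iter(["abc", "42"])
scope = {}
irn("age", "> ", scope, read=lambda _: next(answers))
print(scope["age"])  # 42
```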
3. File System (OS wrappers)
- ls(path): Returns list of files in directory.
- gwd(): Returns current working directory path.
- md(name): Creates a new directory.
- rm(path): Removes a FILE (does not remove directories). Includes Xa-System-Error protection.
- ren(old, new): Renames a file or directory.
4. Randomization
- rn(a, b): Returns random integer (randint).
- rc(list): Returns random element from list (choice).
- sh(list): Shuffles list and returns it.
5. Ordinary Python
```python
print("What is your name?")
name = input("> ")
print("How old are you?")
while True:
    try:
        age = int(input("> "))
        break
    except ValueError:
        print("Please enter a number!")
print(f"Hello, {name}! You are {age} years old.")

import os
os.system('cls' if os.name == 'nt' else 'clear')
input("Press Enter to continue...")
```
5.5 Python with zaxy
```python
from zaxy import *

connect()
ir("name", "What is your name? > ")
irn("age", "How old are you? > ")
tx(f"Hello, {name}! You are {age} years old.")
clr()
pau("Press Enter to continue...")
```
-----------------------------------------------------------
RU | Инструкция по использованию
-----------------------------------------------------------
pip install zaxy
RU | Техническая документация
0. Подключение фреймворка внутри кода (обязательно)
- connect()
1. Вывод и управление консолью
- tx(*args): Сокращение для print().
- clr() / clear(): Полная очистка консоли (Windows/Linux).
- pau(text) / pause(text): Пауза выполнения кода (ожидание нажатия Enter).
2. Умный ввод и глобальные переменные
*Важно: Эти функции создают переменные сразу в глобальной области видимости (globals).*
- ir("имя", "текст"): Заменяет [ имя = input("текст") ]
- irn("имя", "текст"): Заменяет [ имя = int(input("текст")) ] (с защитой от ввода букв вместо чисел).
- irf("имя", "текст"): Заменяет [ имя = float(input("текст")) ] (автоматически меняет запятую на точку).
3. Работа с системой (OS модули)
- ls(путь): Список содержимого папки.
- gwd(): Получение пути текущей рабочей директории.
- md(имя): Создание новой папки.
- rm(путь): Удаление ФАЙЛА (не папки). Включает обработку Xa-System-Error.
- ren(старое, новое): Переименование файла или папки.
4. Рандомизация и списки
- rn(a, b): Случайное целое число от a до b.
- rc(список): Случайный выбор элемента из списка.
- sh(список): Перемешивание элементов списка (shuffle).
5. обычный пайтон
```python
print("Как тебя зовут?")
name = input("> ")
print("Сколько лет?")
while True:
    try:
        age = int(input("> "))
        break
    except ValueError:
        print("Введи число!")
print(f"Привет, {name}! Тебе {age}")

import os
os.system('cls' if os.name == 'nt' else 'clear')
input("Нажми Enter...")
```
5.5 пайтон с zaxy
```python
from zaxy import *

connect()
ir("name", "Как тебя зовут? > ")
irn("age", "Сколько лет? > ")
tx(f"Привет, {name}! Тебе {age}")
clr()
pau("Нажми Enter...")
```
-----------------------------------------------------------
Contacts / Контакты:
Email: dekabri2316@gmail.com
Telegram: @maks39P
-----------------------------------------------------------
| text/markdown | null | developer-maksancik <dekabri2316@gmail.com> | null | null | MIT | python utilities, beginner python, one-liner python, simplify input, easy console, file helpers, random shortcuts, python helpers, zaxy, zaxy framework | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/developer-maksancik/zaxy-project",
"Repository, https://github.com/developer-maksancik/zaxy-project",
"Issues, https://github.com/developer-maksancik/zaxy-project/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:02:26.841931 | zaxy-0.3.0.tar.gz | 5,389 | ed/d7/c67556ad79ab9592c2f1661ff9f3aaad70f2ae486290c6240e52e9425ca6/zaxy-0.3.0.tar.gz | source | sdist | null | false | 8565d08aa8d04cee348a47ce5c7f1ca7 | 2cdf55ec2f6284b23e80902ad8bcf6c9c81ea752761225a86ca15fc299982cf5 | edd7c67556ad79ab9592c2f1661ff9f3aaad70f2ae486290c6240e52e9425ca6 | null | [
"LICENSE"
] | 217 |
2.4 | ansys-edb-core | 0.3.0.dev6 | A python wrapper for Ansys Edb service | PyEDB-Core
==========
|pyansys| |python| |pypi| |MIT|
.. |pyansys| image:: https://img.shields.io/badge/Py-Ansys-ffc107.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAABDklEQVQ4jWNgoDfg5mD8vE7q/3bpVyskbW0sMRUwofHD7Dh5OBkZGBgW7/3W2tZpa2tLQEOyOzeEsfumlK2tbVpaGj4N6jIs1lpsDAwMJ278sveMY2BgCA0NFRISwqkhyQ1q/Nyd3zg4OBgYGNjZ2ePi4rB5loGBhZnhxTLJ/9ulv26Q4uVk1NXV/f///////69du4Zdg78lx//t0v+3S88rFISInD59GqIH2esIJ8G9O2/XVwhjzpw5EAam1xkkBJn/bJX+v1365hxxuCAfH9+3b9/+////48cPuNehNsS7cDEzMTAwMMzb+Q2u4dOnT2vWrMHu9ZtzxP9vl/69RVpCkBlZ3N7enoDXBwEAAA+YYitOilMVAAAAAElFTkSuQmCC
:target: https://docs.pyansys.com/
:alt: PyAnsys
.. |python| image:: https://img.shields.io/pypi/pyversions/ansys-edb-core?logo=pypi
:target: https://pypi.org/project/ansys-edb-core/
:alt: Python
.. |pypi| image:: https://img.shields.io/pypi/v/ansys-edb-core.svg?logo=python&logoColor=white
:target: https://pypi.org/project/ansys-edb-core
:alt: PyPI
.. |MIT| image:: https://img.shields.io/badge/License-MIT-yellow.svg
:target: https://opensource.org/licenses/MIT
:alt: MIT
|GH-CI| |black|
.. |GH-CI| image:: https://github.com/ansys/pyedb-core/actions/workflows/ci_cd.yml/badge.svg
:target: https://github.com/ansys/pyedb-core/actions/workflows/ci_cd.yml
:alt: GH-CI
.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg?style=flat
:target: https://github.com/psf/black
:alt: Black
.. reuse_start
PyEDB-Core is a Python client for the Electronics Database (EDB), a format for storing
information describing designs for Ansys Electronic Desktop (AEDT). Using the PyEDB-Core API,
you can make calls to an EDB server that is running either locally or remotely.
The EDB server can create, edit, read, and write EDB files to disk. These files can then be
read into AEDT and their designs simulated.
Documentation and issues
~~~~~~~~~~~~~~~~~~~~~~~~
Documentation for the latest stable release of PyEDB-Core is hosted at
`PyEDB-Core documentation <https://edb.core.docs.pyansys.com/version/stable/index.html#>`_.
The documentation has five sections:
- `Getting started <https://edb.core.docs.pyansys.com/version/stable/getting_started/index.html#>`_: Describes
how to install PyEDB-Core in user mode.
- `User guide <https://edb.core.docs.pyansys.com/version/stable/user_guide/index.html>`_: Describes how to
use PyEDB-Core.
- `API reference <https://edb.core.docs.pyansys.com/version/stable/api/index.html>`_: Provides API member descriptions
and usage examples.
- `Examples <https://edb.core.docs.pyansys.com/version/stable/examples/index.html>`_: Provides examples showing
end-to-end workflows for using PyEDB-Core.
- `Contribute <https://edb.core.docs.pyansys.com/version/stable/contribute.html>`_: Describes how to install
PyEDB-Core in developer mode and how to contribute to this PyAnsys library.
In the upper right corner of the documentation's title bar, there is an option for switching from
viewing the documentation for the latest stable release to viewing the documentation for the
development version or previously released versions.
On the `PyEDB-Core Issues <https://github.com/ansys/pyedb-core/issues>`_ page, you can create
issues to report bugs and request new features. When possible, use these issue templates:
* Bug report template
* Feature request template
* Documentation issue template
* Example request template
If your issue does not fit into one of these categories, create your own issue.
On the `Discussions <https://discuss.ansys.com/>`_ page on the Ansys Developer portal, you can post questions,
share ideas, and get community feedback.
To reach the PyAnsys support team, email `pyansys.core@ansys.com <mailto:pyansys.core@ansys.com>`_.
License
~~~~~~~
PyEDB-Core is licensed under the MIT license.
PyEDB-Core makes no commercial claim over Ansys whatsoever. The use of this Python client requires
a legally licensed copy of AEDT. For more information, see the
`Ansys Electronics <https://www.ansys.com/products/electronics>`_ page on the Ansys website.
| text/x-rst | null | "ANSYS, Inc." <pyansys.support@ansys.com> | null | PyAnsys developers <pyansys.maintainers@ansys.com> | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Information Analysis",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ansys-api-edb==0.3.dev3",
"protobuf<7,>=3.19.3",
"grpcio>=1.44.0",
"Django>=4.2.16",
"ansys-tools-common>=0.3.1",
"sphinx==7.4.7; extra == \"doc\"",
"numpydoc==1.9.0; extra == \"doc\"",
"ansys_sphinx_theme>=0.12.2; extra == \"doc\"",
"sphinx-copybutton==0.5.2; extra == \"doc\"",
"notebook; extra == \"notebook\"",
"matplotlib; extra == \"notebook\"",
"ipynbname; extra == \"notebook\"",
"pytest==8.4.1; extra == \"tests\"",
"pytest-cov==6.2.1; extra == \"tests\"",
"pytest-mock==3.14.1; extra == \"tests\"",
"tox; extra == \"tests\""
] | [] | [] | [] | [
"Documentation, https://edb.core.docs.pyansys.com",
"Homepage, https://github.com/ansys/pyedb-core",
"Source, https://github.com/ansys/pyedb-core",
"Tracker, https://github.com/ansys/pyedb-core/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:01:38.880916 | ansys_edb_core-0.3.0.dev6.tar.gz | 157,802 | 91/41/527cee13799e9744b5c8fca3f8cf2cc722007b78bbe0cda99dceef32f4ff/ansys_edb_core-0.3.0.dev6.tar.gz | source | sdist | null | false | 2b30ce6daa43da4e3f88aaa98f2267a1 | 91f41756d647c7324e565d03f5ad819afacf8a580778e56c78eb7a2af5370aee | 9141527cee13799e9744b5c8fca3f8cf2cc722007b78bbe0cda99dceef32f4ff | null | [
"LICENSE"
] | 198 |
2.4 | fast-scrape | 0.2.4 | High-performance HTML parsing library for Python | # fast-scrape
[](https://pypi.org/project/fast-scrape)
[](https://pypi.org/project/fast-scrape)
[](../../LICENSE-MIT)
**8x faster** HTML parsing than BeautifulSoup4. Rust-powered with **3000x faster** CSS selector queries.
## Installation
```bash
pip install fast-scrape
```
<details>
<summary>Alternative package managers</summary>
```bash
# uv (recommended - 10-100x faster)
uv pip install fast-scrape
# Poetry
poetry add fast-scrape
# Pipenv
pipenv install fast-scrape
```
</details>
> [!IMPORTANT]
> Requires Python 3.10 or later.
## Quick start
```python
from scrape_rs import Soup
soup = Soup("<html><body><div class='content'>Hello, World!</div></body></html>")
div = soup.find("div")
print(div.text) # Hello, World!
```
## Usage
<details open>
<summary><strong>Find elements</strong></summary>
```python
from scrape_rs import Soup
soup = Soup(html)
# Find first element by tag
div = soup.find("div")
# Find all elements
divs = soup.find_all("div")
# CSS selectors
for el in soup.select("div.content > p"):
    print(el.text)
```
</details>
<details>
<summary><strong>Element properties</strong></summary>
```python
element = soup.find("a")
text = element.text # Get text content
html = element.inner_html # Get inner HTML
href = element.get("href") # Get attribute
```
</details>
<details>
<summary><strong>Batch processing</strong></summary>
```python
from scrape_rs import Soup
# Process multiple documents in parallel
documents = [html1, html2, html3]
soups = Soup.parse_batch(documents)
for soup in soups:
    print(soup.find("title").text)
```
> [!TIP]
> Use `parse_batch()` for processing multiple documents. Uses all CPU cores automatically.
</details>
<details>
<summary><strong>Type hints</strong></summary>
Full IDE support with type stubs:
```python
from scrape_rs import Soup, Tag
def extract_links(soup: Soup) -> list[str]:
    return [a.get("href") for a in soup.select("a[href]")]
```
</details>
## Performance
Massive performance improvements across all operations:
<details open>
<summary><strong>Parse speed comparison</strong></summary>
| File size | fast-scrape | BeautifulSoup4 | lxml | Speedup |
|-----------|-------------|----------------|------|---------|
| 1 KB | **11 µs** | 0.23 ms | 0.31 ms | **20-28x faster** |
| 100 KB | **2.96 ms** | 31.4 ms | 28.2 ms | **9.5-10.6x faster** |
| 1 MB | **15.5 ms** | 1247 ms | 1032 ms | **66-80x faster** |
**Throughput:** 64 MB/s on 1MB files — handles large documents efficiently.
</details>
<details>
<summary><strong>Query performance</strong></summary>
| Operation | fast-scrape | BeautifulSoup4 | Speedup |
|-----------|-------------|----------------|---------|
| `find("div")` | **208 ns** | 16 µs | **77x** |
| `find(".class")` | **20 ns** | 797 µs | **40,000x** |
| `find("#id")` | **20 ns** | 799 µs | **40,000x** |
| `select("div > p")` | **24.7 µs** | 4.361 ms | **176x** |
**CSS selectors dominate:** Class and ID selectors run in tens of nanoseconds versus hundreds of microseconds in BeautifulSoup4.
</details>
<details>
<summary><strong>Memory efficiency (100MB HTML)</strong></summary>
| Library | Memory | Efficiency |
|---------|--------|------------|
| fast-scrape | **145 MB** | 1x baseline |
| lxml | 2,100 MB | 14.5x larger |
| BeautifulSoup4 | 3,200 MB | **22x larger** |
**Result:** 14-22x more memory-efficient than Python competitors.
</details>
**Architecture optimizations:**
- **SIMD-accelerated class matching** — 2-10x faster selector execution
- **Zero-copy serialization** — 50-70% memory reduction in HTML output
- **Batch processing** — Parallel parsing uses all CPU cores automatically
Under the hood, fast-scrape builds on the following Rust crates.
**Parsing & Selection (Servo browser engine):**
- [html5ever](https://crates.io/crates/html5ever) — Spec-compliant HTML5 parser
- [selectors](https://crates.io/crates/selectors) — CSS selector matching engine
**Streaming Parser (Cloudflare):**
- [lol_html](https://github.com/cloudflare/lol_html) — High-performance streaming HTML parser with constant-memory event-driven API
## Related packages
| Platform | Package |
|----------|---------|
| Rust | [`scrape-core`](https://crates.io/crates/scrape-core) |
| Node.js | [`@fast-scrape/node`](https://www.npmjs.com/package/@fast-scrape/node) |
| WASM | [`@fast-scrape/wasm`](https://www.npmjs.com/package/@fast-scrape/wasm) |
## License
MIT OR Apache-2.0
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT OR Apache-2.0 | html, parser, scraping, css-selectors, dom | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust",
"Topic :: Text Processing :: Markup :: HTML",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/bug-ops/scrape-rs",
"Homepage, https://github.com/bug-ops/scrape-rs",
"Issues, https://github.com/bug-ops/scrape-rs/issues",
"Repository, https://github.com/bug-ops/scrape-rs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:01:32.858712 | fast_scrape-0.2.4.tar.gz | 127,576 | 35/db/833e0a65c1b0fe8d1a401ddcf7f673b7db09e1cce3a76940490c3a1fa8b6/fast_scrape-0.2.4.tar.gz | source | sdist | null | false | 589ce7eeac6728a27e52c0ee7262eb76 | 72482c19a38ce94ec556437d431e2e1be430c4457f10d0e3ad8ea1fc5d05d70e | 35db833e0a65c1b0fe8d1a401ddcf7f673b7db09e1cce3a76940490c3a1fa8b6 | null | [] | 1,711 |
2.4 | lambda-packer | 0.1.58 | A tool to package Python AWS Lambda functions with zips, Docker containers, and layers. | # lambda-packer
**A streamlined tool for managing and packaging Python AWS Lambda functions**
---
## Overview
`lambda-packer` is a command-line tool designed to simplify the process of packaging Python AWS Lambda functions.
It provides an opinionated approach to develop Lambdas using a monorepo, allowing packaging as either zip files or Docker containers,
with shared dependencies packaged as Lambda layers.
### Key Features
- Package Lambdas as zip files or Docker containers
- Support for multiple Lambda layers shared across functions
- Simple YAML configuration to manage Lambdas and layers
- Layer packaging with automatic dependency handling
---
## Installation
```bash
pip install lambda-packer
```
---
## Usage

### 1. Initialize a new repo
The `init` command creates a basic repository structure for your Lambda functions, including a `common` folder for shared dependencies, an example Lambda function, and a `package_config.yaml` file.
```bash
lambda-packer init <parent_directory> --lambda-name <lambda_name>
```
Example:
```bash
lambda-packer init my_project --lambda-name my_lambda
```
This command creates:
```
my_project/
├── common/
├── my_lambda/
│   ├── lambda.py
│   └── requirements.txt
├── dist/
└── package_config.yaml
```
### 2. Configuration
The `package_config.yaml` file is where you define how to package your Lambdas. You can specify the type of packaging (`zip` or `docker`), the Python runtime, and any layers associated with the Lambda.
#### Example `package_config.yaml`
```yaml
lambdas:
  my_lambda:
    type:
      - zip
    file_name: lambda
    function_name: lambda_handler
    runtime: '3.12'
    platforms: ['linux/arm64', 'linux/x86_64']
    layers:
      - common
```
### 3. Package Lambda as a Zip
To package a Lambda function (for a `zip` type Lambda), use the following command:
```bash
lambda-packer package my_lambda
```
This will package the Lambda function and any referenced layers (e.g., `common`) into a zip file in the `dist` directory.
### 4. Package Lambda as a Docker Container
To package a Lambda as a Docker container (for a `docker` type Lambda), modify the `package_config.yaml` and set `type: docker`.
```yaml
lambdas:
  my_lambda:
    type: docker
    runtime: "3.9"
    layers:
      - common
```
Then run:
```bash
lambda-packer package my_lambda
```
Or package them all:
```bash
lambda-packer package
```
The tool will build a Docker image using the specified Python runtime and package the Lambda function.
### 5. Packaging Lambda Layers
If you need to package shared dependencies (like the `common` folder) as Lambda layers, you can use the `package-layer` command:
```bash
lambda-packer package-layer common
```
This command packages the `common` directory as a Lambda layer and zips it to the `dist/` folder.
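For reference, AWS expects a Python layer archive to contain a top-level `python/` directory, which the Lambda runtime adds to `sys.path`. A layer zip for `common` would therefore look roughly like this (illustrative layout; inspect the zip generated in `dist/` to confirm how lambda-packer arranges it):

```
common.zip
└── python/
    └── common/
        ├── __init__.py
        └── ...
```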
---
## Available Commands
- `init <parent_directory> --lambda-name <lambda_name>`: Initialize a new monorepo with a common folder, a lambda, and `package_config.yaml`.
- `package <lambda_name>`: Package the specified Lambda function (either as zip or Docker container).
- `package-layer <layer_name>`: Package a specific layer (e.g., `common`) into a zip file.
- `config <lambda_name>`: Generate a package_config.yaml from an existing monorepo.
- `clean`: Clean the `dist/` directory by removing all contents.
---
## Example Workflow
1. **Initialize the project**:
```bash
lambda-packer init my_project --lambda-name my_lambda
```
2. **Edit `package_config.yaml`** to configure the Lambda:
```yaml
lambdas:
  my_lambda:
    type: zip
    runtime: "3.9"
    layers:
      - common
```
3. **Install dependencies** for `my_lambda` by editing `my_lambda/requirements.txt`.
4. **Package the Lambda**:
```bash
lambda-packer package my_lambda
```
5. **Package the `common` layer** (if needed):
```bash
lambda-packer package-layer common
```
### 6. Adding a new lambda to an existing repository
You can add a new Lambda to an existing repository using the `lambda` command. You can also specify layers to be added to the new Lambda.
```bash
lambda-packer lambda <lambda_name> --runtime <runtime_version> --type <zip|docker> --layers <layer1> --layers <layer2>
```
Example:
```bash
lambda-packer lambda my_new_lambda --runtime 3.9 --type docker --layers common --layers shared
```
This will create a new Lambda directory and update the `package_config.yaml` like so:
```yaml
lambdas:
  my_new_lambda:
    runtime: "3.9"
    type: docker
    layers:
      - common
      - shared
```
If no layers are specified, the `layers` key will not be added.
Example without layers:
```bash
lambda-packer lambda my_new_lambda --runtime 3.9 --type docker
```
This will update the `package_config.yaml` like this:
```yaml
lambdas:
  my_new_lambda:
    runtime: "3.9"
    type: docker
```
---
## Contributing
Contributions are welcome! If you'd like to contribute to this project, please open a pull request or issue on GitHub.
### Development Setup
Clone this repository and run:
```bash
git clone https://github.com/calvernaz/lambda-packer.git
cd lambda-packer
pip install -e .
```
For development:
```bash
pip install -e .[dev]
```
### Running Tests
```bash
pytest tests/
```
---
### Release
Bump patch version:
```bash
bumpversion patch
```
Push tags:
```
git push origin main --tags
```
## License
This project is licensed under the MIT License.
---
## Contact
For any questions or feedback, feel free to open an issue on GitHub.
| text/markdown | null | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"Click",
"PyYAML",
"docker>=4.4.0",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"black; extra == \"dev\"",
"twine; extra == \"dev\"",
"bump2version; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:01:31.084679 | lambda_packer-0.1.58.tar.gz | 19,966 | 56/1e/385325ae0bc25bd7c188ea6f0dd0576321207ff2357f4c3b2a4d835afd93/lambda_packer-0.1.58.tar.gz | source | sdist | null | false | 4c6713509ff161551179faca714e2d7f | 058604e4a94b922f7027315349c09c984985590e1fa37083cc23df7d483426d4 | 561e385325ae0bc25bd7c188ea6f0dd0576321207ff2357f4c3b2a4d835afd93 | null | [
"LICENSE"
] | 220 |
2.1 | huff | 1.8.3 | huff: Market Area Analysis in Python | # huff: Market Area Analysis in Python

This Python library is designed for performing market area analyses with the *Huff Model* (Huff 1962, 1964) and/or the *Multiplicative Competitive Interaction (MCI) Model* (Nakanishi and Cooper 1974, 1982). The package is especially intended for researchers in economic geography, regional economics, spatial planning, marketing, geoinformation science, and health geography.

It is designed to cover the entire workflow of a market area analysis, including model calibration and GIS-related processing. Users may load point shapefiles (or CSV, XLSX files) of customer origins and supply locations and conduct a market area analysis step by step. The first step after importing is always to create an interaction matrix with a built-in function, on the basis of which all implemented models can then be calculated. The library supports parameter estimation based on empirical customer data using the MCI model or Maximum Likelihood estimation. See Huff and McCallum (2008), Orpana and Lampinen (2003), and Wieland (2017) for a description of the models, their practical application, and fitting procedures.

Additionally, the library includes functions for accessibility analysis, which may be combined with market area analysis, namely the *Hansen accessibility* (Hansen 1959) and the *Two-step floating catchment area analysis* (Luo and Wang 2003). The package also includes auxiliary GIS functions for market area analysis (buffer, distance matrix, overlay statistics) and clients for OpenRouteService(1) for network analysis (e.g., transport cost matrix) and OpenStreetMap(2) for simple maps. All auxiliary functions are implemented in the market area analysis functions but can also be used stand-alone.
## Author
Thomas Wieland [ORCID](https://orcid.org/0000-0001-5168-9846) [EMail](mailto:geowieland@googlemail.com)
## Availability
- 📦 PyPI: [huff](https://pypi.org/project/huff/)
- 💻 GitHub Repository: [huff_official](https://github.com/geowieland/huff_official)
- 📄 DOI (Zenodo): [10.5281/zenodo.18639559](https://doi.org/10.5281/zenodo.18639559)
A software paper describing the library is available at [arXiv](https://arxiv.org/abs/2602.17640)
## Citation
If you use this software, please cite:
Wieland, T. (2026). huff: Market Area Analysis in Python (Version 1.8.3) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.18639559
## Installation
To install the package from the Python Package Index (PyPI), use `pip`:
```bash
pip install huff
```
To install the package from GitHub with `pip`:
```bash
pip install git+https://github.com/geowieland/huff_official.git
```
## Features
- **Data management and preliminary analysis**:
- Importing tables or point geodata or interaction matrix
- Setting attributes of customer origins and supply locations (variables, weightings)
- Creating interaction matrix from point geodata (origins and destinations), including calculation of transport costs (distance, travel time)
- Creating interaction matrix from survey data
- **Huff Model**:
- Basic Huff Model analysis based on an interaction matrix
- Different function types: power, exponential, logistic
- Defining further attraction indicators in the utility function
- Huff model parameter estimation via Maximum Likelihood (ML) from probabilities, customer flows, and total market areas
- Huff model market simulation
- **Multiplicative Competitive Interaction Model**:
- Log-centering transformation of interaction matrix
- Fitting MCI model with >= 2 independent variables in the utility function
- Huff-like MCI model market simulation
- MCI model market simulation with inverse log-centering transformation
- **Hansen accessibility**:
- Calculating basic Hansen accessibility based on an interaction matrix
- Calculating multivariate and (empirically) weighted Hansen accessibility based on an interaction matrix
- **Two-step floating catchment area analysis**:
- Calculating basic 2SFCA analysis based on an interaction matrix
- Calculating multivariate and (empirically) weighted 2SFCA analysis based on an interaction matrix
- **GIS tools**:
- OpenRouteService(1) Client (implemented in model functions, but also available stand-alone):
- Creating transport costs matrix from origins and destinations
- Creating isochrones from origins and destinations
- OpenStreetMap(2) Client (implemented in model functions, but also available stand-alone):
- Creating simple maps with OSM basemap
- Other GIS tools (implemented in model functions, but also available stand-alone):
- Creating buffers from geodata
- Spatial join with statistics
- Creating euclidean distance matrix from origins and destinations
- Overlay-difference analysis of polygons
(1) © openrouteservice.org by HeiGIT | Map data © OpenStreetMap contributors | https://openrouteservice.org/
(2) © OpenStreetMap contributors | available under the Open Database License | https://www.openstreetmap.org/
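As background for the Huff Model functions listed above, the core choice-probability formula can be sketched in plain Python. This is the textbook formulation, independent of the library's API; the weighting values mirror the gamma and lambda used in the example below:

```python
# Textbook Huff formulation with a power-type transport cost function:
#   U_ij = A_j**gamma * d_ij**lam,   P_ij = U_ij / sum_k U_ik
# where A_j is the attraction (e.g. sales area) of destination j and
# d_ij the transport cost from origin i to destination j.

def huff_probabilities(attractions, distances, gamma=0.9, lam=-2.2):
    """Choice probabilities for one origin over all destinations."""
    utilities = [a ** gamma * d ** lam for a, d in zip(attractions, distances)]
    total = sum(utilities)
    return [u / total for u in utilities]

# Three hypothetical supermarkets: sales area (m^2) and distance (km)
probs = huff_probabilities([800.0, 1200.0, 400.0], [1.5, 2.0, 0.8])
assert abs(sum(probs) - 1.0) < 1e-9  # probabilities sum to one
```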
## Examples
```python
# Workflow for basic Huff model analysis:
from huff.data_management import load_geodata
from huff.models import create_interaction_matrix
Haslach = load_geodata(
    "data/Haslach.shp",
    location_type="origins",
    unique_id="BEZEICHN"
)
# Loading customer origins (shapefile)
Haslach.define_marketsize("pop")
# Definition of market size variable
Haslach.define_transportcosts_weighting(
    func="power",
    param_lambda=-2.2,
)
# Definition of transport costs weighting (lambda)
Haslach.summary()
# Summary after update
Haslach_supermarkets = load_geodata(
    "data/Haslach_supermarkets.shp",
    location_type="destinations",
    unique_id="LFDNR"
)
# Loading supply locations (shapefile)
Haslach_supermarkets.define_attraction("VKF_qm")
# Defining attraction variable
Haslach_supermarkets.define_attraction_weighting(
    param_gamma=0.9
)
# Define attraction weighting (gamma)
Haslach_supermarkets.summary()
# Summary of updated supply locations
haslach_interactionmatrix = create_interaction_matrix(
    Haslach,
    Haslach_supermarkets
)
# Creating interaction matrix
haslach_interactionmatrix.transport_costs(
    ors_auth="5b3ce3597851110001cf62487536b5d6794a4521a7b44155998ff99f",
    network=True,
)
# Obtaining transport costs (default: driving-car)
# set network = True to calculate transport costs matrix via ORS API (default)
# ORS API documentation: https://openrouteservice.org/dev/#/api-docs/v2/
haslach_interactionmatrix.summary()
# Summary of interaction matrix
haslach_interactionmatrix.flows()
# Calculating spatial flows for interaction matrix
huff_model = haslach_interactionmatrix.marketareas()
# Calculating total market areas
# Result of class HuffModel
huff_model.summary()
# Summary of Huff model
haslach_interactionmatrix.plot(
    origin_point_style={
        "name": "Districts",
        "color": "black",
        "alpha": 1,
        "size": 100,
    },
    location_point_style={
        "name": "Supermarket chains",
        "color": {
            "Name": {
                "Aldi Süd": "blue",
                "Edeka": "yellow",
                "Lidl": "red",
                "Netto": "orange",
                "Real": "darkblue",
                "Treff 3000": "fuchsia"
            }
        },
        "alpha": 1,
        "size": 100
    },
)
# Plot of interaction matrix with expected customer flows
```
For detailed examples, see the /examples folder in the [public GitHub repository](https://github.com/geowieland/huff_official).
## Literature
- Cooper LG, Nakanishi M (1983) Standardizing Variables in Multiplicative Choice Models. *Journal of Consumer Research* 10(1): 96–108. [10.1086/208948](https://doi.org/10.1086/208948)
- De Beule M, Van den Poel D, Van de Weghe N (2014) An extended Huff-model for robustly benchmarking and predicting retail network performance. *Applied Geography* 46(1): 80–89. [10.1016/j.apgeog.2013.09.026](https://doi.org/10.1016/j.apgeog.2013.09.026)
- Güssefeldt J (2002) Zur Modellierung von räumlichen Kaufkraftströmen in unvollkommenen Märkten. *Erdkunde* 56(4): 351–370. [10.3112/erdkunde.2002.04.02](https://doi.org/10.3112/erdkunde.2002.04.02)
- Haines Jr GH, Simon LS, Alexis M (1972) Maximum Likelihood Estimation of Central-City Food Trading Areas. *Journal of Marketing Research* 9(2): 154-159. [10.2307/3149948](https://doi.org/10.2307/3149948)
- Hansen WG (1959) How Accessibility Shapes Land Use. *Journal of the American Institute of Planners* 25(2): 73-76. [10.1080/01944365908978307](https://doi.org/10.1080/01944365908978307)
- Huff DL (1962) *Determination of Intra-Urban Retail Trade Areas*. Real Estate Research Program, Graduate Schools of Business Administration, University of California.
- Huff DL (1963) A Probabilistic Analysis of Shopping Center Trade Areas. *Land Economics* 39(1): 81-90. [10.2307/3144521](https://doi.org/10.2307/3144521)
- Huff DL (1964) Defining and estimating a trading area. *Journal of Marketing* 28(4): 34–38. [10.2307/1249154](https://doi.org/10.2307/1249154)
- Huff DL (2003) Parameter Estimation in the Huff Model. *ArcUser* 6(4): 34–36. https://stg.esri.com/news/arcuser/1003/files/huff.pdf
- Huff DL, Batsell RR (1975) Conceptual and Operational Problems with Market Share Models of Consumer Spatial Behavior. *Advances in Consumer Research* 2(1): 165-172.
- Huff DL, McCallum BM (2008) Calibrating the Huff Model using ArcGIS Business Analyst. ESRI White Paper, September 2008. https://www.esri.com/library/whitepapers/pdfs/calibrating-huff-model.pdf.
- Luo W, Wang F (2003) Measures of spatial accessibility to health care in a GIS environment: synthesis and a case study in the Chicago region. *Environment and Planning B: Planning and Design* 30: 865-884. [10.1068/b29120](https://doi.org/10.1068/b29120)
- Luo J (2014) Integrating the Huff Model and Floating Catchment Area Methods to Analyze Spatial Access to Healthcare Services. *Transactions in GIS* 18(3): 436-448. [10.1111/tgis.12096](https://doi.org/10.1111/tgis.12096)
- Nakanishi M, Cooper LG (1974) Parameter estimation for a Multiplicative Competitive Interaction Model: Least squares approach. *Journal of Marketing Research* 11(3): 303–311. [10.2307/3151146](https://doi.org/10.2307/3151146).
- Nakanishi M, Cooper LG (1982) Technical Note — Simplified Estimation Procedures for MCI Models. *Marketing Science* 1(3): 314-322. [10.1287/mksc.1.3.314](https://doi.org/10.1287/mksc.1.3.314)
- Orpana T, Lampinen J (2003) Building Spatial Choice Models from Aggregate Data. *Journal of Regional Science* 43(2): 319-348. [10.1111/1467-9787.00301](https://doi.org/10.1111/1467-9787.00301)
- Rauch S, Wieland T, Rauh J (2025) Accessibility of food - A multilevel approach comparing a choice based model with perceived accessibility in Mainfranken, Germany. *Journal of Transport Geography* 128: 104367. [10.1016/j.jtrangeo.2025.104367](https://doi.org/10.1016/j.jtrangeo.2025.104367)
- Wieland T (2015) *Räumliches Einkaufsverhalten und Standortpolitik im Einzelhandel unter Berücksichtigung von Agglomerationseffekten - Theoretische Erklärungsansätze, modellanalytische Zugänge und eine empirisch-ökonometrische Marktgebietsanalyse anhand eines Fallbeispiels aus dem ländlichen Raum Ostwestfalens/Südniedersachsens*. Mannheim: MetaGIS. https://nbn-resolving.org/urn:nbn:de:bvb:20-opus-180753
- Wieland T (2017) Market Area Analysis for Retail and Service Locations with MCI. *R Journal* 9(1): 298-323. [10.32614/RJ-2017-020](https://doi.org/10.32614/RJ-2017-020)
- Wieland T (2018) A Hurdle Model Approach of Store Choice and Market Area Analysis in Grocery Retailing. *Papers in Applied Geography* 4(4): 370-389. [10.1080/23754931.2018.1519458](https://doi.org/10.1080/23754931.2018.1519458)
- Wieland T (2018) Competitive locations of grocery stores in the local supply context - The case of the urban district Freiburg-Haslach. *European Journal of Geography* 9(3): 98-115. https://www.eurogeojournal.eu/index.php/egj/article/view/41
## What's new (v1.8.3)
- Bugfixes
- Correction in goodness_of_fit.modelfit(): Length of observed and expected vectors is refreshed after removing NaN (if desired by user)
- Other
- goodness_of_fit.modelfit() skips zero values when calculating APE and MAPE instead of returning None
- Update of literature in README
| text/markdown | Thomas Wieland | geowieland@googlemail.com | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.9 | 2026-02-20T18:00:52.423140 | huff-1.8.3.tar.gz | 133,709 | b9/21/992275e3ba6670cf4a8e076f80aa7552c119eb4e50bd438da678c61fc8cf/huff-1.8.3.tar.gz | source | sdist | null | false | ae641820e21370d3522c433fa73f43e6 | cc7e91c005f46770a22f98be62dfc98ec869b7441abf4da4743651cf681297fc | b921992275e3ba6670cf4a8e076f80aa7552c119eb4e50bd438da678c61fc8cf | null | [] | 222 |
2.4 | graph-crawler | 4.0.13 | Sync-first library for building a website graph - as simple as requests! | # GraphCrawler
[](https://www.python.org/downloads/)
[](https://pypi.org/project/graph-crawler/)
[](LICENSE)
A library for building a graph of a website's structure.
## Installation
```bash
pip install graph-crawler
```
Optional extras:
```bash
pip install graph-crawler[playwright] # JavaScript sites
pip install graph-crawler[embeddings] # Vectorization
pip install graph-crawler[mongodb] # MongoDB storage
pip install graph-crawler[all] # Everything
```
## Usage
```python
import graph_crawler as gc
# Basic crawl
graph = gc.crawl("https://example.com", max_depth=2, max_pages=50)
print(f"Pages: {len(graph.nodes)}")
print(f"Links: {len(graph.edges)}")
# Save
gc.save_graph(graph, "site.json")
```
### Async API
```python
import asyncio
import graph_crawler as gc
async def main():
    graph = await gc.async_crawl("https://example.com")
    return graph
graph = asyncio.run(main())
```
### crawl() parameters
| Parameter | Default | Description |
|----------|---------|------|
| `max_depth` | 3 | Crawl depth |
| `max_pages` | 100 | Page limit |
| `same_domain` | True | Current domain only |
| `request_delay` | 0.5 | Delay between requests (seconds) |
| `timeout` | 300 | Overall timeout (seconds) |
| `driver` | "http" | Driver: `http`, `playwright` |
### URL Rules
```python
from graph_crawler import crawl, URLRule
rules = [
    URLRule(pattern=r"\.pdf$", should_scan=False),
    URLRule(pattern=r"/admin/", should_scan=False),
    URLRule(pattern=r"/products/", priority=10),
]
graph = crawl("https://example.com", url_rules=rules)
```
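The `URLRule` patterns above are ordinary regular expressions matched against each discovered URL. As an illustration with the stdlib `re` module only (the library's exact matching semantics may differ):

```python
import re

# Skip rules analogous to the URLRule examples above
skip_patterns = [r"\.pdf$", r"/admin/"]

def should_scan(url: str) -> bool:
    """Scan a URL only if no skip pattern matches it."""
    return not any(re.search(p, url) for p in skip_patterns)

assert should_scan("https://example.com/products/1")
assert not should_scan("https://example.com/files/report.pdf")
assert not should_scan("https://example.com/admin/login")
```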
### Graph operations
```python
# Statistics
stats = graph.get_stats()
# Lookup
node = graph.get_node_by_url("https://example.com/page")
# Merging graphs
merged = graph1 + graph2
# Export
graph.export_edges("edges.csv", format="csv")
graph.export_edges("graph.dot", format="dot")
```
## Drivers
| Driver | Purpose |
|---------|-------------|
| `http` | Static sites (default) |
| `playwright` | JavaScript/SPA sites |
```python
# Playwright for JS sites
graph = gc.crawl("https://spa-site.com", driver="playwright")
```
## Storage
| Type | Recommended for |
|-----|---------------|
| `memory` | < 1K pages |
| `json` | 1K-20K pages |
| `sqlite` | 20K+ pages |
| `mongodb` | Large projects |
## CLI
```bash
graph-crawler crawl https://example.com --max-depth 2
graph-crawler list
graph-crawler info graph_name
```
## Requirements
- Python 3.11+
## License
MIT
| text/markdown | 0-EternalJunior-0 | null | 0-EternalJunior-0 | null | null | web, crawler, scraper, graph, spider, scrapy, vectorization, free-threading | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP :: Indexing/Search",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"requests>=2.31.0",
"aiohttp>=3.9.0",
"beautifulsoup4>=4.12.0",
"lxml>=4.9.0",
"lxml_html_clean",
"selectolax>=0.3.0",
"pydantic>=2.5.0",
"pydantic-settings>=2.0.0",
"orjson>=3.9.0",
"fake-useragent",
"aiofiles>=23.2.0",
"aiosqlite>=0.19.0",
"pybloom-live",
"fastapi",
"cython>=3.0.0; extra == \"native\"",
"mmh3>=4.0.0; extra == \"native\"",
"playwright>=1.40.0; extra == \"playwright\"",
"motor>=3.3.0; extra == \"mongodb\"",
"asyncpg>=0.29.0; extra == \"postgresql\"",
"sentence-transformers>=2.2.0; extra == \"embeddings\"",
"numpy>=1.24.0; extra == \"embeddings\"",
"newspaper3k>=0.2.8; extra == \"newspaper\"",
"goose3>=3.1.0; extra == \"goose\"",
"readability-lxml>=0.8.0; extra == \"readability\"",
"newspaper3k>=0.2.8; extra == \"articles\"",
"goose3>=3.1.0; extra == \"articles\"",
"readability-lxml>=0.8.0; extra == \"articles\"",
"pyvis>=0.3.0; extra == \"viz\"",
"networkx>=3.6; extra == \"viz\"",
"celery>=5.3.0; extra == \"celery\"",
"redis>=5.0.0; extra == \"celery\"",
"g4f>=0.3.0; extra == \"ml\"",
"scikit-learn>=1.0.0; extra == \"ml\"",
"aiodns>=3.1.0; extra == \"performance\"",
"uvloop>=0.19.0; platform_system != \"Windows\" and extra == \"performance\"",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"playwright>=1.40.0; extra == \"all\"",
"motor>=3.3.0; extra == \"all\"",
"asyncpg>=0.29.0; extra == \"all\"",
"sentence-transformers>=2.2.0; extra == \"all\"",
"numpy>=1.24.0; extra == \"all\"",
"newspaper3k>=0.2.8; extra == \"all\"",
"goose3>=3.1.0; extra == \"all\"",
"readability-lxml>=0.8.0; extra == \"all\"",
"pyvis>=0.3.0; extra == \"all\"",
"networkx>=3.6; extra == \"all\"",
"celery>=5.3.0; extra == \"all\"",
"redis>=5.0.0; extra == \"all\"",
"g4f>=0.3.0; extra == \"all\"",
"scikit-learn>=1.0.0; extra == \"all\"",
"aiodns>=3.1.0; extra == \"all\"",
"uvloop>=0.19.0; platform_system != \"Windows\" and extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/0-EternalJunior-0/GraphCrawler",
"Documentation, https://github.com/0-EternalJunior-0/GraphCrawler/-/blob/main/README.md",
"Repository, https://github.com/0-EternalJunior-0/GraphCrawler",
"Bug Tracker, https://github.com/0-EternalJunior-0/GraphCrawler/-/issues"
] | twine/6.2.0 CPython/3.12.5 | 2026-02-20T18:00:27.477478 | graph_crawler-4.0.13.tar.gz | 699,290 | 8b/4a/4ef3e301fd9561eddcc909e14036b144249cf93ffadf8123078d93189229/graph_crawler-4.0.13.tar.gz | source | sdist | null | false | 97aeb7ac51c55ca2e574c6ff743ce907 | 6437c07f1dd673b7fb862e23ee5c73c50b5fda2876f544c311f311e4cfbef787 | 8b4a4ef3e301fd9561eddcc909e14036b144249cf93ffadf8123078d93189229 | MIT | [
"LICENSE"
] | 223 |
2.4 | prelims-cli | 0.0.5 | prelims CLI - Front matter post-processor CLI | # prelims-cli
CLI for [prelims](https://github.com/takuti/prelims).
## Install
Run:
```sh
pip install prelims-cli
```
If you need Japanese tokenization, run:
```sh
pip install prelims-cli[ja]
```
## Usage
Assuming the following folder directory:
```sh
.
├── content
│   ├── post
│   └── blog
└── scripts
    └── config
        └── myconfig.yaml
```
where `post` and `blog` are pages, and `scripts` is the place to put scripts.
Here is an example configuration:
```yaml
# scripts/config/myconfig.yaml
handlers:
  - target_path: "content/blog"
    ignore_files:
      - _index.md
    processors:
      - type: recommender
        permalink_base: "/blog"
        tfidf_options:
          stop_words: english
          max_df: 0.95
          min_df: 2
        tokenizer: null
  - target_path: "content/post"
    ignore_files:
      - _index.md
    processors:
      - type: recommender
        permalink_base: "/post"
        tfidf_options:
          max_df: 0.95
          min_df: 2
        tokenizer:
          lang: ja
          type: sudachi
          mode: C
          dict: full
```sh
$ prelims-cli --config ./scripts/config/myconfig.yaml
target: /user/chezo/src/chezo.uno/content/blog
target: /users/chezo/src/chezo.uno/content/post
```
Your articles' front matter is then updated.
| text/markdown | null | Aki Ariga <chezou@gmail.com> | null | Aki Ariga <chezou@gmail.com> | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.3",
"omegaconf>=2.1.1",
"prelims>=0.0.6",
"sudachidict-full>=20211220; extra == \"ja\"",
"sudachipy>=0.6.2; extra == \"ja\""
] | [] | [] | [] | [
"Homepage, https://github.com/chezou/prelims-cli",
"Repository, https://github.com/chezou/prelims-cli"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:00:23.139662 | prelims_cli-0.0.5.tar.gz | 64,533 | 06/43/c51311c0e958a724e50b4aa3e97592212bedd2468d39eed4eebc00431afe/prelims_cli-0.0.5.tar.gz | source | sdist | null | false | a0d4de30a3f606aac1f25a0adcf2d313 | c658f96f4529689e8f7498a589928cc6f9837bf54d13a190f7914d0079ddb629 | 0643c51311c0e958a724e50b4aa3e97592212bedd2468d39eed4eebc00431afe | null | [
"LICENSE"
] | 214 |
2.4 | stitch-proj | 0.1.0 | Some tools for building a Translator BigKG. This software project is experimental and unfinished. | # stitch-proj
Some tools for building a Translator BigKG.
This software project is experimental and under active development.
---
## Installation
### From PyPI
```bash
pip install stitch-proj
```
### For development
```bash
pip install stitch-proj[dev]
```
### From source
```bash
git clone https://github.com/Translator-CATRAX/stitch-proj.git
cd stitch-proj
pip install -e .[dev]
```
---
## Overview
There are two primary intended users of `stitch-proj`:
1. **Ingester**
A developer who wants to ingest the
[Babel concept identifier normalization database](https://github.com/TranslatorSRI/Babel)
into a local SQLite database.
2. **Querier**
A developer building an application (e.g., a BigKG build system) who wants to
programmatically query a local Babel SQLite database.
---
## Package Structure
This project uses a `src/` layout:
```
src/
stitch_proj/
ingest_babel.py
local_babel.py
row_counts.py
stitchutils.py
```
Import the package as:
```python
import stitch_proj
```
---
## Tools
* `stitch_proj.ingest_babel`
Downloads and ingests the Babel database into a local SQLite database.
* `stitch_proj.local_babel`
Provides functions for querying a local Babel SQLite database.
* `stitch_proj.row_counts`
Prints table row counts for a local Babel SQLite database.
---
## Running the Ingest
After installation, the console script is available:
```bash
ingest-babel --help
```
Or invoke via module:
```bash
python -m stitch_proj.ingest_babel --help
```
A full ingest requires:
* CPython 3.12
* At least 32 GiB RAM
* ~600 GiB temporary disk space
* ~200 GiB for the final SQLite database
A full ingest may take 30–40 hours depending on hardware.
---
## Downloading a Pre-Built Babel Database
A pre-built SQLite file is available from S3:
```
https://rtx-kg2-public.s3.us-west-2.amazonaws.com/babel-20250331-p1.sqlite
```
Place it in a directory such as:
```
db/babel.sqlite
```
You can then use `stitch_proj.local_babel` to query it.
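For a quick sanity check of a downloaded file, a stdlib-only sketch in the spirit of `stitch_proj.row_counts` (this does not use the package's API; it simply lists whatever tables the SQLite file contains):

```python
import sqlite3

def table_row_counts(db_path: str) -> dict:
    """Count rows per table in a SQLite file (illustrative helper)."""
    con = sqlite3.connect(db_path)
    try:
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        # Table names come from sqlite_master itself, so quoting them
        # directly is acceptable for this illustration.
        return {t: con.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
                for t in tables}
    finally:
        con.close()

# e.g. table_row_counts("db/babel.sqlite")
```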
---
## Running Tests
Ensure a valid `babel.sqlite` file exists locally, then run:
```bash
pytest -v
```
Some tests require internet connectivity.
---
## Systems Tested
`ingest_babel.py` has been tested on:
* Ubuntu 24.04 (x86_64, Intel Xeon)
* Ubuntu 24.04 (ARM64, AWS Graviton3)
* macOS 14 (Apple Silicon)
The package is pure Python and platform-independent, but large ingests require
substantial memory and storage.
---
## Development Workflow
Run linting, typing, and tests with:
```bash
pytest
ruff check .
mypy src
```
Or install development dependencies:
```bash
pip install -e .[dev]
```
---
## License
MIT License. See `LICENSE`.
---
## Citation
Please see the Babel project's `CITATION.cff`:
[https://github.com/TranslatorSRI/Babel/blob/master/CITATION.cff](https://github.com/TranslatorSRI/Babel/blob/master/CITATION.cff)
| text/markdown | null | Stephen Ramsey <ramseyst@oregonstate.edu> | null | Frankie Hodges <hodgesf@oregonstate.edu> | MIT License
Copyright (c) 2025 Translator-CATRAX
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 3 - Alpha"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"bmt>=1.4.5",
"requests>=2.32.5",
"pandas>=2.2.3",
"ray>=2.43.0",
"htmllistparse>=0.6.1",
"pytest>=8.3.5; extra == \"dev\"",
"mypy>=1.15.0; extra == \"dev\"",
"ruff>=0.11.12; extra == \"dev\"",
"pylint>=3.3.8; extra == \"dev\"",
"vulture>=2.14; extra == \"dev\"",
"pandas-stubs; extra == \"dev\"",
"types-requests; extra == \"dev\"",
"pipreqs; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Translator-CATRAX/stitch-proj",
"Issues, https://github.com/Translator-CATRAX/stitch-proj/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T18:00:10.670089 | stitch_proj-0.1.0.tar.gz | 29,693 | cf/5f/f98d1c2a52aec55e9092d8513c905a2869c3c1dcc55249878d2eccc4dd12/stitch_proj-0.1.0.tar.gz | source | sdist | null | false | f8c4876fd20a122f3c6e1e83910c6b3c | 1ee2773673614fa98fd9043148c3965e921650e8c015f7141624109720a6bcc3 | cf5ff98d1c2a52aec55e9092d8513c905a2869c3c1dcc55249878d2eccc4dd12 | null | [
"LICENSE"
] | 229 |
2.4 | vanda-api | 0.1.1 | Official Python SDK for Vanda Analytics Data API | # Vanda Analytics Data API SDK
Official Python SDK for the Vanda Analytics Data API.
## Installation
```bash
pip install vanda-api
```
For pandas support:
```bash
pip install vanda-api[pandas]
```
## Quick Start
### Synchronous Client
```python
from datetime import date
from vanda import VandaClient
with VandaClient(token="YOUR_TOKEN_HERE") as client:
data = client.get_timeseries(
symbol="TSLA",
start_date=date(2025, 12, 1),
end_date=date(2025, 12, 31),
fields=["retail_net_turnover", "retail_buy_turnover"],
)
print(f"Retrieved {len(data)} records")
```
```python
from vanda import VandaClient
# Pass credentials directly
with VandaClient(email="your_email@example.com", password="your_password") as client:
data = client.get_timeseries(
symbol="TSLA",
start_date="2025-12-01",
end_date="2025-12-31",
fields=["retail_net_turnover"],
)
```
### Recommended: use environment variables
```bash
export VANDA_LOGIN_EMAIL="your_email@example.com"
export VANDA_PASSWORD="your_password"
```
### Asynchronous Client
```python
import asyncio
from datetime import date
from vanda import AsyncVandaClient
async def main():
async with AsyncVandaClient(token="YOUR_TOKEN_HERE") as client:
data = await client.get_timeseries(
symbol="TSLA",
start_date=date(2025, 12, 1),
end_date=date(2025, 12, 31),
fields=["retail_net_turnover"],
)
print(f"Retrieved {len(data)} records")
asyncio.run(main())
```
```python
import asyncio
from vanda import AsyncVandaClient
async def main():
async with AsyncVandaClient(email="your_email@example.com", password="your_password") as client:
data = await client.get_timeseries(
symbol="TSLA",
start_date="2025-12-01",
end_date="2025-12-31",
fields=["retail_net_turnover"],
)
asyncio.run(main())
```
## Authentication
Set your API token via environment variable:
```bash
export VANDA_API_TOKEN="your_token_here"
```
or
```bash
export VANDA_LOGIN_EMAIL="your_email@example.com"
export VANDA_PASSWORD="your_password"
```
Or pass directly:
```python
client = VandaClient(token="your_token_here")
```
or
```python
client = VandaClient(email="your_email@example.com", password="your_password")
```
## Features
- Sync and async clients with consistent interfaces
- Automatic retry with exponential backoff
- Comprehensive error handling
- Job polling for async operations
- Export utilities (CSV, JSONL)
- Optional pandas support
- Type hints throughout
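The automatic retry with exponential backoff listed above can be pictured as a standard retry loop. The sketch below is illustrative only — the function name and parameters are assumptions, not part of the vanda-api implementation:

```python
import time


def with_retries(call, max_attempts=4, base_delay=0.5):
    """Retry a callable with exponential backoff (illustrative sketch,
    not the actual vanda-api implementation)."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # wait 0.5s, 1s, 2s, ... between attempts
            time.sleep(base_delay * (2 ** attempt))


# Example: a call that succeeds on the third attempt
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # → ok
```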
## API Methods
### Timeseries Data
- `get_timeseries()` - Get timeseries for a single symbol
- `get_timeseries_many()` - Get timeseries for multiple symbols
- `get_leaderboard()` - Get ranked leaderboard data
### Bulk Operations
- `bulk_securities()` - Bulk fetch securities data
- `get_daily_snapshot()` - Get daily snapshot for many securities
### Job Management
- `create_bulk_securities_job()` - Create async job
- `poll_job()` - Poll job until completion
- `get_job_status()` - Get job status
- `export_job_result()` - Export job result to file
- `stream_job_result()` - Stream job result to file
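A typical job workflow pairs `create_bulk_securities_job()` with `poll_job()`. The generic polling loop below illustrates the pattern; the status values and helper names are assumptions, not the SDK's actual interface:

```python
import time


def poll_until_done(get_status, interval=1.0, timeout=10.0):
    """Poll a status callable until it reports completion or failure
    (illustrative sketch; not the actual vanda-api implementation)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")


# Example with a stand-in status source:
statuses = iter(["queued", "running", "completed"])
print(poll_until_done(lambda: next(statuses), interval=0))  # → completed
```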
### Export Operations
- `export_timeseries()` - Export timeseries to file
### Metadata
- `list_fields()` - List available fields
- `list_intervals()` - List available intervals
- `list_securities()` - List available securities
## Examples
See `examples/` directory for complete usage examples.
## Requirements
- Python 3.9+
- httpx
## Development
```bash
git clone https://gitlab.com/yourusername/vanda-api.git
cd vanda-api
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pre-commit install
pytest
```
## License
MIT License - see LICENSE file for details.
## Support
For issues and questions, please open an issue on GitLab.
| text/markdown | null | Jonathan Aina <Jonathan.Aina@vanda.com> | null | null | MIT | analytics, api, finance, market-data, retail-trading, vanda | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.27.0",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pandas-stubs>=2.0.0; extra == \"dev\"",
"pre-commit>=3.3.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"respx>=0.20.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pandas>=1.5.0; extra == \"pandas\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/yourusername/vanda-api",
"Documentation, https://gitlab.com/yourusername/vanda-api#readme",
"Repository, https://gitlab.com/yourusername/vanda-api",
"Issues, https://gitlab.com/yourusername/vanda-api/-/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T17:59:52.436894 | vanda_api-0.1.1.tar.gz | 20,160 | 44/b1/7df62539bc687b9e623f8a6e16979f6a0b9b79f3a6de207f358cc79a4ee2/vanda_api-0.1.1.tar.gz | source | sdist | null | false | 362315dcb85120997b32113ad4e77899 | 275c80ea6a1dfffb54095e40a09665e6cda0edb35d8508ea65d99d9b3e8d8432 | 44b17df62539bc687b9e623f8a6e16979f6a0b9b79f3a6de207f358cc79a4ee2 | null | [
"LICENSE"
] | 229 |
2.4 | seleniumUts | 1.2.2 | Zdek Util libraries for Python coding | # SeleniumUts
A Python library that wraps selected Selenium WebDriver functionality, making browser automation easier for testing and data scraping. The library supports `undetected_chromedriver` and integrates easily with Selenoid for running tests in distributed environments.
## Usage
### Importing the Library
```python
from seleniumUts import SeleniumUts
```
### Creating a `SeleniumUts` Instance
```python
selenium_lib = SeleniumUts()
```
### Usage Examples
#### Setting Up Selenium with ChromeDriver
```python
# Set up Selenium without Selenoid
selenium_lib.setupSelenium(host=None, use_selenoid=False)
# Open a web page
driver = selenium_lib.open_page('https://www.example.com')
# Close the browser
selenium_lib.close()
```
#### Setting Up Selenium with Selenoid
```python
# Set up Selenium using Selenoid
selenoid_host = 'http://your-selenoid-server.com/wd/hub'
selenium_lib.setupSelenium(host=selenoid_host, use_selenoid=True)
# Open a web page
driver = selenium_lib.open_page('https://www.example.com')
# Close the browser
selenium_lib.close()
```
#### Waiting for an Element to Become Visible
```python
# Set up Selenium
selenium_lib.setupSelenium(host=None, use_selenoid=False)
# Open a web page
selenium_lib.open_page('https://www.example.com')
# Wait until the element is visible
element = selenium_lib.wait_xpath('//button[@id="submit"]', time=10)
element.click()
# Close the browser
selenium_lib.close()
```
#### Sending Text with a Delay Between Characters
```python
# Set up Selenium
selenium_lib.setupSelenium(host=None, use_selenoid=False)
# Open a web page
selenium_lib.open_page('https://www.example.com')
# Find the text field and send text with a delay
element = selenium_lib.wait_xpath('//input[@id="search-box"]')
element.delayed_send('Python Selenium', delay=0.2)
# Close the browser
selenium_lib.close()
```
#### Scrolling to the End of the Page
```python
# Set up Selenium
selenium_lib.setupSelenium(host=None, use_selenoid=False)
# Open a web page
selenium_lib.open_page('https://www.example.com')
# Scroll to the end of the page
selenium_lib.scroll_end()
# Close the browser
selenium_lib.close()
```
## Available Methods
- **`setupSelenium(host, name="default", use_selenoid=False, cust_opt=[], remove_default_options=False, download_path=None, selenoid_browser=("chrome","110.0"))`**: Configures the Selenium WebDriver with custom options and ChromeDriver preferences. Supports Selenoid configuration.
- **`open_page(page)`**: Opens a web page and waits until it is fully loaded.
- **`wait_xpath(path, time=20, throw=True)`**: Waits until an element, identified by an XPath expression, is visible in the DOM.
- **`<el>.delayed_send(word, delay)`**: Sends text to an element, inserting a specified delay between each character.
- **`scroll_end()`**: Scrolls to the end of the current page.
- **`close()`**: Closes the browser and ends the WebDriver session.
## Contributing
Contributions are welcome! Please submit a pull request or open an issue for any problems or improvements.
## License
This project is licensed under the MIT License.
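A character-by-character send like `delayed_send` can be sketched in plain Python. The stand-in element below only records the keys it receives; in the real library the target is a Selenium WebElement, and this is not the library's actual implementation:

```python
import time


def delayed_send(element, word: str, delay: float = 0.1) -> None:
    """Send `word` to `element` one character at a time, pausing
    `delay` seconds between characters (illustrative sketch)."""
    for char in word:
        element.send_keys(char)
        time.sleep(delay)


# Stand-in element that just records the keys it receives:
class FakeElement:
    def __init__(self):
        self.received = ""

    def send_keys(self, text):
        self.received += text


el = FakeElement()
delayed_send(el, "Python Selenium", delay=0.01)
print(el.received)  # → Python Selenium
```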
| text/markdown | Zdek Development team | null | null | null | MIT | seleniumUts | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Education",
"Operating System :: Microsoft :: Windows :: Windows 10",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | https://github.com/ZdekPyPi/SeleniumUts | null | >=3.10 | [] | [] | [] | [
"undetected-chromedriver>=3.5.5",
"selenium>=4.15.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:59:44.316775 | seleniumuts-1.2.2.tar.gz | 14,244 | c6/b9/aae9087e1b14388708844d3f37f1d8d5366966fdfad9bd4aa9a179e020ed/seleniumuts-1.2.2.tar.gz | source | sdist | null | false | 0019149e024216e1faedcd2f24957173 | ab2c4b94e2416c4de5155c9a910400bd88db18ec8fa452fe8a513fbd6289f914 | c6b9aae9087e1b14388708844d3f37f1d8d5366966fdfad9bd4aa9a179e020ed | null | [] | 0 |
2.4 | dff-py | 0.1.7 | A simple differential fuzzing framework | # DFF Python Implementation
A Python implementation of the DFF (Differential Fuzzing Framework) that uses Unix domain sockets
and System V shared memory for high-performance IPC.
## Installation
```bash
pip install dff-py
```
## Requirements
- Python 3.9 or higher
- Linux or macOS
### Linux
```bash
sudo sysctl -w kernel.shmmax=104857600
sudo sysctl -w kernel.shmall=256000
```
### macOS
```bash
sudo sysctl -w kern.sysv.shmmax=104857600
sudo sysctl -w kern.sysv.shmall=256000
```
## Usage
### Example Client
```python
import sys
import hashlib
from dff import Client
def process_sha(method: str, inputs: list[bytes]) -> bytes:
"""Process function for SHA256 hashing.
Args:
method: The fuzzing method (should be "sha")
inputs: List of byte arrays to hash
Returns:
SHA256 hash of the first input
Raises:
ValueError: If method is not "sha" or no inputs provided
"""
if method != "sha":
raise ValueError(f"Unknown method: {method}")
if not inputs:
raise ValueError("No inputs provided")
return hashlib.sha256(inputs[0]).digest()
def main() -> None:
"""Main entry point."""
client = Client("python", process_sha)
try:
client.connect()
client.run()
except KeyboardInterrupt:
print("\nShutdown requested")
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
finally:
client.close()
if __name__ == "__main__":
main()
```
### Example Server
```python
import sys
import random
from dff import Server
def data_provider() -> list[bytes]:
"""Generate random data for fuzzing.
Returns:
List containing a single random byte array
"""
MIN_SIZE = 1 * 1024 * 1024 # 1 MB
MAX_SIZE = 4 * 1024 * 1024 # 4 MB
# Use a deterministic seed that increments
if not hasattr(data_provider, "seed_counter"):
data_provider.seed_counter = 1
seed = data_provider.seed_counter
data_provider.seed_counter += 1
# Generate random data with deterministic seed
random.seed(seed)
size = random.randint(MIN_SIZE, MAX_SIZE)
data = random.randbytes(size)
return [data]
def main() -> None:
"""Main entry point."""
server = Server("sha")
try:
server.run(data_provider)
except KeyboardInterrupt:
print("\nShutdown requested")
except Exception as e:
print(f"Server error: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()
```
| text/markdown | null | Justin Traglia <jtraglia@pm.me> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/jtraglia/dff",
"Repository, https://github.com/jtraglia/dff"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T17:59:40.854767 | dff_py-0.1.7.tar.gz | 9,388 | 80/93/69bf03b3d98b0e840a3b94e077d37975b1382a66c90aa8d2657e63b191f3/dff_py-0.1.7.tar.gz | source | sdist | null | false | c5d99a73afe8940d80048a9e0a26e320 | 8e293d9e8b5b78903150556b5a1e0e40bcaa4390861d743fc29e27e43fd239fd | 809369bf03b3d98b0e840a3b94e077d37975b1382a66c90aa8d2657e63b191f3 | null | [] | 239 |
2.4 | python-misc-utils | 0.21 | A collection of Python utility APIs | ## Python Utility Code
I keep writing the same stuff in different projects, so I finally decided
to throw stuff into a common boilerplate and stop the cut&paste jobs.
## Install
The library can be installed using *PyPi*:
```Shell
$ pip install python-misc-utils
```
Or directly from the *Github* repository:
```Shell
$ pip install git+https://github.com/davidel/py_misc_utils.git
```
| text/markdown | Davide Libenzi | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Developers"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas",
"psutil",
"pyyaml",
"boto3; extra == \"fs\"",
"bs4; extra == \"fs\"",
"ftputil; extra == \"fs\"",
"google-cloud-storage; extra == \"fs\"",
"pyarrow; extra == \"fs\""
] | [] | [] | [] | [
"Homepage, https://github.com/davidel/py_misc_utils",
"Issues, https://github.com/davidel/py_misc_utils/issues",
"Repository, https://github.com/davidel/py_misc_utils.git"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T17:58:32.572776 | python_misc_utils-0.21.tar.gz | 94,789 | 15/22/1064d00307a92598d8856cee193b4f60e6617c1199c6555ceeb5eca4230f/python_misc_utils-0.21.tar.gz | source | sdist | null | false | 08da1a8a85f1d1fe82407e0070b3a683 | 01659df55db4735562078bffc2252b74c38a8b63f95263359bb8e744b410e418 | 15221064d00307a92598d8856cee193b4f60e6617c1199c6555ceeb5eca4230f | Apache-2.0 | [
"LICENSE"
] | 234 |
2.4 | fastyaml-rs | 0.5.1 | A fast YAML parser and linter for Python, powered by Rust | # fastyaml-rs
[](https://pypi.org/project/fastyaml-rs/)
[](https://pypi.org/project/fastyaml-rs/)
[](https://github.com/bug-ops/fast-yaml/blob/main/LICENSE-MIT)
A fast YAML 1.2.2 parser and linter for Python, powered by Rust.
> [!IMPORTANT]
> Requires Python 3.10 or later.
## Installation
```bash
pip install fastyaml-rs
```
## Usage
```python
import fast_yaml
# Parse YAML
data = fast_yaml.safe_load("name: test\nvalue: 123")
print(data) # {'name': 'test', 'value': 123}
# Dump YAML
yaml_str = fast_yaml.safe_dump({"name": "test", "value": 123})
print(yaml_str) # name: test\nvalue: 123\n
```
## Features
- **YAML 1.2.2 compliant** — Full Core Schema support
- **Fast** — 5-10x faster than PyYAML
- **PyYAML compatible** — Drop-in replacement with `load`, `dump`, `Loader`, `Dumper` classes
- **Linter** — Rich diagnostics with line/column tracking
- **Parallel processing** — Multi-threaded parsing for large files
- **Batch processing** — Process multiple files in parallel
- **Type stubs** — Full IDE support with `.pyi` files
## Batch Processing
Process multiple YAML files in parallel:
```python
from fast_yaml._core import batch
# Parse multiple files
result = batch.process_files([
"config1.yaml",
"config2.yaml",
"config3.yaml",
])
print(f"Processed {result.total} files, {result.failed} failed")
# With configuration
config = batch.BatchConfig(workers=4, indent=2)
result = batch.process_files(paths, config)
```
### Format Files
```python
# Dry-run: get formatted content without writing
results = batch.format_files(["config.yaml"])
for path, content, error in results:
if content:
print(f"{path}: {len(content)} bytes")
# In-place: format and write back
result = batch.format_files_in_place(["config.yaml"])
print(f"Changed {result.changed} files")
```
### BatchConfig Options
| Option | Default | Description |
|--------|---------|-------------|
| `workers` | Auto | Number of worker threads |
| `mmap_threshold` | 512 KB | Mmap threshold for large files |
| `max_input_size` | 100 MB | Maximum file size |
| `indent` | 2 | Indentation width |
| `width` | 80 | Line width |
| `sort_keys` | False | Sort dictionary keys |
### BatchResult
```python
result = batch.process_files(paths)
print(f"Total: {result.total}")
print(f"Success: {result.success}")
print(f"Changed: {result.changed}")
print(f"Failed: {result.failed}")
print(f"Duration: {result.duration_ms}ms")
print(f"Files/sec: {result.files_per_second()}")
for path, error in result.errors():
print(f"Error in {path}: {error}")
```
## Documentation
See the [main repository](https://github.com/bug-ops/fast-yaml) for full documentation.
## License
Licensed under either of [Apache License, Version 2.0](../LICENSE-APACHE) or [MIT License](../LICENSE-MIT) at your option.
| text/markdown; charset=UTF-8; variant=GFM | fast-yaml contributors | null | null | null | MIT OR Apache-2.0 | yaml, parser, linter, rust, performance | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Markup",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/bug-ops/fast-yaml#readme",
"Homepage, https://github.com/bug-ops/fast-yaml",
"Repository, https://github.com/bug-ops/fast-yaml"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:58:25.819542 | fastyaml_rs-0.5.1.tar.gz | 214,198 | 35/40/208bdfb8ace672973a9300ef194e35c5b33fbc0225c353ae8409277959e7/fastyaml_rs-0.5.1.tar.gz | source | sdist | null | false | 1b83532e01fd01634b46897eae349f9b | 65ad400ca4d976cab3dba3f4c9f654ae4364bfd7f2e811ff9d16e04259b88f33 | 3540208bdfb8ace672973a9300ef194e35c5b33fbc0225c353ae8409277959e7 | null | [] | 2,123 |
2.4 | socio4health | 1.0.1 | Socio4health is a Python package for gathering and consolidating socio-demographic data. |
<a href="https://www.harmonize-tools.org/">
<img height="120" align="right" src="https://harmonize-tools.github.io/harmonize-logo.png" />
</a>
<a href="https://harmonize-tools.github.io/socio4health/">
<img height="120" src="https://raw.githubusercontent.com/harmonize-tools/socio4health/main/docs/source/_static/image.png" />
</a>
# socio4health
<!-- badges: start -->
[](https://lifecycle.r-lib.org/articles/stages.html#experimental)
[](https://github.com/harmonize-tools/socio4health/blob/main/LICENSE.md/)
[](https://github.com/harmonize-tools/socio4health/graphs/contributors)

<!-- badges: end -->
## Overview
<p style="font-family: Arial, sans-serif; font-size: 14px;">
Package socio4health is an extraction, transformation and loading (ETL) classification tool designed to simplify the intricate process of collecting and merging data from multiple sources, focusing on sociodemographic and census datasets from Colombia, Brazil, and Peru, into a harmonized dataset.
</p>
- Seamlessly retrieve data from online data sources through web scraping, as well as from local files.
- Support for various data formats, including `.csv`, `.xlsx`, `.xls`, `.txt`, `.sav`, fixed-width files and geospatial files, ensuring versatility in sourcing information.
- Consolidating extracted data into a pandas (or dask) DataFrame.
## Dependencies
<table>
<tr>
<td align="center">
<a href="https://www.dask.org/" target="_blank">
<img src="https://avatars.githubusercontent.com/u/17131925?s=200&v=4" height="50" alt="pandas logo">
</a>
</td>
<td align="left">
<strong>Dask</strong><br>
Dask is a flexible parallel computing library for analytics.<br>
</td>
</tr>
<tr>
<td align="center">
<a href="https://pandas.pydata.org/" target="_blank">
<img src="https://avatars.githubusercontent.com/u/21206976?s=280&v=4" height="50" alt="pandas logo">
</a>
</td>
<td align="left">
<strong>Pandas</strong><br>
Pandas is a well-known open source data analysis and manipulation tool.<br>
</td>
</tr>
<tr>
<td align="center">
<a href="https://geopandas.org/" target="_blank">
<img src="https://avatars.githubusercontent.com/u/8130715?s=48&v=4" height="50" alt="pandas logo">
</a>
</td>
<td align="left">
<strong>Geopandas</strong><br>
Python tools for geographic data.<br>
</td>
</tr>
<tr>
<td align="center">
<a href="https://numpy.org/" target="_blank">
<img src="https://avatars.githubusercontent.com/u/288276?s=48&v=4" height="50" alt="numpy logo">
</a>
</td>
<td align="left">
<strong>Numpy</strong><br>
The fundamental package for scientific computing with Python.<br>
</td>
</tr>
<tr>
<td align="center">
<a href="https://scrapy.org/" target="_blank">
<img src="https://avatars.githubusercontent.com/u/733635?s=48&v=4" height="50" alt="scrapy logo">
</a>
</td>
<td align="left">
<strong>Scrapy</strong><br>
Framework for extracting the data you need from websites.<br>
</td>
</tr>
<tr>
<td align="center">
<a href="https://matplotlib.org/" target="_blank">
<img src="https://avatars.githubusercontent.com/u/215947?s=48&v=4" height="50" alt="scrapy logo">
</a>
</td>
<td align="left">
<strong>Matplotlib</strong><br>
Library for creating static, animated, and interactive visualizations in Python.<br>
</td>
</tr>
<tr>
<td align="center">
<a href="https://pytorch.org/" target="_blank">
<img src="https://avatars.githubusercontent.com/u/21003710?s=48&v=4" height="50" alt="scrapy logo">
</a>
</td>
<td align="left">
<strong>Torch</strong><br>
Python package for tensor computation and deep neural networks.<br>
</td>
</tr>
</table>
- <a href="https://openpyxl.readthedocs.io/en/stable/">openpyxl</a>
- <a href="https://py7zr.readthedocs.io/en/latest/">py7zr</a>
- <a href="https://pypi.org/project/pyreadstat/">pyreadstat</a>
- <a href="https://tqdm.github.io/">tqdm</a>
- <a href="https://requests.readthedocs.io/en/latest/">requests</a>
- <a href="https://pypi.org/project/appdirs/">appdirs</a>
- <a href="https://pypi.org/project/pyarrow/">pyarrow</a>
- <a href="https://pypi.org/project/deep-translator/">deep_translator</a>
- <a href="https://pypi.org/project/transformers/">transformers</a>
- <a href="https://pypi.org/project/pytest/">pytest</a>
## Installation
**socio4health** can be installed via pip from [PyPI](https://pypi.org/project/socio4health/).
``` CMD
# Install using pip
pip install socio4health
```
## How to Use it
To use the socio4health package, follow these steps:
1. Import the package in your Python script:
```python
from socio4health import Extractor
from socio4health import Harmonizer
```
2. Create an instance of the `Extractor` class:
```python
extractor = Extractor()
```
3. Extract data from online sources and create a list of data information:
```python
url = 'https://www.example.com'
depth = 0
ext = 'csv'
list_datainfo = extractor.s4h_extract(url=url, depth=depth, ext=ext)
harmonizer = Harmonizer()
```
For more detailed examples and use cases, please refer to the [socio4health documentation](https://harmonize-tools.github.io/socio4health/).
## Resources
<details>
<summary>
Package Website
</summary>
The [socio4health website](https://harmonize-tools.github.io/socio4health/) includes the **API reference**, **user guide**, and **examples**. The site mainly covers the release version, but you can also find documentation for the latest development version.
</details>
<details>
<summary>
Organisation Website
</summary>
[Harmonize](https://www.harmonize-tools.org/) is an international project that develops cost-effective and reproducible digital tools for stakeholders in Latin America and the Caribbean (LAC) affected by a changing climate. These stakeholders include cities, small islands, highlands, and the Amazon rainforest.
The project consists of resources and [tools](https://harmonize-tools.github.io/) developed in conjunction with different teams from Brazil, Colombia, Dominican Republic, Peru, and Spain.
</details>
## Organizations
<table>
<tr>
<td align="center">
<a href="https://www.bsc.es/" target="_blank">
<img src="https://imgs.search.brave.com/t_FUOTCQZmDh3ddbVSX1LgHYq4mzCxvVA8U_YHywMTc/rs:fit:500:0:0/g:ce/aHR0cHM6Ly9zb21t/YS5lcy93cC1jb250/ZW50L3VwbG9hZHMv/MjAyMi8wNC9CU0Mt/Ymx1ZS1zbWFsbC5q/cGc" height="64" alt="bsc logo">
</a>
</td>
<td align="center">
<a href="https://uniandes.edu.co/" target="_blank">
<img src="https://raw.githubusercontent.com/harmonize-tools/socio4health/refs/heads/main/docs/img/uniandes.png" height="64" alt="uniandes logo">
</a>
</td>
</tr>
</table>
## Authors / Contact information
Here is the contact information of authors/contributors in case users have questions or feedback.
</br>
</br>
<a href="https://github.com/dirreno">
<img src="https://avatars.githubusercontent.com/u/39099417?v=4" style="width: 50px; height: auto;" />
</a>
<span style="display: flex; align-items: center; margin-left: 10px;">
<strong>Diego Irreño</strong> (developer)
</span>
</br>
<a href="https://github.com/Ersebreck">
<img src="https://avatars.githubusercontent.com/u/81669194?v=4" style="width: 50px; height: auto;" />
</a>
<span style="display: flex; align-items: center; margin-left: 10px;">
<strong>Erick Lozano</strong> (developer)
</span>
</br>
<a href="https://github.com/Juanmontenegro99">
<img src="https://avatars.githubusercontent.com/u/60274234?v=4" style="width: 50px; height: auto;" />
</a>
<span style="display: flex; align-items: center; margin-left: 10px;">
<strong>Juan Montenegro</strong> (developer)
</span>
</br>
<a href="https://github.com/ingridvmoras">
<img src="https://avatars.githubusercontent.com/u/91691844?s=400&u=945efa0d09fcc25d1e592d2a9fddb984fdc6ceea&v=4" style="width: 50px; height: auto;" />
</a>
<span style="display: flex; align-items: center; margin-left: 10px;">
<strong>Ingrid Mora</strong> (documentation)
</span>
---
# Changelog
All notable changes to this project will be documented in this file.
The format is based on "Keep a Changelog" (https://keepachangelog.com/en/1.0.0/)
## [Unreleased]
- Prepare improvements and documentation updates.
## [1.0.1] - 2026-02-20
### Fixed
- Each extracted compressed file is now stored in its own independent folder.
## [1.0.0] - 2025-10-22
### Added
- Project now includes changelog linked from packaging metadata.
- Minor documentation updates.
### Fixed
- Packaging metadata clarified in `setup.py`.
## [0.1.7] - 2024-06-01
### Added
- Initial public release notes placeholder.
[Unreleased]: https://github.com/harmonize-tools/socio4health/compare/v1.0.0...HEAD
[1.0.0]: https://github.com/harmonize-tools/socio4health/compare/v0.1.7...v1.0.0
[0.1.7]: https://github.com/harmonize-tools/socio4health/releases/tag/v0.1.7
| text/markdown | Erick Lozano, Diego Irreño, Juan Montenegro, Ingrid Mora | null | null | null | null | extract transform load etl scraping relational census sociodemographic colombia brazil | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only"
] | [] | https://github.com/harmonize-tools/socio4health | null | <4,>=3.10 | [] | [] | [] | [
"pandas>=2.0.0",
"requests>=2.31.0",
"Scrapy>=2.11.1",
"tqdm>=4.66.1",
"pyreadstat>=1.2.6",
"py7zr>=0.20.8",
"matplotlib>=3.7.0",
"numpy>=1.24.0",
"openpyxl>=3.1.2",
"dask>=2023.0.0",
"appdirs>=1.4.4",
"pyarrow>=12.0.0",
"deep-translator>=1.11.4",
"transformers>=4.30.0",
"torch>=2.0.0",
"geopandas>=0.14.0",
"check-manifest; extra == \"dev\"",
"pytest; extra == \"dev\"",
"coverage; extra == \"test\""
] | [] | [] | [] | [
"Bug Reports, https://github.com/harmonize-tools/socio4health/issues",
"Source, https://github.com/harmonize-tools/socio4health/",
"Changelog, https://github.com/harmonize-tools/socio4health/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T17:58:23.661346 | socio4health-1.0.1.tar.gz | 37,078 | e8/6b/4976ed5ceb1f7a1a2e9c4904b33232c31b03ae934b0f7cdf4ad5ff17337e/socio4health-1.0.1.tar.gz | source | sdist | null | false | 1c992b4c1338f3c47ab8537de3bbc8c8 | 11bd02bec22c59504eccce26a471a59489d592d3344bed2daccf323142d554b8 | e86b4976ed5ceb1f7a1a2e9c4904b33232c31b03ae934b0f7cdf4ad5ff17337e | null | [
"LICENSE.md"
] | 222 |
2.4 | supero | 1.0.4 | Unified SDK for Supero platform - includes intuitive wrapper (supero) and low-level API client (py_api_lib) | # Supero Python SDK
**The AI-native backend platform.** Build complete APIs in minutes, not months.
Supero transforms your data schemas into a full-featured backend with REST APIs, CRUD operations, queries, aggregations, relationships, multi-tenancy, and AI agents — all without writing backend code.
---
## ✨ The Magic of Supero
```
┌─────────────────────────────────────────────────────────────────┐
│ │
│ UPLOAD YOUR JSON SCHEMAS │
│ ↓ │
│ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ SUPERO GENERATES: │ │
│ │ │ │
│ │ ✓ REST APIs ✓ Database Tables │ │
│ │ ✓ Authentication ✓ Authorization │ │
│ │ ✓ Multi-Tenancy ✓ Data Isolation │ │
│ │ ✓ AI Agents ✓ Natural Language │ │
│ │ ✓ Vector Search ✓ Conversation Memory │ │
│ │ ✓ Python SDK ✓ Type Safety │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ↓ │
│ │
│ YOUR SAAS PLATFORM IS READY! │
│ │
│ In minutes, not months. No backend team required. │
│ │
└─────────────────────────────────────────────────────────────────┘
```
---
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Authentication](#authentication)
- [Schema Definition](#schema-definition)
- [CRUD Operations](#crud-operations)
- [Query Builder](#query-builder)
- [Aggregations](#aggregations)
- [References (Relationships)](#references-relationships)
- [Parent-Child Hierarchies](#parent-child-hierarchies)
- [Bulk Operations](#bulk-operations)
- [Convenience Methods](#convenience-methods)
- [Error Handling](#error-handling)
- [Typed SDK](#typed-sdk)
- [AI Features](#ai-features)
- [RBAC & API Keys](#rbac--api-keys)
- [Multi-Tenant Architecture](#multi-tenant-architecture)
- [Complete API Reference](#complete-api-reference)
- [Examples](#examples)
---
## Installation
### From PyPI
```bash
pip install supero
```
### Domain-Specific Wheel (For Production)
Once you've set up your domain, you can generate a typed SDK wheel:
```python
org.install_sdk() # Generates supero_acme-1.0.0-py3-none-any.whl
```
Then distribute and install the wheel:
```bash
pip install supero_acme-1.0.0-py3-none-any.whl
```
### What's Inside the Domain Wheel?
| Build Type | Wheel Name | Contents |
|------------|------------|----------|
| **Platform** | `supero-1.0.0-py3-none-any.whl` | Base SDK for setup |
| **Domain** | `supero_<domain>-1.0.0-py3-none-any.whl` | Typed models + API client |
**Benefits of domain wheels:**
- ✅ Full IDE auto-completion for your schemas
- ✅ Type safety and editor warnings
- ✅ Single file deployment
- ✅ Multiple domains can coexist
---
## Quick Start
### The Flow
```
1. Register Domain → 2. Create Project → 3. Upload Schemas → 4. CRUD
```
### 1. Register Domain (First Time)
```python
from supero import register_domain
org = register_domain(
domain_name="acme",
admin_email="admin@acme.com",
admin_password="SecurePass123!"
)
```
> This creates your organization with the necessary infrastructure.
### 2. Create Your Project
```python
project = org.Project.create(name="ecommerce", description="E-commerce Platform")
org = org.switch_project(project_name="ecommerce")
```
### 3. Upload Schemas
Create `schemas/task.json`:
```json
{
"name": "task",
"fields": {
"title": {"type": "string", "required": true},
"done": {"type": "boolean", "default": false},
"priority": {"type": "string", "enum": ["low", "medium", "high"]}
}
}
```
Upload:
```python
org.upload_schemas("schemas/")
```
**That's it.** You now have a full REST API for tasks.
### 4. Use Your API
```python
# Create
task = org.crud.create("task", name="task-1", title="Learn Supero", priority="high")
# Read
task = org.crud.get("task", task["uuid"])
# Update
org.crud.update("task", task["uuid"], done=True)
# Delete
org.crud.delete("task", task["uuid"])
```
### Returning Users - Login
```python
from supero import login
# Single-tenant app
org = login("acme", "user@acme.com", "password", project="ecommerce")
# Multi-tenant app
org = login("acme", "user@acme.com", "password", project="hr-system", tenant="sales")
```
---
## Authentication
### Register New Domain
```python
from supero import register_domain
org = register_domain(
domain_name="acme",
admin_email="admin@acme.com",
admin_password="SecurePass123!"
)
# Create your application project
project = org.Project.create(name="myapp")
org = org.switch_project(project_name="myapp")
```
### Login with Credentials
```python
from supero import login
# Single-tenant (tenant is auto-created)
org = login("acme", "user@acme.com", "password", project="myapp")
# Multi-tenant (specify tenant)
org = login("acme", "user@acme.com", "password", project="saas-app", tenant="customer-a")
```
### Token-Based Authentication (Production)
For frontends, microservices, or APIs, use token-based auth:
```python
from supero import login, quickstart
# Step 1: Login to get a token
org = login("acme", "user@acme.com", "password", project="myapp")
token = org.jwt_token # Save this (e.g., localStorage, config, env var)
# Step 2: Later, reconnect without credentials
org = quickstart("acme", "myapp", jwt_token=token)
# Multi-tenant
org = quickstart("acme", "saas-app", jwt_token=token, tenant="customer-a")
```
### Get JWT via REST API
```bash
curl -X POST https://api.supero.dev/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{
"domain": "acme",
"project": "myapp",
"email": "user@acme.com",
"password": "your-password"
}'
# Response: {"token": "eyJhbGc...", "expires_in": 86400}
```
### Login with API Key
```python
from supero import quickstart
org = quickstart("acme", "myapp", api_key="ak_prod_123")
```
---
## Schema Definition
Schemas define your data models. Place JSON files in a `schemas/` directory.
### Basic Schema
```json
{
"name": "customer",
"fields": {
"email": {"type": "string", "required": true, "unique": true},
"company": {"type": "string"},
"tier": {"type": "string", "enum": ["free", "pro", "enterprise"], "default": "free"},
"active": {"type": "boolean", "default": true}
}
}
```
### Field Types
| Type | JSON Example | Python Example | Notes |
|------|--------------|----------------|-------|
| `string` | `"hello"` | `"hello"` | Text |
| `number` | `42`, `3.14` | `42`, `3.14` | Integer or float |
| `boolean` | `true`, `false` | `True`, `False` | |
| `array` | `["a", "b"]` | `["a", "b"]` | List of values |
| `object` | `{"key": "val"}` | `{"key": "val"}` | Nested object |
| `datetime` | `"2024-01-15T10:30:00Z"` | `"2024-01-15T10:30:00Z"` | ISO 8601 format |
### Field Options
| Option | Type | Description |
|--------|------|-------------|
| `type` | string | Field type (required) |
| `required` | boolean | Field must be provided |
| `default` | any | Default value if not provided |
| `unique` | boolean | Value must be unique across all objects |
| `enum` | array | Allowed values |
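To make the interaction of these options concrete, here is a small self-contained sketch (illustrative only, not Supero's implementation) of applying `required`, `default`, and `enum` rules to an incoming object:

```python
def validate(fields_spec, data):
    """Apply required/default/enum rules from a schema's "fields" block.

    fields_spec: dict of field name -> options, as in the schema JSON.
    data: the object being created; returns a validated copy.
    """
    out = dict(data)
    for name, opts in fields_spec.items():
        if name not in out:
            if opts.get("required"):
                raise ValueError(f"{name} is required")
            if "default" in opts:
                out[name] = opts["default"]  # fill in the declared default
        if name in out and "enum" in opts and out[name] not in opts["enum"]:
            raise ValueError(f"{name} must be one of {opts['enum']}")
    return out

spec = {
    "tier": {"type": "string", "enum": ["free", "pro"], "default": "free"},
    "email": {"type": "string", "required": True},
}
validate(spec, {"email": "a@b.com"})
# → {"email": "a@b.com", "tier": "free"}
```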
### Schema with References
```json
{
"name": "order",
"fields": {
"total": {"type": "number", "required": true},
"status": {"type": "string", "enum": ["pending", "shipped", "delivered"], "default": "pending"}
},
"refs": {
"customer": {"type": "customer"},
"products": {"type": "product", "many": true}
}
}
```
### Upload Schemas
```python
# Upload all schemas from directory
org.upload_schemas("schemas/")
# Upload single schema
org.upload_schema("schemas/task.json")
```
---
## CRUD Operations
All CRUD operations are available via `org.crud`.
### Create
```python
task = org.crud.create("task",
name="task-001",
title="Learn Supero",
priority="high"
)
# Returns: {"uuid": "...", "name": "task-001", "title": "Learn Supero", ...}
```
### Read
```python
# Get by UUID
task = org.crud.get("task", "uuid-here")
# Get by name
task = org.crud.get_by_name("task", "task-001")
# Get by fully qualified name
task = org.crud.get_by_fq_name("task", ["my-app", "task-001"])
# List all
tasks = org.crud.list("task")
# List with pagination
tasks = org.crud.list("task", limit=10, offset=20)
```
### Update
```python
task = org.crud.update("task", "uuid-here", done=True, priority="low")
```
### Delete
```python
success = org.crud.delete("task", "uuid-here")
# Returns: True if deleted, False if failed
```
---
## Query Builder
For complex queries, use the fluent query builder.
### Basic Query
```python
tasks = org.crud.query("task").all()
```
### Filtering
```python
# Single filter
tasks = org.crud.query("task").filter(done=False).all()
# Multiple filters (AND)
tasks = org.crud.query("task").filter(done=False, priority="high").all()
# Chained filters
tasks = org.crud.query("task") \
.filter(done=False) \
.filter(priority="high") \
.all()
```
### Ordering
```python
# Ascending
tasks = org.crud.query("task").order_by("created_at").all()
# Descending (prefix with -)
tasks = org.crud.query("task").order_by("-created_at").all()
# Multiple fields
tasks = org.crud.query("task").order_by("-priority", "created_at").all()
```
### Pagination
```python
# Limit and offset
tasks = org.crud.query("task").limit(10).offset(20).all()
# Page-based
tasks = org.crud.query("task").paginate(page=3, per_page=25).all()
```
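The two pagination styles are interchangeable; presumably `paginate(page, per_page)` reduces to a limit/offset pair. A sketch of the usual conversion (an assumption about the mapping, not taken from Supero's source):

```python
def page_to_limit_offset(page, per_page):
    """Convert 1-based page numbering to a (limit, offset) pair."""
    if page < 1:
        raise ValueError("page numbers are 1-based")
    return per_page, (page - 1) * per_page

print(page_to_limit_offset(3, 25))  # → (25, 50), i.e. .limit(25).offset(50)
```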
### Other Query Methods
```python
# Get first match
task = org.crud.query("task").filter(priority="high").first()
# Check existence
exists = org.crud.query("task").filter(priority="high").exists()
# Count
count = org.crud.query("task").filter(done=False).count()
# Delete all matching
deleted = org.crud.query("task").filter(done=True).delete_all()
```
### Shorthand: find()
```python
tasks = org.crud.find("task", done=False, priority="high")
```
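The chaining style above works because each builder method returns the builder itself. A stripped-down, in-memory sketch of the pattern (illustrative only, not Supero's implementation):

```python
class MiniQuery:
    """In-memory stand-in for a fluent query builder over a list of dicts."""
    def __init__(self, rows):
        self._rows = list(rows)
        self._filters = {}
        self._order = None

    def filter(self, **conditions):
        self._filters.update(conditions)  # filters accumulate (AND semantics)
        return self                       # returning self enables chaining

    def order_by(self, field):
        self._order = field               # "-field" means descending
        return self

    def all(self):
        rows = [r for r in self._rows
                if all(r.get(k) == v for k, v in self._filters.items())]
        if self._order:
            field = self._order.lstrip("-")
            rows.sort(key=lambda r: r[field],
                      reverse=self._order.startswith("-"))
        return rows

    def first(self):
        rows = self.all()
        return rows[0] if rows else None

    def count(self):
        return len(self.all())

tasks = [{"title": "a", "done": False, "pri": 2},
         {"title": "b", "done": True,  "pri": 1},
         {"title": "c", "done": False, "pri": 1}]
open_tasks = MiniQuery(tasks).filter(done=False).order_by("pri").all()
# → tasks "c" then "a" (both not done, sorted by ascending priority)
```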
---
## Aggregations
```python
# Count
total = org.crud.count("task")
active = org.crud.count("task", done=False)
# Sum, Average, Min, Max
revenue = org.crud.sum("order", "amount")
avg_price = org.crud.avg("product", "price")
cheapest = org.crud.min("product", "price")
highest = org.crud.max("order", "amount")
# Distinct values
statuses = org.crud.distinct("order", "status")
# → ["pending", "shipped", "delivered"]
# Group by (count)
by_priority = org.crud.count_by("task", "priority")
# → {"low": 10, "medium": 25, "high": 5}
# Full stats
stats = org.crud.stats("order", "amount")
# → {"count": 100, "sum": 15000, "avg": 150.0, "min": 10, "max": 500}
```
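The result shapes of `count_by` and `stats` can be reproduced locally over a list of dicts; a self-contained sketch (illustrative, not the server implementation):

```python
from collections import Counter

def count_by(rows, field):
    """Group counts, matching the {value: count} shape shown above."""
    return dict(Counter(r[field] for r in rows))

def stats(rows, field):
    """count/sum/avg/min/max over a numeric field."""
    vals = [r[field] for r in rows]
    return {"count": len(vals), "sum": sum(vals),
            "avg": sum(vals) / len(vals), "min": min(vals), "max": max(vals)}

orders = [{"amount": 10}, {"amount": 40}, {"amount": 100}]
print(stats(orders, "amount"))
# → {'count': 3, 'sum': 150, 'avg': 50.0, 'min': 10, 'max': 100}
```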
---
## References (Relationships)
### Define References in Schema
```json
{
"name": "order",
"fields": { "total": {"type": "number"} },
"refs": {
"customer": {"type": "customer"},
"products": {"type": "product", "many": true}
}
}
```
### Use References
```python
# Create objects
customer = org.crud.create("customer", name="cust-1", email="john@example.com")
order = org.crud.create("order", name="order-1", total=99.99)
# Link them
org.crud.set_ref("order", order["uuid"], "customer", customer["uuid"])
# With metadata
org.crud.set_ref("order", order["uuid"], "customer", customer["uuid"],
role="billing", notes="Primary contact")
# Get reference
ref = org.crud.get_ref("order", order["uuid"], "customer")
customer = ref["target"]
role = ref["link_data"]["role"]
# Remove reference
org.crud.remove_ref("order", order["uuid"], "customer")
```
---
## Parent-Child Hierarchies
```python
# Create parent
customer = org.crud.create("customer", name="acme-corp", email="contact@acme.com")
# Create child with parent reference
order = org.crud.create("order",
parent=customer,
name="order-001",
total=199.99
)
# fq_name will be: ["my-app", "acme-corp", "order-001"]
```
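The `fq_name` shown in the comment follows a simple convention: the child's name is appended to the parent's fully qualified name. A one-line sketch of that rule (inferred from the example above, not taken from Supero's source):

```python
def child_fq_name(parent_fq_name, child_name):
    """Append a child's name to its parent's fully qualified name."""
    return list(parent_fq_name) + [child_name]

print(child_fq_name(["my-app", "acme-corp"], "order-001"))
# → ['my-app', 'acme-corp', 'order-001']
```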
---
## Bulk Operations
```python
# Create many
tasks = org.crud.bulk_create("task", [
{"name": "task-1", "title": "First task", "priority": "high"},
{"name": "task-2", "title": "Second task", "priority": "medium"},
])
# Get many
tasks = org.crud.bulk_get("task", ["uuid-1", "uuid-2"])
# Update many
org.crud.bulk_update("task", [
{"uuid": "uuid-1", "done": True},
{"uuid": "uuid-2", "done": True},
])
# Delete many
org.crud.bulk_delete("task", ["uuid-1", "uuid-2"])
```
---
## Convenience Methods
```python
# Check existence
if org.crud.exists("user", email="admin@example.com"):
print("User exists")
# Get or create (idempotent)
task, created = org.crud.get_or_create("task", "task-001",
title="Default Title",
priority="medium"
)
# Update or create
task, created = org.crud.update_or_create("task", "task-001",
title="Updated Title",
done=True
)
```
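The idempotent semantics of `get_or_create` / `update_or_create` can be sketched against a plain dict store (illustrative only, not Supero's implementation):

```python
def get_or_create(store, name, **defaults):
    """Return (obj, created): reuse an existing object or create it with defaults."""
    if name in store:
        return store[name], False        # existing object wins; defaults ignored
    store[name] = {"name": name, **defaults}
    return store[name], True

def update_or_create(store, name, **fields):
    """Create if missing, otherwise update in place; returns (obj, created)."""
    obj, created = get_or_create(store, name, **fields)
    if not created:
        obj.update(fields)               # unlike get_or_create, fields overwrite
    return obj, created

db = {}
task, created = get_or_create(db, "task-001", title="Default")   # created=True
task, created = get_or_create(db, "task-001", title="Ignored")   # created=False, title unchanged
```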
---
## Error Handling
```python
# Safe get (returns None if not found)
task = org.crud.get("task", "invalid-uuid")
if task is None:
print("Not found")
# Safe delete (returns False if failed)
if not org.crud.delete("task", "invalid-uuid"):
print("Delete failed")
# Try/except
try:
task = org.crud.create("task", name="task-1", title="Test")
except Exception as e:
print(f"Create failed: {e}")
```
---
## Typed SDK
Generate a typed SDK for full IDE auto-completion.
```python
org.install_sdk()
```
### Typed Usage
```python
# Instead of string-based:
task = org.crud.create("task", name="t1", title="Learn Supero")
# Use typed access:
task = org.Task.create(name="t1", title="Learn Supero")
# Full examples
task = org.Task.get(uuid)
tasks = org.Task.find(done=False)
org.Task.update(uuid, done=True)
org.Task.delete(uuid)
# Query
tasks = org.Task.query().filter(done=False).order_by("-priority").all()
```
### Comparison
| Feature | `org.crud` | `org.Task` (Typed) |
|---------|------------|-------------------|
| Auto-completion | ❌ | ✅ |
| Type hints | ❌ | ✅ |
| Typo detection | Runtime | Editor |
---
## AI Features
### Simple Chat
```python
response = org.ai.chat("What projects do we have?")
print(response.content)
```
### Quick Question
```python
answer = org.ask("How many active projects are there?")
# → "You have 5 active projects."
```
### Streaming
```python
for chunk in org.ai.chat_stream("Explain our architecture"):
print(chunk, end="", flush=True)
```
### Multi-Turn Conversations
```python
session = org.ai.sessions.create()
org.ai.chat("List all projects", session_id=session.id)
org.ai.chat("Create a task for the first one", session_id=session.id)
org.ai.chat("Call it 'Setup CI/CD'", session_id=session.id)
```
### Vector Search (RAG)
```python
# Index
org.ai.vectors.index(
content="Project Alpha is our flagship e-commerce platform...",
metadata={"type": "project_doc"}
)
# Search
results = org.ai.vectors.search("how does authentication work?", limit=5)
```
### Configure AI
```python
org.ai.configure(
model="claude-sonnet-4-20250514",
max_tokens=8000,
temperature=0.5
)
```
---
## RBAC & API Keys
```python
# Pass api_key to CRUD operations
task = org.crud.create("task", name="t1", title="Test", api_key="user-key")
# Or with query builder
tasks = org.crud.query("task") \
.with_api_key("user-key") \
.filter(done=False) \
.all()
```
---
## Multi-Tenant Architecture
### Hierarchy
```
Domain ──→ Project ──→ Tenant ──→ User Account
│ │
│ └──→ Schemas (linked to project)
│
└──→ default-project / default-tenant (auto-created)
```
| Level | Description | Required? |
|-------|-------------|-----------|
| **Domain** | Top-level organization | ✅ Yes |
| **Project** | Your application | ✅ Yes |
| **Tenant** | Isolated environment | Optional* |
| **User** | Individual account | ✅ Yes |
*Single-tenant apps: tenant is auto-created and abstracted away
*Multi-tenant apps: create and specify tenants explicitly
### Single-Tenant vs Multi-Tenant
| Type | Tenant Handling | Login Example |
|------|-----------------|---------------|
| **Single-Tenant** | Auto (abstracted) | `login("acme", email, pwd, project="ecommerce")` |
| **Multi-Tenant** | Explicit | `login("acme", email, pwd, project="hr", tenant="sales")` |
### Multi-Tenant Setup
```python
from supero import register_domain, login
# Setup (as admin)
org = register_domain("acme", "admin@acme.com", "SecurePass123!")
org.Project.create(name="hr-system")
org = org.switch_project(project_name="hr-system")
# Create tenants
org.Tenant.create(name="sales")
org.Tenant.create(name="payroll")
org.upload_schemas("schemas/")
# Usage: Each tenant has isolated data
sales = login("acme", "sales@acme.com", "password", project="hr-system", tenant="sales")
payroll = login("acme", "payroll@acme.com", "password", project="hr-system", tenant="payroll")
sales.crud.create("employee", name="alice", department="Sales")
payroll.crud.create("payslip", name="pay-001", amount=5000)
# Data is isolated!
sales.crud.list("employee") # Only sales data
payroll.crud.list("payslip") # Only payroll data
```
### Domain Wheel Distribution
SDK is generated at the domain level. Project/tenant are specified at login:
```bash
pip install supero_acme-1.0.0-py3-none-any.whl
```
```python
from supero import login, quickstart
# Same wheel, different project/tenant
ecom = login("acme", "user@acme.com", "pass", project="ecommerce")
sales = login("acme", "user@acme.com", "pass", project="hr-system", tenant="sales")
# Or with tokens (after obtaining from login)
ecom_token = ecom.jwt_token
ecom = quickstart("acme", "ecommerce", jwt_token=ecom_token)
```
---
## Complete API Reference
### Setup Methods
| Method | Description |
|--------|-------------|
| `register_domain(domain_name, admin_email, admin_password)` | Create new domain |
| `org.Project.create(name, description=)` | Create project |
| `org.Tenant.create(name, description=)` | Create tenant |
| `org.switch_project(project_name=, tenant_name=)` | Switch context |
| `login(domain, email, password, project=, tenant=)` | Login with credentials |
| `quickstart(domain, project, *, jwt_token=, tenant=)` | Connect with token |
| `org.upload_schemas(path)` | Upload schemas |
| `org.install_sdk()` | Generate typed SDK |
### CRUD Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `create(type, name=, **fields)` | `dict` | Create object |
| `get(type, uuid)` | `dict \| None` | Get by UUID |
| `get_by_name(type, name)` | `dict \| None` | Get by name |
| `list(type, limit=, offset=)` | `list` | List all |
| `update(type, uuid, **fields)` | `dict` | Update object |
| `delete(type, uuid)` | `bool` | Delete object |
### Query Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `query(type)` | `QueryBuilder` | Start query |
| `find(type, **filters)` | `list` | Shorthand filter |
| `.filter(**conditions)` | `QueryBuilder` | Add filters |
| `.order_by(*fields)` | `QueryBuilder` | Set ordering |
| `.limit(n)` / `.offset(n)` | `QueryBuilder` | Pagination |
| `.all()` | `list` | Execute query |
| `.first()` | `dict \| None` | Get first |
| `.count()` | `int` | Count matches |
| `.exists()` | `bool` | Check existence |
### Aggregation Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `count(type, **filters)` | `int` | Count objects |
| `sum(type, field, **filters)` | `number` | Sum field |
| `avg(type, field, **filters)` | `number` | Average field |
| `min/max(type, field)` | `number` | Min/max value |
| `distinct(type, field)` | `list` | Unique values |
| `count_by(type, field)` | `dict` | Group counts |
| `stats(type, field)` | `dict` | Full statistics |
### Reference Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `set_ref(type, uuid, ref_field, ref_uuid, **link_data)` | `dict` | Create link |
| `get_ref(type, uuid, ref_field)` | `dict \| None` | Get link |
| `remove_ref(type, uuid, ref_field)` | `bool` | Remove link |
### Bulk Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `bulk_create(type, items)` | `list` | Create multiple |
| `bulk_get(type, uuids)` | `list` | Get multiple |
| `bulk_update(type, updates)` | `list` | Update multiple |
| `bulk_delete(type, uuids)` | `int` | Delete multiple |
### AI Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `org.ai.chat(message, session_id=)` | `Response` | Chat with AI |
| `org.ai.chat_stream(message)` | `Iterator` | Streaming chat |
| `org.ask(question)` | `str` | Quick Q&A |
| `org.ai.sessions.create()` | `Session` | Create session |
| `org.ai.vectors.index(content, metadata=)` | `str` | Index document |
| `org.ai.vectors.search(query, limit=)` | `list` | Semantic search |
---
## Examples
### E-Commerce App (Single-Tenant)
```python
from supero import register_domain, login
# Setup
org = register_domain("acme", "admin@acme.com", "SecurePass123!")
org.Project.create(name="ecommerce")
org = org.switch_project(project_name="ecommerce")
org.upload_schemas("schemas/")
# Usage
org = login("acme", "user@acme.com", "password", project="ecommerce")
customer = org.crud.create("customer", name="john", email="john@example.com")
order = org.crud.create("order", parent=customer, name="order-001", total=99.99)
org.crud.set_ref("order", order["uuid"], "customer", customer["uuid"])
revenue = org.crud.sum("order", "total", status="completed")
print(f"Revenue: ${revenue}")
```
### HR System (Multi-Tenant)
```python
from supero import register_domain, login
# Setup
org = register_domain("acme", "admin@acme.com", "SecurePass123!")
org.Project.create(name="hr-system")
org = org.switch_project(project_name="hr-system")
org.Tenant.create(name="sales")
org.Tenant.create(name="payroll")
org.upload_schemas("schemas/")
# Usage - isolated data per tenant
sales = login("acme", "sales@acme.com", "pass", project="hr-system", tenant="sales")
payroll = login("acme", "payroll@acme.com", "pass", project="hr-system", tenant="payroll")
sales.crud.create("employee", name="alice", department="Sales")
payroll.crud.create("payslip", name="pay-001", amount=5000)
```
### AI-Powered App
```python
from supero import login
org = login("acme", "admin@acme.com", "pass", project="myapp")
session = org.ai.sessions.create()
org.ai.chat("List all projects", session_id=session.id)
org.ai.chat("Create a task for the first one", session_id=session.id)
for chunk in org.ai.chat_stream("Generate a status report"):
print(chunk, end="", flush=True)
```
---
## Best Practices
1. **Use meaningful names** — `name="acme-corp-2024"` not `name="customer1"`
2. **Use get_or_create for idempotency** — Safe to call multiple times
3. **Use bulk operations** — One API call instead of many
4. **Handle None returns** — `get()` returns `None` if not found
5. **Use Typed SDK in production** — Full IDE support and type safety
---
## Troubleshooting
### "name field is required"
```python
# Wrong
org.crud.create("task", title="My Task")
# Correct
org.crud.create("task", name="task-001", title="My Task")
```
### "parent must have uuid and object_type"
```python
# Wrong
org.crud.create("order", parent={"name": "cust-1"}, ...)
# Correct
customer = org.crud.get("customer", uuid)
org.crud.create("order", parent=customer, ...)
```
---
<div align="center">
## 🚀 Build the Future of SaaS with Supero
**Upload your schemas → Get a complete backend with AI in minutes, not months.**
[Documentation](https://docs.supero.dev) • [GitHub](https://github.com/supero) • [Support](https://support.supero.dev)
**Built with ❤️ by the Supero Team**
</div>
| text/markdown | null | Supero Team <support@supero.io> | null | null | MIT | api, sdk, orm, crud, supero, api-client | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"inflection>=0.5.1",
"gevent>=22.10.0",
"cryptography>=41.0.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.11.0; extra == \"dev\"",
"coverage>=7.3.0; extra == \"dev\"",
"black>=23.7.0; extra == \"dev\"",
"flake8>=6.1.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"test\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest-mock>=3.11.0; extra == \"test\"",
"coverage>=7.3.0; extra == \"test\"",
"sphinx>=7.2.0; extra == \"docs\"",
"sphinx-rtd-theme>=1.3.0; extra == \"docs\"",
"supero[dev,docs,test]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/supero",
"Documentation, https://supero.readthedocs.io",
"Repository, https://github.com/yourusername/supero",
"Bug Tracker, https://github.com/yourusername/supero/issues",
"Changelog, https://github.com/yourusername/supero/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T17:58:17.069885 | supero-1.0.4-py3-none-any.whl | 1,282,534 | 5a/3a/da538cbc1ed4af120d884fc1b123d738619f80809ef14418f069e79ce3ac/supero-1.0.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 6ae40b111d38f1697e3d22f8b1fdb8fa | 263352dc8a46f296eec95bcd05883ed9b41e09f01ca4c57872d8d76c39f320dc | 5a3ada538cbc1ed4af120d884fc1b123d738619f80809ef14418f069e79ce3ac | null | [] | 85 |
2.4 | tror-yong-asr | 0.0.1 | Automatic Speech Recognition Model | # TrorYong ASR Model
`TrorYongASR` is an Automatic Speech Recognition Model implemented by KrorngAI.
`TrorYong` (ត្រយ៉ង) is the Khmer word for the giant ibis, the bird that symbolises __Cambodia__.
## Support My Work
While this work comes truly from the heart, each project represents a significant investment of time -- from deep-dive research and code preparation to the final narrative and editing process.
I am incredibly passionate about sharing this knowledge, but maintaining this level of quality is a major undertaking.
If you find my work helpful and are in a position to do so, please consider supporting my work with a donation.
You can click <a href="https://pay.ababank.com/oRF8/8yp6hy53">here</a> to donate or scan the QR code below.
Your generosity acts as a huge encouragement and helps ensure that I can continue creating in-depth, valuable content for you.
<figure>
<div style="text-align: center;"><a name='slotMachine' ><img src="https://kimang18.github.io/assets/fig/aba_qr_kimang.JPG" width="500" /></a></div>
<figcaption> Using Cambodian bank account, you can donate by scanning my ABA QR code here. (or click <a href="https://pay.ababank.com/oRF8/8yp6hy53">here</a>. Make sure that receiver's name is 'Khun Kim Ang'.) </figcaption>
</figure>
# Installation
You can easily install `tror-yong-asr` with `pip`:
```bash
pip install tror-yong-asr
```
# Usage
# TODO: update the detail
## Loading tokenizer
`TrorYongOCR` is a small optical character recognition model that you can train from scratch.
To that end, you can pair `TrorYongOCR` with your own tokenizer.
Just make sure that the __tokenizer used for training__ and the __tokenizer used for inference__ are __the same__.
Your tokenizer must contain begin of sequence (`bos`), end of sequence (`eos`) and padding (`pad`) tokens.
`bos` token id and `eos` token id are used in decoding function.
`pad` token id is used during training.
I also provide a tokenizer that supports Khmer and English.
```python
from tror_yong_ocr import get_tokenizer
tokenizer = get_tokenizer(charset=None)
print(len(tokenizer)) # you should receive 185
text = 'Amazon បង្កើនការវិនិយោគជិត១'
print(tokenizer.decode(tokenizer.encode(text, add_special_tokens=True), ignore_special_tokens=False))
# this should print <s>Amazon បង្កើនការវិនិយោគជិត១</s>
```
When preparing a dataset to train `TrorYongOCR`, you just need to transform the text into token ids using the tokenizer
```python
sentence = 'Cambodia needs peace.'
token_ids = tokenizer.encode(sentence, add_special_tokens=True)
```
__NOTE:__ I want to highlight that my tokenizer works at character level.
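To illustrate what a character-level tokenizer with `bos`/`eos`/`pad` tokens looks like, here is a minimal self-contained sketch (not the packaged tokenizer, just the contract described above):

```python
class CharTokenizer:
    """Minimal character-level tokenizer with bos/eos/pad special tokens."""
    def __init__(self, charset):
        self.specials = ["<pad>", "<s>", "</s>"]       # pad=0, bos=1, eos=2
        self.itos = self.specials + sorted(set(charset))
        self.stoi = {c: i for i, c in enumerate(self.itos)}
        self.pad_id, self.bos_id, self.eos_id = 0, 1, 2

    def __len__(self):
        return len(self.itos)

    def encode(self, text, add_special_tokens=True):
        ids = [self.stoi[c] for c in text]             # one id per character
        return [self.bos_id] + ids + [self.eos_id] if add_special_tokens else ids

    def decode(self, ids, ignore_special_tokens=True):
        toks = [self.itos[i] for i in ids]
        if ignore_special_tokens:
            toks = [t for t in toks if t not in self.specials]
        return "".join(toks)

tok = CharTokenizer("abc ")
ids = tok.encode("cab", add_special_tokens=True)
print(tok.decode(ids, ignore_special_tokens=False))  # → <s>cab</s>
```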
## Loading TrorYongOCR model
Inspired by [`PARSeq`](https://github.com/baudm/parseq/tree/main) and [`DTrOCR`](https://github.com/arvindrajan92/DTrOCR), I designed `TrorYongOCR` as follows: given `n_layer` transformer layers,
- the first `n_layer - 1` layers are encoding layers that encode a given image
- the final layer is a decoding layer without cross-attention mechanism
- for the decoding layer,
- the __latent state__ of an image (the output of encoding layers) is concatenated with the __input character embedding__ (token embedding including `bos` token plus position embedding) to create __context vector__, _i.e._ __key and value vectors__ (think of it like a prompt prefill)
- and the __input character embedding__ (token embedding plus position embedding) is used as __query vector__.
The architecture of TrorYongOCR can be found in Figure 1 below.
<figure>
<div style="text-align: center;"><a name='slotMachine' ><img src="https://raw.githubusercontent.com/Kimang18/KrorngAI/refs/heads/main/tror-yong-ocr/TrorYongOCR.drawio.png" width="500" /></a></div>
<figcaption> Figure 1: TrorYongOCR architecture overview. The input image is transformed into patch embedding. Image embedding is obtained by adding patch embedding and position embedding. The image embedding is passed through L-1 encoder blocks to generate image encoding (latent state). The image encoding is concatenated with character embedding (i.e. token embedding plus position embedding) before undergoing causal self-attention in the single decoder block to generate the next token.</figcaption>
</figure>
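The "prompt prefill" analogy above can be made concrete with an attention mask: every text query may attend to all image positions, but only to itself and earlier text positions. A pure-Python sketch of that mask (shapes only, no learned weights):

```python
def prefill_causal_mask(n_image, n_text):
    """Mask for text queries over [image tokens | text tokens] keys.

    Row q is text query q; entry 1 means "may attend".
    Image positions are always visible; text positions are causal.
    """
    mask = []
    for q in range(n_text):
        row = [1] * n_image + [1 if k <= q else 0 for k in range(n_text)]
        mask.append(row)
    return mask

for row in prefill_causal_mask(n_image=3, n_text=3):
    print(row)
# → [1, 1, 1, 1, 0, 0]
#   [1, 1, 1, 1, 1, 0]
#   [1, 1, 1, 1, 1, 1]
```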
Recent attention-mechanism techniques, such as Rotary Positional Embedding (RoPE), and the use of the Sigmoid Linear Unit (SiLU) with a Gated Linear Unit (GLU) in the Transformer block's MLP, are implemented in TrorYongOCR.
### Compared to PARSeq
For the `PARSeq` model, which is an encoder-decoder architecture, the text decoder uses the position embedding as the __query vector__, the character embedding (token embedding plus position embedding) as the __context vector__, and the __latent state__ from the image encoder as __memory__ for the cross-attention mechanism (see Figure 3 of their paper).
### Compared to DTrOCR
For DTrOCR, which is a decoder-only architecture, the image embedding (patch embedding plus position embedding) is concatenated with the input character embedding (a `[SEP]` token, equivalent to the `bos` token in `TrorYongOCR`, is prepended to indicate sequence separation), and causal self-attention is applied to the concatenation layer by layer to generate text autoregressively (see Figure 2 of their paper).
```python
from tror_yong_ocr import TrorYongOCR, TrorYongConfig
from tror_yong_ocr import get_tokenizer
tokenizer = get_tokenizer()
config = TrorYongConfig(
img_size=(32, 128),
patch_size=(4, 8),
n_channel=3,
vocab_size=len(tokenizer),
block_size=192,
n_layer=4,
n_head=6,
n_embed=384,
dropout=0.1,
bias=True,
)
model = TrorYongOCR(config, tokenizer)
```
## Train TrorYongOCR
You can check out the notebook below to train your own Small OCR Model.
[](https://colab.research.google.com/github/Kimang18/SourceCode-KrorngAI-YT/blob/main/FinetuneTrorYongOCR.ipynb)
I also have a video about training TrorYongOCR below
[](https://youtu.be/3W8P0mByFBY)
## Inference
The `TrorYongOCR` class also provides a `decode` function for decoding an image.
Note that it processes only one image at a time.
```python
import torch

from tror_yong_ocr import TrorYongOCR, TrorYongConfig
from tror_yong_ocr import get_tokenizer
tokenizer = get_tokenizer()
config = TrorYongConfig(
img_size=(32, 128),
patch_size=(4, 8),
n_channel=3,
vocab_size=len(tokenizer), # exclude pad and unk tokens
block_size=192,
n_layer=4,
n_head=6,
n_embed=384,
dropout=0.1,
bias=True,
)
model = TrorYongOCR(config, tokenizer)
model.load_state_dict(torch.load('path/to/your/weights.pt', map_location='cpu'))
pred = model.decode(batch['img_tensor'][0], max_tokens=192, temperature=0.001, top_k=None)  # `batch` comes from your dataloader
print(tokenizer.decode(pred[0].tolist(), ignore_special_tokens=True))
```
## TODO:
- [X] implement `TrorYongOCR` model with KV cache
- [X] Colab notebook for training `TrorYongOCR`
- [ ] benchmarking
| text/markdown | KHUN Kimang | kimang.khun@polytechnique.org | null | null | null | null | [] | [] | https://github.com/kimang18/KrorngAI | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T17:58:03.191485 | tror_yong_asr-0.0.1.tar.gz | 12,997 | e6/62/5a1e70723009fafb9c7c56a5a5f3a5c697add113618aaf920a329164003e/tror_yong_asr-0.0.1.tar.gz | source | sdist | null | false | 183545f2ee3d98343294923d633e5309 | a817a8bcda4928df586acc712c74f603da829ad8880a1ec9c4cd7bc74e33fdcf | e6625a1e70723009fafb9c7c56a5a5f3a5c697add113618aaf920a329164003e | null | [
"LICENSE"
] | 186 |
2.4 | emphub-cli | 0.2.0 | CLI tools to interface with EMPHub | # emphub-cli
CLI tools to interface with EMPHub.
## Installation
First, install the package:
```sh
python3 -m pip install emphub-cli
```
Then add a configuration file at `~/.emp/config.yaml` to connect to the EMPHub instance.
The configuration file has the following format:
```yaml
---
registry:
host: <S3 HOST>
access_key: <S3 ACCESS KEY>
secret_key: <S3 SECRET KEY>
secure: false # set to true if using HTTPS
bucket: emp-packages
local-storage:
path: ~/.emp/packages
connections_file: ~/.emp/connections.xml
```
## Usage
To view all packages available on EMPHub, use the following command:
```sh
emp packages
```
To see the tags for a given package, use the `tags` command:
```sh
emp tags <PACKAGE>
```
Then to pull a specific tag use the following command:
```sh
emp pull <PACKAGE>:<TAG>
```
The bitfile will then be available at `~/.emp/packages/<PACKAGE>/<TAG>/package/top.bit`, along with the address table and a pre-made `connections.xml` for the Serenity card.
| text/markdown | null | David Monk <dmonk@cern.ch> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"boto3",
"click",
"pyyaml",
"tabulate"
] | [] | [] | [] | [
"Homepage, https://gitlab.cern.ch/dmonk/emphub-cli",
"Issues, https://gitlab.cern.ch/dmonk/emphub-cli/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T17:57:38.842730 | emphub_cli-0.2.0.tar.gz | 5,113 | a2/98/a1f88bc4b2c64b05472c70c49344bebb499c5d217178cfcd816f4f5536e6/emphub_cli-0.2.0.tar.gz | source | sdist | null | false | 627d82de3bc2d254d8c481bbd32455f0 | 451d5bd3e30a6ef2da9e34ba068881799d345f0f225410a9d6b1bcc048584609 | a298a1f88bc4b2c64b05472c70c49344bebb499c5d217178cfcd816f4f5536e6 | MIT | [
"LICENSE"
] | 228 |
2.4 | scenario-integration | 0.2.5 | Scenario Integration: py/jupyter notebook-friendly APIs for state/subsector load shaping and DC additions (si.ScenarioIntegrator().help to get more help) | # ScenarioIntegration
The modeling tool, now as a Python package.
Notebook-first API for your Scenario Integration workflow.
## Install (dev)
```bash
pip install -e .
```
| text/markdown | Lara Bezerra | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pandas>=2.0",
"numpy>=1.24"
] | [] | [] | [] | [
"Homepage, https://github.com/SoLaraS2/ScenarioIntegration"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T17:57:31.119122 | scenario_integration-0.2.5.tar.gz | 9,753 | c3/4c/2fbe40e8ad58ea81a730116f85937bdba2af80b67b2d6a75ccc94e091a19/scenario_integration-0.2.5.tar.gz | source | sdist | null | false | a5c3a28ed78e6c4ebf9d0a22d4d5295a | ef4cc08444df4be7933bbee059f75a514a0cc443dc6fead947ae32079c2504a3 | c34c2fbe40e8ad58ea81a730116f85937bdba2af80b67b2d6a75ccc94e091a19 | null | [] | 219 |
2.4 | robotframework-reportlens | 0.1.4 | Generate a modern, interactive HTML report from Robot Framework output.xml | # Robotframework ReportLens
[](https://badge.fury.io/py/robotframework-reportlens)
[](https://www.python.org/downloads/)
[](https://github.com/deekshith-poojary98/robotframework-reportlens/actions/workflows/code-checks.yml)
**ReportLens** turns Robot Framework XML output (`output.xml`) into a single, self-contained HTML report with a modern, interactive UI.
## Sample Report
View generated reports here
- [Pass Report](https://deekshith-poojary98.github.io/robotframework-reportlens/pass/pass_report.html "Link to sample report")
- [Fail Report](https://deekshith-poojary98.github.io/robotframework-reportlens/fail/fail_report.html "Link to sample report")
## Installation
```bash
pip install robotframework-reportlens
```
Requires **Python 3.10+**. No extra dependencies (stdlib only).
## Usage
After running Robot Framework tests (e.g. `robot test/`), generate a report from `output.xml`:
```bash
reportlens output.xml -o report.html
```
**Arguments:**
- `xml_file` – Path to Robot Framework XML output (e.g. `output.xml`)
- `-o`, `--output` – Output HTML path (default: `report.html`)
**Examples:**
```bash
# Default output (report.html in current directory)
reportlens output.xml
# Custom output path
reportlens output.xml -o docs/report.html
```
Open the generated `.html` file in a browser.
You can also run the module directly:
```bash
python -m robotframework_reportlens output.xml -o report.html
```
## Features
- **Suite/test tree** – Navigate suites and tests with pass/fail/skip counts
- **Search & filters** – Filter by status and tags; search test names
- **Keyword tree** – Expand SETUP, keywords, and TEARDOWN; select a keyword to see its logs
- **Logs panel** – Log level filter (All, ERROR, WARN, INFO, etc.); copy button on each log message (shown on hover)
- **Failed-tests summary** – Quick access to failed tests from the sidebar
- **Dark/light theme** – Toggle in the report header
- **Fixed layout** – Same layout on all screens; zoom and scroll as needed
## How it works
ReportLens reads `output.xml`, parses suites, tests, keywords, and messages, then builds one HTML file from a bundled template. The report is data-driven: all content is embedded as JSON and rendered by JavaScript in the browser. No server required.
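The data-driven pattern described above can be sketched in a few lines of plain Python (a hypothetical illustration, not ReportLens's actual template or variable names):

```python
import json

# Hypothetical minimal template; ReportLens bundles its own, richer template.html.
TEMPLATE = "<html><body><script>const DATA = __DATA__;</script></body></html>"

def build_report(results: dict) -> str:
    # json.dumps output doubles as a JavaScript object literal, so the
    # browser-side code can render everything without a server round-trip.
    return TEMPLATE.replace("__DATA__", json.dumps(results))

html = build_report({"suite": "Login", "tests": [{"name": "t1", "status": "PASS"}]})
print(html)
```

The key design point is that the generated file is fully self-contained: data and rendering logic travel together in one HTML document.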
## Development / source layout
```
├── robotframework_reportlens/
│ ├── __init__.py
│ ├── cli.py # reportlens entry point
│ ├── generator.py # XML → report data → HTML
│ └── template/
│ └── template.html
├── tests/
│ ├── conftest.py # pytest fixtures
│ ├── test_cli.py # CLI tests
│ ├── test_generator.py # report generator tests
│ └── fixtures/ # minimal Robot output.xml for tests
├── pyproject.toml
└── README.md
```
### Running tests
Install with dev dependencies and run pytest:
```bash
pip install -e ".[dev]"
pytest tests/ -v
```
## License
Apache License 2.0 - See [LICENSE](LICENSE) file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | null | Deekshith Poojary <deekshithpoojary355@gmail.com> | null | null | null | robotframework, report, html, testing, cli, log | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Framework :: Robot Framework"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"robotframework>=6.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"ruff; extra == \"dev\"",
"build; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/deekshith-poojary98/robotframework-reportlens"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T17:57:26.705847 | robotframework_reportlens-0.1.4.tar.gz | 39,831 | f3/2d/5ad31f828f5a8bfd18e5b3cdb20dc8ff839cb8f2c0b8ffb2d22b4e149710/robotframework_reportlens-0.1.4.tar.gz | source | sdist | null | false | a0e015c44bc38d8d5baa917fa6ad2517 | 7cc1af2bfb33d38d814cd9a99f8afd404cc27f15ba58f078c7e5b6ef3fec190e | f32d5ad31f828f5a8bfd18e5b3cdb20dc8ff839cb8f2c0b8ffb2d22b4e149710 | Apache-2.0 | [
"LICENSE"
] | 216 |
2.4 | pyxhdl | 0.33 | Use Python as HDL language, and generate equivalent SystemVerilog and VHDL code | # PyXHDL - Python Frontend For VHDL And Verilog
*PyXHDL* was born for developers who are not really in love with any of the HDL languages,
and instead appreciate the simplicity and flexibility of using Python for their workflows.
## Install
The easiest way to install *PyXHDL* is using pip from the *PyPI* repo:
```Shell
$ pip install pyxhdl
```
Alternatively, it can be installed directly from the GitHub repo:
```Shell
$ pip install git+https://github.com/davidel/pyxhdl.git
```
## Description
*PyXHDL* allows you to write HDL code in Python, generating VHDL (>= 2008) and Verilog
(SystemVerilog >= 2012) code to be used for synthesis and simulation.
*PyXHDL* does not try to create an IR to be lowered, but instead interprets the Python AST
and maps it directly into the selected HDL backend code. Optimizations are
left to the OEM HDL compiler used to synthesize the design.
The main advantage of *PyXHDL* is that you can write functions and modules/entities
without explicit parametrization.
The function calls, and the modules/entities instantiations automatically capture the
call/instantiation site types, similarly to what C++ template programming allows.
Even though user HDL code in Python can use loops and functions, everything will be unrolled
in the final VHDL/Verilog code (this is what synthesis would do anyway, since there are
no loops or function calls in HW).
*PyXHDL* does not try to map Python loops to VHDL/Verilog ones, as those are too limited
when compared to the power of the Python ones, but instead unrolls them like the OEM HDL
compiler will.
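As a plain-Python illustration (not pyxhdl API) of what unrolling means: a four-iteration loop in the source conceptually becomes four concrete statements in the emitted HDL text, roughly like this:

```python
# Mimic what the generator conceptually does: each iteration of the Python
# loop contributes one concrete statement to the output HDL.
emitted = [f"XOUT[{i}] = A[{i}] & B[{i}];" for i in range(4)]
print("\n".join(emitted))
```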
The workflow with *PyXHDL* is not meant to generate code to be manually edited, but
code to be directly fed into OEM HDL synthesis (and testing, when using the *testbench* code
generation) tools.
An *IF* statement in Python gets emitted as an *IF* of the target HDL language if the test
depends on HDL variables (the *X.Value* type), or gets statically resolved to the if/else
branch as it normally would in Python.
Note that in case the *IF* test condition depends on HDL variables, both branches are
executed in order to generate the proper HDL code. See the following code for the
*pyint* pitfall.
```Python
pyint = 0
if hdlvar > 2:
# IF HDL code
...
pyint += 1
else:
# ELSE HDL code
...
pyint += 1
# pyint == 2 here!
```
On the contrary, *IF* statements whose condition only depends on Python variables
are resolved statically, and the "false branch" is never executed/emitted.
```Python
if pyint > 2:
hdl_var1 = hdl_var2 + 17
else:
hdl_array[2, 1] = hdl_array[1, 2]
```
In the above example, if, say, *pyint* is 3, the *ELSE* branch won't be evaluated,
which means that no code will be emitted for the HDL array operation in it.
An *IF* condition containing a mix of Python and HDL conditions gets short-circuited.
Example:
```Python
# Given a Python scalar A with value 3, and HDL variables B and C, we have ...
if A < 2 and B >= C:
# Code below never executed because of AND(False, B >= C) shortcut.
...
if A > 2 or B >= C:
# Code below always executed (the B >= C won't be generated) because of
# OR(True, B >= C) shortcut.
...
if A > 2 and B >= C:
# The above test reduces to:
#
# if B >= C:
# ...
#
# Because the A > 2 statement is known to be True at compile time.
...
```
If an HDL function contains return statements nested within a branch depending on
the value of an HDL variable, all the "return paths" must have the same signature.
Below, if *hdl_var1* is *X.Bits(5)* and *hdl_var2* is *X.Bits(8)*, the function
is valid because both return paths return a two-element tuple with signature
*X.Bits(11)*, *X.Bits(8)* (*PyXHDL* borrowed the Python _MatMul_ operator "@" for
bits concatenation).
```Python
@X.hdl
def tuple_return(hdl_var1, hdl_var2):
if hdl_var1 == "0b01101":
return hdl_var1 @ "0b110110", hdl_var2
...
return hdl_var2 @ "0b101", hdl_var1 @ "0b001"
```
The normal Python *FOR* and *WHILE* loops are supported in full, but the generators and
tests must not depend on HDL code (in other words, the loop has to be deterministic, as
required by synthesis).
Within Python loops, it is not possible to have *BREAK* and *CONTINUE* within a branch
(*IF*) that depends on an HDL variable. Here too, the behavior is dictated by the
fact that *PyXHDL*-emitted code must be synthesizable (deterministic loops).
So this is not possible:
```Python
for i, hdl_value in enumerate(my_hdl_generator(...)):
...
if hdl_value == 0:
break
```
While this is (assuming *max_count* is a Python scalar):
```Python
for i, hdl_value in enumerate(my_hdl_generator(...)):
...
if i > max_count:
break
```
The data access model of VHDL and Verilog differs quite a bit from a user-level
point of view (though it converges at the lower levels).
In VHDL it is not possible (modulo declaring them *shared*, which is usually a bad
idea) to have global variables, while it is possible to assign signals from
within processes, with the assignment being immediate or delayed (at the next
clock cycle) depending on the process type (combinatorial vs. sequential).
On the contrary, in Verilog it is possible to declare registers at module level
(though only one process can write them), but it is not possible to assign wires
from within processes (*always* blocks of any kind).
In *PyXHDL* registers (by the means of *X.mkreg()* or *X.mkvreg()*) should be used
for sequential logic, and wires (*X.mkwire()* and *X.mkvwire()*) should be the glue
for combinatorial logic.
*PyXHDL* registers map to *signal* in VHDL and to *logic* in SystemVerilog.
*PyXHDL* wires map to *variable* in VHDL and to *logic* in SystemVerilog.
HDL variables should be declared before being assigned. Assigning the result of an HDL
operation to a bare Python variable will simply create the *X.Value* result
and store it in the Python variable; no HDL assignment will be generated.
Example:
```Python
wtemp = X.mkreg(hdl_var1.dtype)
temp = hdl_var1 + hdl_var2
wtemp = hdl_var1 - hdl_var2
# Somewhere below the "temp" value will be used ...
```
In the above code, an HDL assignment to the *wtemp* HDL variable will be generated,
while the *temp* assignment won't generate one (it can be seen as a temporary value to
be used in following computations, without an explicit instantiation at HDL level).
When an HDL variable is declared with a given type, assignments to it will cast the RHS
to the type of the variable.
```Python
C = X.mkreg(X.Bits(10))
# If A is Bits(4) and B is Bits(8), C will be (as declared) Bits(10) obtained by truncating
# the concatenation result of Bits(12).
C = A @ B
```
Python functions that contain operations on *X.Value* types need to be marked with the
*@X.hdl* decorator, while *X.Entity* methods that are to be translated into processes
need to be marked with the *@X.hdl_process* decorator.
Note that if a Python function simply handles HDL variables as data, or uses their object
APIs, there is no need to decorate the functions as HDL. So this is valid from a *PyXHDL*
point of view:
```Python
def hdl_handler(hdl_v1, hdl_v2):
assert hdl_v1.dtype == hdl_v2.dtype, f'Types must match: {hdl_v1.dtype} vs. {hdl_v2.dtype}'
return dict(v1=hdl_v1, v2=hdl_v2), (hdl_v1, hdl_v2)
```
The Python matrix multiplication operator "@" has been repurposed to mean concatenation
of bits sequences.
Python slicing works with a base that is an HDL variable. The slice *stop* must be left
empty, and the *step* must be constant.
```Python
class VarSlice(X.Entity):
PORTS = 'A, B, =XOUT'
@X.hdl_process(sens='A, B')
def var_slice():
XOUT = A[B + 1::4]
```
Produces the following Verilog code:
```Verilog
module VarSlice(A, B, XOUT);
input logic [31: 0] A;
input logic [3: 0] B;
output logic [3: 0] XOUT;
always @(A or B)
var_slice : begin
XOUT = A[int'(B + 1) -: 4];
end
endmodule
```
And VHDL code:
```VHDL
architecture behavior of VarSlice is
begin
var_slice : process (A, B)
begin
XOUT <= std_logic_vector(A((to_integer(B + 1) - 3) downto to_integer(B + 1)));
end process;
end architecture;
```
*PyXHDL* maps the new Python *MATCH*/*CASE* statement to the appropriate
HDL case-select construct, making it easy to code FSMs.
The restriction is that the *CASE* values need to be Python values (they cannot be HDL).
Example:
```Python
# Somewhere defined ...
IDLE, START, STOP = 1, 2, 3
@X.hdl_process(sens='A, B')
def tester():
match A:
case IDLE:
XOUT = A + 1
case START:
XOUT = A + B
case STOP:
XOUT = A - B
case _:
XOUT = A * B
```
Will map to VHDL:
```VHDL
architecture behavior of MatchEnt is
begin
tester : process (A, B)
begin
case A is
when to_unsigned(1, 8) =>
XOUT <= A + 1;
when to_unsigned(2, 8) =>
XOUT <= A + B;
when to_unsigned(3, 8) =>
XOUT <= A - B;
when others =>
XOUT <= resize(A * B, 8);
end case;
end process;
end architecture;
```
## Data Types
The types supported by *PyXHDL* are *Uint* (unsigned integer), *Sint* (signed integer),
*Bits* (generic bit group), *Float* (HW synthesizable floating point), *Integer* (generic
integer), *Real* (generic floating point) and *Bool* (boolean).
```Python
# An 8-bit unsigned integer type.
u8 = X.Uint(8)
# A 15-bit signed integer type.
s15 = X.Sint(15)
# A 32-bit group type.
b32 = X.Bits(32)
# A 32-bit floating point type.
f32 = X.Float(32)
```
The following types are predefined for easy use.
```Python
INT4 = Sint(4)
INT8 = Sint(8)
INT16 = Sint(16)
INT32 = Sint(32)
INT64 = Sint(64)
INT128 = Sint(128)
UINT4 = Uint(4)
UINT8 = Uint(8)
UINT16 = Uint(16)
UINT32 = Uint(32)
UINT64 = Uint(64)
UINT128 = Uint(128)
FLOAT16 = Float(16)
FLOAT32 = Float(32)
FLOAT64 = Float(64)
FLOAT80 = Float(80)
FLOAT128 = Float(128)
BOOL = Bool()
BIT = Bits(1)
INT = Integer()
REAL = Real()
VOID = Void()
```
Logic bit values can be *0*, *1*, *X* and *Z*, although if only the
VHDL backend is targeted, the full set of VHDL logic values (*01XUZWHL*) can be used.
Bits are assigned with Python strings like '0b110xz' (the '0b' prefix, followed
by the logic states of the bits).
In order for the HW-synthesizable floating point type to be fully specified, a mapping
from the total number of bits to the exponent and mantissa sizes is
required.
The default mapping is the following, which follows the IEEE standard.
```Python
# FSpec = FSpec(EXP_SIZE, MANT_SIZE)
_FLOAT_SPECS = {
16: FSpec(5, 10),
32: FSpec(8, 23),
64: FSpec(11, 52),
80: FSpec(17, 63),
128: FSpec(15, 112),
}
```
It is possible to override that using a configuration file or by defining environment
variables. For example, define the *F16_SPEC* environment variable as "8,7" to map the
16-bit floating point number to the *BFLOAT16* format.
It is also possible to configure the floating point type mapping using the "float_specs"
entry of the configuration file (see [Mock Configuration](https://github.com/davidel/pyxhdl/blob/main/pyxhdl/config/pyxhdl.yaml)).
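As a quick sanity check (plain Python, not pyxhdl code): for the IEEE interchange widths in the table above, one sign bit plus exponent plus mantissa adds up to the total width, and the BFLOAT16 override ("8,7") fits 16 bits the same way. The 80-bit extended format is a special case and is skipped here.

```python
# sign (1) + exponent + mantissa == total width for the IEEE interchange formats.
specs = {16: (5, 10), 32: (8, 23), 64: (11, 52), 128: (15, 112)}
for total, (exp, mant) in specs.items():
    assert 1 + exp + mant == total, (total, exp, mant)

# The BFLOAT16 mapping F16_SPEC="8,7": 1 + 8 + 7 == 16 bits.
assert 1 + 8 + 7 == 16
print("float specs consistent")
```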
While VHDL (2008) offers a standard HW floating point library, Verilog does not.
In order for *PyXHDL* to be able to handle the *Float* type, a
[Verilog FPU Library](https://github.com/davidel/v_fplib) is included within its
standard libraries.
Note that this is a very simple library which has been tested, but likely not as
fully as full IEEE 754 compliance would call for.
It is possible to map the Verilog FPU library to a different implementation by
adding the following configuration to the YAML configuration file provided to
the code generator:
```YAML
backend:
verilog:
vfpu_conf: FPU_CONF_PATH
```
Where *FPU_CONF_PATH* is the path to the YAML configuration file describing the
mapping between the *PyXHDL* used functions, and the external module implementation.
The [default configuration](https://github.com/davidel/pyxhdl/blob/main/pyxhdl/hdl_libs/verilog/vfpu.yaml)
is to use the [Verilog FPU Library](https://github.com/davidel/v_fplib).
Arrays are created using the *X.mkarray()* API.
```Python
# Creates a (4, 4) array of UINT8 initialized with 0.
ARR = X.mkvreg(X.mkarray(X.UINT8, 4, 4), 0)
```
Arrays are indexed in the standard Python way.
```Python
RES = ARR[1, 2] + ARR[i, j]
```
When creating bit sequence types (Sint, Uint, Bits), the last dimension of the type
shape is the number of bits. In the example above, the shape of *ARR* will be (4, 4, 8).
Slicing the last dimension of a bit sequence type selects the bits.
```Python
# C will be Bits(4)
C = ARR[0, 1, 2: 6]
```
Sliced assign works in a similar fashion:
```Python
# XOUT is X.UINT8; A and B are X.Uint(4, 4, 8).
# Assigns the first 4 bits of XOUT with the last 4 bits of A[1, 0], and the
# last 4 bits of XOUT with the first 4 bits of B[0, 1].
XOUT[: 4] = A[1, 0, 4: 8]
XOUT[4: 8] = B[0, 1, : 4]
```
Assigning whole sub-arrays also works, if the type matches (in shape and core type):
```Python
# XOUT is X.Uint(6, 4, 8) and A is X.Uint(4, 4, 8).
XOUT[1] = A[2]
```
Complex Python slice operations (e.g. *A[0 : 8 : 2]*) are not supported (though in theory
they could be expanded into element-wise operations on the complex slice components).
## Creating Modules/Entities
Creating a module/entity with *PyXHDL* is simply a matter of defining a new class inheriting
from the *X.Entity* base class.
The class variable *PORTS* must be defined, specifying the names of the ports and their
direction.
Note that the port data type is not defined statically, but during instantiation,
allowing the same entity code to be used with different types. Clearly the Python code defining
the entity should take care of creating intermediate types from the actual input types,
and use code that is compatible with the entity inputs.
Note that the entity *PORTS* declaration:
```Python
class MyEntity(X.Entity):
PORTS = 'A, B:u*, C:s*=s16, D=b8, =XOUT'
...
```
is equivalent to the fully expanded form:
```Python
class MyEntity(X.Entity):
PORTS = (
X.Port('A', X.IN),
X.Port('B', X.IN, type='u*'),
X.Port('C', X.IN, type='s*', default=SINT16),
X.Port('D', X.IN, default=Bits(8)),
X.Port('XOUT', X.OUT),
)
...
```
If a port name is prefixed with the "=" character, the port is an output. If instead it
is prefixed with a "+" character, it is an input/output port; otherwise (with no prefix at all)
it is an input-only port.
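The prefix convention can be illustrated with a small plain-Python parser (a simplified sketch, not pyxhdl's implementation; it ignores the ":type" and "=default" annotations beyond stripping the type):

```python
def parse_ports(spec: str):
    """Map each comma-separated port spec to (name, direction), following the
    "=" (output) / "+" (inout) / bare (input) prefix convention."""
    ports = []
    for item in spec.split(','):
        item = item.strip()
        if item.startswith('='):
            direction, item = 'OUT', item[1:]
        elif item.startswith('+'):
            direction, item = 'INOUT', item[1:]
        else:
            direction = 'IN'
        name = item.split(':')[0]  # drop any ":type" annotation
        ports.append((name, direction))
    return ports

print(parse_ports('A, B:u*, +C, =XOUT'))
```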
The "default" argument in the port declaration is used only when a root entity is
being generated, to avoid manually specifying the inputs to the generator script.
In all other cases the type is bound at instantiation time, as usual.
If necessary, it is possible to restrict the port input types to specific types, using
the following syntax:
```Python
class MyEntity(X.Entity):
PORTS = 'A:u*, B:u*, =XOUT:s16'
...
```
Above, the *A* and *B* ports are restricted to _unsigned_ types of any size, and the
*XOUT* output port to a _signed_ 16-bit type.
This is equivalent to the fully expanded form:
```Python
class MyEntity(X.Entity):
PORTS = (
X.Port('A', X.IN, type='u*'),
X.Port('B', X.IN, type='u*'),
X.Port('XOUT', X.OUT, type='s16'),
)
...
```
If more complex type checking is required, it is of course possible to implement
the logic within the *MyEntity* _\_\_init\_\_()_ method.
The processes defining the entity behavior should also be declared, using the
*@X.hdl_process(...)* decorator.
The *sens=* parameter of *@X.hdl_process(...)* specifies the sensitivity list of the
process; it can be a comma-separated string of port names, a dictionary whose
keys are port names and whose values are instances of the *X.Sens* class, or a tuple/list
combining those:
```Python
@X.hdl_process(sens='+CLK, RESET, READY')
def run():
if RESET != 0:
XOUT = 0
```
The **+** sign in front of an HDL variable name means positive edge (0->1, *POSEDGE*),
while a **-** means negative edge (1->0, *NEGEDGE*). The default is *LEVEL* edge, which
triggers at every change of the variable value.
The above could have also been written using the fully expanded *X.Sens(...)* object.
```Python
@X.hdl_process(
sens=(dict(CLK=X.Sens(X.POSEDGE)), 'RESET, READY'),
)
def run():
if RESET != 0:
XOUT = 0
```
The *kind=* parameter of *@X.hdl_process(...)* specifies the process kind. If not
specified, the process is a normal process, with its code enclosed within a process block.
Declaring the process as **root** causes the code generated from the function to
be emitted outside of any process block, within the entity root section.
```Python
@X.hdl_process(kind=X.ROOT_PROCESS)
def root():
temp = X.mkwire(A.dtype)
temp = A + B
XOUT = temp * 3
```
Which produces the following Verilog code:
```Verilog
module Ex2(A, B, XOUT);
input logic [7: 0] A;
input logic [7: 0] B;
output logic [7: 0] XOUT;
wire logic [7: 0] temp;
assign temp = A + B;
assign XOUT = 8'(temp * 3);
endmodule
```
And VHDL code:
```VHDL
architecture behavior of Ex2 is
signal temp : unsigned(7 downto 0);
begin
temp <= A + B;
XOUT <= resize(temp * 3, 8);
end architecture;
```
Note that there is no need to declare the port variables as inputs of the process
functions, as those are implicitly defined by *PyXHDL* before interpreting the function
AST code. The same thing is true for any of the *kwargs* passed during an *Entity*
instantiation, which get the default values from the *Entity* *ARGS* class variable.
If a given *Entity* is instantiated with different input types (or even keyword arguments),
as many unique entities will be generated in the emitted HDL code.
The name of the functions within an entity (which will become the name of the processes
if the *@X.hdl_process(...)* decorator is used), as well as the names of the HDL entity
inputs and variables, must not conflict with VHDL/Verilog reserved keywords.
Example of simple gates composition in *PyXHDL*:
```Python
import pyxhdl as X
from pyxhdl import xlib as XL
class AndGate(X.Entity):
PORTS = 'A, B, =XOUT'
@X.hdl_process(kind=X.ROOT_PROCESS)
def run():
XOUT = A & B
class NotGate(X.Entity):
PORTS = 'A, =XOUT'
@X.hdl_process(kind=X.ROOT_PROCESS)
def run():
XOUT = ~A
class NandGate(X.Entity):
PORTS = 'A, B, =XOUT'
@X.hdl_process(kind=X.ROOT_PROCESS)
def run():
OOUT = X.mkwire(XOUT.dtype)
AndGate(A=A,
B=B,
XOUT=OOUT)
NotGate(A=OOUT,
XOUT=XOUT)
```
Note that the modules above do not need any parametrization to be specified,
and automatically work with any bit vector type. Types are bound at instantiation
site, and propagate down without any explicit parametrization.
It is also possible to pass other *Entity* configuration as typical Python *kwargs*
during entity instantiation. This requires the derived *Entity* to declare the
*ARGS* class variable, defining the valid *kwargs* and their default values.
Example declaration:
```Python
class ArgsEntity(X.Entity):
PORTS = 'CLK, XIN, =XOUT'
ARGS = dict(mask=31)
@X.hdl_process(sens='+CLK, XIN')
def run():
XOUT = XIN ^ mask
```
Example instantiation:
```Python
class UseArgsEntity(X.Entity):
PORTS = 'CLK, A, B, =XOUT'
@X.hdl_process(kind=X.ROOT_PROCESS)
def use_proc():
OOUT = X.mkreg(A.dtype)
ArgsEntity(CLK=CLK,
XIN=A,
XOUT=OOUT,
mask=63)
...
```
## Interfaces
*PyXHDL* has support for interfaces as well, in order to group signals and
simplify module argument passing.
Interfaces in *PyXHDL* are not generated into the backend-specific HDL
ones, but are expanded at code generation time. For the user there
is no visible difference between the two options.
Example interface use in *PyXHDL*:
```Python
import pyxhdl as X
from pyxhdl import xlib as XL
class TestIfc(X.Interface):
FIELDS = 'X:u16, Y:u16=0'
IPORT = 'CLK, RST_N, +X, +Y, =XOUT'
def __init__(self, clk, rst_n, xout, **kwargs):
super().__init__('TEST', **kwargs)
self.mkfield('CLK', clk)
self.mkfield('RST_N', rst_n)
self.mkfield('XOUT', xout)
class IfcEnt(X.Entity):
PORTS = f'*IFC:{__name__}.TestIfc.IPORT, A'
@X.hdl_process(sens='+IFC.CLK')
def run(self):
if IFC.RST_N != 1:
IFC.X = 1
IFC.Y = 0
IFC.XOUT = 0
else:
IFC.XOUT = A * IFC.X + IFC.Y - IFC.an_int
IFC.X += 1
IFC.Y += 2
class IfcTest(X.Entity):
PORTS = 'CLK, RST_N, A, =XOUT'
@X.hdl_process(kind=X.ROOT_PROCESS)
def root(self):
self.ifc = TestIfc(CLK, RST_N, XOUT,
an_int=17)
IfcEnt(IFC=self.ifc,
A=A)
```
The generated code is mapped to the following VHDL one:
```VHDL
architecture behavior of IfcTest is
signal TEST_X : unsigned(15 downto 0);
signal TEST_Y : unsigned(15 downto 0) := to_unsigned(0, 16);
begin
IfcEnt_1 : entity IfcEnt
port map (
IFC_CLK => CLK,
IFC_RST_N => RST_N,
IFC_X => TEST_X,
IFC_Y => TEST_Y,
IFC_XOUT => XOUT,
A => A
);
end architecture;
architecture behavior of IfcEnt is
begin
run : process (IFC_CLK)
begin
if rising_edge(IFC_CLK) then
if IFC_RST_N /= '1' then
IFC_X <= to_unsigned(1, 16);
IFC_Y <= to_unsigned(0, 16);
IFC_XOUT <= to_unsigned(0, 16);
else
IFC_XOUT <= (resize(A * IFC_X, 16) + IFC_Y) - 17;
IFC_X <= IFC_X + 1;
IFC_Y <= IFC_Y + 2;
end if;
end if;
end process;
end architecture;
```
And the following Verilog code:
```Verilog
module IfcTest(CLK, RST_N, A, XOUT);
input logic CLK;
input logic RST_N;
input logic [15: 0] A;
output logic [15: 0] XOUT;
logic [15: 0] TEST_X;
logic [15: 0] TEST_Y = 16'(0);
IfcEnt IfcEnt_1(
.IFC_CLK(CLK),
.IFC_RST_N(RST_N),
.IFC_X(TEST_X),
.IFC_Y(TEST_Y),
.IFC_XOUT(XOUT),
.A(A)
);
endmodule
module IfcEnt(IFC_CLK, IFC_RST_N, IFC_X, IFC_Y, IFC_XOUT, A);
input logic IFC_CLK;
input logic IFC_RST_N;
inout logic [15: 0] IFC_X;
inout logic [15: 0] IFC_Y;
output logic [15: 0] IFC_XOUT;
input logic [15: 0] A;
always_ff @(posedge IFC_CLK)
run : begin
if (IFC_RST_N != 1'(1)) begin
IFC_X <= 16'(1);
IFC_Y <= 16'(0);
IFC_XOUT <= 16'(0);
end else begin
IFC_XOUT <= (16'(A * IFC_X) + IFC_Y) - 17;
IFC_X <= IFC_X + 1;
IFC_Y <= IFC_Y + 2;
end
end
endmodule
```
## Attributes
*PyXHDL* allows the user to specify the equivalent of VHDL and Verilog
attributes, when creating new objects. Example:
```Python
class BlockRam(X.Entity):
PORTS = 'CLK, RST_N, RDEN, WREN, ADDR, IN_DATA, =OUT_DATA'
ARGS = dict(RAM_SIZE=None)
# The "vhdl" and "verilog" entries are for backend-specific attributes, and do
# not need to be present if empty. The "$common" ones will be used for all backends.
RAM_ATTRIBUTES = {
'$common': {
'ram_style': 'block',
},
'vhdl': {
},
'verilog': {
}
}
@X.hdl_process(sens='+CLK')
def run(self):
mem = X.mkreg(X.mkarray(IN_DATA.dtype, RAM_SIZE),
attributes=self.RAM_ATTRIBUTES)
if not RST_N:
OUT_DATA = 0
else:
if WREN:
mem[ADDR] = IN_DATA
elif RDEN:
OUT_DATA = mem[ADDR]
```
The above Python code generates the following VHDL:
```VHDL
architecture behavior of BlockRam is
signal mem : pyxhdl.bits_array1d(0 to 3071)(15 downto 0);
attribute ram_style : string;
attribute ram_style of mem : signal is "block";
begin
run : process (CLK)
begin
if rising_edge(CLK) then
if (not RST_N) /= '0' then
OUT_DATA <= std_logic_vector(to_unsigned(0, 16));
elsif WREN /= '0' then
mem(to_integer(unsigned(ADDR))) <= IN_DATA;
elsif RDEN /= '0' then
OUT_DATA <= mem(to_integer(unsigned(ADDR)));
end if;
end if;
end process;
end architecture;
```
And the following Verilog:
```Verilog
module BlockRam(CLK, RST_N, RDEN, WREN, ADDR, IN_DATA, OUT_DATA);
input logic CLK;
input logic RST_N;
input logic RDEN;
input logic WREN;
input logic [11: 0] ADDR;
input logic [15: 0] IN_DATA;
output logic [15: 0] OUT_DATA;
(* ram_style = "block" *)
logic [15: 0] mem[3072];
always_ff @(posedge CLK)
run : begin
if (&(!RST_N)) begin
OUT_DATA <= 16'(0);
end else if (&WREN) begin
mem[int'(ADDR)] <= IN_DATA;
end else if (&RDEN) begin
OUT_DATA <= mem[int'(ADDR)];
end
end
endmodule
```
## Generating From Auto-Generated Code
It is possible to combine the flexibility that Python offers as a scripting language with
the code generation capabilities of *PyXHDL*, generating the Python code to be parsed.
```Python
import py_misc_utils.template_replace as pytr
import pyxhdl as X
from pyxhdl import xlib as XL
class And(X.Entity):
PORTS = 'A, B, =XOUT'
@X.hdl_process(kind=X.ROOT_PROCESS)
def run():
XOUT = A & B
_TEMPLATE = """
And(A=A[$i], B=B[$i], XOUT=XOUT[$i])
"""
class Ex3(X.Entity):
PORTS = 'A, B, =XOUT'
@X.hdl_process(kind=X.ROOT_PROCESS)
def run():
for i in range(A.dtype.nbits):
code = pytr.template_replace(_TEMPLATE, vals=dict(i=i))
XL.xexec(code)
```
Which generates the following Verilog code.
```Verilog
// Entity "Ex3" is "Ex3" with:
// args={'A': 'bits(4)', 'B': 'bits(4)', 'XOUT': 'bits(4)'}
// kwargs={}
module Ex3(A, B, XOUT);
input logic [3: 0] A;
input logic [3: 0] B;
output logic [3: 0] XOUT;
And And_1(
.A(A[0]),
.B(B[0]),
.XOUT(XOUT[0])
);
And And_2(
.A(A[1]),
.B(B[1]),
.XOUT(XOUT[1])
);
And And_3(
.A(A[2]),
.B(B[2]),
.XOUT(XOUT[2])
);
And And_4(
.A(A[3]),
.B(B[3]),
.XOUT(XOUT[3])
);
endmodule
// Entity "And" is "And" with:
// args={'A': 'bits(1)', 'B': 'bits(1)', 'XOUT': 'bits(1)'}
// kwargs={}
module And(A, B, XOUT);
input logic A;
input logic B;
output logic XOUT;
assign XOUT = A & B;
endmodule
```
The above is only an example of using the *XL.xexec()* API, as the same could have been
done with a bare *And* entity instantiation loop (generating the exact same code).
```Python
import py_misc_utils.utils as pyu

import pyxhdl as X
from pyxhdl import xlib as XL


class And(X.Entity):
  PORTS = 'A, B, =XOUT'

  @X.hdl_process(kind=X.ROOT_PROCESS)
  def run():
    XOUT = A & B


class Ex3(X.Entity):
  PORTS = 'A, B, =XOUT'

  @X.hdl_process(kind=X.ROOT_PROCESS)
  def run():
    for i in range(A.dtype.nbits):
      And(A=A[i], B=B[i], XOUT=XOUT[i])
```
## Loading External Libraries
*PyXHDL* loads its support libraries from the package *hdl_libs/{BACKEND}* folder,
according to the *LIBS* manifest file present within such folder.
On top of the required libraries, the user can also inject their own, by the following
means:
- Define the environment variable *PYXHDL_{BACKEND}_LIBS* to contain a semicolon
separated list of source files to load.
- List the library files within the configuration file, under the ```libs.{BACKEND}```
key. Files listed there can be either absolute paths, or paths relative to the
folder containing the configuration file
(see [Mock Configuration](https://github.com/davidel/pyxhdl/blob/main/pyxhdl/config/pyxhdl.yaml)).
```YAML
libs:
  vhdl:
    - LIBNAME
    - /PATH/TO/LIB
    - ...
  verilog:
    - ...
```
The specified libraries will be resolved against the *PyXHDL* *hdl_libs/{BACKEND}* folder,
the paths listed in the "lib_paths" configuration, or the paths in the (semicolon
separated) *PYXHDL_{BACKEND}_LIBPATH* environment variable.
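For illustration, the environment-variable route could be prepared from Python before
invoking the generator. The sketch below assumes the *{BACKEND}* placeholder expands to
the upper-case backend name, and the source paths are hypothetical:

```Python
import os

# Hypothetical VHDL library sources, semicolon separated as described above.
os.environ['PYXHDL_VHDL_LIBS'] = '/opt/hdl/my_fifo.vhd;/opt/hdl/my_pkg.vhd'

lib_files = os.environ['PYXHDL_VHDL_LIBS'].split(';')
```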
It is also possible to register HDL specific modules at runtime using the *XL.register_module()*
API.
Furthermore, although the Python functions marked with the *@X.hdl* and *@X.hdl_process(...)*
decorators lead to inlined HDL code, it is possible to define and inject HDL
functions from within *PyXHDL* and use them as normal Python functions.
See the example below that shows how to use *XL.register_module()* and *XL.create_function()*
to register HDL specific code to be used within *PyXHDL*.
```Python
MY_VHDL_MODULE = """
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package mypkg is
  function func(a : in unsigned; b : in unsigned) return unsigned;
  procedure proc(a : in unsigned; b : in unsigned);
end package;

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package body mypkg is
  function func(a : in unsigned; b : in unsigned) return unsigned is
  begin
    return a + b;
  end function;

  procedure proc(a : in unsigned; b : in unsigned) is
  begin
    assert a > b report "Compare failed!" severity error;
  end procedure;
end package body;
"""
def not_in_global_context():
  # If XL.register_module() is called from a global context (or, more in general,
  # when no CodeGen contexts are active), it will register modules globally.
  # Registering globally here means that running multiple generations from within
  # the same Python process will find the module registered.
  # Otherwise the module will be registered within the active CodeGen context.
  # For anyone using the `generator` module to emit HDL code (which will be the
  # majority), there will be no difference between global and context-local registrations.
  XL.register_module('mypkg', {'vhdl': MY_VHDL_MODULE})
  # Note that the 'mypkg' argument to XL.register_module() is just a unique ID
  # (further registrations with such ID will override the previous ones) which does
  # not have to match the HDL package/module name (though it likely helps if it does
  # resemble it).

# Versus ...
# This will register globally. This and not_in_global_context() should not be used
# at the same time.
XL.register_module('mypkg', {'vhdl': MY_VHDL_MODULE})

# Then it is possible to define Python functions using them, as described below.
# Note that we did not register the Verilog variant with the XL.register_module()
# call above, so any attempt to use the APIs defined below while selecting a Verilog
# backend will fail.
my_func = XL.create_function('my_func',
                             {
                               'vhdl': 'mypkg.func',
                               'verilog': 'mypkg.func',
                             },
                             fnsig='u*, u*',
                             dtype=XL.argn_dtype(0))

my_proc = XL.create_function('my_proc',
                             {
                               'vhdl': 'mypkg.proc',
                               'verilog': 'mypkg.proc',
                             },
                             fnsig='u*, u*')
@X.hdl
def using_userdef(a, b):
  c = my_func(a + b, b - a)
  my_proc(a, c * 17)
```
Note that calls to *my_func()* and *my_proc()* will emit a function/procedure call
in the target HDL backend, and will not get inlined by *PyXHDL* (though they will
eventually be inlined by the backend HDL compiler in order to generate the matching HW).
The *dtype* argument to *XL.create_function()* can be a direct *X.Type* object, or
a lambda which receives the arguments of a call and has to return the proper
type. In the above example, the *XL.argn_dtype()* utility is used to create a lambda
which returns the type of the 0-th (first) argument.
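Conceptually, such a helper can be pictured as the following plain Python sketch (a
stand-in for illustration, with a hypothetical *FakeValue* class; not the actual
*PyXHDL* implementation):

```Python
def argn_dtype(n):
  # Build a lambda which, given the arguments of a call, returns the
  # dtype of the n-th one.
  return lambda *args: args[n].dtype


class FakeValue:
  # Minimal stand-in for a PyXHDL value object carrying a dtype.
  def __init__(self, dtype):
    self.dtype = dtype


dtype_fn = argn_dtype(0)
dtype = dtype_fn(FakeValue('u8'), FakeValue('u16'))
```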
## Placing External Modules/Entities
It is also possible to create entities class which are not defined within the *PyXHDL*
Python framework, but which instead refers to external entities (IP blocks). This is
accomplished by simply defining an *X.Entity* subclass with the *NAME* class property
specified (this is the full entity path, included the package scope, if any).
```Python
class ExternIP(X.Entity):
  PORTS = 'IN_A:u*, IN_B:u*, =OUT_DATA:u*'
  NAME = 'extern_pkg.Entity'
  LIBNAME = 'my_extern_pkg'
```
After that it can be instantiated like a normal *PyXHDL* entity.
```Python
# To be called from a root process.
ExternIP(IN_A=A,
         IN_B=B,
         OUT_DATA=XOUT,
         _P=dict(NBITS=A.dtype.nbits,
                 SIGNED=1))
```
The *extern_pkg.Entity* will have to be defined in backend specific library files to be
loaded within *PyXHDL* (see [Loading External Libraries](#loading-external-libraries)).
The *LIBNAME* setting is optional in case the library containing such entity is force
loaded (not loaded on demand).
The *_P* argument allows specifying module/entity instantiation parameters/generics.
The *XL.ExternalEntity()* class should be used from the root process of a module or entity,
otherwise the generated code will not be valid.
Note that modules/entities defined in Python using *PyXHDL* can be instantiated by
simply constructing an object of that class from within a root process:
```Python
@X.hdl_process(kind=X.ROOT_PROCESS)
def root():
  OOUT = X.mkwire(XOUT.dtype)
  # AndGate defined within the PyXHDL Python framework.
  AndGate(A=A,
          B=B,
          XOUT=OOUT)
```
## Code Generation
In order to generate code for the target backend, the *generator* module is used,
like in the following example:
```Shell
$ python -m pyxhdl.generator \
--input_file src/my_entity.py \
--entity MyEntity \
--backend vhdl \
--inputs 'CLK,RESET,READY=mkwire(BIT)' \
--inputs 'A,B,XOUT=mkwire(UINT8)' \
--kwargs 'mode="simple"' \
--kwargs 'steps=8' \
--output_file my_entity.vhd
```
The *--input_file* argument specifies the path to the Python file defining the
root entity, while the *--entity* sets the name of the root entity itself.
The example above specifies the **CLK**, **RESET** and **READY** ports of the
root entity to be one bit wide, while the **A**, **B** and **XOUT** ones to be 8-bit
unsigned integers.
It is also possible to pass keyword arguments to the entity, allowing runtime
configuration similar to what *VHDL* generics do (note that string inputs should
be quoted). This requires the user-derived *Entity* subclass to declare the keyword
arguments within its *ARGS* class variable.
## TestBench Code Generation
Using the same *generator* module, it is possible to generate a *testbench*
feeding the generator itself with the input data (*YAML* or *JSON*) to be used
for the test.
```Shell
$ python -m pyxhdl.generator \
--input_file src/my_entity.py \
--entity MyEntity \
--backend vhdl \
--inputs 'CLK,RESET,READY=mkvreg(BIT, 0)' \
--inputs 'A,B,XOUT=mkvreg(UINT8, 0)' \
--kwargs 'mode="simple"' \
--kwargs 'steps=8' \
--output_file my_entity_tb.vhd \
--testbench \
--tb_input_file test/my_entity_input_data.yaml \
--tb_clock 'CLK,10ns'
```
The *--testbench* argument triggers the *testbench* module code generation.
The *--tb_input_file* points to the input data file for the test (both *YAML*
and *JSON* are supported), which has the following format:
```YAML
env:
  ENV_VAR: WAIT_MODE
conf:
  loaders:
    - A:
        dtype: uint8
        kind: numpy
    - B:
        dtype: uint8
        kind: numpy
data:
  - RESET: 1
    _wait_expr: XL.wait_rising(CLK)
  - RESET: 0
    _wait_expr: XL.wait_rising(CLK)
  - A: 17
    B: 21
    XOUT: 134
    _wait_expr: XL.wait_rising(CLK)
  - A: 3
    B: 11
    XOUT: 77
    _wait_expr: XL.wait_rising(CLK)
  - ...
```
The *loaders* section of the input data is optional, and if missing the input
types will be the ones created by the *YAML* (or *JSON*) parser.
The *testbench* works by iterating the *data* section, setting the inputs to the
specified values, waiting according to the *_wait_expr* rule (see below for
more options), and comparing the outputs of the module/entity with the expected
values specified by the data.
The *_wait_expr* can be any Python code, and can also be multiline, by using the
pipe ("|") *YAML* separator. It needs to be properly indented though. Example:
```YAML
- A: 17
  B: 21
  XOUT: 134
  _wait_expr: |
    if ENV_VAR == 'WAIT_MODE':
      XL.wait_rising(CLK)
```
Where the value of *ENV_VAR* comes from the *env* section of the *--tb_input_file*
configuration.
The wait condition can also be specified in the command line, using the *--tb_wait*
argument. The *--tb_wait* specifies a wait time in nanoseconds. In case the
*--tb_wait* option is used in the command line, there is no need for *_wait_expr*
entries in the data.
Via the *--tb_clock_sync* argument it is also possible to configure a different
wait rule, specifying the clock port name and the sync mode. Example:
```
--tb_clock_sync 'CLK,rising'
```
In such cases, there is no need for the explicit *_wait_expr* in the input data
at all.
The *--tb_clock* enables the generation of a clock signal, on the specified port.
For example, to generate a 10ns period clock signal on the **CLK** port:
```
--tb_clock 'CLK,10ns'
```
A simpler version of the run above, using command line specified wait/sync
could be:
```Shell
$ python -m pyxhdl.generator \
--input_file src/my_entity.py \
--entity MyEntity \
--backend vhdl \
--inputs 'CLK,RESET,READY=mkvreg(BIT, 0)' \
--inputs 'A,B,XOUT=mkvreg(UINT8, 0)' \
--kwargs 'mode="simple"' \
--kwargs 'steps=8' \
--output_file my_entity_tb.vhd \
--testbench \
--tb_input_file test/my_entity_input_data.yaml \
--tb_clock 'CLK,10ns' \
--tb_clock_sync 'CLK,rising'
```
With data:
```YAML
data:
  - RESET: 1
  - RESET: 0
  - A: 17
    B: 21
    XOUT: 134
  - A: 3
    B: 11
    XOUT: 77
  - ...
```
Essentially the *testbench* iterates through each data entry, feeds the tested
entity input ports with the data specified in the current entry (if an entry
does not contain data for an input, that input keeps its value from the previous
entry), waits according to the specified rules, and then compares the output ports
of the tested entity with the matching data in the current entry.
As with the inputs, if the current entry does not contain data for a given output
port, nothing is compared for that port.
So, taking as example the above test data, the *testbench* will generate HDL
code to:
- Set the **RESET** input to 1, and wait for the **CLK** rising edge (according
to the *--tb_clock_sync* command line argument).
- Set the **RESET** input to 0, and wait for the **CLK** rising edge.
- Set the tested entity input ports **A** and **B** to 17 and 21 respectively,
wait for the **CLK** rising edge, and then compare the **XOUT** output port
with 134.
- Set the tested entity input ports **A** and **B** to 3 and 11 respectively,
wait for the **CLK** rising edge, and then compare the **XOUT** output port
with 77.
- ...
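The carry-forward semantics above can be sketched in plain Python (an illustration of
the protocol only, with a hypothetical *expand_entries()* helper, not the actual
*testbench* generator code):

```Python
def expand_entries(entries, input_ports, output_ports):
  # Inputs persist across entries unless overridden; outputs are only
  # compared when the current entry provides an expected value.
  state, steps = {}, []
  for entry in entries:
    for port in input_ports:
      if port in entry:
        state[port] = entry[port]
    expected = {p: entry[p] for p in output_ports if p in entry}
    steps.append((dict(state), expected))
  return steps


steps = expand_entries(
  [
    {'RESET': 1},
    {'RESET': 0},
    {'A': 17, 'B': 21, 'XOUT': 134},
    {'A': 3, 'B': 11, 'XOUT': 77},
  ],
  input_ports=('RESET', 'A', 'B'),
  output_ports=('XOUT',))
```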
The *--tb_input_file* argument can also point to a Python file, implementing
a *tb_iterator()* API, returning a Python iterator yielding *TbData* structures.
For a full example, see [UART TB Generator](https://github.com/davidel/pyxhdl/blob/main/examples/uart/tb_generator.py).
## Less Used Features
Below are briefly illustrated some less common features which are supported, with
the generated VHDL code helping to explain their purpose:
```Python
# 1
XL.wait_until(A == 1)

# 2
with XL.context(delay=10):
  ctx = A * B
```
Generated VHDL code assuming *A* and *B* being an *X.UINT8* and *CLK* an *X.BIT*:
```VHDL
-- 1
wait until (A = to_unsigned(1, 8));
-- 2
ctx <= resize(A * B, 8) after 10 ns;
```
It is possible, within an HDL function, to disable the Python to HDL rewrite by
using the *XL.no_hdl()* Python context manager:
```Python
@X.hdl
def my_hdl_function(a, b):
  c = a + b
  with XL.no_hdl():
    # Some code with HDL remapping disabled...
    ...
  return c * b
```
## Verifying Generated HDL Output
A script is provided to verify the output generated by *PyXHDL*.
It can be used to verify both VHDL (with **GHDL**, **Vivado** and **YoSys/GHDL**) and
Verilog (**Vivado**, **Verilator**, **SLANG** and **YoSys/SLANG**).
Example use to verify a generated VHDL file *generated_output.vhd* with a *RootEntity* top:
```Shell
$ python3 -m pyxhdl.tools.verify --inputs generated_output.vhd --backend vhdl --entity RootEntity
```
| text/markdown | Davide Libenzi | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"python_misc_utils"
] | [] | [] | [] | [
"Homepage, https://github.com/davidel/pyxhdl",
"Issues, https://github.com/davidel/pyxhdl/issues",
"Repository, https://github.com/davidel/pyxhdl.git"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T17:57:16.495960 | pyxhdl-0.33.tar.gz | 95,126 | 59/9f/5d2c3c9b463621212ebba9bc622518e3a776d60317f039b5ed5cf0a019d7/pyxhdl-0.33.tar.gz | source | sdist | null | false | 6ee2f6e195e37e286bbe4b5aaf475d96 | 5dad00f0bc81adddb0fc172bbdbb7b101b61f7892f3e3caea1f08e9664f6728d | 599f5d2c3c9b463621212ebba9bc622518e3a776d60317f039b5ed5cf0a019d7 | Apache-2.0 | [
"LICENSE"
] | 221 |
2.4 | azure-switchboard | 2026.2.7 | Batteries-included loadbalancing client for Azure OpenAI | # Azure Switchboard
Batteries-included, coordination-free client loadbalancing for Azure OpenAI and OpenAI.
```bash
uv add azure-switchboard
```
[](https://pypi.org/project/azure-switchboard/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/arini-ai/azure-switchboard/actions/workflows/ci.yaml)
## Overview
`azure-switchboard` is a Python 3 asyncio library that provides an API-compatible client loadbalancer for Chat Completions. You instantiate a `Switchboard` with one or more `Deployment`s, and requests are distributed across healthy deployments using the [power of two random choices](https://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf) method. Deployments can point at Azure OpenAI (`base_url=.../openai/v1/`) or OpenAI (`base_url=None`).
## Features
- **API Compatibility**: `Switchboard.create` is a transparently-typed proxy for `OpenAI.chat.completions.create`.
- **Coordination-Free**: The default Two Random Choices algorithm does not require coordination between client instances to achieve excellent load distribution characteristics.
- **Utilization-Aware**: TPM/RPM utilization is tracked per model per deployment for use during selection.
- **Batteries Included**:
- **Session Affinity**: Provide a `session_id` to route requests in the same session to the same deployment.
- **Automatic Failover**: Retries are controlled by a tenacity `AsyncRetrying` policy (`failover_policy`).
- **Pluggable Selection**: Custom selection algorithms can be provided by passing a callable to the `selector` parameter on the Switchboard constructor.
- **OpenTelemetry Integration**: Built-in metrics for request routing and healthy deployment counts.
- **Lightweight**: Small codebase with minimal dependencies: `openai`, `tenacity`, `wrapt`, and `opentelemetry-api`.
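For intuition, the Two Random Choices policy reduces to a few lines. The sketch below
is a simplified illustration, where `utilization` stands in for the per-deployment
metric the library tracks internally; it is not Switchboard's actual code:

```python
import random

def two_random_choices(deployments: list, utilization: dict):
    # Sample two candidates uniformly at random and keep the one with
    # the lower current utilization.
    a, b = random.sample(deployments, 2)
    return a if utilization[a] <= utilization[b] else b

choice = two_random_choices(["east", "west"], {"east": 0.2, "west": 0.8})
```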
## Runnable Example
```python
#!/usr/bin/env python3
#
# To run this, use:
# uv run --env-file .env tools/readme_example.py
#
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "azure-switchboard",
# ]
# ///
import asyncio
import os
from azure_switchboard import Deployment, Model, Switchboard
azure_openai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
azure_openai_api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai_api_key = os.getenv("OPENAI_API_KEY")
deployments = []
if azure_openai_endpoint and azure_openai_api_key:
    # create 3 deployments. reusing the endpoint
    # is fine for the purposes of this demo
    for name in ("east", "west", "south"):
        deployments.append(
            Deployment(
                name=name,
                base_url=f"{azure_openai_endpoint}/openai/v1/",
                api_key=azure_openai_api_key,
                models=[Model(name="gpt-4o-mini")],
            )
        )

if openai_api_key:
    deployments.append(
        Deployment(
            name="openai",
            api_key=openai_api_key,
            models=[Model(name="gpt-4o-mini")],
        )
    )

if not deployments:
    raise RuntimeError(
        "Set AZURE_OPENAI_ENDPOINT/AZURE_OPENAI_API_KEY or OPENAI_API_KEY to run this example."
    )
async def main():
    async with Switchboard(deployments=deployments) as sb:
        print("Basic functionality:")
        await basic_functionality(sb)
        print("Session affinity (should warn):")
        await session_affinity(sb)
async def basic_functionality(switchboard: Switchboard):
    # Make a completion request (non-streaming)
    response = await switchboard.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, world!"}],
    )
    print("completion:", response.choices[0].message.content)

    # Make a streaming completion request
    stream = await switchboard.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, world!"}],
        stream=True,
    )
    print("streaming: ", end="")
    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()
async def session_affinity(switchboard: Switchboard):
    session_id = "anything"

    # First message will select a random healthy
    # deployment and associate it with the session_id
    r = await switchboard.create(
        session_id=session_id,
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Who won the World Series in 2020?"}],
    )
    d1 = switchboard.select_deployment(model="gpt-4o-mini", session_id=session_id)
    print("deployment 1:", d1)
    print("response 1:", r.choices[0].message.content)

    # Follow-up requests with the same session_id will route to the same deployment
    r2 = await switchboard.create(
        session_id=session_id,
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "Who won the World Series in 2020?"},
            {"role": "assistant", "content": r.choices[0].message.content},
            {"role": "user", "content": "Who did they beat?"},
        ],
    )
    print("response 2:", r2.choices[0].message.content)

    # Simulate a failure by marking down the deployment
    d1.models["gpt-4o-mini"].mark_down()

    # A new deployment will be selected for this session_id
    r3 = await switchboard.create(
        session_id=session_id,
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Who won the World Series in 2021?"}],
    )
    d2 = switchboard.select_deployment(model="gpt-4o-mini", session_id=session_id)
    print("deployment 2:", d2)
    print("response 3:", r3.choices[0].message.content)
    assert d2 != d1

if __name__ == "__main__":
    asyncio.run(main())
```
## Benchmarks
```bash
just bench
uv run --env-file .env tools/bench.py -v -r 1000 -d 10 -e 500
Distributing 1000 requests across 10 deployments
Max inflight requests: 1000
Request 500/1000 completed
Utilization Distribution:
0.000 - 0.200 | 0
0.200 - 0.400 | 10 ..............................
0.400 - 0.600 | 0
0.600 - 0.800 | 0
0.800 - 1.000 | 0
Avg utilization: 0.339 (0.332 - 0.349)
Std deviation: 0.006
{
'bench_0': {'gpt-4o-mini': {'util': 0.361, 'tpm': '10556/30000', 'rpm': '100/300'}},
'bench_1': {'gpt-4o-mini': {'util': 0.339, 'tpm': '9819/30000', 'rpm': '100/300'}},
'bench_2': {'gpt-4o-mini': {'util': 0.333, 'tpm': '9405/30000', 'rpm': '97/300'}},
'bench_3': {'gpt-4o-mini': {'util': 0.349, 'tpm': '10188/30000', 'rpm': '100/300'}},
'bench_4': {'gpt-4o-mini': {'util': 0.346, 'tpm': '10210/30000', 'rpm': '99/300'}},
'bench_5': {'gpt-4o-mini': {'util': 0.341, 'tpm': '10024/30000', 'rpm': '99/300'}},
'bench_6': {'gpt-4o-mini': {'util': 0.343, 'tpm': '10194/30000', 'rpm': '100/300'}},
'bench_7': {'gpt-4o-mini': {'util': 0.352, 'tpm': '10362/30000', 'rpm': '102/300'}},
'bench_8': {'gpt-4o-mini': {'util': 0.35, 'tpm': '10362/30000', 'rpm': '102/300'}},
'bench_9': {'gpt-4o-mini': {'util': 0.365, 'tpm': '10840/30000', 'rpm': '101/300'}}
}
Utilization Distribution:
0.000 - 0.100 | 0
0.100 - 0.200 | 0
0.200 - 0.300 | 0
0.300 - 0.400 | 10 ..............................
0.400 - 0.500 | 0
0.500 - 0.600 | 0
0.600 - 0.700 | 0
0.700 - 0.800 | 0
0.800 - 0.900 | 0
0.900 - 1.000 | 0
Avg utilization: 0.348 (0.333 - 0.365)
Std deviation: 0.009
Distribution overhead: 926.14ms
Average response latency: 5593.77ms
Total latency: 17565.37ms
Requests per second: 1079.75
Overhead per request: 0.93ms
```
Distribution overhead scales ~linearly with the number of deployments.
## Configuration Reference
### switchboard.Model Parameters
| Parameter | Description | Default |
| ------------------ | ---------------------------------------------------------------------- | ------------- |
| `name` | Model name as sent to Chat Completions | Required |
| `tpm` | Tokens-per-minute budget used for utilization tracking and routing | 0 (unlimited) |
| `rpm` | Requests-per-minute budget used for utilization tracking and routing | 0 (unlimited) |
| `default_cooldown` | Cooldown duration (seconds) after a deployment/model failure mark-down | 10.0 |
### switchboard.Deployment Parameters
| Parameter | Description | Default |
| ---------- | ---------------------------------------------------------------------------------------------------- | ---------------------------- |
| `name` | Unique identifier for the deployment | Required |
| `base_url` | API base URL. Azure example: `https://<resource>.openai.azure.com/openai/v1/`. OpenAI: leave `None`. | None |
| `api_key` | API key for the deployment | None |
| `timeout` | Default request timeout (seconds) | 600.0 |
| `models` | Models available on this deployment | Built-in model name defaults |
### switchboard.Switchboard Parameters
| Parameter | Description | Default |
| ------------------ | ---------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| `deployments` | List of deployment configs | Required |
| `selector` | Deployment selection function `(model, eligible_deployments) -> deployment` | `two_random_choices` |
| `failover_policy` | Tenacity `AsyncRetrying` policy used around each `create` call | `AsyncRetrying(stop=stop_after_attempt(2), retry=retry_if_not_exception_type(SwitchboardError), reraise=True)` |
| `ratelimit_window` | How often usage counters reset (seconds). Set `0` to disable periodic reset. | 60.0 |
| `max_sessions` | LRU capacity for session affinity map | 1024 |
## Development
This project uses [uv](https://github.com/astral-sh/uv) for package management,
and [just](https://github.com/casey/just) for task automation. See the [justfile](https://github.com/arini-ai/azure-switchboard/blob/master/justfile)
for available commands.
```bash
git clone https://github.com/arini-ai/azure-switchboard
cd azure-switchboard
just install
```
### Running tests
```bash
just test
```
### Release
This library uses CalVer for versioning. On push to master, if tests pass, a package is automatically built, released, and uploaded to PyPI.
Locally, the package can be built with uv:
```bash
uv build
```
### OpenTelemetry Integration
`azure-switchboard` uses OpenTelemetry metrics via the meter `azure_switchboard.switchboard`.
Metrics emitted on the request path include:
- `healthy_deployments_count` (gauge)
- `requests` (counter, with deployment + model attributes)
To run with local OTEL instrumentation:
```bash
just otel-run
```
## Contributing
1. Fork/clone repo
2. Make changes
3. Run tests with `just test`
4. Lint with `just lint`
5. Commit and make a PR
## License
MIT
| text/markdown | null | Abizer Lokhandwala <abizer@abizer.me> | null | null | MIT | ai, azure, litellm, llm, loadbalancing, openai | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"loguru>=0.7.3",
"openai>=1.62.0",
"opentelemetry-api>=1.30.0",
"tenacity>=9.0.0",
"wrapt>=1.17.2"
] | [] | [] | [] | [
"Homepage, https://github.com/arini-ai/azure-switchboard"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T17:57:08.736037 | azure_switchboard-2026.2.7-py3-none-any.whl | 13,415 | 65/19/c546e956c2d349cce1019c5a0c6cdcda1069e8b0d62055cac6f4bfc449d6/azure_switchboard-2026.2.7-py3-none-any.whl | py3 | bdist_wheel | null | false | cb7a819b42b40089c5b6f4580f32015b | a9bcae0a0e6dc681ff9b5f9ff342f120c75ce381ddb3cd9695295db3997bfa7e | 6519c546e956c2d349cce1019c5a0c6cdcda1069e8b0d62055cac6f4bfc449d6 | null | [
"LICENSE"
] | 324 |
2.4 | chunkseg | 0.3.1 | Evaluate chaptering quality for audio and video content in time space, supporting segmentation and title generation | # chunkseg
A Python package for **comprehensive evaluation of segmentation (chaptering) quality** in audio and video content. Chunkseg implements three complementary evaluation protocols introduced in *Beyond Transcripts: A Renewed Perspective on Audio Chaptering* ([arXiv:2602.08979](https://arxiv.org/abs/2602.08979)):
1. **Discretized time evaluation** — Convert boundaries to fixed-size time chunks and apply established text segmentation metrics (Pk, WindowDiff, Boundary Similarity, GHD) plus binary classification metrics (F1, precision, recall).
2. **Continuous time evaluation** — Collar-based boundary F1 that matches predicted and reference boundaries within a time tolerance window (default ±3 s).
3. **Title evaluation** — BERTScore- and ROUGE-L-based comparison of chapter titles in two modes: *Temporally Matched* (TM) and *Global Concatenation* (GC).
By evaluating in the time domain rather than the text domain, chunkseg is **transcript-invariant** and enables comparisons across models that produce very different output formats.
## Install
```bash
pip install chunkseg
```
For **title evaluation** (BERTScore + ROUGE-L):
```bash
pip install "chunkseg[titles]"
```
For **forced alignment** from structured transcripts without timestamps:
```bash
pip install "chunkseg[align]"
```
## Evaluation Protocols
### 1. Discretized Time (Time-Chunks)
Segment boundaries are projected onto a sequence of fixed-size binary time chunks. A chunk is labelled `1` if a boundary falls within it, `0` otherwise. Standard binary classification metrics (precision, recall, F1, accuracy, specificity) and established segmentation metrics (Pk, WindowDiff, Boundary Similarity, GHD) are then computed over this sequence.
This approach is compatible with any model that produces boundary timestamps, regardless of transcript format.
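A minimal sketch of the discretization step (illustrative only, with a hypothetical
`boundaries_to_chunks` helper; not necessarily chunkseg's exact implementation):

```python
import math

def boundaries_to_chunks(boundaries, duration, chunk_size=6.0):
    # Chunk i covers [i * chunk_size, (i + 1) * chunk_size); label it 1
    # if any boundary falls inside it.
    n = math.ceil(duration / chunk_size)
    chunks = [0] * n
    for b in boundaries:
        chunks[min(int(b // chunk_size), n - 1)] = 1
    return chunks

chunks = boundaries_to_chunks([11.0, 23.0, 34.0], duration=50.0)
```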
### 2. Continuous Time (Collar-Based F1)
Predicted and reference boundary timestamps are compared directly in continuous time. A predicted boundary counts as a true positive if it falls within ±`collar` seconds of a reference boundary, using greedy closest-first 1-to-1 matching. Returns `collar_precision`, `collar_recall`, and `collar_f1`.
This metric is **always computed** alongside the time-chunk metrics whenever timestamps are available.
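The greedy closest-first matching can be sketched as follows (an independent
illustration of the protocol, not the library's code):

```python
def collar_f1(hyp, ref, collar=3.0):
    # Sort all candidate (hyp, ref) pairs by time distance, then greedily
    # accept 1-to-1 matches whose distance is within the collar.
    pairs = sorted((abs(h - r), i, j)
                   for i, h in enumerate(hyp) for j, r in enumerate(ref))
    used_h, used_r = set(), set()
    tp = 0
    for d, i, j in pairs:
        if d <= collar and i not in used_h and j not in used_r:
            used_h.add(i)
            used_r.add(j)
            tp += 1
    p = tp / len(hyp) if hyp else 0.0
    r = tp / len(ref) if ref else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

precision, recall, f1 = collar_f1([120.5, 300.0], [125.0, 310.0], collar=5.0)
```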
### 3. Title Evaluation
Chapter title quality is measured in two modes, both supporting BERTScore (BS) and ROUGE-L (RL):
| Mode | Description |
|------|-------------|
| **TM-BS / TM-RL** | *Temporally Matched* — pair hyp/ref titles by start time within a tolerance window, score matched pairs only |
| **GC-BS / GC-RL** | *Global Concatenation* — join all titles with `\n`, compute a single score on the two concatenated strings |
| **tm_matched** | Fraction of reference titles that were matched (0–1) |
Requires `reference_titles` (and optionally `hyp_titles`) in the input. Enable with `--titles` in the CLI or `titles=True` in `evaluate_batch()`.
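For intuition, ROUGE-L on a single title pair boils down to a longest-common-subsequence
computation over tokens. The sketch below is illustrative only; chunkseg itself relies
on dedicated scoring packages:

```python
def rouge_l_f1(hyp, ref):
    # Longest common subsequence over whitespace tokens, then the F1 of
    # the LCS-based precision and recall.
    h, r = hyp.split(), ref.split()
    dp = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i, ht in enumerate(h):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if ht == rt
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[-1][-1]
    p = lcs / len(h) if h else 0.0
    rec = lcs / len(r) if r else 0.0
    return 2 * p * rec / (p + rec) if p + rec else 0.0
```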
## Usage
### Python API
```python
from chunkseg import evaluate, evaluate_batch, print_results
# Timestamps mode — no transcript or audio needed
result = evaluate(
    hypothesis=[120.5, 300.0],
    reference=[125.0, 310.0],
    duration=600.0,
    chunk_size=6.0,
    collar=3.0,
)
# Batch evaluation with aggregated metrics and bootstrap CIs
results = evaluate_batch(
    samples=[
        {"hypothesis": [120.5], "reference": [125.0], "duration": 600.0},
        {"hypothesis": [300.0], "reference": [310.0], "duration": 500.0},
    ],
    chunk_size=6.0,
    collar=3.0,
)
print_results(results)
# Title evaluation (requires chunkseg[titles])
result = evaluate(
    hypothesis=[24.2, 33.94],
    reference=[11.0, 23.0, 34.0],
    duration=50.0,
    hyp_titles=[("Set a background", 24.2), ("Clip the background", 33.94)],
    reference_titles=[("Wrap text with a span", 11.0),
                      ("Add a background", 23.0),
                      ("Clip background to text", 34.0)],
    tolerance=5.0,
)
# Standalone title scoring
from chunkseg import compute_title_scores
scores = compute_title_scores(
    hyp_titles=[("Introduction", 0.0), ("Methods", 60.0)],
    ref_titles=[("Intro", 0.0), ("Methodology", 58.0)],
    tolerance=5.0,
)
# Returns: tm_bs_f1, tm_rl_f1, gc_bs_f1, gc_rl_f1, tm_matched, ...
```
### CLI
```bash
# Basic timestamps mode
chunkseg samples.jsonl
# With title evaluation and custom collar
chunkseg samples.jsonl --titles --tolerance 5.0 --collar 3.0
# With WER (requires reference_transcript field)
chunkseg samples.jsonl --wer
# Structured transcript mode (forced alignment)
chunkseg transcripts.jsonl --format cstart --lang eng
# Structured transcript with embedded timestamps
chunkseg transcripts.jsonl --format cstart_ts
# Save results to JSON
chunkseg samples.jsonl --titles --output results.json
```
## Input Format
Each line of the input JSONL file must be a JSON object. The required fields depend on the evaluation mode.
### Timestamps mode (minimal)
```json
{"hypothesis": [24.2, 33.94], "reference": [11.0, 23.0, 34.0], "duration": 50.0}
```
### With title evaluation
```json
{
  "hypothesis": [24.2, 33.94],
  "reference": [11.0, 23.0, 34.0],
  "duration": 50.0,
  "reference_titles": [["Wrap text with a span", 11.0],
                       ["Add a background", 23.0],
                       ["Clip background to text", 34.0]],
  "hyp_titles": [["Set a background", 24.2],
                 ["Clip the background", 33.94]]
}
```
### Structured transcript (forced alignment)
```json
{
  "hypothesis": "[CSTART] Intro [CEND] text... [CSTART] Main [CEND] more...",
  "reference": [125.0],
  "audio": "/path/to/audio.wav",
  "duration": 600.0
}
```
### All fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `hypothesis` | `list[float]` or `str` | Yes | Predicted boundaries (seconds) or structured transcript |
| `reference` | `list[float]` | Yes | Ground-truth boundary timestamps (seconds) |
| `duration` | `float` | Yes | Total duration (seconds) |
| `audio` | `str` | For alignment | Path to audio file |
| `reference_titles` | `[[title, seconds], ...]` | For `--titles` | Ground-truth chapter titles with start times |
| `hyp_titles` | `[[title, seconds], ...]` | Optional | Predicted titles (inferred from transcript if omitted) |
| `reference_transcript` | `str` | For `--wer` | Reference transcript text |
## Input Modes
| Mode | `hypothesis` type | Audio? | Notes |
|------|------------------|--------|-------|
| Timestamps | `list[float]` | No | Direct boundary seconds |
| Transcript (alignment) | `str` | Yes | `format="cstart"` etc., requires `chunkseg[align]` |
| Transcript + timestamps (use provided) | `str` | No | `format="cstart_ts"` etc. |
| Transcript + timestamps (force alignment) | `str` | Yes | `format="cstart_ts"`, `force_alignment=True` |
## Parser Presets
| Preset | Format | Timestamps? |
|--------|--------|-------------|
| `cstart` | `[CSTART] Title [CEND] text...` | No |
| `cstart_ts` | `[CSTART] 1:23:45 - Title [CEND] text...` | Yes |
| `newline` | Sections separated by blank lines | No |
| `markdown` | `# Title\nText\n## Subtitle...` | No |
| `markdown_ts` | `# 0:15:30 - Introduction\nText...` | Yes |
| `custom` | User-provided regex | No |
| `custom_ts` | User-provided regex with `(?P<timestamp>...)` | Yes |
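For `custom_ts`, the supplied regex must expose a named `(?P<timestamp>...)` group. As a sketch, here is a pattern that mimics the built-in `cstart_ts` format; the pattern and the timestamp conversion are illustrative, not the package's internals:

```python
import re

# Hypothetical custom_ts-style pattern: a named `timestamp` group plus a title group.
pattern = re.compile(
    r"\[CSTART\]\s*(?P<timestamp>\d+:\d{2}:\d{2})\s*-\s*(?P<title>.*?)\s*\[CEND\]"
)

def hms_to_seconds(ts: str) -> float:
    """Convert an 'h:mm:ss' timestamp to seconds."""
    h, m, s = (int(x) for x in ts.split(":"))
    return float(h * 3600 + m * 60 + s)

transcript = "[CSTART] 0:00:11 - Intro [CEND] text... [CSTART] 0:00:34 - Main [CEND] more..."
chapters = [
    (m.group("title"), hms_to_seconds(m.group("timestamp")))
    for m in pattern.finditer(transcript)
]
print(chapters)  # [('Intro', 11.0), ('Main', 34.0)]
```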
## Metrics
### Discretized Time (Time-Chunks)
| Metric | Description |
|--------|-------------|
| `f1` | Harmonic mean of precision and recall (derived from aggregated P/R) |
| `precision` | TP / (TP + FP) |
| `recall` | TP / (TP + FN) |
| `accuracy` | (TP + TN) / Total |
| `specificity` | TN / (TN + FP) |
| `pk` | Beeferman's Pk (lower is better) |
| `window_diff` | WindowDiff (lower is better) |
| `boundary_similarity` | Boundary similarity score |
| `ghd` | Generalized Hamming Distance |
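The time-chunk metrics above operate on a discretized timeline: the recording is cut into fixed-size chunks (default 6 s, `--chunk-size`) and each chunk is labeled by whether it contains a boundary. A plausible sketch of that discretization and the resulting precision/recall/F1; the package's exact chunking may differ:

```python
def chunk_labels(boundaries, duration, chunk_size=6.0):
    """Label each fixed-size chunk 1 if it contains a boundary, else 0."""
    n_chunks = int(duration // chunk_size) + (1 if duration % chunk_size > 0 else 0)
    labels = [0] * n_chunks
    for b in boundaries:
        labels[min(int(b // chunk_size), n_chunks - 1)] = 1
    return labels

def prf(hyp, ref, duration, chunk_size=6.0):
    """Chunk-level precision, recall and F1 from discretized boundaries."""
    h = chunk_labels(hyp, duration, chunk_size)
    r = chunk_labels(ref, duration, chunk_size)
    tp = sum(1 for a, b in zip(h, r) if a == 1 and b == 1)
    fp = sum(1 for a, b in zip(h, r) if a == 1 and b == 0)
    fn = sum(1 for a, b in zip(h, r) if a == 0 and b == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Using the input-format example: 50 s splits into chunks [0,6), [6,12), ...
print(prf([24.2, 33.94], [11.0, 23.0, 34.0], 50.0))  # precision=0.5, recall≈0.33, f1≈0.4
```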
### Continuous Time (Collar-Based)
| Metric | Description |
|--------|-------------|
| `collar_f1` | F1 within ±collar seconds (default ±3 s) |
| `collar_precision` | Precision within collar |
| `collar_recall` | Recall within collar |
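The collar-based metrics instead work in continuous time: a predicted boundary counts as a true positive if it lands within ±collar seconds of a not-yet-matched reference boundary. A greedy-matching sketch (the package's matching strategy may differ):

```python
def collar_f1(hyp, ref, collar=3.0):
    """Greedily match predicted boundaries to reference boundaries within ±collar seconds."""
    unmatched_ref = sorted(ref)
    tp = 0
    for h in sorted(hyp):
        for r in unmatched_ref:
            if abs(h - r) <= collar:
                unmatched_ref.remove(r)  # each reference boundary matches at most once
                tp += 1
                break
    precision = tp / len(hyp) if hyp else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(collar_f1([24.2, 33.94], [11.0, 23.0, 34.0]))  # precision=1.0, recall≈0.67, f1≈0.8
```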
### Title Evaluation
| Metric | Description |
|--------|-------------|
| `tm_bs_f1` | BERTScore F1 on temporally matched title pairs |
| `tm_bs_precision` | BERTScore precision on matched pairs |
| `tm_bs_recall` | BERTScore recall on matched pairs |
| `tm_rl_f1` | ROUGE-L F1 on temporally matched title pairs |
| `tm_rl_precision` | ROUGE-L precision on matched pairs |
| `tm_rl_recall` | ROUGE-L recall on matched pairs |
| `tm_matched` | Fraction of reference titles matched (0–1) |
| `gc_bs_f1` | BERTScore F1 on globally concatenated titles |
| `gc_bs_precision` | BERTScore precision on concatenated titles |
| `gc_bs_recall` | BERTScore recall on concatenated titles |
| `gc_rl_f1` | ROUGE-L F1 on globally concatenated titles |
| `gc_rl_precision` | ROUGE-L precision on concatenated titles |
| `gc_rl_recall` | ROUGE-L recall on concatenated titles |
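The `tm_*` metrics first pair predicted and reference titles by time (within `--tolerance`, default 5 s) and then score the paired texts; `tm_matched` is the fraction of reference titles that found a partner. A sketch of the temporal pairing step, reusing the example record from the input-format section; the package's exact matching may differ:

```python
def temporal_match(ref_titles, hyp_titles, tolerance=5.0):
    """Pair each reference title with the nearest unused predicted title within ±tolerance seconds."""
    unused = list(hyp_titles)
    pairs = []
    for r_title, r_time in ref_titles:
        candidates = [
            (abs(h_time - r_time), h_title, h_time)
            for h_title, h_time in unused
            if abs(h_time - r_time) <= tolerance
        ]
        if candidates:
            _, h_title, h_time = min(candidates)  # nearest in time
            unused.remove((h_title, h_time))
            pairs.append((r_title, h_title))
    return pairs, len(pairs) / len(ref_titles)

ref = [("Wrap text with a span", 11.0), ("Add a background", 23.0), ("Clip background to text", 34.0)]
hyp = [("Set a background", 24.2), ("Clip the background", 33.94)]
pairs, tm_matched = temporal_match(ref, hyp)
print(pairs)       # two of the three reference titles are matched
print(tm_matched)  # fraction matched, here 2/3
```

Text-similarity scores (BERTScore, ROUGE-L) are then computed over the matched pairs.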
All scalar metrics are reported as `{mean, std, ci_lower, ci_upper}` with bootstrap confidence intervals (default 100 iterations, configurable via `--num-bootstrap`).
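A percentile-bootstrap sketch of how such intervals can be derived from per-sample scores (the package may use a different bootstrap variant):

```python
import random

def bootstrap_ci(values, num_bootstrap=100, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of per-sample scores."""
    rng = random.Random(seed)
    means = []
    for _ in range(num_bootstrap):
        # Resample with replacement, same size as the original sample.
        resample = [rng.choice(values) for _ in values]
        means.append(sum(resample) / len(resample))
    means.sort()
    lower = means[int((alpha / 2) * num_bootstrap)]
    upper = means[int((1 - alpha / 2) * num_bootstrap) - 1]
    return sum(values) / len(values), lower, upper

# Per-sample F1 scores (illustrative values).
mean, ci_lower, ci_upper = bootstrap_ci([0.8, 0.6, 0.9, 0.7, 0.75])
print(f"{mean:.3f} [{ci_lower:.3f}, {ci_upper:.3f}]")
```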
## CLI Reference
```
chunkseg <input.jsonl> [options]
Options:
--chunk-size FLOAT Chunk size in seconds (default: 6.0)
--collar FLOAT Collar size for boundary F1 (default: 3.0)
--format STR Parser preset for transcript mode
--custom-pattern STR Regex for custom/custom_ts format
--timestamp-format STR Timestamp format for custom_ts
--lang STR ISO 639-3 language code for alignment (default: eng)
--force-alignment Derive timestamps from audio alignment
--wer Compute WER (requires reference_transcript field)
--titles Compute title metrics (requires reference_titles field)
--tolerance FLOAT Time tolerance for TM matching in seconds (default: 5.0)
--output FILE Write results to JSON file
--num-bootstrap INT Bootstrap iterations for CIs (default: 100)
```
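`--wer` scores the hypothesis transcript against the `reference_transcript` field. Conceptually, WER is the word-level edit distance divided by the reference length; the package delegates this to `jiwer`, but a minimal stdlib sketch looks like:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference word count."""
    r, h = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("wrap text with a span", "wrap the text with span"))  # 0.4 (1 insertion + 1 deletion over 5 words)
```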
## Dependencies
**Required:**
- `numpy`
- `segeval`
- `nltk`
- `jiwer`
**Optional — title evaluation (`chunkseg[titles]`):**
- `bert-score`
- `rouge-score`
**Optional — transcript alignment (`chunkseg[align]`):**
- `torch >= 2.1.0`
- `torchaudio >= 2.1.0`
## Citation
If you use chunkseg in your research, please cite:
```bibtex
@article{retkowski2026beyond,
title = {Beyond Transcripts: A Renewed Perspective on Audio Chaptering},
author = {Retkowski, Fabian and Z{\"u}fle, Maike and Nguyen, Thai Binh and Niehues, Jan and Waibel, Alexander},
journal = {arXiv preprint arXiv:2602.08979},
year = {2026},
url = {https://arxiv.org/abs/2602.08979}
}
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Fabian Retkowski <f@retkow.ski> | null | null | MIT | segmentation, evaluation, audio, video, chapter, boundary, temporal | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Sound/Audio :: Analysis",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"segeval",
"nltk",
"jiwer",
"torch>=2.1.0; extra == \"align\"",
"torchaudio>=2.1.0; extra == \"align\"",
"bert-score; extra == \"titles\"",
"rouge-score; extra == \"titles\""
] | [] | [] | [] | [
"Homepage, https://github.com/retkowski/chunkseg",
"Repository, https://github.com/retkowski/chunkseg",
"Issues, https://github.com/retkowski/chunkseg/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T17:56:32.861984 | chunkseg-0.3.1.tar.gz | 27,027 | 85/27/4e10fa02b4e2c76fbc3806a2e5c9f315bb58cdedbc567920035e2f34fd76/chunkseg-0.3.1.tar.gz | source | sdist | null | false | 13ea0a7616801338b1e233ea400deca5 | cd8b7a430bc62c3348f0f5324d6d5799201413f898563e8260552a1fc7c1da83 | 85274e10fa02b4e2c76fbc3806a2e5c9f315bb58cdedbc567920035e2f34fd76 | null | [
"LICENSE"
] | 231 |
2.4 | zotero-mcp-server | 0.1.3 | A Model Context Protocol server for Zotero | # Zotero MCP: Chat with your Research Library—Local or Web—in Claude, ChatGPT, and more.
<p align="center">
<a href="https://www.zotero.org/">
<img src="https://img.shields.io/badge/Zotero-CC2936?style=for-the-badge&logo=zotero&logoColor=white" alt="Zotero">
</a>
<a href="https://www.anthropic.com/claude">
<img src="https://img.shields.io/badge/Claude-6849C3?style=for-the-badge&logo=anthropic&logoColor=white" alt="Claude">
</a>
<a href="https://chatgpt.com/">
<img src="https://img.shields.io/badge/ChatGPT-74AA9C?style=for-the-badge&logo=openai&logoColor=white" alt="ChatGPT">
</a>
<a href="https://modelcontextprotocol.io/introduction">
<img src="https://img.shields.io/badge/MCP-0175C2?style=for-the-badge&logoColor=white" alt="MCP">
</a>
<a href="https://pypi.org/project/zotero-mcp-server/">
<img src="https://img.shields.io/pypi/v/zotero-mcp-server?style=for-the-badge&logo=pypi&logoColor=white" alt="PyPI">
</a>
</p>
**Zotero MCP** seamlessly connects your [Zotero](https://www.zotero.org/) research library with [ChatGPT](https://openai.com), [Claude](https://www.anthropic.com/claude), and other AI assistants (e.g., [Cherry Studio](https://cherry-ai.com/), [Chorus](https://chorus.sh), [Cursor](https://www.cursor.com/)) via the [Model Context Protocol](https://modelcontextprotocol.io/introduction). Review papers, get summaries, analyze citations, extract PDF annotations, and more!
## ✨ Features
### 🧠 AI-Powered Semantic Search
- **Vector-based similarity search** over your entire research library
- **Multiple embedding models**: Default (free), OpenAI, and Gemini options
- **Intelligent results** with similarity scores and contextual matching
- **Auto-updating database** with configurable sync schedules
### 🔍 Search Your Library
- Find papers, articles, and books by title, author, or content
- Perform complex searches with multiple criteria
- Browse collections, tags, and recent additions
- **NEW**: Semantic search for conceptual and topic-based discovery
### 📚 Access Your Content
- Retrieve detailed metadata for any item
- Get full text content (when available)
- Access attachments, notes, and child items
### 📝 Work with Annotations
- Extract and search PDF annotations directly
- Access Zotero's native annotations
- Create and update notes and annotations
### 🔄 Easy Updates
- **Smart update system** that detects your installation method (uv, pip, conda, pipx)
- **Configuration preservation** - all settings maintained during updates
- **Version checking** and automatic update notifications
### 🌐 Flexible Access Methods
- Local method for offline access (no API key needed)
- Web API for cloud library access
- Perfect for both local research and remote collaboration
## 🚀 Quick Install
### Default Installation
#### Installing via uv (recommended)
```bash
uv tool install zotero-mcp-server
zotero-mcp setup # Auto-configure (Claude Desktop supported)
```
#### Installing via pip
```bash
pip install zotero-mcp-server
zotero-mcp setup # Auto-configure (Claude Desktop supported)
```
#### Installing via pipx
```bash
pipx install zotero-mcp-server
zotero-mcp setup # Auto-configure (Claude Desktop supported)
```
### Installing via Smithery
To install Zotero MCP via [Smithery](https://smithery.ai/server/@54yyyu/zotero-mcp) for Claude Desktop:
```bash
npx -y @smithery/cli install @54yyyu/zotero-mcp --client claude
```
#### Updating Your Installation
Keep zotero-mcp up to date with the smart update command:
```bash
# Check for updates
zotero-mcp update --check-only
# Update to latest version (preserves all configurations)
zotero-mcp update
```
## 🧠 Semantic Search
Zotero MCP includes AI-powered semantic search, letting you find research based on concepts and meaning, not just keywords.
### Setup Semantic Search
During setup or separately, configure semantic search:
```bash
# Configure during initial setup (recommended)
zotero-mcp setup
# Or configure semantic search separately
zotero-mcp setup --semantic-config-only
```
**Available Embedding Models:**
- **Default (all-MiniLM-L6-v2)**: Free, runs locally, good for most use cases
- **OpenAI**: Better quality, requires API key (`text-embedding-3-small` or `text-embedding-3-large`)
- **Gemini**: Better quality, requires API key (`models/text-embedding-004` or experimental models)
**Update Frequency Options:**
- **Manual**: Update only when you run `zotero-mcp update-db`
- **Auto on startup**: Update database every time the server starts
- **Daily**: Update once per day automatically
- **Every N days**: Set custom interval
### Using Semantic Search
After setup, initialize your search database:
```bash
# Build the semantic search database (fast, metadata-only)
zotero-mcp update-db
# Build with full-text extraction (slower, more comprehensive)
zotero-mcp update-db --fulltext
# Use your custom zotero.sqlite path
zotero-mcp update-db --fulltext --db-path "/Your_custom_path/zotero.sqlite"
# If you have embedding conflicts when using --fulltext, force a rebuild
zotero-mcp update-db --fulltext --force-rebuild
# Check database status
zotero-mcp db-status
```
**Example Semantic Queries in your AI assistant:**
- *"Find research similar to machine learning concepts in neuroscience"*
- *"Papers that discuss climate change impacts on agriculture"*
- *"Research related to quantum computing applications"*
- *"Studies about social media influence on mental health"*
- *"Find papers conceptually similar to this abstract: [paste abstract]"*
The semantic search provides similarity scores and finds papers based on conceptual understanding, not just keyword matching.
## 🖥️ Setup & Usage
Full documentation is available at [Zotero MCP docs](https://stevenyuyy.us/zotero-mcp/).
**Requirements**
- Python 3.10+
- Zotero 7+ (for local API with full-text access)
- An MCP-compatible client (e.g., Claude Desktop, ChatGPT Developer Mode, Cherry Studio, Chorus)
**For ChatGPT setup: see the [Getting Started guide](./docs/getting-started.md).**
### For Claude Desktop (example MCP client)
#### Configuration
After installation, either:
1. **Auto-configure** (recommended):
```bash
zotero-mcp setup
```
2. **Manual configuration**:
Add to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"zotero": {
"command": "zotero-mcp",
"env": {
"ZOTERO_LOCAL": "true"
}
}
}
}
```
#### Usage
1. Start Zotero desktop (make sure local API is enabled in preferences)
2. Launch Claude Desktop
3. Access the Zotero-MCP tool through Claude Desktop's tools interface
Example prompts:
- "Search my library for papers on machine learning"
- "Find recent articles I've added about climate change"
- "Summarize the key findings from my paper on quantum computing"
- "Extract all PDF annotations from my paper on neural networks"
- "Search my notes and annotations for mentions of 'reinforcement learning'"
- "Show me papers tagged '#Arm' excluding those with '#Crypt' in my library"
- "Search for papers on operating system with tag '#Arm'"
- "Export the BibTeX citation for papers on machine learning"
- **"Find papers conceptually similar to deep learning in computer vision"** *(semantic search)*
- **"Research that relates to the intersection of AI and healthcare"** *(semantic search)*
- **"Papers that discuss topics similar to this abstract: [paste text]"** *(semantic search)*
### For Cherry Studio
#### Configuration
Go to Settings -> MCP Servers -> Edit MCP Configuration, and add the following:
```json
{
"mcpServers": {
"zotero": {
"name": "zotero",
"type": "stdio",
"isActive": true,
"command": "zotero-mcp",
"args": [],
"env": {
"ZOTERO_LOCAL": "true"
}
}
}
}
```
Then click "Save".
Cherry Studio also provides a visual configuration method for general settings and tool selection.
## 🔧 Advanced Configuration
### Using Web API Instead of Local API
For accessing your Zotero library via the web API (useful for remote setups):
```bash
zotero-mcp setup --no-local --api-key YOUR_API_KEY --library-id YOUR_LIBRARY_ID
```
### Environment Variables
**Zotero Connection:**
- `ZOTERO_LOCAL=true`: Use the local Zotero API (default: false)
- `ZOTERO_API_KEY`: Your Zotero API key (for web API)
- `ZOTERO_LIBRARY_ID`: Your Zotero library ID (for web API)
- `ZOTERO_LIBRARY_TYPE`: The type of library (user or group, default: user)
**Semantic Search:**
- `ZOTERO_EMBEDDING_MODEL`: Embedding model to use (default, openai, gemini)
- `OPENAI_API_KEY`: Your OpenAI API key (for OpenAI embeddings)
- `OPENAI_EMBEDDING_MODEL`: OpenAI model name (text-embedding-3-small, text-embedding-3-large)
- `OPENAI_BASE_URL`: Custom OpenAI endpoint URL (optional, for use with compatible APIs)
- `GEMINI_API_KEY`: Your Gemini API key (for Gemini embeddings)
- `GEMINI_EMBEDDING_MODEL`: Gemini model name (models/text-embedding-004, etc.)
- `GEMINI_BASE_URL`: Custom Gemini endpoint URL (optional, for use with compatible APIs)
- `ZOTERO_DB_PATH`: Custom `zotero.sqlite` path (optional)
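For a web-API setup, the connection variables above can be exported before launching the server. All values below are placeholders:

```shell
# Hypothetical web-API configuration; replace the placeholder values.
export ZOTERO_LOCAL=false
export ZOTERO_API_KEY="your_api_key"
export ZOTERO_LIBRARY_ID="1234567"
export ZOTERO_LIBRARY_TYPE=user

# Then launch the server, e.g.:
# zotero-mcp serve
echo "Configured library $ZOTERO_LIBRARY_ID (type: $ZOTERO_LIBRARY_TYPE)"
```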
### Command-Line Options
```bash
# Run the server directly
zotero-mcp serve
# Specify transport method
zotero-mcp serve --transport stdio|streamable-http|sse
# Setup and configuration
zotero-mcp setup --help # Get help on setup options
zotero-mcp setup --semantic-config-only # Configure only semantic search
zotero-mcp setup-info # Show installation path and config info for MCP clients
# Updates and maintenance
zotero-mcp update # Update to latest version
zotero-mcp update --check-only # Check for updates without installing
zotero-mcp update --force # Force update even if up to date
# Semantic search database management
zotero-mcp update-db # Update semantic search database (fast, metadata-only)
zotero-mcp update-db --fulltext # Update with full-text extraction (comprehensive but slower)
zotero-mcp update-db --force-rebuild # Force complete database rebuild
zotero-mcp update-db --fulltext --force-rebuild # Rebuild with full-text extraction
zotero-mcp update-db --fulltext --db-path "your_path_to/zotero.sqlite" # Customize your zotero database path
zotero-mcp db-status # Show database status and info
# General
zotero-mcp version # Show current version
```
## 📑 PDF Annotation Extraction
Zotero MCP includes advanced PDF annotation extraction capabilities:
- **Direct PDF Processing**: Extract annotations directly from PDF files, even if they're not yet indexed by Zotero
- **Enhanced Search**: Search through PDF annotations and comments
- **Image Annotation Support**: Extract image annotations from PDFs
- **Seamless Integration**: Works alongside Zotero's native annotation system
For optimal annotation extraction, it is **highly recommended** to install the [Better BibTeX plugin](https://retorque.re/zotero-better-bibtex/installation/) for Zotero. The annotation-related functions have been primarily tested with this plugin and provide enhanced functionality when it's available.
The first time you use PDF annotation features, the necessary tools will be automatically downloaded.
## 📚 Available Tools
### 🧠 Semantic Search Tools
- `zotero_semantic_search`: AI-powered similarity search with embedding models
- `zotero_update_search_database`: Manually update the semantic search database
- `zotero_get_search_database_status`: Check database status and configuration
### 🔍 Search Tools
- `zotero_search_items`: Search your library by keywords
- `zotero_advanced_search`: Perform complex searches with multiple criteria
- `zotero_get_collections`: List collections
- `zotero_get_collection_items`: Get items in a collection
- `zotero_get_tags`: List all tags
- `zotero_get_recent`: Get recently added items
- `zotero_search_by_tag`: Search your library using custom tag filters
### 📚 Content Tools
- `zotero_get_item_metadata`: Get detailed metadata (supports BibTeX export via `format="bibtex"`)
- `zotero_get_item_fulltext`: Get full text content
- `zotero_get_item_children`: Get attachments and notes
### 📝 Annotation & Notes Tools
- `zotero_get_annotations`: Get annotations (including direct PDF extraction)
- `zotero_get_notes`: Retrieve notes from your Zotero library
- `zotero_search_notes`: Search in notes and annotations (including PDF-extracted)
- `zotero_create_note`: Create a new note for an item (beta feature)
## 🔍 Troubleshooting
### General Issues
- **No results found**: Ensure Zotero is running and the local API is enabled. You need to toggle on `Allow other applications on this computer to communicate with Zotero` in Zotero preferences.
- **Can't connect to library**: Check your API key and library ID if using web API
- **Full text not available**: Make sure you're using Zotero 7+ for local full-text access
- **Local library limitations**: Some functionality (tagging, library modifications) may not work with local JS API. Consider using web library setup for full functionality. (See the [docs](docs/getting-started.md#local-library-limitations) for more info.)
- **Installation/search option switching issues**: Database problems from changing install methods or search options can often be resolved with `zotero-mcp update-db --force-rebuild`
### Semantic Search Issues
- **"Missing required environment variables" when running update-db**: Run `zotero-mcp setup` to configure your environment, or the CLI will automatically load settings from your MCP client config (e.g., Claude Desktop)
- **ChromaDB warnings**: Update to the latest version - deprecation warnings have been fixed
- **Database update takes long**: By default, `update-db` is fast (metadata-only). For comprehensive indexing with full-text, use `--fulltext` flag. Use `--limit` parameter for testing: `zotero-mcp update-db --limit 100`
- **Semantic search returns no results**: Ensure the database is initialized with `zotero-mcp update-db` and check status with `zotero-mcp db-status`
- **Limited search quality**: For better semantic search results, use `zotero-mcp update-db --fulltext` to index full-text content (requires local Zotero setup)
- **OpenAI/Gemini API errors**: Verify your API keys are correctly set and have sufficient credits/quota
### Update Issues
- **Update command fails**: Check your internet connection and try `zotero-mcp update --force`
- **Configuration lost after update**: The update process preserves configs automatically, but check `~/.config/zotero-mcp/` for backup files
## 📄 License
MIT
| text/markdown | null | 54yyyu <54yyyu@github.com> | null | null | MIT | bibliography, citations, mcp, model-context-protocol, research, zotero | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"chromadb>=0.4.0",
"ebooklib>=0.18",
"fastmcp>=2.14.0",
"google-genai>=0.7.0",
"markitdown[pdf]",
"mcp>=1.2.0",
"openai>=1.0.0",
"pydantic>=2.0.0",
"pymupdf>=1.24.0",
"python-dotenv>=1.0.0",
"pyzotero>=1.5.0",
"requests>=2.28.0",
"sentence-transformers>=2.2.0",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/54yyyu/zotero-mcp",
"Bug Tracker, https://github.com/54yyyu/zotero-mcp/issues",
"Documentation, https://stevenyuyy.us/zotero-mcp/",
"Changelog, https://github.com/54yyyu/zotero-mcp/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:55:39.325123 | zotero_mcp_server-0.1.3.tar.gz | 400,168 | da/29/2135d2c4e1ad6078190ce81bfe5494cf5faefebc2a566d82d922f7c61c8c/zotero_mcp_server-0.1.3.tar.gz | source | sdist | null | false | 150ccde3e689645ae070f8030f408b48 | 7e71a667d572706d445a78bdeaef0b3fd21c6a6897082b130fb9647f7c66878a | da292135d2c4e1ad6078190ce81bfe5494cf5faefebc2a566d82d922f7c61c8c | null | [
"LICENSE"
] | 331 |
2.4 | laakhay-quantlab | 0.1.4 | Quant tools built with ♥︎ by Laakhay | # Laakhay Quantlab
`laakhay-quantlab` is a high-performance, backend-agnostic quantitative computation layer designed for simulation-heavy research and production analytics. It provides a unified interface over **NumPy**, **JAX**, and **PyTorch**, enabling seamless switching between CPU and GPU backends without code changes.
## Key Features
- **Backend Agnostic**: Write once, run on NumPy, JAX, or PyTorch.
- **Hardware Acceleration**: Transparent GPU/TPU support via JAX/Torch backends.
- **Vectorized Operations**: Optimized `ArrayBackend` with JIT compilation support.
- **Simulation Primitives**: Fast Gaussian sampling, Geometric Brownian Motion (GBM), and more.
- **Options & Pricing (New)**: Comprehensive verification and pricing of derivatives using analytical (Black-Scholes) and numerical (Monte Carlo) methods.
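The GBM primitive mentioned above can be illustrated in plain NumPy. This sketch is not the `laakhay-quantlab` API, just the underlying exact log-normal stepping scheme:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, t, n_steps, n_paths, seed=0):
    """Simulate Geometric Brownian Motion paths via the exact log-normal step."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Exact discretization: log S_{k+1} - log S_k = (mu - sigma^2/2) dt + sigma sqrt(dt) Z
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(log_increments, axis=1)
    paths = s0 * np.exp(np.concatenate([np.zeros((n_paths, 1)), log_paths], axis=1))
    return paths  # shape (n_paths, n_steps + 1), starting at s0

paths = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, t=1.0, n_steps=252, n_paths=10_000)
print(paths.shape)  # (10000, 253)
```

Terminal values `paths[:, -1]` can then feed a discounted payoff average, which is essentially how a Monte Carlo pricer like the one in the Quick Start arrives at option prices.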
## Ecosystem
`laakhay-quantlab` fits into the broader Laakhay quantitative ecosystem:
1. **`laakhay-data`**: Market data acquisition and normalization.
2. **`laakhay-ta`**: Technical analysis indicators and strategy engine.
3. **`laakhay-quantlab`**: Numerical simulation, pricing, and risk modeling.
## Installation
```bash
pip install laakhay-quantlab
# optional extras: [jax, jax-gpu, torch, all]
pip install "laakhay-quantlab[all]"
```
## Quick Start: Options Pricing
The `pricing` module supports a wide range of exotic and vanilla options, along with Greeks calculation.
```python
from laakhay.quantlab.pricing import (
EuropeanCall,
MarketData,
Pricer,
PricingMethod
)
# 1. Define Market Conditions
market = MarketData(spot=100.0, rate=0.05, vol=0.2)
# 2. Define Instrument
option = EuropeanCall(strike=100.0, expiry=1.0)
# 3. Price using Black-Scholes (Analytical)
bs_pricer = Pricer(method=PricingMethod.BLACK_SCHOLES)
price, greeks = bs_pricer.price_with_greeks(option, market)
print(f"Price: {price:.4f}")
print(f"Delta: {greeks.delta:.4f}")
# 4. Price using Monte Carlo (Numerical)
mc_pricer = Pricer(method=PricingMethod.MONTE_CARLO)
mc_price = mc_pricer.price(option, market)
print(f"MC Price: {mc_price:.4f}")
```
## Documentation
See the `docs/` directory for detailed guides:
- **Getting Started**: Installation and first steps.
- **Pricing**: Detailed guide on options, strategies, and pricing models.
- **Backends**: Configuring specific computation backends. | text/markdown | null | Laakhay Corporation <laakhay.corp@gmail.com> | null | null | null | backtesting, finance, quantitative-analysis, trading | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Office/Business :: Financial"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"laakhay-data",
"laakhay-ta",
"jax[cpu]>=0.4.0; extra == \"all\"",
"numpy>=1.21.0; extra == \"all\"",
"scipy>=1.10.0; extra == \"all\"",
"torch>=2.0.0; extra == \"all\"",
"build>=1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"numpy>=1.21.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"scipy>=1.10.0; extra == \"dev\"",
"twine>=5.0; extra == \"dev\"",
"jax[cpu]>=0.4.0; extra == \"jax\"",
"jax[cuda]>=0.4.0; extra == \"jax-gpu\"",
"numpy>=1.21.0; extra == \"numpy\"",
"scipy>=1.10.0; extra == \"numpy\"",
"torch>=2.0.0; extra == \"torch\""
] | [] | [] | [] | [
"Homepage, https://laakhay.com",
"Repository, https://github.com/laakhay/quantlab"
] | uv/0.9.4 | 2026-02-20T17:55:33.111203 | laakhay_quantlab-0.1.4.tar.gz | 52,222 | 19/1a/d64671211542cbc396ea528ceffa284d3d1b0089ecc7925b27d0f2455cf2/laakhay_quantlab-0.1.4.tar.gz | source | sdist | null | false | 1c636d6702d3ba10cc8828d0d49c7f11 | a396477a2003d5dda9a66d87770e261e5ee7a3c367961cb98ff088c13e1e155a | 191ad64671211542cbc396ea528ceffa284d3d1b0089ecc7925b27d0f2455cf2 | MIT | [
"LICENSE"
] | 232 |
2.4 | mistralai | 1.12.4 | Python Client SDK for the Mistral AI API. | # Mistral Python Client
## Migration warning
This documentation is for Mistral AI SDK v1. You can find more details on how to migrate from v0 to v1 [here](MIGRATION.md)
## API Key Setup
Before you begin, you will need a Mistral AI API key.
1. Get your own Mistral API Key: <https://docs.mistral.ai/#api-access>
2. Set your Mistral API Key as an environment variable. You only need to do this once.
```bash
# set Mistral API Key (using zsh for example)
$ echo 'export MISTRAL_API_KEY=[your_key_here]' >> ~/.zshenv
# reload the environment (or just quit and open a new terminal)
$ source ~/.zshenv
```
<!-- Start Summary [summary] -->
## Summary
Mistral AI API: Our Chat Completion and Embeddings APIs specification. Create your account on [La Plateforme](https://console.mistral.ai) to get access and read the [docs](https://docs.mistral.ai) to learn how to use it.
<!-- End Summary [summary] -->
<!-- Start Table of Contents [toc] -->
## Table of Contents
<!-- $toc-max-depth=2 -->
* [Mistral Python Client](#mistral-python-client)
* [Migration warning](#migration-warning)
* [API Key Setup](#api-key-setup)
* [SDK Installation](#sdk-installation)
* [SDK Example Usage](#sdk-example-usage)
* [Providers' SDKs Example Usage](#providers-sdks-example-usage)
* [Available Resources and Operations](#available-resources-and-operations)
* [Server-sent event streaming](#server-sent-event-streaming)
* [File uploads](#file-uploads)
* [Retries](#retries)
* [Error Handling](#error-handling)
* [Server Selection](#server-selection)
* [Custom HTTP Client](#custom-http-client)
* [Authentication](#authentication)
* [Resource Management](#resource-management)
* [Debugging](#debugging)
* [IDE Support](#ide-support)
* [Development](#development)
* [Contributions](#contributions)
<!-- End Table of Contents [toc] -->
<!-- Start SDK Installation [installation] -->
## SDK Installation
> [!NOTE]
> **Python version upgrade policy**
>
> Once a Python version reaches its [official end of life date](https://devguide.python.org/versions/), a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum python version supported in the SDK will be updated.
The SDK can be installed with *uv*, *pip*, or *poetry* package managers.
### uv
*uv* is a fast Python package installer and resolver, designed as a drop-in replacement for pip and pip-tools. It's recommended for its speed and modern Python tooling capabilities.
```bash
uv add mistralai
```
### PIP
*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install mistralai
```
### Poetry
*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add mistralai
```
### Shell and script usage with `uv`
You can use this SDK in a Python shell with [uv](https://docs.astral.sh/uv/) and the `uvx` command that comes with it like so:
```shell
uvx --from mistralai python
```
It's also possible to write a standalone Python script without needing to set up a whole project like so:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = [
# "mistralai",
# ]
# ///
from mistralai import Mistral
sdk = Mistral(
# SDK arguments
)
# Rest of script here...
```
Once that is saved to a file, you can run it with `uv run script.py` where
`script.py` can be replaced with the actual file name.
<!-- End SDK Installation [installation] -->
### Agents extra dependencies
When using the agents related feature it is required to add the `agents` extra dependencies. This can be added when
installing the package:
```bash
pip install "mistralai[agents]"
```
> Note: These features require Python 3.10+ (the SDK minimum).
<!-- Start SDK Example Usage [usage] -->
## SDK Example Usage
### Create Chat Completions
This example shows how to create chat completions.
```python
# Synchronous Example
from mistralai import Mistral
import os
with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
res = mistral.chat.complete(model="mistral-large-latest", messages=[
{
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], stream=False, response_format={
"type": "text",
})
# Handle response
print(res)
```
</br>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os
async def main():
async with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
res = await mistral.chat.complete_async(model="mistral-large-latest", messages=[
{
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], stream=False, response_format={
"type": "text",
})
# Handle response
print(res)
asyncio.run(main())
```
### Upload a file
This example shows how to upload a file.
```python
# Synchronous Example
from mistralai import Mistral
import os
with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
res = mistral.files.upload(file={
"file_name": "example.file",
"content": open("example.file", "rb"),
})
# Handle response
print(res)
```
</br>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os
async def main():
async with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
res = await mistral.files.upload_async(file={
"file_name": "example.file",
"content": open("example.file", "rb"),
})
# Handle response
print(res)
asyncio.run(main())
```
### Create Agents Completions
This example shows how to create agents completions.
```python
# Synchronous Example
from mistralai import Mistral
import os
with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
res = mistral.agents.complete(messages=[
{
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], agent_id="<id>", stream=False, response_format={
"type": "text",
})
# Handle response
print(res)
```
</br>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os
async def main():
async with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
res = await mistral.agents.complete_async(messages=[
{
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], agent_id="<id>", stream=False, response_format={
"type": "text",
})
# Handle response
print(res)
asyncio.run(main())
```
### Create Embedding Request
This example shows how to create embedding request.
```python
# Synchronous Example
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.embeddings.create(model="mistral-embed", inputs=[
        "Embed this sentence.",
        "As well as this one.",
    ])
    # Handle response
    print(res)
```
<br/>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os

async def main():
    async with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        res = await mistral.embeddings.create_async(model="mistral-embed", inputs=[
            "Embed this sentence.",
            "As well as this one.",
        ])
        # Handle response
        print(res)

asyncio.run(main())
```
<!-- End SDK Example Usage [usage] -->
### More examples
You can run the examples in the `examples/` directory using `uv run`.
## Providers' SDKs Example Usage
### Azure AI
**Prerequisites**
Before you begin, ensure you have an `AZURE_ENDPOINT` and an `AZURE_API_KEY` (both read from the environment in the example below). To obtain these, you will need to deploy Mistral on Azure AI.
See [instructions for deploying Mistral on Azure AI here](https://docs.mistral.ai/deployment/cloud/azure/).
Here's a basic example to get you started. You can also run [the example in the `examples` directory](/examples/azure).
```python
import asyncio
import os
from mistralai_azure import MistralAzure

client = MistralAzure(
    azure_api_key=os.getenv("AZURE_API_KEY", ""),
    azure_endpoint=os.getenv("AZURE_ENDPOINT", "")
)

async def main() -> None:
    res = await client.chat.complete_async(
        max_tokens=100,
        temperature=0.5,
        messages=[
            {
                "content": "Hello there!",
                "role": "user"
            }
        ]
    )
    print(res)

asyncio.run(main())
```
The documentation for the Azure SDK is available [here](packages/mistralai_azure/README.md).
### Google Cloud
**Prerequisites**
Before you begin, you will need to create a Google Cloud project and enable the Mistral API. To do this, follow the instructions [here](https://docs.mistral.ai/deployment/cloud/vertex/).
To run this locally you will also need to ensure you are authenticated with Google Cloud. You can do this by running
```bash
gcloud auth application-default login
```
**Step 1: Install**
Install the extras dependencies specific to Google Cloud:
```bash
pip install mistralai[gcp]
```
**Step 2: Example Usage**
Here's a basic example to get you started.
```python
import asyncio
from mistralai_gcp import MistralGoogleCloud

client = MistralGoogleCloud()

async def main() -> None:
    res = await client.chat.complete_async(
        model="mistral-small-2402",
        messages=[
            {
                "content": "Hello there!",
                "role": "user"
            }
        ]
    )
    print(res)

asyncio.run(main())
```
The documentation for the GCP SDK is available [here](packages/mistralai_gcp/README.md).
<!-- Start Available Resources and Operations [operations] -->
## Available Resources and Operations
<details open>
<summary>Available methods</summary>
### [Agents](docs/sdks/agents/README.md)
* [complete](docs/sdks/agents/README.md#complete) - Agents Completion
* [stream](docs/sdks/agents/README.md#stream) - Stream Agents completion
### [Audio.Transcriptions](docs/sdks/transcriptions/README.md)
* [complete](docs/sdks/transcriptions/README.md#complete) - Create Transcription
* [stream](docs/sdks/transcriptions/README.md#stream) - Create Streaming Transcription (SSE)
### [Batch.Jobs](docs/sdks/mistraljobs/README.md)
* [list](docs/sdks/mistraljobs/README.md#list) - Get Batch Jobs
* [create](docs/sdks/mistraljobs/README.md#create) - Create Batch Job
* [get](docs/sdks/mistraljobs/README.md#get) - Get Batch Job
* [cancel](docs/sdks/mistraljobs/README.md#cancel) - Cancel Batch Job
### [Beta.Agents](docs/sdks/mistralagents/README.md)
* [create](docs/sdks/mistralagents/README.md#create) - Create an agent that can be used within a conversation.
* [list](docs/sdks/mistralagents/README.md#list) - List agent entities.
* [get](docs/sdks/mistralagents/README.md#get) - Retrieve an agent entity.
* [update](docs/sdks/mistralagents/README.md#update) - Update an agent entity.
* [delete](docs/sdks/mistralagents/README.md#delete) - Delete an agent entity.
* [update_version](docs/sdks/mistralagents/README.md#update_version) - Update an agent version.
* [list_versions](docs/sdks/mistralagents/README.md#list_versions) - List all versions of an agent.
* [get_version](docs/sdks/mistralagents/README.md#get_version) - Retrieve a specific version of an agent.
* [create_version_alias](docs/sdks/mistralagents/README.md#create_version_alias) - Create or update an agent version alias.
* [list_version_aliases](docs/sdks/mistralagents/README.md#list_version_aliases) - List all aliases for an agent.
* [delete_version_alias](docs/sdks/mistralagents/README.md#delete_version_alias) - Delete an agent version alias.
### [Beta.Conversations](docs/sdks/conversations/README.md)
* [start](docs/sdks/conversations/README.md#start) - Create a conversation and append entries to it.
* [list](docs/sdks/conversations/README.md#list) - List all created conversations.
* [get](docs/sdks/conversations/README.md#get) - Retrieve information about a conversation.
* [delete](docs/sdks/conversations/README.md#delete) - Delete a conversation.
* [append](docs/sdks/conversations/README.md#append) - Append new entries to an existing conversation.
* [get_history](docs/sdks/conversations/README.md#get_history) - Retrieve all entries in a conversation.
* [get_messages](docs/sdks/conversations/README.md#get_messages) - Retrieve all messages in a conversation.
* [restart](docs/sdks/conversations/README.md#restart) - Restart a conversation starting from a given entry.
* [start_stream](docs/sdks/conversations/README.md#start_stream) - Create a conversation and append entries to it.
* [append_stream](docs/sdks/conversations/README.md#append_stream) - Append new entries to an existing conversation.
* [restart_stream](docs/sdks/conversations/README.md#restart_stream) - Restart a conversation starting from a given entry.
### [Beta.Libraries](docs/sdks/libraries/README.md)
* [list](docs/sdks/libraries/README.md#list) - List all libraries you have access to.
* [create](docs/sdks/libraries/README.md#create) - Create a new Library.
* [get](docs/sdks/libraries/README.md#get) - Detailed information about a specific Library.
* [delete](docs/sdks/libraries/README.md#delete) - Delete a library and all of its documents.
* [update](docs/sdks/libraries/README.md#update) - Update a library.
#### [Beta.Libraries.Accesses](docs/sdks/accesses/README.md)
* [list](docs/sdks/accesses/README.md#list) - List all access levels for this library.
* [update_or_create](docs/sdks/accesses/README.md#update_or_create) - Create or update an access level.
* [delete](docs/sdks/accesses/README.md#delete) - Delete an access level.
#### [Beta.Libraries.Documents](docs/sdks/documents/README.md)
* [list](docs/sdks/documents/README.md#list) - List documents in a given library.
* [upload](docs/sdks/documents/README.md#upload) - Upload a new document.
* [get](docs/sdks/documents/README.md#get) - Retrieve the metadata of a specific document.
* [update](docs/sdks/documents/README.md#update) - Update the metadata of a specific document.
* [delete](docs/sdks/documents/README.md#delete) - Delete a document.
* [text_content](docs/sdks/documents/README.md#text_content) - Retrieve the text content of a specific document.
* [status](docs/sdks/documents/README.md#status) - Retrieve the processing status of a specific document.
* [get_signed_url](docs/sdks/documents/README.md#get_signed_url) - Retrieve the signed URL of a specific document.
* [extracted_text_signed_url](docs/sdks/documents/README.md#extracted_text_signed_url) - Retrieve the signed URL of text extracted from a given document.
* [reprocess](docs/sdks/documents/README.md#reprocess) - Reprocess a document.
### [Chat](docs/sdks/chat/README.md)
* [complete](docs/sdks/chat/README.md#complete) - Chat Completion
* [stream](docs/sdks/chat/README.md#stream) - Stream chat completion
### [Classifiers](docs/sdks/classifiers/README.md)
* [moderate](docs/sdks/classifiers/README.md#moderate) - Moderations
* [moderate_chat](docs/sdks/classifiers/README.md#moderate_chat) - Chat Moderations
* [classify](docs/sdks/classifiers/README.md#classify) - Classifications
* [classify_chat](docs/sdks/classifiers/README.md#classify_chat) - Chat Classifications
### [Embeddings](docs/sdks/embeddings/README.md)
* [create](docs/sdks/embeddings/README.md#create) - Embeddings
### [Files](docs/sdks/files/README.md)
* [upload](docs/sdks/files/README.md#upload) - Upload File
* [list](docs/sdks/files/README.md#list) - List Files
* [retrieve](docs/sdks/files/README.md#retrieve) - Retrieve File
* [delete](docs/sdks/files/README.md#delete) - Delete File
* [download](docs/sdks/files/README.md#download) - Download File
* [get_signed_url](docs/sdks/files/README.md#get_signed_url) - Get Signed Url
### [Fim](docs/sdks/fim/README.md)
* [complete](docs/sdks/fim/README.md#complete) - Fim Completion
* [stream](docs/sdks/fim/README.md#stream) - Stream fim completion
### [FineTuning.Jobs](docs/sdks/jobs/README.md)
* [list](docs/sdks/jobs/README.md#list) - Get Fine Tuning Jobs
* [create](docs/sdks/jobs/README.md#create) - Create Fine Tuning Job
* [get](docs/sdks/jobs/README.md#get) - Get Fine Tuning Job
* [cancel](docs/sdks/jobs/README.md#cancel) - Cancel Fine Tuning Job
* [start](docs/sdks/jobs/README.md#start) - Start Fine Tuning Job
### [Models](docs/sdks/models/README.md)
* [list](docs/sdks/models/README.md#list) - List Models
* [retrieve](docs/sdks/models/README.md#retrieve) - Retrieve Model
* [delete](docs/sdks/models/README.md#delete) - Delete Model
* [update](docs/sdks/models/README.md#update) - Update Fine Tuned Model
* [archive](docs/sdks/models/README.md#archive) - Archive Fine Tuned Model
* [unarchive](docs/sdks/models/README.md#unarchive) - Unarchive Fine Tuned Model
### [Ocr](docs/sdks/ocr/README.md)
* [process](docs/sdks/ocr/README.md#process) - OCR
</details>
<!-- End Available Resources and Operations [operations] -->
<!-- Start Server-sent event streaming [eventstream] -->
## Server-sent event streaming
[Server-sent events][mdn-sse] are used to stream content from certain
operations. These operations expose the stream as a [Generator][generator] that
can be consumed with a simple `for` loop. The loop terminates when the server
has no more events to send and closes the underlying connection.
The stream is also a [Context Manager][context-manager]; when used with the `with` statement, it closes the
underlying connection when the context is exited.
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.beta.conversations.start_stream(inputs=[
        {
            "object": "entry",
            "type": "function.result",
            "tool_call_id": "<id>",
            "result": "<value>",
        },
    ], stream=True, completion_args={
        "response_format": {
            "type": "text",
        },
    })
    with res as event_stream:
        for event in event_stream:
            # handle event
            print(event, flush=True)
```
[mdn-sse]: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
[generator]: https://book.pythontips.com/en/latest/generators.html
[context-manager]: https://book.pythontips.com/en/latest/context_managers.html
<!-- End Server-sent event streaming [eventstream] -->
<!-- Start File uploads [file-upload] -->
## File uploads
Certain SDK methods accept file objects as part of a request body or multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
> [!TIP]
>
> For endpoints that handle file uploads, byte arrays can also be used. However, using streams is recommended for large files.
>
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.beta.libraries.documents.upload(library_id="a02150d9-5ee0-4877-b62c-28b1fcdf3b76", file={
        "file_name": "example.file",
        "content": open("example.file", "rb"),
    })
    # Handle response
    print(res)
```
<!-- End File uploads [file-upload] -->
<!-- Start Retries [retries] -->
## Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:
```python
from mistralai import Mistral
from mistralai.utils import BackoffStrategy, RetryConfig
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list(
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))
    # Handle response
    print(res)
```
If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:
```python
from mistralai import Mistral
from mistralai.utils import BackoffStrategy, RetryConfig
import os

with Mistral(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
```
<!-- End Retries [retries] -->
<!-- Start Error Handling [errors] -->
## Error Handling
[`MistralError`](./src/mistralai/models/mistralerror.py) is the base class for all HTTP error responses. It has the following properties:
| Property | Type | Description |
| ------------------ | ---------------- | --------------------------------------------------------------------------------------- |
| `err.message` | `str` | Error message |
| `err.status_code`  | `int`            | HTTP response status code, e.g. `404`                                                    |
| `err.headers` | `httpx.Headers` | HTTP response headers |
| `err.body` | `str` | HTTP body. Can be empty string if no body is returned. |
| `err.raw_response` | `httpx.Response` | Raw HTTP response |
| `err.data` | | Optional. Some errors may contain structured data. [See Error Classes](#error-classes). |
### Example
```python
import mistralai
from mistralai import Mistral, models
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = None
    try:
        res = mistral.models.retrieve(model_id="ft:open-mistral-7b:587a6b29:20240514:7e773925")
        # Handle response
        print(res)
    except models.MistralError as e:
        # The base class for HTTP error responses
        print(e.message)
        print(e.status_code)
        print(e.body)
        print(e.headers)
        print(e.raw_response)
        # Depending on the method different errors may be thrown
        if isinstance(e, models.HTTPValidationError):
            print(e.data.detail)  # Optional[List[mistralai.ValidationError]]
```
### Error Classes
**Primary error:**
* [`MistralError`](./src/mistralai/models/mistralerror.py): The base class for HTTP error responses.
<details><summary>Less common errors (6)</summary>
<br />
**Network errors:**
* [`httpx.RequestError`](https://www.python-httpx.org/exceptions/#httpx.RequestError): Base class for request errors.
* [`httpx.ConnectError`](https://www.python-httpx.org/exceptions/#httpx.ConnectError): HTTP client was unable to make a request to a server.
* [`httpx.TimeoutException`](https://www.python-httpx.org/exceptions/#httpx.TimeoutException): HTTP request timed out.
**Inherit from [`MistralError`](./src/mistralai/models/mistralerror.py)**:
* [`HTTPValidationError`](./src/mistralai/models/httpvalidationerror.py): Validation Error. Status code `422`. Applicable to 53 of 75 methods.*
* [`ResponseValidationError`](./src/mistralai/models/responsevalidationerror.py): Type mismatch between the response data and the expected Pydantic model. Provides access to the Pydantic validation error via the `cause` attribute.
</details>
\* Check [the method documentation](#available-resources-and-operations) to see if the error is applicable.
<!-- End Error Handling [errors] -->
<!-- Start Server Selection [server] -->
## Server Selection
### Select Server by Name
You can override the default server globally by passing a server name to the `server: str` optional parameter when initializing the SDK client instance. The selected server will then be used as the default on the operations that use it. This table lists the names associated with the available servers:
| Name | Server | Description |
| ---- | ------------------------ | -------------------- |
| `eu` | `https://api.mistral.ai` | EU Production server |
#### Example
```python
from mistralai import Mistral
import os

with Mistral(
    server="eu",
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
```
### Override Server URL Per-Client
The default server can also be overridden globally by passing a URL to the `server_url: str` optional parameter when initializing the SDK client instance. For example:
```python
from mistralai import Mistral
import os

with Mistral(
    server_url="https://api.mistral.ai",
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
```
<!-- End Server Selection [server] -->
<!-- Start Custom HTTP Client [http-client] -->
## Custom HTTP Client
The Python SDK makes API calls using the [httpx](https://www.python-httpx.org/) HTTP library. To provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level options, you can initialize the SDK client with your own HTTP client instance.
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of `HttpClient` or `AsyncHttpClient` respectively; these are Protocols that ensure the client has the methods needed to make API calls.
This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can simply pass an instance of `httpx.Client` or `httpx.AsyncClient` directly.
For example, you could add a header to every request that this SDK makes as follows:
```python
from mistralai import Mistral
import httpx
http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = Mistral(client=http_client)
```
or you could wrap the client with your own custom logic:
```python
from typing import Any, Optional, Union

from mistralai import Mistral
from mistralai.httpclient import AsyncHttpClient
import httpx

class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        request.headers["Client-Level-Header"] = "added by client"
        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )

s = Mistral(async_client=CustomClient(httpx.AsyncClient()))
```
<!-- End Custom HTTP Client [http-client] -->
<!-- Start Authentication [security] -->
## Authentication
### Per-Client Security Schemes
This SDK supports the following security scheme globally:
| Name | Type | Scheme | Environment Variable |
| --------- | ---- | ----------- | -------------------- |
| `api_key` | http | HTTP Bearer | `MISTRAL_API_KEY` |
To authenticate with the API the `api_key` parameter must be set when initializing the SDK client instance. For example:
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
```
<!-- End Authentication [security] -->
<!-- Start Resource Management [resource-management] -->
## Resource Management
The `Mistral` class implements the context manager protocol and registers a finalizer function to close the underlying sync and async HTTPX clients it uses under the hood. This will close HTTP connections, release memory and free up other resources held by the SDK. In short-lived Python programs and notebooks that make a few SDK method calls, resource management may not be a concern. However, in longer-lived programs, it is beneficial to create a single SDK instance via a [context manager][context-manager] and reuse it across the application.
[context-manager]: https://docs.python.org/3/reference/datamodel.html#context-managers
```python
from mistralai import Mistral
import os

def main():
    with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        # Rest of application here...
        ...

# Or when using async:
async def amain():
    async with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        # Rest of application here...
        ...
```
<!-- End Resource Management [resource-management] -->
<!-- Start Debugging [debug] -->
## Debugging
You can set up your SDK to emit debug logs for SDK requests and responses.
You can pass your own logger class directly into your SDK.
```python
from mistralai import Mistral
import logging
logging.basicConfig(level=logging.DEBUG)
s = Mistral(debug_logger=logging.getLogger("mistralai"))
```
You can also enable a default debug logger by setting an environment variable `MISTRAL_DEBUG` to true.
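For example, the variable can be exported from the shell (`export MISTRAL_DEBUG=true`) or set from Python before the client is constructed — a minimal sketch; the choice of `true` as the value follows the sentence above:

```python
import os

# Enable the SDK's default debug logger for this process.
# Set this before the Mistral client is created.
os.environ["MISTRAL_DEBUG"] = "true"

print(os.environ["MISTRAL_DEBUG"])  # → true
```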
<!-- End Debugging [debug] -->
<!-- Start IDE Support [idesupport] -->
## IDE Support
### PyCharm
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.
- [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/)
<!-- End IDE Support [idesupport] -->
<!-- Placeholder for Future Speakeasy SDK Sections -->
# Development
## Contributions
While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation.
We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.
| text/markdown | Mistral | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"eval-type-backport>=0.2.0",
"httpx>=0.28.1",
"invoke<3.0.0,>=2.2.0",
"opentelemetry-api<2.0.0,>=1.33.1",
"opentelemetry-exporter-otlp-proto-http<2.0.0,>=1.37.0",
"opentelemetry-sdk<2.0.0,>=1.33.1",
"pydantic>=2.10.3",
"python-dateutil>=2.8.2",
"pyyaml<7.0.0,>=6.0.2",
"typing-inspection>=0.4.0",
"authlib<2.0,>=1.5.2; extra == \"agents\"",
"griffe<2.0,>=1.7.3; extra == \"agents\"",
"mcp<2.0,>=1.0; extra == \"agents\"",
"google-auth>=2.27.0; extra == \"gcp\"",
"requests>=2.32.3; extra == \"gcp\"",
"websockets>=13.0; extra == \"realtime\""
] | [] | [] | [] | [
"Repository, https://github.com/mistralai/client-python.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T17:55:15.270292 | mistralai-1.12.4-py3-none-any.whl | 509,321 | c9/f9/98d825105c450b9c67c27026caa374112b7e466c18331601d02ca278a01b/mistralai-1.12.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 4494cbe3efe226fbd053cbe48c35494c | 7b69fcbc306436491ad3377fbdead527c9f3a0ce145ec029bf04c6308ff2cca6 | c9f998d825105c450b9c67c27026caa374112b7e466c18331601d02ca278a01b | null | [
"LICENSE"
] | 73,372 |
2.4 | aseview | 0.0.1 | A molecular viewer for ASE (Atomic Simulation Environment) data | # aseview
Molecular structure viewer for ASE (Atomic Simulation Environment).
**Status: pre-alpha**
## Installation
```bash
pip install aseview
```
For development:
```bash
git clone https://github.com/kangmg/aseview.git
cd aseview
pip install -e .
```
## CLI Usage
```bash
# Basic usage
aseview molecule.xyz
# View trajectory (all frames)
aseview trajectory.xyz -i :
# View specific frames
aseview trajectory.xyz -i 0:10 # frames 0-9
aseview trajectory.xyz -i -1 # last frame
aseview trajectory.xyz -i ::2 # every 2nd frame
# Specify file format
aseview POSCAR -f vasp
# Custom port
aseview molecule.xyz -p 9000
# Overlay multiple structures
aseview reactant.xyz product.xyz
# Overlay with colormap
aseview trajectory.xyz -v overlay --cmap viridis
# Normal mode visualization with ORCA Hessian
aseview molecule.xyz --hess orca.hess
# Save as HTML file
aseview molecule.xyz -o output.html
# Kill existing server on port
aseview molecule.xyz -k
# Help
aseview -h
```
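The `-i` selectors use Python slice syntax (the same index strings ASE accepts); as a plain-Python illustration of what each selector picks from a 10-frame trajectory:

```python
# Stand-in list for a 10-frame trajectory (frame indices 0..9).
frames = list(range(10))

print(frames[0:10])  # -i 0:10 -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(frames[-1])    # -i -1   -> 9 (last frame)
print(frames[::2])   # -i ::2  -> [0, 2, 4, 6, 8]
```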
## SSH Port Forwarding
When running on a remote server (e.g., HPC cluster, Docker container):
```bash
# 1. On remote server
aseview molecule.xyz -p 8080
# 2. On local machine (separate terminal)
ssh -L 8080:localhost:8080 user@remote-server
# 3. Open in local browser
# http://localhost:8080
```
For Docker with custom SSH port:
```bash
# Connect with port forwarding
ssh user@localhost -p 10011 -L 8080:localhost:8080
# Then run aseview inside container
aseview molecule.xyz -p 8080
```
## Jupyter Notebook
### Quick Start
```python
from ase.io import read
from aseview import MolecularViewer
atoms = read('molecule.xyz')
viewer = MolecularViewer(atoms)
viewer.show()
```
### With Trajectory
```python
from ase.io import read
from aseview import MolecularViewer
# Read all frames
trajectory = read('trajectory.xyz', index=':')
viewer = MolecularViewer(trajectory)
viewer.show()
```
### Overlay Multiple Structures
```python
from ase.io import read
from aseview import OverlayViewer
reactant = read('reactant.xyz')
product = read('product.xyz')
viewer = OverlayViewer([reactant, product])
viewer.show()
```
### Overlay with Colormap
```python
from ase.io import read
from aseview import OverlayViewer
trajectory = read('optimization.xyz', index=':')
viewer = OverlayViewer(
    trajectory,
    colorBy='Colormap',
    colormap='viridis'  # viridis, plasma, coolwarm, jet, rainbow, grayscale
)
viewer.show()
```
### Align Molecules (RMSD Minimization)
```python
from ase.io import read
from aseview import OverlayViewer
structures = [read(f'conf{i}.xyz') for i in range(5)]
viewer = OverlayViewer(
    structures,
    alignMolecules=True,  # Kabsch rotation + Hungarian reordering
    colorBy='Molecule'
)
viewer.show()
```
### Normal Mode Visualization
#### From ASE Vibrations
```python
from ase import Atoms
from ase.calculators.emt import EMT
from ase.optimize import BFGS
from ase.vibrations import Vibrations
from aseview import NormalViewer
# Create or load molecule
atoms = Atoms('H2O', positions=[[0, 0, 0], [0.96, 0, 0], [-0.24, 0.93, 0]])
atoms.calc = EMT()
# Optimize structure
opt = BFGS(atoms)
opt.run(fmax=0.01)
# Calculate vibrations
vib = Vibrations(atoms, name='vib')
vib.run()
vib.summary()
# Visualize normal modes
viewer = NormalViewer(atoms, vibrations=vib)
viewer.show()
```
#### From ORCA Hessian File
```python
from ase.io import read
from aseview import NormalViewer
atoms = read('molecule.xyz')
viewer = NormalViewer.from_orca(atoms, 'orca.hess')
viewer.show()
```
Features:
- Mode selector dropdown with frequencies
- Sinusoidal animation of atomic displacements
- Amplitude slider to control displacement magnitude
- Show Vectors toggle to display mode displacement arrows
- Imaginary frequencies (transition states) shown in red
### Using view_molecule Helper
```python
from aseview.jupyter import view_molecule
from ase.io import read
atoms = read('molecule.xyz')
view_molecule(atoms, viewer_type='molecular', height=600)
```
### Custom Settings
```python
from aseview import MolecularViewer
viewer = MolecularViewer(
    atoms,
    style='neon',             # default, cartoon, neon, glossy, metallic, rowan, grey
    bondThreshold=1.2,        # bond detection scale factor
    atomSize=0.5,
    showCell=False,
    backgroundColor='#000000'
)
viewer.show(width='100%', height=800)
```
### Save to HTML
```python
viewer.save_html('output.html')
```
## Viewer Types
| Viewer | Description |
|--------|-------------|
| MolecularViewer | Single structure or trajectory animation |
| NormalViewer | Normal mode vibration visualization |
| OverlayViewer | Compare multiple structures overlaid |
## JavaScript Module
Use aseview in any web page without Python:
```html
<div id="viewer" style="width:100%; height:500px;"></div>
<script src="https://raw.githack.com/kangmg/aseview_v2_dev/main/aseview/static/js/aseview.js"></script>
<script>
const viewer = new ASEView.MolecularViewer('#viewer');
viewer.setData({
    symbols: ['O', 'H', 'H'],
    positions: [
        [0.0, 0.0, 0.117],
        [0.0, 0.757, -0.469],
        [0.0, -0.757, -0.469]
    ]
});
</script>
```
See the [JavaScript Module documentation](https://kangmg.github.io/aseview_v2_dev/js-module/) for full API reference.
## Supported Formats
All formats supported by ASE: xyz, cif, pdb, POSCAR, extxyz, etc.
## License
MIT
| text/markdown | null | Mingi Kang <kangmg@kentech.ac.kr> | null | null | MIT License
Copyright (c) 2024 Mingi Kang
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"ase",
"ipywidgets",
"jupyter",
"typer>=0.9.0",
"rich>=13.0.0",
"pytest; extra == \"dev\"",
"black; extra == \"dev\"",
"flake8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kangmg/aseview",
"Repository, https://github.com/kangmg/aseview",
"Documentation, https://kangmg.github.io/aseview",
"Bug Tracker, https://github.com/kangmg/aseview/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:55:02.352969 | aseview-0.0.1.tar.gz | 3,806,695 | c4/d9/6d67016955c0968000ec7ce93f562c28eab4362e31f9f60aeec31ccecf86/aseview-0.0.1.tar.gz | source | sdist | null | false | 247aee53a1156819503d70837ff6e6c3 | ef4d562378a98e3be35444a84df0253574f35011bef885c3749170a59b1d9406 | c4d96d67016955c0968000ec7ce93f562c28eab4362e31f9f60aeec31ccecf86 | null | [
"LICENSE"
] | 241 |
2.4 | dragonfly-schema | 2.0.2 | Dragonfly Data-Model Objects | [](https://github.com/ladybug-tools/dragonfly-schema/actions)
[](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/)
# dragonfly-schema
Dragonfly Data-Model that generates documentation and OpenAPI specifications for
the DFJSON file schema.
## Installation
```console
pip install dragonfly-schema
```
## QuickStart
```python
import dragonfly_schema
```
## API Documentation
[Model Schema](https://ladybug-tools.github.io/dragonfly-schema/model.html)
[Energy Simulation Parameter Schema](https://ladybug-tools.github.io/honeybee-schema/simulation-parameter.html)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/dragonfly-schema
# or
git clone https://github.com/ladybug-tools/dragonfly-schema
```
2. Install dependencies:
```console
cd dragonfly-schema
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```console
python -m pytest tests/
```
4. Generate Documentation:
```console
python ./docs.py
```
5. Generate Sample Files:
```console
python ./scripts/export_samples.py
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | MIT | null | [
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/dragonfly-schema | null | null | [] | [] | [] | [
"honeybee-schema==2.0.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T17:54:41.334937 | dragonfly_schema-2.0.2.tar.gz | 18,158 | 22/2f/8f704428f78c1f62642bbbaf9473d6aa2b4a604aff23140c0463fd6d3d8f/dragonfly_schema-2.0.2.tar.gz | source | sdist | null | false | bbd86d1a46da4a8dd418c7ad9a1867af | c59f8e79c98e37ccb9803dfd59c226bc742c466b5e1f13075ca43dd927266a86 | 222f8f704428f78c1f62642bbbaf9473d6aa2b4a604aff23140c0463fd6d3d8f | null | [
"LICENSE"
] | 347 |
2.4 | datadock | 0.1.3 | Datadock is a PySpark-based data interoperability library. It automatically detects schemas from heterogeneous files (CSV, JSON, Parquet), groups them by structural similarity, and performs standardized batch reads. Designed for pipelines handling non-uniform large-scale data, enabling robust integration and reuse in distributed environments. | # Datadock
**Datadock** is a Python library built on top of PySpark, designed to simplify **data interoperability** between files of different formats and schemas in modern data engineering pipelines.
It automatically detects schemas from CSV, JSON and Parquet files, groups structurally similar files, and allows standardized reading of all grouped files into a single Spark DataFrame — even in highly heterogeneous datasets.
## ✨ Key Features
- 🚀 **Automatic parsing** of multiple file formats: `.csv`, `.json`, `.parquet`
- 🧠 **Schema-based file grouping** by structural similarity
- 📊 **Auto-selection of dominant schemas**
- 🛠️ **Unified read** across similar files into a single PySpark DataFrame
- 🔍 **Schema insight** for diagnostics and inspection
## 🔧 Installation
```bash
pip install datadock
```
## 🗂️ Expected Input Structure
Place your data files (CSV, JSON or Parquet) inside a single folder. The library will automatically detect supported files and organize them by schema similarity.
```bash
/data/input/
├── sales_2020.csv
├── sales_2021.csv
├── products.json
├── archive.parquet
├── log.parquet
```
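As a rough illustration of the grouping idea (a toy sketch, not datadock's actual similarity algorithm): files that share the same column set land in the same schema group, and each group gets a sequential `schema_id`.

```python
from collections import defaultdict

def group_by_columns(file_columns):
    """Toy grouping: files with an identical column set form one group.
    datadock's real structural-similarity grouping is more tolerant."""
    buckets = defaultdict(list)
    for name, cols in file_columns.items():
        buckets[tuple(sorted(cols))].append(name)
    # Assign sequential schema IDs to each bucket
    return {i: files for i, files in enumerate(buckets.values(), start=1)}

groups = group_by_columns({
    "sales_2020.csv": ("date", "amount"),
    "sales_2021.csv": ("amount", "date"),  # same columns, different order
    "products.json": ("sku", "name"),
})
```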
## 🧪 Usage Example
```python
from datadock import scan_schema, get_schema_info, read_data
path = "/path/to/your/data"
# Logs schema groups detected
scan_schema(path)
# Retrieves schema metadata
info = get_schema_info(path)
print(info)
# Loads all files from schema group 1
df = read_data(path, schema_id=1, logs=True)
df.show()
```
## 📌 Public API
### `scan_schema`
Logs the identified schema groups found in the specified folder.
### `get_schema_info`
Returns a list of dictionaries containing:
- `schema_id`: ID of the schema group
- `file_count`: number of files in the group
- `column_count`: number of columns in the schema
- `files`: list of file names in the group
### `read_data`
Reads and merges all files that share the same schema.
If `schema_id` is not specified, the group with the most columns will be selected.
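The default pick when `schema_id` is omitted can be reproduced explicitly from `get_schema_info` output (a sketch; `info` below is mocked metadata in the documented shape rather than a real scan):

```python
# Mocked get_schema_info() output, in the documented shape
info = [
    {"schema_id": 1, "file_count": 2, "column_count": 2,
     "files": ["sales_2020.csv", "sales_2021.csv"]},
    {"schema_id": 2, "file_count": 1, "column_count": 5,
     "files": ["archive.parquet"]},
]

# Documented default: the group with the most columns is selected
default = max(info, key=lambda g: g["column_count"])
# read_data(path) with no schema_id would load group 2 here
```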
## ✅ Requirements
- Python 3.10+
- PySpark
## 📚 Motivation
In real-world data engineering workflows, it's common to deal with files that represent the same data domain but have slight structural variations — such as missing columns, different orders, or evolving schemas.
**Datadock** automates the process of grouping, inspecting, and reading these files reliably, allowing you to build pipelines that are schema-aware, scalable, and format-agnostic.
## 📄 License
This project is licensed under the **MIT License**.
| text/markdown | Otavio Oliveira | datadock.sup@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"loguru<0.8.0,>=0.7.3",
"pyarrow<21.0.0,>=20.0.0",
"pyspark<4.0.0,>=3.5.5"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.11.14 Linux/6.11.0-1018-azure | 2026-02-20T17:54:40.193802 | datadock-0.1.3-py3-none-any.whl | 8,388 | 24/b8/36c2e6fe980afdea64256f5d5c229f55b1e2495551bc970310c492eb1c6d/datadock-0.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | b3f7c0f4f1fcfa49f20534ba6087cd42 | 592f36faa2256b0cb2717d37483a3a75882d5428ff085d0c9c29fb1651fee81f | 24b836c2e6fe980afdea64256f5d5c229f55b1e2495551bc970310c492eb1c6d | null | [
"LICENSE"
] | 218 |
2.4 | lightpath | 1.0.9 | {{ cookiecutter.description }} | LCLS Lightpath
==============
.. image:: https://travis-ci.org/pcdshub/lightpath.svg?branch=master
:target: https://travis-ci.org/pcdshub/lightpath
Python module for control of LCLS beamlines
By abstracting individual devices into larger collections of paths, operators
can quickly guide beam to experimental end stations. Instead of dealing with
the individual interfaces for each device, devices are summarized in states.
This allows operators to quickly view and manipulate large sections of the
beamline when the goal is to simply handle beam delivery.
Conda
++++++
Install the most recent tagged build:
.. code::
conda install lightpath -c pcds-tag -c conda-forge
Install the most recent development build:
.. code::
conda install lightpath -c pcds-dev -c conda-forge
| text/x-rst | SLAC National Accelerator Laboratory | null | null | null | Copyright (c) 2023, The Board of Trustees of the Leland Stanford Junior
University, through SLAC National Accelerator Laboratory (subject to receipt
of any required approvals from the U.S. Dept. of Energy). All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
(1) Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
(2) Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
(3) Neither the name of the Leland Stanford Junior University, SLAC National
Accelerator Laboratory, U.S. Dept. of Energy nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER, THE UNITED STATES GOVERNMENT,
OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
OF SUCH DAMAGE.
You are under no obligation whatsoever to provide any bug fixes, patches, or
upgrades to the features, functionality or performance of the source code
("Enhancements") to anyone; however, if you choose to make your Enhancements
available either publicly, or directly to SLAC National Accelerator Laboratory,
without imposing a separate written license agreement for such Enhancements,
then you hereby grant the following license: a non-exclusive, royalty-free
perpetual license to install, use, modify, prepare derivative works, incorporate
into other computer software, distribute, and sublicense such Enhancements or
derivative works thereof, in binary and source code form.
| null | [
"Development Status :: 2 - Pre-Alpha",
"Natural Language :: English",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"coloredlogs",
"happi>=1.6.0",
"numpy",
"ophyd",
"networkx",
"prettytable",
"pydm; extra == \"gui\"",
"PyQt5; extra == \"gui\"",
"qtawesome; extra == \"gui\"",
"qtpy; extra == \"gui\"",
"typhos>=1.0.0; extra == \"gui\"",
"docs-versions-menu; extra == \"doc\"",
"sphinx; extra == \"doc\"",
"sphinx_rtd_theme>=1.2.0; extra == \"doc\"",
"sphinx-argparse; extra == \"doc\"",
"sphinxcontrib-jquery; extra == \"doc\"",
"pytest; extra == \"test\"",
"ipython; extra == \"test\"",
"matplotlib; extra == \"test\"",
"flake8; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-qt; extra == \"test\"",
"pydm; extra == \"test\"",
"PyQt5; extra == \"test\"",
"qtawesome; extra == \"test\"",
"qtpy; extra == \"test\"",
"typhos>=1.0.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T17:54:25.037028 | lightpath-1.0.9.tar.gz | 190,037 | 16/9b/192b444a68b7df0df9be5fe1d85cb68a213301e0425c7794f95bacc973da/lightpath-1.0.9.tar.gz | source | sdist | null | false | 3d5e3159607b51320cb85d34694ccb15 | 53f496842ab4645a21a786767ee16831ba21c4d372a87d04fd6945551c01d235 | 169b192b444a68b7df0df9be5fe1d85cb68a213301e0425c7794f95bacc973da | null | [
"LICENSE.md",
"AUTHORS.rst"
] | 233 |
2.4 | honeybee-core | 1.64.21 | A library to create 3D building geometry for various types of environmental simulation. | 
[](https://github.com/ladybug-tools/honeybee-core/actions)
[](https://www.python.org/downloads/release/python-3120/) [](https://www.python.org/downloads/release/python-3100/) [](https://www.python.org/downloads/release/python-370/) [](https://www.python.org/downloads/release/python-270/) [](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# honeybee-core
Honeybee is a collection of Python libraries to create representations of buildings
following [honeybee-schema](https://github.com/ladybug-tools/honeybee-schema/wiki).
This package is the core library that provides honeybee's common functionalities.
To extend these functionalities you should install available Honeybee extensions or write
your own.
Here are a number of frequently used extensions for Honeybee:
- [honeybee-radiance](https://github.com/ladybug-tools/honeybee-radiance): Adds daylight simulation to Honeybee.
- [honeybee-energy](https://github.com/ladybug-tools/honeybee-energy): Adds Energy simulation to Honeybee.
- [honeybee-display](https://github.com/ladybug-tools/honeybee-display): Adds VTK visualization to Honeybee.
# Installation
To install the core library use:
`pip install -U honeybee-core`
To check that the Honeybee command line interface is installed correctly, use `honeybee viz` and you
should get a `viiiiiiiiiiiiizzzzzzzzz!` back in response! :bee:
# [API Documentation](https://www.ladybug.tools/honeybee-core/docs/)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/honeybee-core.git
# or
git clone https://github.com/ladybug-tools/honeybee-core.git
```
2. Install dependencies:
```console
cd honeybee-core
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```console
python -m pytest ./tests
```
4. Generate Documentation:
```console
sphinx-apidoc -f -e -d 4 -o ./docs ./honeybee
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/honeybee-core | null | null | [] | [] | [] | [
"ladybug-core==0.44.36",
"ladybug-geometry-polyskel==1.7.40",
"honeybee-schema==2.0.3; python_version >= \"3.7\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T17:53:59.119373 | honeybee_core-1.64.21.tar.gz | 172,200 | 77/92/372f032e35f16a3968132ee643ba9a23db9cdd90cfade4807c74740c0113/honeybee_core-1.64.21.tar.gz | source | sdist | null | false | 28ee165bbb528551aa329ef6c06d10bd | 06a9ca4b52551bf5fc29d6914a7423f339d8f20a146c2ee9214a2bd9724b1ca0 | 7792372f032e35f16a3968132ee643ba9a23db9cdd90cfade4807c74740c0113 | null | [
"LICENSE"
] | 1,195 |
2.4 | easypost | 10.5.0 | EasyPost Shipping API Client Library for Python | # EasyPost Python Client Library
[](https://github.com/EasyPost/easypost-python/actions?query=workflow%3ACI)
[](https://coveralls.io/github/EasyPost/easypost-python)
[](https://badge.fury.io/py/easypost)
EasyPost, the simple shipping solution. You can sign up for an account at <https://easypost.com>.
## Install
The library is tested against Python 3 and should be compatible with PyPy3.
```bash
pip install easypost
```
```python
# Import the EasyPost library:
import easypost
```
## Usage
A simple create & buy shipment example:
```python
import os
import easypost
client = easypost.EasyPostClient(os.getenv('EASYPOST_API_KEY'))
shipment = client.shipment.create(
from_address = {
"name": "EasyPost",
"street1": "118 2nd Street",
"street2": "4th Floor",
"city": "San Francisco",
"state": "CA",
"zip": "94105",
"country": "US",
"phone": "415-456-7890",
},
to_address = {
"name": "Dr. Steve Brule",
"street1": "179 N Harbor Dr",
"city": "Redondo Beach",
"state": "CA",
"zip": "90277",
"country": "US",
"phone": "310-808-5243",
},
parcel = {
"length": 10.2,
"width": 7.8,
"height": 4.3,
"weight": 21.2,
},
)
bought_shipment = client.shipment.buy(shipment.id, rate=shipment.lowest_rate())
print(bought_shipment)
```
### HTTP Hooks
Users can subscribe to HTTP requests and responses via the `RequestHook` and `ResponseHook` objects. To do so, pass a function to the `subscribe_to_request_hook` or `subscribe_to_response_hook` methods of an `EasyPostClient` object:
```python
def custom_function(**kwargs):
"""Pass your code here, data about the request/response is contained within `kwargs`."""
print(f"Received a request with the status code of: {kwargs.get('http_status')}")
client = easypost.EasyPostClient(os.getenv('EASYPOST_API_KEY'))
client.subscribe_to_response_hook(custom_function)
# Make your API calls here, your custom_function will trigger once a response is received
```
You can also unsubscribe your functions in a similar manner by using the `unsubscribe_from_request_hook` and `unsubscribe_from_response_hook` methods of a client object.
## Documentation
API documentation can be found at: <https://docs.easypost.com>.
Library documentation can be found on the web at: <https://easypost.github.io/easypost-python/> or by building them locally via the `just docs` command.
Upgrading major versions of this project? Refer to the [Upgrade Guide](UPGRADE_GUIDE.md).
## Support
New features and bug fixes are released on the latest major release of this library. If you are on an older major release of this library, we recommend upgrading to the most recent release to take advantage of new features, bug fixes, and security patches. Older versions of this library will continue to work and be available as long as the API version they are tied to remains active; however, they will not receive updates and are considered EOL.
For additional support, see our [org-wide support policy](https://github.com/EasyPost/.github/blob/main/SUPPORT.md).
## Development
```bash
# Install dependencies
just install
# Lint project
just lint
just lint-fix
# Run tests
EASYPOST_TEST_API_KEY=123... EASYPOST_PROD_API_KEY=123... just test
EASYPOST_TEST_API_KEY=123... EASYPOST_PROD_API_KEY=123... just coverage
# Run security analysis
just scan
# Generate library documentation
just docs
# Update submodules
just update-examples-submodule
```
### Testing
The test suite in this project was specifically built to produce consistent results on every run, regardless of when they run or who is running them. This project uses [VCR](https://github.com/kevin1024/vcrpy) to record and replay HTTP requests and responses via "cassettes". When the suite is run, the HTTP requests and responses for each test function will be saved to a cassette if they do not exist already and replayed from this saved file if they do, which saves the need to make live API calls on every test run. If you receive errors about a cassette expiring, delete and re-record the cassette to ensure the data is up-to-date.
**Sensitive Data:** We've made every attempt to include scrubbers for sensitive data when recording cassettes so that PII or sensitive info does not persist in version control; however, please ensure when recording or re-recording cassettes that prior to committing your changes, no PII or sensitive information gets persisted by inspecting the cassette.
**Making Changes:** If you make an addition to this project, the request/response will get recorded automatically for you if the `@pytest.mark.vcr()` decorator is included on the test function. When making changes to this project, you'll need to re-record the associated cassette to force a new live API call for that test which will then record the request/response used on the next run.
**Test Data:** The test suite has been populated with various helpful fixtures that are available for use, each completely independent from a particular user **with the exception of the USPS carrier account ID** (see [Unit Test API Keys](#unit-test-api-keys) for more information) which has a fallback value of our internal testing user's ID. Some fixtures use hard-coded dates that may need to be incremented if cassettes get re-recorded (such as reports or pickups).
#### Unit Test API Keys
The following are required on every test run:
- `EASYPOST_TEST_API_KEY`
- `EASYPOST_PROD_API_KEY`
Some tests may require an EasyPost user with a particular set of enabled features such as a `Partner` user when creating referrals. We have attempted to call out these functions in their respective docstrings. The following are required when you need to re-record cassettes for applicable tests:
- `USPS_CARRIER_ACCOUNT_ID` (eg: one-call buying a shipment for non-EasyPost employees)
- `PARTNER_USER_PROD_API_KEY` (eg: creating a referral user)
- `REFERRAL_CUSTOMER_PROD_API_KEY` (eg: adding a credit card to a referral user)
#### Google Cloud SDK
To run the test suite with the Google Cloud SDK (`urlfetch` instead of the `requests` library), you'll need the following:
1. Install the appengine Python package to this virtual environment: `venv/bin/pip install appengine-python-standard`
1. Install the Google Cloud SDK
- [Direct Download](https://cloud.google.com/sdk/docs/install)
- [Homebrew](https://formulae.brew.sh/cask/google-cloud-sdk)
1. Point the `PYTHONPATH` environment variable to the path of the newly installed `google-cloud-sdk` directory. For Homebrew, this is `"$(brew --prefix)/share/google-cloud-sdk"`
1. Run the test suite with the commands listed in this README
| text/markdown | null | EasyPost <support@easypost.com> | null | null | The MIT License
Copyright (c) 2013 EasyPost (Simpler Postage, Inc)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.4.3",
"bandit==1.8.*; extra == \"dev\"",
"build==1.2.*; extra == \"dev\"",
"mypy==1.15.*; extra == \"dev\"",
"pdoc==15.*; extra == \"dev\"",
"pytest==8.*; extra == \"dev\"",
"pytest-cov==6.*; extra == \"dev\"",
"pytest-vcr==1.*; extra == \"dev\"",
"ruff==0.14.*; extra == \"dev\"",
"vcrpy==7.*; extra == \"dev\""
] | [] | [] | [] | [
"Docs, https://docs.easypost.com",
"Tracker, https://github.com/EasyPost/easypost-python/issues",
"Source, https://github.com/EasyPost/easypost-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:53:28.339098 | easypost-10.5.0.tar.gz | 122,458 | 6f/18/db96f1259291cbc60e42ee1336e154c655111e1e14a8eae560118b58271d/easypost-10.5.0.tar.gz | source | sdist | null | false | 0a320bf80e8f712b719cf4ea6d5cb2eb | 47026b57c6a9e2a6c1dc52624bdfce33ce85d827eebc3b4cfa49f523297a2302 | 6f18db96f1259291cbc60e42ee1336e154c655111e1e14a8eae560118b58271d | null | [
"LICENSE"
] | 552 |
2.4 | caspian-utils | 0.1.0 | A utility package for Caspian projects | # Caspian Utils (`casp`)
Python is finally reactive.
**Caspian Utils** is the core utility package behind the Caspian framework ecosystem. It provides the building blocks used by Caspian projects: FastAPI-first plumbing, HTML/template processing, reactive integration patterns, RPC helpers, and common DX utilities.
- **PyPI package name:** `casp`
- **Framework/brand name:** Caspian (Caspian Utils is the shared library)
---
## Why Caspian Utils?
Modern web stacks often require a separate Node backend, bundlers, and a growing set of conventions. Caspian projects remove that complexity while keeping the power:
- Keep the **DOM as the source of truth**
- Write **async Python** for server logic
- Call server functions directly from the UI via **RPC**
- Ship without a bundler or build pipeline
Caspian Utils exists to make those patterns reusable across apps and tooling.
---
## Key Capabilities
### FastAPI engine (native async)
Run application logic in **async Python** and leverage the FastAPI ecosystem (middleware, dependency injection, validation, Starlette sessions, etc.).
### Reactive DOM (no build step)
Modify HTML and see changes instantly. No Webpack/Vite required. The DOM is the runtime surface.
### File-system routing
Create pages like:
```text
app/users/index.py
app/users/index.html
```
Caspian projects mount routes automatically based on your folder structure.
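A minimal sketch of that convention (illustrative only, not the framework's actual router): strip the `app/` prefix and the `index.*` filename, and what remains is the route.

```python
from pathlib import PurePosixPath

def route_for(page: str, app_root: str = "app") -> str:
    """Illustrative: derive a URL route from a page path under app/."""
    rel = PurePosixPath(page).relative_to(app_root)
    parts = rel.parts[:-1]  # drop the index.py / index.html filename
    return "/" + "/".join(parts)
```

So `app/users/index.py` maps to `/users`, and `app/index.html` maps to `/`.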
### Type-safe RPC (no API routes)
Call Python functions directly from the frontend without manually creating REST endpoints.
### Prisma ORM integration (optional)
Define your schema once and use an auto-generated, type-safe client in Python (no SQL boilerplate).
---
## Installation
```bash
pip install caspian-utils
```
**Python:** `>=3.11`
---
## Quick Start (Conceptual)
A typical Caspian-style app is split into:
- `app/**/index.html` for UI
- `app/**/index.py` (or actions modules) for backend logic / RPC
- optional `app/**/layout.html` for nested layouts
### 1) Reactive UI in HTML
```html
<!-- app/todos/index.html -->
<!-- 1. Import Python Components -->
<!-- @import { Badge } from ../components/ui -->
<div class="flex gap-2 mb-4">
<Badge variant="default">Tasks: {todos.length}</Badge>
</div>
<!-- 2. Reactive loop -->
<ul>
<template pp-for="todo in todos">
<li key="{todo.id}" class="p-2 border-b">{todo.title}</li>
</template>
</ul>
<script>
// 3. State initialized by Python backend automatically
const [todos, setTodos] = pp.state([[todos]]);
</script>
```
### 2) Direct async RPC in Python
```py
# actions.py
from casp.rpc import rpc
from src.lib.prisma.db import prisma
@rpc(require_auth=True)
async def like_post(post_id: str):
# 1. Direct DB access (async)
post = await prisma.post.update(
where={"id": post_id},
data={"likes": {"increment": 1}},
)
# 2. Return data directly to frontend
return post.likes
```
### 3) Call RPC from the frontend
```html
<button onclick="likePost()">Like Post</button>
<script>
async function likePost() {
const newCount = await pp.rpc("like_post", { post_id: "123" });
setLikes(newCount);
}
</script>
```
---
## What’s Included
Caspian Utils ships with integrations commonly used in Caspian projects:
- `fastapi`, `uvicorn`
- `jinja2` (templating)
- `beautifulsoup4` (HTML processing)
- `python-dotenv` (env loading)
- `slowapi` (rate limiting)
- `starsessions` (session management)
- `python-multipart` (uploads)
- `httpx` (HTTP client)
- `tailwind-merge` (class merging utility)
- `werkzeug` (helpers; used in some tooling)
- ID utilities: `cuid2`, `nanoid`, `python-ulid`
(Exact dependencies may vary by version; see `setup.py` in the repository.)
---
## Packaging Notes
If you are publishing to PyPI under `caspian-utils`, ensure your `setup.py` matches:
```py
setup(
name="caspian-utils",
# ...
)
```
---
## Repository
[Repository](https://github.com/TheSteelNinjaCode/casp)
---
## License
MIT
---
## Author
Jefferson Abraham
| text/markdown | Jefferson Abraham | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/TheSteelNinjaCode/caspian_utils | null | >=3.11 | [] | [] | [] | [
"fastapi~=0.110",
"uvicorn~=0.27",
"python-dotenv~=1.0",
"jinja2~=3.1",
"beautifulsoup4~=4.12",
"tailwind-merge~=0.1",
"slowapi~=0.1",
"python-multipart~=0.0.9",
"starsessions~=1.3",
"httpx~=0.27",
"werkzeug~=3.0",
"cuid2~=2.0",
"nanoid~=2.0",
"python-ulid~=2.7"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T17:51:52.792489 | caspian_utils-0.1.0.tar.gz | 44,874 | 6d/3e/14de49f81878dac868cbaa5d012dee00c2073cc34572ecb96a7d162d5caf/caspian_utils-0.1.0.tar.gz | source | sdist | null | false | 49beda6c3cbabe759cc874267fb34d6f | fa726bcd04e7d0baba9c7ad98bb745d50046d4941f14fc1522a679a98a9c7da5 | 6d3e14de49f81878dac868cbaa5d012dee00c2073cc34572ecb96a7d162d5caf | null | [] | 215 |
2.4 | django-moo | 0.47.2 | A game server for hosting text-based online MOO-like games. | # DjangoMOO
> "LambdaMOO on Django"





DjangoMOO is a game server for hosting text-based online MOO-like games.
## Quick Start
Checkout the project and use Docker Compose to run the necessary components:
git clone https://gitlab.com/bubblehouse/django-moo
cd django-moo
docker compose up
Run `migrate`, `collectstatic`, and bootstrap the initial database with some sample objects and users:
docker compose run webapp manage.py migrate
docker compose run webapp manage.py collectstatic
docker compose run webapp manage.py moo_init
docker compose run webapp manage.py createsuperuser --username phil
docker compose run webapp manage.py moo_enableuser --wizard phil Wizard
Now you should be able to connect to https://localhost/ and log in with the superuser you just created, as described below.
## Login via Web
To make things easier for folks without SSH access or who are behind firewalls, the server interface is exposed through [webssh](https://github.com/huashengdun/webssh).

This client is only able to open connections to the local SSH server.
### Admin Interface
As a secondary way to view the contents of a running server, a Django Admin interface is available at `/admin`. It's really a last resort for most things, but it's still the best way to modify verb code in a running server:

## Login via SSH
Of course, it's also possible (perhaps even preferred) to connect directly over SSH:

It's also possible to associate an SSH Key with your user in the Django Admin so as to skip the password prompt.
When you're done exploring, you can hit `Ctrl-D` to exit.
| text/markdown | Phil Christensen | phil@bubblehouse.org | null | null | AGPL | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11"
] | [] | null | null | <3.12,>=3.11 | [] | [] | [] | [
"asyncssh[bcrypt]<3.0.0,>=2.14.0",
"celery[redis]<6.0.0,>=5.4.0",
"django<6,>=5",
"django-ace<2.0.0,>=1.32.4",
"django-celery-beat<3.0.0,>=2.6.0",
"django-celery-results<3.0.0,>=2.5.1",
"django-simplesshkey<3.0.0,>=2.1.0",
"more-itertools<11.0.0,>=10.7.0",
"packaging<26.0,>=25.0",
"poetry-plugin-export<2.0.0,>=1.8.0",
"prompt-toolkit<4.0.0,>=3.0.39",
"psycopg2-binary<3.0.0,>=2.9.7",
"ptpython<4.0.0,>=3.0.23",
"redis<6.0.0,>=5.2.1",
"restrictedpython<8.0,>=7.0",
"rich<14.0.0,>=13.5.3",
"uwsgi<3.0.0,>=2.0.22",
"watchdog<5.0.0,>=4.0.0",
"webssh<2.0.0,>=1.6.2"
] | [] | [] | [] | [
"Repository, https://gitlab.com/bubblehouse/django-moo"
] | twine/6.2.0 CPython/3.11.12 | 2026-02-20T17:51:51.924068 | django_moo-0.47.2.tar.gz | 168,222 | 5d/55/2d91f8d7f19267efad4ffe3bb02a8ce642070e08cce0f35414ed9b001e54/django_moo-0.47.2.tar.gz | source | sdist | null | false | 23bcdfdf38359141040e7ea05504f9c5 | 1a1f0ee0299e0564aeead693886eda2ab88f47b5f35a3437fc6d880452e7cb17 | 5d552d91f8d7f19267efad4ffe3bb02a8ce642070e08cce0f35414ed9b001e54 | null | [
"LICENSE"
] | 221 |
2.4 | cyberwave-cli | 0.11.23 | The official command-line interface for Cyberwave | # Cyberwave CLI
The official command-line interface for Cyberwave. Authenticate and bootstrap robotics projects from your terminal.
## Installation
### From PyPI (pip)
```bash
pip install cyberwave-cli
```
### From APT (Debian/Ubuntu)
```bash
# You may need to install curl and/or gpg first, if you're on a very minimal host:
sudo apt update && sudo apt install curl gpg -y
# Add Cyberwave repository (one-time setup)
curl -fsSL "https://packages.buildkite.com/cyberwave/cyberwave-cli/gpgkey" | sudo gpg --dearmor -o /etc/apt/keyrings/cyberwave_cyberwave-cli-archive-keyring.gpg
# Configure the source
echo -e "deb [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-cli-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-cli/any/ any main\ndeb-src [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-cli-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-cli/any/ any main" | sudo tee /etc/apt/sources.list.d/buildkite-cyberwave-cyberwave-cli.list > /dev/null
# Install
sudo apt update && sudo apt install cyberwave-cli
```
### From Source
```bash
git clone https://github.com/cyberwave-os/cyberwave-cli
cd cyberwave-cli
pip install -e .
```
## Quick Start - Edge
### 1. SSH into your edge device
```bash
ssh your-user@your-ip
```
### 2. Set up your Edge device
Once you are on your edge device, set it up with:
```bash
cyberwave edge install
```
This command guides you through the first-time setup of your edge device.
## Commands
| Command | Description |
| ------------ | ---------------------------------------- |
| `login` | Authenticate with Cyberwave |
| `logout` | Remove stored credentials |
| `config-dir` | Print the active configuration directory |
| `core` | Visualize the core commands |
### `cyberwave login`
Authenticates with Cyberwave using your email and password.
```bash
# Interactive login (prompts for credentials)
cyberwave login
# Non-interactive login
cyberwave login --email you@example.com --password yourpassword
```
**Options:**
- `-e, --email`: Email address
- `-p, --password`: Password (will prompt if not provided)
### `cyberwave config-dir`
Prints the resolved configuration directory path. Useful in scripts to locate credentials and config files without hardcoding paths.
```bash
cyberwave config-dir
# /etc/cyberwave
# Use in a script
CONFIG_DIR=$(cyberwave config-dir)
cat "$CONFIG_DIR/credentials.json"
```
The CLI resolves the directory with the following priority:
1. `CYBERWAVE_EDGE_CONFIG_DIR` environment variable (explicit override)
2. `/etc/cyberwave` if writable or creatable (system-wide, preferred)
3. `~/.cyberwave` as a fallback for non-root users
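The resolution order above can be sketched in a few lines (an illustrative reimplementation, not the CLI's actual code; `resolve_config_dir` is a hypothetical helper name):

```python
import os
from pathlib import Path

def resolve_config_dir() -> Path:
    """Mirror the documented priority: env override, /etc/cyberwave, ~/.cyberwave."""
    override = os.environ.get("CYBERWAVE_EDGE_CONFIG_DIR")
    if override:
        return Path(override)
    system_dir = Path("/etc/cyberwave")
    try:
        system_dir.mkdir(parents=True, exist_ok=True)  # writable or creatable?
        if os.access(system_dir, os.W_OK):
            return system_dir
    except PermissionError:
        pass
    return Path.home() / ".cyberwave"

print(resolve_config_dir())
```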
## `cyberwave edge`
Manage the edge node service lifecycle, configuration, and monitoring.
| Subcommand | Description |
| ---------------- | -------------------------------------------------------- |
| `install` | Install cyberwave-edge-core and register systemd service |
| `uninstall` | Stop and remove the systemd service |
| `start` | Start the edge node |
| `stop` | Stop the edge node |
| `restart` | Restart the edge node (systemd or process) |
| `status` | Check if the edge node is running |
| `pull` | Pull edge configuration from backend |
| `whoami` | Show device fingerprint and info |
| `health` | Check edge health status via MQTT |
| `remote-status` | Check edge status from twin metadata (heartbeat) |
| `logs` | Show edge node logs |
| `install-deps` | Install edge ML dependencies |
| `sync-workflows` | Trigger workflow sync on the edge node |
| `list-models` | List model bindings loaded on the edge node |
### `cyberwave edge install`
Installs the `cyberwave-edge-core` package (via apt-get on Debian/Ubuntu) and creates a systemd service so it starts on boot. Guides you through workspace and environment selection.
```bash
sudo cyberwave edge install
sudo cyberwave edge install -y # skip prompts
```
### `cyberwave edge uninstall`
Stops the systemd service, removes the unit file, and optionally uninstalls the package.
```bash
sudo cyberwave edge uninstall
```
### `cyberwave edge start / stop / restart`
```bash
cyberwave edge start # background
cyberwave edge start -f # foreground
cyberwave edge start --env-file ./my/.env # custom config
cyberwave edge stop
sudo cyberwave edge restart # systemd
cyberwave edge restart --env-file .env # process mode
```
### `cyberwave edge status`
Checks whether the edge node process is running.
```bash
cyberwave edge status
```
### `cyberwave edge pull`
Pulls edge configuration from the backend using the discovery API (or legacy twin/environment lookup).
```bash
cyberwave edge pull # auto-discover via fingerprint
cyberwave edge pull --twin-uuid <UUID> # single twin (legacy)
cyberwave edge pull --environment-uuid <UUID> # all twins in environment (legacy)
cyberwave edge pull -d ./my-edge # custom output directory
```
### `cyberwave edge whoami`
Displays the unique hardware fingerprint for this device, used to identify the edge when connecting to twins.
```bash
cyberwave edge whoami
```
### `cyberwave edge health`
Queries real-time health status via MQTT (stream states, FPS, WebRTC connections).
```bash
cyberwave edge health -t <TWIN_UUID>
cyberwave edge health -t <TWIN_UUID> --watch # continuous
cyberwave edge health -t <TWIN_UUID> --timeout 10
```
### `cyberwave edge remote-status`
Checks the last heartbeat stored in twin metadata to determine online/offline status without MQTT.
```bash
cyberwave edge remote-status -t <TWIN_UUID>
```
### `cyberwave edge logs`
```bash
cyberwave edge logs # last 50 lines
cyberwave edge logs -n 100 # last 100 lines
cyberwave edge logs -f # follow (tail -f)
```
### `cyberwave edge install-deps`
Installs common ML runtimes needed by edge plugins.
```bash
cyberwave edge install-deps # ultralytics + opencv
cyberwave edge install-deps -r onnx -r tflite # specific runtimes
```
### `cyberwave edge sync-workflows / list-models`
```bash
cyberwave edge sync-workflows --twin-uuid <UUID> # re-sync model bindings
cyberwave edge list-models --twin-uuid <UUID> # show loaded models
```
## Configuration
Configuration is stored in a single directory shared by the CLI and the edge-core service. The directory is resolved as follows:
1. **`CYBERWAVE_EDGE_CONFIG_DIR`** env var — explicit override
2. **`/etc/cyberwave`** — system-wide (preferred, requires root or write access)
3. **`~/.cyberwave`** — per-user fallback for non-root environments
Run `cyberwave config-dir` to see which directory is active.
**Files inside the config directory:**
- `credentials.json` — API token and workspace info (permissions `600`)
- `environment.json` — selected workspace, environment, and twin bindings
- `fingerprint.json` — unique edge device identifier
Other environment variables:
- `CYBERWAVE_API_URL`: Override the API URL (default: `https://api.cyberwave.com`)
- `CYBERWAVE_BASE_URL`: SDK-compatible alias for API URL
- `CYBERWAVE_ENVIRONMENT`: Environment name (for example `dev`, defaults to `production`)
When credentials are written, the CLI also persists these `CYBERWAVE_*` values into
`credentials.json` so `cyberwave-edge-core` can reuse them in service mode.
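Writing a secrets file with `600` permissions before any sensitive bytes reach disk can be done as below (a generic sketch, not the CLI's code; `write_credentials` is a hypothetical helper):

```python
import json
import os
import stat
from pathlib import Path

def write_credentials(path: Path, payload: dict) -> None:
    """Create the file owner-read/write only, then write the JSON payload."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f, indent=2)
    os.chmod(path, 0o600)  # also fix up the mode of a pre-existing file

creds = Path("/tmp/credentials.json")
write_credentials(creds, {"api_token": "example-token", "workspace": "demo"})
print(oct(stat.S_IMODE(creds.stat().st_mode)))  # 0o600
```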
## Building for Distribution
### PyInstaller (standalone binary)
```bash
pip install -e ".[build]"
pyinstaller --onefile --name cyberwave-cli cyberwave_cli/main.py
```
### Debian Package
See `debian/` directory for packaging scripts.
## Support
- **Documentation**: [docs.cyberwave.com](https://docs.cyberwave.com)
- **Issues**: [GitHub Issues](https://github.com/cyberwave-os/cyberwave-cli/issues)
- **Community**: [Discord](https://discord.gg/dfGhNrawyF)
| text/markdown | null | Cyberwave <info@cyberwave.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"click>=8.1.0",
"cyberwave>=0.3.14",
"httpx>=0.25.0",
"rich>=13.0.0",
"pyinstaller>=6.0.0; python_version < \"3.15\" and extra == \"build\"",
"mypy>=1.5.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://cyberwave.com",
"Documentation, https://docs.cyberwave.com",
"Repository, https://github.com/cyberwave-os/cyberwave-cli",
"Issues, https://github.com/cyberwave-os/cyberwave-cli/issues"
] | poetry/2.3.2 CPython/3.10.19 Linux/6.11.0-1018-azure | 2026-02-20T17:51:46.082095 | cyberwave_cli-0.11.23-py3-none-any.whl | 77,939 | 62/3f/626cb40e18215d23a25514e096d4e673fbac0c965f53ce2f50adffc26f6e/cyberwave_cli-0.11.23-py3-none-any.whl | py3 | bdist_wheel | null | false | 3fa75a27035f761ad25ee23645011e93 | e45e0e15468499cf16cd25e47d988883c34762f916324fe7d13d0d3b322fe548 | 623f626cb40e18215d23a25514e096d4e673fbac0c965f53ce2f50adffc26f6e | null | [
"LICENSE"
] | 208 |
2.4 | simple-blogger | 0.2.15 | A simple blogger library | # **PythonSimpleBlogger library (simple_blogger)** #
This is a simple library for building a simple blog project with Python.
The library is distributed under the MIT license and can be downloaded and used by anyone.
----------
## How to install ##
To install, you can use the command:
```bash
pip3 install simple_blogger
```
Or download the repository from [GitHub](https://github.com/athenova/simple_blogger)
----------
## Initialization ##
Just start with a simple code.
```python
import simple_blogger

blogger = simple_blogger.CommonBlogger(PUT-YOUR-TELEGRAM-PRIVATE-CHAT-ID-IN-HERE)
blogger.init_project()
```
It initializes the folder structure of your blog project in the working directory.
### Blog topics ###
Find the project ideas JSON file in the `files/ideas` folder. Fill it with the topics and categories of your blog.
### Creating tasks ###
Call `push`.
```python
blogger.push()
```
It creates a tasks JSON file in the `files` folder containing publication dates and the AI prompts used to generate an image and text for each topic.
### Adding Tasks ###
Put any JSON file in the `files/ideas` folder; it must follow the idea structure. You can put as many idea files in the folder as you want.
Call `revert` to put unhandled tasks back in backlog.
```python
blogger.revert()
```
Call `push` again. Now all backlogged tasks and new ideas are in progress.
### Publication Review ###
Call `review` to send tomorrow's publication to your private Telegram channel.
```python
blogger.review()
```
### Publication ###
Call `send` to send today's publication to your public Telegram channel.
```python
blogger.send()
```
**Note:** call `review` before `send`, or call `send` with `image_gen=True` (*to produce the image*) and `text_gen=True` (*to produce the text*).
### Error handling ###
If something goes wrong, the method sends the exception text to your private Telegram channel.
### Default parameters ###
By default, the library uses the working directory name as the project name and the production Telegram channel name.
### Environment variables ###
By default, the library uses the `dall-e-3` model to generate images and `deepseek-chat` to generate texts, and sends publications to Telegram channels.
It needs the following environment variables:
- BLOGGER_BOT_TOKEN
- OPENAI_API_KEY
- DEEPSEEK_API_KEY
Yandex generators need the following environment variables:
- YC_API_KEY
- YC_FOLDER_ID
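A quick way to check that the required variables are exported before running the blogger (a generic sketch, not part of the library; `missing_env` is a hypothetical helper):

```python
import os

def missing_env(required):
    """Return names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

required = ["BLOGGER_BOT_TOKEN", "OPENAI_API_KEY", "DEEPSEEK_API_KEY"]
print(missing_env(required))  # anything listed still needs to be exported
```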
## From the developer ##
> There are (or will be) more examples of using this library in sibling repos on [GitHub](https://github.com/athenova/simple_blogger)
| text/markdown | Aleksey Sergeyev | aleksey.sergeyev@yandex.com | null | null | null | python blog ai | [
"Programming Language :: Python :: 3.10",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/athenova/simple_blogger | null | >=3.10.6 | [] | [] | [] | [
"pillow>=10.4.0",
"openai>=1.76.0",
"pyTelegramBotAPI>=4.26.0",
"requests>=2.32.3",
"yandex-cloud-ml-sdk>=0.4.1",
"emoji>=2.14.1",
"markdown>=3.7",
"vk>=3.0",
"gigachat>=0.1.42",
"beautifulsoup4>=4.13.4"
] | [] | [] | [] | [
"Documentation, https://github.com/athenova/simple_blogger"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:51:38.326438 | simple_blogger-0.2.15.tar.gz | 18,725 | f0/37/52e96e91ed17334851cca9e69bb12d5c1b7dfbe6d4f5bf6270d395cfb78e/simple_blogger-0.2.15.tar.gz | source | sdist | null | false | 08a1f7e685759c94a4804ea0433278bc | b402e7936178b621aaf8c6669007ccc0a47621668220110c8ab120dec084f495 | f03752e96e91ed17334851cca9e69bb12d5c1b7dfbe6d4f5bf6270d395cfb78e | null | [
"LICENSE"
] | 207 |
2.4 | bet-hedge-calculator | 0.1.1 | CLI & library to compute sports bet hedging scenarios — guaranteed profit regardless of outcome. | # bet-hedge-calculator
A CLI tool and Python library for computing sports bet hedging scenarios.
Given a bet already placed on **Team A**, it shows you exactly what odds and stake you need on **Team B** to guarantee a fixed profit — regardless of which team wins.
---
## Installation
```bash
pip install bet-hedge-calculator
```
---
## CLI Usage
After installation a `bet-hedge-calc` command is available globally.
**Interactive mode** (prompts for input):
```bash
bet-hedge-calc
```
**Direct mode** (pass values as flags):
```bash
bet-hedge-calc --odds-a 2.50 --stake-a 100
```
**Custom ROI range and step**:
```bash
bet-hedge-calc --odds-a 3.0 --stake-a 200 --roi-min -10 --roi-max 100 --roi-step 2.5
```
**All options**:
```
--odds-a Decimal odds of Team A (e.g. 2.50)
--stake-a Your stake on Team A (e.g. 100)
--roi-min Minimum target ROI % (default: -20)
--roi-max Maximum target ROI % (default: 200)
--roi-step ROI step size in % (default: 5)
--currency Currency symbol (default: £)
```
### Example output
```
═══ Bet Hedging Calculator ═══
╭─────────────────────────────────────────────────────────────────────────────────────╮
│ Hedging Calculator — Team A odds: 2.5 | Stake on A: £100.00 │
├───────────────┬─────────────────┬───────────────┬─────────────────┬───────────────┤
│ Target ROI │ Required Odds B │ Stake on B │ Total Invested │ Guaranteed │
│ │ │ (£) │ (£) │ Profit (£) │
├───────────────┼─────────────────┼───────────────┼─────────────────┼───────────────┤
│ -20.0% │ 1.1765 │ £212.50 │ £312.50 │ -£62.50 │
│ -15.0% │ 1.2195 │ £205.00 │ £305.00 │ -£45.75 │
│ -10.0% │ 1.2658 │ £197.50 │ £297.50 │ -£29.75 │
│ -5.0% │ 1.3158 │ £190.00 │ £290.00 │ -£14.50 │
│ 0.0% │ 1.6667 │ £150.00 │ £250.00 │ £0.00 │
│ +5.0% │ 1.7500 │ £142.86 │ £242.86 │ £12.14 │
│ +10.0% │ 1.8333 │ £136.36 │ £236.36 │ £23.64 │
│ +20.0% │ 2.0000 │ £125.00 │ £225.00 │ £45.00 │
│ +50.0% │ 2.5000 │ £100.00 │ £200.00 │ £100.00 │
│ +100.0% │ — │ — │ — │ — │
│ │ Not achievable │
╰─────────────────────────────────────────────────────────────────────────────────────╯
```
---
## Python API
Import the library directly to get raw data or render the table programmatically.
### Get hedging data
```python
from bet_hedge_calculator import calculate_scenarios
scenarios = calculate_scenarios(odds_a=2.5, stake_a=100)
for s in scenarios:
if s.valid:
print(f"ROI {s.roi_pct:+.1f}% → odds B: {s.odds_b:.4f} | stake B: £{s.stake_b:.2f} | profit: £{s.guaranteed_profit:.2f}")
```
Output:
```
ROI -20.0% → odds B: 1.1765 | stake B: £212.50 | profit: £-62.50
ROI -15.0% → odds B: 1.2195 | stake B: £205.00 | profit: £-45.75
...
ROI +10.0% → odds B: 1.8333 | stake B: £136.36 | profit: £23.64
```
### Render the rich table
```python
from bet_hedge_calculator import calculate_scenarios, render_table
scenarios = calculate_scenarios(odds_a=2.5, stake_a=100)
render_table(scenarios, odds_a=2.5, stake_a=100, currency="$")
```
### Single scenario
```python
from bet_hedge_calculator import calculate_scenario
s = calculate_scenario(odds_a=3.0, stake_a=200, roi=0.10) # 10% ROI
if s.valid:
print(f"Stake £{s.stake_b:.2f} on Team B at odds {s.odds_b:.4f}")
print(f"Guaranteed profit: £{s.guaranteed_profit:.2f}")
```
### Custom ROI range
```python
from bet_hedge_calculator import calculate_scenarios
scenarios = calculate_scenarios(
odds_a=2.5,
stake_a=100,
roi_min_pct=0.0,
roi_max_pct=50.0,
roi_step_pct=2.5,
)
```
### HedgeScenario dataclass
Each scenario is a `HedgeScenario` with these fields:
| Field | Type | Description |
|---|---|---|
| `roi_pct` | `float` | Target ROI in percent (e.g. `10.0`) |
| `valid` | `bool` | `False` when the scenario is mathematically unachievable |
| `odds_b` | `float \| None` | Required decimal odds for Team B |
| `stake_b` | `float \| None` | Required stake on Team B |
| `total_invested` | `float \| None` | Combined stake on A + B |
| `guaranteed_profit` | `float \| None` | Profit regardless of outcome |
---
## The maths
For a bet already placed on **Team A** (decimal odds `O_A`, stake `S_A`), we want a guaranteed profit `P` regardless of which team wins:
```
If A wins: S_A × O_A − S_A − S_B = P
If B wins: S_B × O_B − S_B − S_A = P
```
Both equations give `S_A × O_A = S_B × O_B`, so:
```
O_B = O_A × (1 + ROI) / (O_A − 1 − ROI)
S_B = S_A × O_A / O_B
```
This is valid only when `ROI < O_A − 1`. Rows outside this range are marked **Not achievable**.
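These formulas are easy to check numerically. The sketch below (independent of the package; `hedge` is a hypothetical helper, not the library's API) derives the Team B odds and stake and confirms both outcomes net the same profit:

```python
def hedge(odds_a: float, stake_a: float, roi: float):
    """Return (odds_b, stake_b) locking in the target ROI, or None if unachievable."""
    if roi >= odds_a - 1:
        return None  # must satisfy ROI < O_A - 1
    odds_b = odds_a * (1 + roi) / (odds_a - 1 - roi)
    stake_b = stake_a * odds_a / odds_b  # equalises payouts: S_A*O_A == S_B*O_B
    return odds_b, stake_b

odds_b, stake_b = hedge(odds_a=2.5, stake_a=100, roi=0.0)
profit_if_a = 100 * 2.5 - 100 - stake_b          # net if Team A wins
profit_if_b = stake_b * odds_b - stake_b - 100   # net if Team B wins
print(round(odds_b, 4), round(stake_b, 2))  # 1.6667 150.0
```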
---
## Development
```bash
git clone https://github.com/your-username/bet-hedge-calculator
cd bet-hedge-calculator
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -v
```
---
## License
MIT
| text/markdown | null | null | null | null | MIT | betting, hedging, sports, odds, calculator, finance | [
"Development Status :: 3 - Alpha",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"rich>=13.0.0",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/rohit8y/bet-hedge-calculator",
"Bug Tracker, https://github.com/rohit8y/bet-hedge-calculator/issues"
] | twine/6.2.0 CPython/3.10.10 | 2026-02-20T17:50:53.247743 | bet_hedge_calculator-0.1.1.tar.gz | 11,388 | d9/a7/d8b59d22d3f98ca72fe3f7d6a771fa219ffb54a4a588e7bfd07ed55be8d1/bet_hedge_calculator-0.1.1.tar.gz | source | sdist | null | false | 15e6bf2e35e9edaf940c439ab38ad165 | 26386886c34abddbc762e76e6b3d0aa0dd2c9a42832adf61566e497b25519742 | d9a7d8b59d22d3f98ca72fe3f7d6a771fa219ffb54a4a588e7bfd07ed55be8d1 | null | [] | 220 |
2.4 | ksef2 | 0.8.0 | Python SDK and Tools for Poland's KSeF (Krajowy System e-Faktur) API | <div align="center">
<a href="https://github.com/artpods56/ksef2" title="ksef2">
<img src="https://raw.githubusercontent.com/artpods56/ksef2/master/docs/assets/logo.png" alt="KSeF Toolkit" width="50%">
</a>
**Python SDK and Tools for Poland's KSeF (Krajowy System e-Faktur) v2.0 API.**

[](https://www.python.org/downloads/)
[](https://github.com/artpods56/ksef2/actions/workflows/integration-tests.yml) \
[](https://github.com/beartype/beartype)
[](https://github.com/pre-commit/pre-commit)
[](https://github.com/astral-sh/ruff)
[](https://opensource.org/licenses/MIT)
</div>
## Installation
```bash
pip install ksef2
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv add ksef2
```
Requires Python 3.12+.
## Features
- **Type-Safe** — Full type annotations with Pydantic models, excellent IDE support and autocomplete
- **Pythonic API** — Context managers for sessions, clean interfaces, intuitive method chaining
- **Both Auth Methods** — XAdES (certificate-based) and token authentication supported
- **Automatic Encryption** — Invoice encryption/decryption handled transparently (AES-CBC, RSA-OAEP)
- **Session Resume** — Serialize session state and resume later, perfect for worker processes and long-running exports
- **Test Environment Support** — Self-signed certificates, test data setup with automatic cleanup via `temporal()` context manager
## Quick Start
```python
from datetime import datetime, timezone
from ksef2 import Client, Environment, FormSchema
from ksef2.core.xades import generate_test_certificate
from ksef2.domain.models import (
InvoiceQueryFilters, InvoiceSubjectType, InvoiceQueryDateRange, DateType,
)
NIP = "5261040828"
client = Client(Environment.TEST)
# Authenticate (XAdES — TEST environment)
cert, key = generate_test_certificate(NIP)
auth = client.auth.authenticate_xades(nip=NIP, cert=cert, private_key=key)
with client.sessions.open_online(
access_token=auth.access_token,
form_code=FormSchema.FA3,
) as session:
# Send an invoice
result = session.send_invoice(open("invoice.xml", "rb").read())
print(result.reference_number)
# Check processing status
status = session.get_invoice_status(result.reference_number)
# Export invoices matching a query
export = session.schedule_invoices_export(
filters=InvoiceQueryFilters(
subject_type=InvoiceSubjectType.SUBJECT1,
date_range=InvoiceQueryDateRange(
date_type=DateType.ISSUE,
from_=datetime(2026, 1, 1, tzinfo=timezone.utc),
to=datetime.now(tz=timezone.utc),
),
),
)
# Download the exported package
export_result = session.get_export_status(export.reference_number)
if package := export_result.package:
for path in session.fetch_package(package=package, target_directory="downloads"):
print(f"Downloaded: {path}")
```
> Full runnable version: [`send_query_export_download.py`](scripts/examples/invoices/send_query_export_download.py) — more examples in [`scripts/examples`](scripts/examples).
### XAdES on DEMO / PRODUCTION (MCU certificate)
The TEST environment accepts self-signed certificates generated by the SDK.
DEMO and PRODUCTION require a certificate issued by MCU — use the provided helpers to load it:
```python
from ksef2 import Client, Environment
from ksef2.core.xades import load_certificate_from_pem, load_private_key_from_pem
cert = load_certificate_from_pem("cert.pem") # downloaded from MCU
key = load_private_key_from_pem("key.pem")
auth = Client(Environment.DEMO).auth.authenticate_xades(
nip=NIP, cert=cert, private_key=key, verify_chain=False,
)
```
> If your certificate is a `.p12` archive: `load_certificate_and_key_from_p12("cert.p12", password=b"...")`
### Token Authentication
For production, or when you have a pre-generated KSeF token:
```python
from ksef2 import Client
client = Client() # uses production environment by default
auth = client.auth.authenticate_token(ksef_token="your-ksef-token", nip=NIP)
print(auth.access_token)
```
## Authenticated Operations
After authentication, you get an `AuthenticatedClient` with access to various services — no need to open an invoice session for these operations:
```python
auth = client.auth.authenticate_xades(nip=NIP, cert=cert, private_key=key)
# Manage KSeF authorization tokens
token = auth.tokens.generate(
permissions=[TokenPermission.INVOICE_READ, TokenPermission.INVOICE_WRITE],
description="API integration token",
)
print(f"Generated token: {token.token}")
# Query and modify API limits (TEST environment)
limits = auth.limits.get_context_limits()
print(f"Max invoices per session: {limits.online_session.max_invoices}")
# Manage permissions
auth.permissions.grant_person(
subject_identifier=IdentifierType.PESEL,
subject_value="12345678901",
permissions=[PermissionType.INVOICE_READ],
description="Read access for accountant",
first_name="Jan", last_name="Kowalski",
)
# List and terminate authentication sessions
sessions = auth.sessions.list()
auth.sessions.terminate_current()
# Manage KSeF certificates
certs = auth.certificates.query(status=CertificateStatus.ACTIVE)
```
## Development
```bash
just sync # Install all dependencies (including dev)
just test # Run unit tests
just regenerate-models # Regenerate OpenAPI models
```
### Other commands
```bash
just integration # Run integration tests (requires KSEF credentials in .env)
just coverage # Calculate API coverage (updates coverage.json)
just fetch-spec # Fetch latest OpenAPI spec from KSeF
```
## API Coverage
The SDK covers **77 of 77** KSeF API endpoints (100%). See feature docs for details:
- [Authentication](docs/guides/authentication.md) — XAdES, token auth, session management
- [Invoices](docs/guides/invoices.md) — send, download, query, export
- [Sessions](docs/guides/sessions.md) — online/batch sessions, resume support
- [Tokens](docs/guides/tokens.md) — generate and manage KSeF authorization tokens
- [Permissions](docs/guides/permissions.md) — grant/query permissions for persons and entities
- [Certificates](docs/guides/certificates.md) — enroll, query, revoke KSeF certificates
- [Limits](docs/guides/limits.md) — query and modify API rate limits
- [Test Data](docs/guides/testdata.md) — create test subjects, manage test environment
## License
[MIT](LICENSE.md)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"cryptography>=44.0.0",
"dotenv>=0.9.9",
"httpx[http2]>=0.28.1",
"pydantic>=2.12.5",
"requests>=2.32.5",
"signxml>=4.0",
"structlog>=25.5.0",
"tenacity>=9.0",
"weasyprint>=63.0",
"xsdata-pydantic>=24.5"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:50:41.980706 | ksef2-0.8.0.tar.gz | 1,901,324 | de/f3/27308305e59f7ce02603fe886bfc1cfb9102b02899795df4d4a6acf06214/ksef2-0.8.0.tar.gz | source | sdist | null | false | 81e735be2bda77b20003a39001c3ad66 | b447bacbd0e0a9e2b82b981a19660f692267e520ea6946d7b3b93af5a44e8b47 | def327308305e59f7ce02603fe886bfc1cfb9102b02899795df4d4a6acf06214 | null | [
"LICENSE.md"
] | 227 |
2.4 | sagemaker-studio | 1.0.24 | Python library to interact with Amazon SageMaker Unified Studio | # SageMaker Studio
SageMaker Studio is an open source library for interacting with Amazon SageMaker Unified Studio resources. With the library, you can access resources such as domains, projects, connections, and databases, all in one place with minimal code.
## Table of Contents
1. [Installation](#installation)
2. [Usage](#usage)
1. [Setting Up Credentials and ClientConfig](#credentials--client-config)
1. [Using ClientConfig](#using-clientconfig)
2. [Domain](#domain)
3. [Domain Properties](#domain-properties)
3. [Project](#project)
1. [Properties](#project-properties)
1. [IAM Role ARN](#iam-role)
2. [KMS Key ARN](#kms-key-arn)
3. [MLflow Tracking Server ARN](#mlflow-tracking-server-arn)
4. [S3 Path](#s3-path)
2. [Connections](#connections)
1. [Connection Data](#connection-data)
2. [Secrets](#secrets)
3. [Catalogs](#catalogs)
4. [Databases and Tables](#databases-and-tables)
1. [Databases](#databases)
2. [Tables](#tables)
4. [Execution APIs](#execution-apis)
1. [Local Execution APIs](#local-execution-apis)
1. [StartExecution API](#startexecution)
2. [GetExecution API](#getexecution)
3. [ListExecutions API](#listexecutions)
4. [StopExecution API](#stopexecution)
2. [Remote Execution APIs](#remote-execution-apis)
1. [StartExecution API](#startexecution-1)
2. [GetExecution API](#getexecution-1)
3. [ListExecutions API](#listexecutions-1)
4. [StopExecution API](#stopexecution-1)
## 1) Installation
SageMaker Studio is published to PyPI, and the latest version of the library can be installed using the following command:
```bash
pip install sagemaker-studio
```
#### Supported Python Versions
SageMaker Studio supports Python versions 3.9 and newer.
#### Licensing
SageMaker Studio is licensed under the Apache 2.0 License. It is copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. The license is available at: http://aws.amazon.com/apache2.0/
## 2) Usage
### Setting Up Credentials and ClientConfig
If SageMaker Studio is being used within Amazon SageMaker Unified Studio JupyterLab, the library will automatically pull your latest credentials from the environment.
If you are using the library elsewhere, or if you want to use different credentials within the SageMaker Unified Studio JupyterLab, you will need to first retrieve your SageMaker Unified Studio credentials and make them available in the environment through either:
1. Storing them within an [AWS named profile](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html). If using a profile name other than `default`, you will need to supply the profile name by:
1. Supplying it during initialization of the SageMaker Studio `ClientConfig` object
2. Setting the AWS profile name as an environment variable (e.g. `export AWS_PROFILE="my_profile_name"`)
2. Initializing a [boto3 `Session`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html) object and supplying it when initializing a SageMaker Studio `ClientConfig` object
##### AWS Named Profile
To use the AWS named profile, you can update your AWS `config` file with your profile name and any other settings you would like to use:
```config
[my_profile_name]
region = us-east-1
```
Your `credentials` file should have the credentials stored for your profile:
```config
[my_profile_name]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws_session_token=IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZVERYLONGSTRINGEXAMPLE
```
Finally, you can pass in the profile when initializing the `ClientConfig` object.
```python
from sagemaker_studio import ClientConfig
conf = ClientConfig(profile_name="my_profile_name")
```
You can also set the profile name as an environment variable:
```bash
export AWS_PROFILE="my_profile_name"
```
##### Boto3 Session
To use a [boto3 `Session`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html) object for credentials, you will need to initialize the `Session` and supply it to `ClientConfig`.
```python
from boto3 import Session
from sagemaker_studio import ClientConfig
my_session = Session(...)
conf = ClientConfig(session=my_session)
```
#### Using ClientConfig
If you use `ClientConfig` to supply credentials or change the AWS region, the `ClientConfig` object must also be supplied when initializing any further SageMaker Studio objects, such as `Domain` or `Project`. If you use a non-production endpoint for an AWS service, it can also be supplied in the `ClientConfig`. Note: in a SageMaker space, the DataZone endpoint is fetched from the metadata JSON file by default.
```python
from sagemaker_studio import ClientConfig, Project
conf = ClientConfig(region="eu-west-1")
proj = Project(config=conf)
```
### Domain
`Domain` can be initialized as follows.
```python
from sagemaker_studio import Domain
dom = Domain()
```
If you are not using SageMaker Studio within SageMaker Unified Studio JupyterLab, you will need to provide the ID of the domain you want to use.
```python
dom = Domain(id="123456")
```
#### Domain Properties
A `Domain` object has several string properties that can provide information about the domain that you are using.
```python
dom.id
dom.root_domain_unit_id
dom.name
dom.domain_execution_role
dom.status
dom.portal_url
```
### Project
`Project` can be initialized as follows.
```python
from sagemaker_studio import Project
proj = Project()
```
If you are not using SageMaker Studio within SageMaker Unified Studio JupyterLab, you will need to provide either the ID or name of the project you would like to use, plus its domain ID.
```python
proj = Project(name="my_proj_name", domain_id="123456")
```
#### Project Properties
A `Project` object has several string properties that can provide information about the project that you are using.
```python
proj.id
proj.name
proj.domain_id
proj.project_status
proj.domain_unit_id
proj.project_profile_id
proj.user_id
```
##### IAM Role ARN
To retrieve the project IAM role ARN, you can retrieve the `iam_role` field. This gets the IAM role ARN of the default IAM connection within your project.
```python
proj.iam_role
```
##### KMS Key ARN
If you are using a KMS key within your project, you can retrieve the `kms_key_arn` field.
```python
proj.kms_key_arn
```
##### MLflow Tracking Server ARN
If you are using an MLflow tracking server within your project, you can retrieve the `mlflow_tracking_server_arn` field.
**Usage**
```python
proj.mlflow_tracking_server_arn
```
##### S3 Path
One of the properties of a `Project` is `s3`. You can access various S3 paths that exist within your project.
```python
# S3 path of project root directory
proj.s3.root
# S3 path of datalake consumer Glue DB directory (requires DataLake environment)
proj.s3.datalake_consumer_glue_db
# S3 path of Athena workgroup directory (requires DataLake environment)
proj.s3.datalake_athena_workgroup
# S3 path of workflows output directory (requires Workflows environment)
proj.s3.workflow_output_directory
# S3 path of workflows temp storage directory (requires Workflows environment)
proj.s3.workflow_temp_storage
# S3 path of EMR EC2 log destination directory (requires EMR EC2 environment)
proj.s3.emr_ec2_log_destination
# S3 path of EMR EC2 certificates directory (requires EMR EC2 environment)
proj.s3.emr_ec2_certificates
# S3 path of EMR EC2 log bootstrap directory (requires EMR EC2 environment)
proj.s3.emr_ec2_log_bootstrap
```
###### Other Environment S3 Paths
You can also access the S3 path of a different environment by providing an environment ID.
```python
proj.s3.environment_path(environment_id="env_1234")
```
#### Connections
You can retrieve a list of connections for a project, or you can retrieve a single connection by providing its name.
```python
proj_connections: List[Connection] = proj.connections
proj_redshift_conn = proj.connection("my_redshift_connection_name")
```
Each `Connection` object has several properties that can provide information about the connection.
```python
proj_redshift_conn.name
proj_redshift_conn.id
proj_redshift_conn.physical_endpoints[0].host
proj_redshift_conn.iam_role
```
##### Connection Data
To retrieve all properties of a `Connection`, you can access the `data` field to get a `ConnectionData` object. Top-level `ConnectionData` fields can be accessed with dot notation (e.g. `conn_data.top_level_field`); further nested data can be accessed as a dictionary (e.g. `conn_data.top_level_field["nested_field"]`).
```python
conn_data: ConnectionData = proj_redshift_conn.data
red_temp_dir = conn_data.redshiftTempDir
lineage_sync = conn_data.lineageSync
lineage_job_id = lineage_sync["lineageJobId"]
```
```python
spark_conn = proj.connection("my_spark_glue_connection_name")
id = spark_conn.id
env_id = spark_conn.environment_id
glue_conn = spark_conn.data.glue_connection_name
workers = spark_conn.data.number_of_workers
glue_version = spark_conn.data.glue_version
```
#### Catalogs
If your `Connection` is of the `LAKEHOUSE` or `IAM` type, you can retrieve a list of catalogs, or a single catalog by providing its ID.
```python
conn_catalogs: List[Catalog] = proj.connection("project.iam").catalogs
my_catalog: Catalog = proj.connection("project.iam").catalog("1234567890:catalog1/sub_catalog")
proj.connection("project.default_lakehouse").catalogs
```
Each `Catalog` object has several properties that can provide information about the catalog.
```python
my_catalog.name
my_catalog.id
my_catalog.type
my_catalog.spark_catalog_name
my_catalog.resource_arn
```
#### Secrets
Retrieve the secret (username, password, and other connection-related metadata) for a connection using the `secret` property.
```python
snowflake_connection: Connection = proj.connection("project.snowflake")
secret = snowflake_connection.secret
```
Depending on the connection type, the secret can be either a dictionary containing credentials or a single string.
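Because the secret's shape varies by connection type, a small normalization helper can keep downstream code uniform. The sketch below is not part of the SDK: `normalize_secret` is a hypothetical name, and the JSON-parsing fallback is an assumption about how string secrets may be encoded.

```python
import json

def normalize_secret(secret):
    """Normalize a connection secret to a dict (hypothetical helper).

    `secret` may already be a dict of credentials, or a single string,
    depending on the connection type.
    """
    if isinstance(secret, dict):
        return secret
    if isinstance(secret, str):
        try:
            # Some string secrets may be JSON-encoded (assumption)
            parsed = json.loads(secret)
            if isinstance(parsed, dict):
                return parsed
        except json.JSONDecodeError:
            pass
        # Plain string secret (e.g. a token): wrap it under a single key
        return {"value": secret}
    raise TypeError(f"Unexpected secret type: {type(secret).__name__}")
```

With this in place, code consuming credentials can always index into a dict regardless of which connection produced the secret.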
#### AWS Clients
You can retrieve a Boto3 AWS client initialized with the connection's credentials.
```python
redshift_connection: Connection = proj.connection("project.redshift")
redshift_client = redshift_connection.create_client()
```
Some connections are directly associated with an AWS service and will default to that service's client if no service name is specified. These connections are listed in the table below.
| Connection Type | AWS Service Name |
|-----------------|------------------|
| ATHENA | athena |
| DYNAMODB | dynamodb |
| REDSHIFT | redshift |
| S3 | s3 |
| S3_FOLDER | s3 |
For other connection types, you must specify an AWS service name.
```python
iam_connection: Connection = proj.connection("project.iam")
glue_client = iam_connection.create_client("glue")
```
#### Databases and Tables
##### Databases
Within a catalog, you can retrieve a list of databases, or a single database by providing its name.
```python
my_catalog: Catalog
catalog_dbs: List[Database] = my_catalog.databases
my_db: Database = my_catalog.database("my_db")
```
Each `Database` object has several properties that can provide information about the database.
```python
my_db.name
my_db.catalog_id
my_db.location_uri
my_db.project_id
my_db.domain_id
```
##### Tables
You can also retrieve either a list of tables or a specific table within a `Database`.
```python
my_db_tables: List[Table] = my_db.tables
my_table: Table = my_db.table("my_table")
```
Each `Table` object has several properties that can provide information about the table.
```python
my_table.name
my_table.database_name
my_table.catalog_id
my_table.location
```
You can also retrieve a list of the columns within a table. `Column` contains the column name and the data type of the column.
```python
my_table_columns: List[Column] = my_table.columns
col_0: Column = my_table_columns[0]
col_0.name
col_0.type
```
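These properties compose naturally; for instance, a helper can render a table's columns as a quick schema overview. `format_schema` below is a hypothetical sketch that assumes only the `.name` and `.type` attributes documented above:

```python
def format_schema(table_name, columns):
    """Render columns as a simple CREATE TABLE-style string (hypothetical).

    `columns` is any iterable of objects with `.name` and `.type`
    attributes, such as the `Column` objects from `my_table.columns`.
    """
    cols = ",\n".join(f"  {c.name} {c.type}" for c in columns)
    return f"TABLE {table_name} (\n{cols}\n)"
```

For example, `print(format_schema(my_table.name, my_table.columns))` would give a readable summary of the table's layout.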
### Execution APIs
The execution APIs let you start an execution that runs a notebook headlessly, either within the same user space or on remote compute.
#### Local Execution APIs
Use these APIs to start/stop/get/list executions within the user's space.
##### StartExecution
You can start a notebook execution headlessly within the same user space.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(overrides={
"execution": {
"local": True,
}
})
sagemaker_studio_api = SageMakerStudioAPI(config)
result = sagemaker_studio_api.execution_client.start_execution(
execution_name="my-execution",
input_config={"notebook_config": {
"input_path": "src/folder2/test.ipynb"}},
execution_type="NOTEBOOK",
output_config={"notebook_config": {
"output_formats": ["NOTEBOOK", "HTML"]
}}
)
print(result)
```
##### GetExecution
You can retrieve details about a local execution using the `GetExecution` API.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(region="us-west-2", overrides={
"execution": {
"local": True,
}
})
sagemaker_studio_api = SageMakerStudioAPI(config)
get_response = sagemaker_studio_api.execution_client.get_execution(execution_id="asdf-3b998be2-02dd-42af-8802-593d48d04daa")
print(get_response)
```
##### ListExecutions
You can use the `ListExecutions` API to list all the executions that ran in the user's space.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(region="us-west-2", overrides={
"execution": {
"local": True,
}
})
sagemaker_studio_api = SageMakerStudioAPI(config)
list_executions_response = sagemaker_studio_api.execution_client.list_executions(status="COMPLETED")
print(list_executions_response)
```
##### StopExecution
You can use the `StopExecution` API to stop an execution that's running in the user's space.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(region="us-west-2", overrides={
"execution": {
"local": True,
}
})
sagemaker_studio_api = SageMakerStudioAPI(config)
stop_response = sagemaker_studio_api.execution_client.stop_execution(execution_id="asdf-3b998be2-02dd-42af-8802-593d48d04daa")
print(stop_response)
```
#### Remote Execution APIs
Use these APIs to start/stop/get/list executions running on remote compute.
##### StartExecution
You can start a notebook execution headlessly on the remote compute specified in the `StartExecution` request.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(region="us-west-2")
sagemaker_studio_api = SageMakerStudioAPI(config)
result = sagemaker_studio_api.execution_client.start_execution(
execution_name="my-execution",
execution_type="NOTEBOOK",
input_config={"notebook_config": {"input_path": "src/folder2/test.ipynb"}},
output_config={"notebook_config": {"output_formats": ["NOTEBOOK", "HTML"]}},
termination_condition={"max_runtime_in_seconds": 9000},
compute={
"instance_type": "ml.c5.xlarge",
"image_details": {
# provide either ecr_uri or (image_name and image_version)
"image_name": "sagemaker-distribution-embargoed-prod",
"image_version": "2.2",
"ecr_uri": "ECR-registry-account.dkr.ecr.us-west-2.amazonaws.com/repository-name[:tag]",
}
}
)
print(result)
```
##### GetExecution
You can retrieve details about an execution running on remote compute using the `GetExecution` API.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(region="us-west-2")
sagemaker_studio_api = SageMakerStudioAPI(config)
get_response = sagemaker_studio_api.execution_client.get_execution(execution_id="asdf-3b998be2-02dd-42af-8802-593d48d04daa")
print(get_response)
```
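Since remote executions run asynchronously, you will often want to poll `GetExecution` until the execution reaches a terminal status. The sketch below is a hypothetical helper, not part of the SDK: it assumes the response is a dict with a `"status"` field and that terminal statuses include `COMPLETED`, `FAILED`, and `STOPPED` — verify both against your SDK version before relying on them.

```python
import time

def wait_for_execution(get_execution, execution_id,
                       poll_seconds=30, timeout_seconds=3600):
    """Poll until an execution reaches a terminal status (hypothetical).

    `get_execution` is a callable like
    `sagemaker_studio_api.execution_client.get_execution`; the response
    is assumed to be a dict with a "status" field.
    """
    terminal = {"COMPLETED", "FAILED", "STOPPED"}
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        response = get_execution(execution_id=execution_id)
        if response.get("status") in terminal:
            return response
        time.sleep(poll_seconds)
    raise TimeoutError(
        f"Execution {execution_id} did not finish in {timeout_seconds}s")
```

A call such as `wait_for_execution(sagemaker_studio_api.execution_client.get_execution, "asdf-3b998be2-...")` would then block until the notebook run completes, fails, or is stopped.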
##### ListExecutions
You can use the `ListExecutions` API to list all the headless executions that ran on remote compute.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(region="us-west-2")
sagemaker_studio_api = SageMakerStudioAPI(config)
list_executions_response = sagemaker_studio_api.execution_client.list_executions(status="COMPLETED")
print(list_executions_response)
```
##### StopExecution
You can use the `StopExecution` API to stop an execution that's running on remote compute.
```python
from sagemaker_studio.sagemaker_studio_api import SageMakerStudioAPI
from sagemaker_studio import ClientConfig
config = ClientConfig(region="us-west-2")
sagemaker_studio_api = SageMakerStudioAPI(config)
stop_response = sagemaker_studio_api.execution_client.stop_execution(execution_id="asdf-3b998be2-02dd-42af-8802-593d48d04daa")
print(stop_response)
```
| text/markdown | Amazon Web Services | null | null | null | Apache License 2.0 | AWS, Amazon, SageMaker, SageMaker Unified Studio, SDK | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: Apache Software License"
] | [
"Linux"
] | https://aws.amazon.com/sagemaker/ | null | >=3.9 | [] | [] | [] | [
"boto3>=1.34.106",
"botocore>=1.34.106",
"urllib3>=1.26.19",
"requests>=2.25.1",
"psutil>=5.9.8",
"python_dateutil>=2.5.3",
"setuptools>=21.0.0",
"packaging>=24.0",
"pyathena>=3.17.1",
"sqlalchemy>=1.4.54",
"pandas<3.0.0,>=2.1.4",
"duckdb>=1.3.2",
"pymysql>=1.1.1",
"snowflake-sqlalchemy>=1.7.6",
"sqlalchemy-bigquery>=0.0.7",
"pydynamodb>=0.7.4",
"psycopg2-binary>=2.9.10",
"pymssql>=2.3.7",
"awswrangler>=3.5.0",
"pyiceberg>=0.7.0",
"numpy<2.3.0,>=1.26.4",
"pyarrow>=18.1.0",
"aws-embedded-metrics>=3.2.0",
"pytest>=6; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"toml; extra == \"test\"",
"coverage; extra == \"test\"",
"wheel; extra == \"dev\"",
"invoke; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T17:50:06.462281 | sagemaker_studio-1.0.24.tar.gz | 532,482 | b3/ec/e1d87ca45e238628e86f336ff433c7d4790ed1a0b7c87e66ccd9b7148bcc/sagemaker_studio-1.0.24.tar.gz | source | sdist | null | false | 3ff66cb1312878124f5c471d9db2db9a | 0c2ab7a2e3dc66e7b745ec7364966ef501088fc5fbadb0189e6b6dcfd6812b9a | b3ece1d87ca45e238628e86f336ff433c7d4790ed1a0b7c87e66ccd9b7148bcc | null | [
"LICENSE",
"NOTICE"
] | 99,603 |
2.4 | dbt-fabric-samdebruyn | 1.10.3 | A Microsoft Fabric Synapse Data Warehouse adapter plugin for dbt | # dbt-fabric-samdebruyn
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/sdebruyn/dbt-fabric/forked-version/assets/dbt-signature_tm_light.png">
<img alt="dbt logo" src="https://raw.githubusercontent.com/sdebruyn/dbt-fabric/forked-version/assets/dbt-signature_tm.png">
</picture>
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/sdebruyn/dbt-fabric/forked-version/assets/fabric.png">
<img alt="Fabric logo" src="https://raw.githubusercontent.com/sdebruyn/dbt-fabric/forked-version/assets/fabric.png">
</picture>
This is a maintained and extended fork of the [dbt-fabric](https://github.com/microsoft/dbt-fabric) adapter. This fork has [additional features and bugfixes](https://dbt-fabric.debruyn.dev/feature-comparison/) compared to the original adapter.
The adapter was [originally developed by the community](https://github.com/microsoft/dbt-fabric/graphs/contributors) and later adopted by Microsoft.
Given Microsoft's limited investments in the adapter, this fork aims to continue its development and maintenance.
[](https://pypi.org/project/dbt-fabric-samdebruyn/)
## Documentation
A website with all documentation with regards to using dbt with Microsoft Fabric can be found at [http://dbt-fabric.debruyn.dev/](http://dbt-fabric.debruyn.dev/).
## Drop-in replacement
This adapter is a drop-in replacement for the original `dbt-fabric` adapter. To start using this adapter, all you have to do is a `pip uninstall dbt-fabric` and a `pip install dbt-fabric-samdebruyn`.
## Code of Conduct
Everyone interacting in this project's codebases, issues, discussions, and related Slack channels is expected to follow the [dbt Code of Conduct](https://docs.getdbt.com/community/resources/code-of-conduct).
## Acknowledgements
Special thanks to:
* [Jacob Mastel](https://github.com/jacobm001): for his initial work on building dbt-sqlserver.
* [Mikael Ene](https://github.com/mikaelene): for his initial work and continued maintenance on the dbt-sqlserver adapter.
* [Anders Swanson](https://github.com/dataders): for his continued maintenance of the dbt-sqlserver adapter and the creation of the dbt-synapse adapter. And for his work at [dbt Labs](https://www.getdbt.com/).
* [dbt Labs](https://www.getdbt.com/): for their continued support of the dbt open source ecosystem.
* the Microsoft Fabric product team, for their support and contributions to the dbt-fabric adapter.
* every other contributor to dbt-sqlserver, dbt-synapse, and dbt-fabric.
| text/markdown | Sam Debruyn | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"azure-identity>=1.12.0",
"dbt-adapters<2.0,>=1.1.1",
"dbt-common<2.0,>=1.0.4",
"pyodbc>=4.0.35",
"requests>=2.32.3"
] | [] | [] | [] | [
"Documentation, http://dbt-fabric.debruyn.dev/",
"Changelog, https://github.com/sdebruyn/dbt-fabric/releases",
"Issue Tracker, https://github.com/sdebruyn/dbt-fabric/issues",
"Homepage, https://github.com/sdebruyn/dbt-fabric"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T17:50:00.516343 | dbt_fabric_samdebruyn-1.10.3.tar.gz | 195,160 | a8/e1/eabd62791d4e6598a22bcb9fe69a9d0223231b2150ae3bcc7b630a584277/dbt_fabric_samdebruyn-1.10.3.tar.gz | source | sdist | null | false | a28a87dd108ad4977ae8572fdab74cca | dc14453880e7a361482231f8c1a47d5fc95d23c118f8995dcf182aabd16af1e2 | a8e1eabd62791d4e6598a22bcb9fe69a9d0223231b2150ae3bcc7b630a584277 | MIT | [
"LICENSE"
] | 202 |
2.4 | langgraph-runtime-inmem | 0.25.2 | Inmem implementation for the LangGraph API server. | # LangGraph Runtime Inmem
This is the inmem implementation of the LangGraph Runtime API.
| text/markdown | null | Will Fu-Hinthorn <will@langchain.dev> | null | null | Elastic-2.0 | null | [] | [] | null | null | >=3.11.0 | [] | [] | [] | [
"blockbuster<2.0.0,>=1.5.24",
"croniter>=1.0.1",
"langgraph-checkpoint<5,>=3",
"langgraph<2,>=0.4.10",
"sse-starlette>=2",
"starlette>=0.37",
"structlog>23"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:49:50.966652 | langgraph_runtime_inmem-0.25.2.tar.gz | 110,299 | 00/c5/f53ff002c872a2868a15da71419fc7111272210f0fea4f720eeae04542cd/langgraph_runtime_inmem-0.25.2.tar.gz | source | sdist | null | false | a58ea7cd91e61128a321a98daf4d8ee0 | aa443020f781e1582598779b3b2c86dd5224d635731acb1dd33701e9164b55f2 | 00c5f53ff002c872a2868a15da71419fc7111272210f0fea4f720eeae04542cd | null | [] | 13,851 |
2.1 | diffindiff | 2.2.6 | diffindiff: Python library for convenient Difference-in-Differences analyses | # diffindiff: Python library for convenient Difference-in-Differences analyses
This Python library is designed for performing Difference-in-Differences (DiD) analyses in a convenient way. It allows users to construct datasets, define treatment and control groups, and set treatment periods. DiD model analyses may be conducted with both datasets created by built-in functions and ready-to-use external datasets. Both simultaneous and staggered adoption are supported. The library allows for various extensions, such as two-way fixed effects models, group- or individual-specific effects, post-treatment periods, and triple-difference estimations. Additionally, it includes functions for visualizing results, such as plotting DiD coefficients with confidence intervals and illustrating the temporal evolution of staggered treatments. Furthermore, several functions for rigorous treatment setting and data diagnostics are incorporated.
## Author
Thomas Wieland [ORCID](https://orcid.org/0000-0001-5168-9846) [EMail](mailto:geowieland@googlemail.com)
## Availability
- 📦 PyPI: [diffindiff](https://pypi.org/project/diffindiff/)
- 💻 GitHub Repository: [diffindiff_official](https://github.com/geowieland/diffindiff_official)
- 📄 DOI (Zenodo): [10.5281/zenodo.18656820](https://doi.org/10.5281/zenodo.18656820)
## Citation
If you use this software, please cite:
Wieland, T. (2026). diffindiff: A Python library for convenient difference-in-differences analyses (Version 2.2.6) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.18656820
## Installation
To install the package, use `pip`:
```bash
pip install diffindiff
```
To install the package from GitHub with `pip`:
```bash
pip install git+https://github.com/geowieland/diffindiff_official.git
```
## Features
- **Data preparation and pre-analysis**:
- Define custom treatment and control groups as well as treatment periods
- Create ready-to-fit DiD data objects
- Create predictive counterfactuals
- **DiD analysis**:
- Perform standard DiD analysis
- Model extensions:
- Staggered adoption
- Multiple treatments
- Two-way fixed effects models
- Group- or individual-specific treatment effects
- Group- or individual-specific time trends
- Including covariates
- Including after-treatment period
- Triple Difference (DDD)
- Own counterfactuals
- Bonferroni correction for treatment effects
- Placebo test
- **Visualization**:
- Plot observed and expected time course of treatment and control group
- Plot expected time course of treatment group and counterfactual
- Plot model coefficients with confidence intervals
- Plot individual or group-specific treatment effects with confidence intervals
- Visualize the temporal evolution of staggered treatments
- **Diagnosis tools**:
- Test for control conditions
- Test for type of adoption
- Test whether the panel dataset is balanced
- Test for parallel trend assumption
## Examples
```python
curfew_DE=pd.read_csv("data/curfew_DE.csv", sep=";", decimal=",")
# Test dataset: Daily and cumulative COVID-19 infections in German counties
curfew_data=create_data(
outcome_data=curfew_DE,
unit_id_col="county",
time_col="infection_date",
outcome_col="infections_cum_per100000",
treatment_group=
curfew_DE.loc[curfew_DE["Bundesland"].isin([9,10,14])]["county"],
control_group=
curfew_DE.loc[~curfew_DE["Bundesland"].isin([9,10,14])]["county"],
study_period=["2020-03-01", "2020-05-15"],
treatment_period=["2020-03-21", "2020-05-05"],
freq="D"
)
# Creating DiD dataset by defining groups and treatment time
curfew_data.summary()
# Summary of created treatment data
curfew_model = curfew_data.analysis()
# Model analysis of created data
curfew_model.summary()
# Model summary
curfew_model.plot(
y_label="Cumulative infections per 100,000",
plot_title="Curfew effectiveness - Groups over time",
plot_observed=True
)
# Plot observed vs. predicted (means) separated by group (treatment and control)
curfew_model.plot_effects(
x_label="Coefficients with 95% CI",
plot_title="Curfew effectiveness - DiD effects"
)
# plot effects
counties_DE=pd.read_csv("data/counties_DE.csv", sep=";", decimal=",", encoding='latin1')
# Dataset with German county data
curfew_data_withgroups = curfew_data.add_covariates(
additional_df=counties_DE,
unit_col="county",
time_col=None,
variables=["BL"])
# Adding federal state column as covariate
curfew_model_withgroups = curfew_data_withgroups.analysis(
GTE=True,
group_by="BL")
# Model analysis of created data
curfew_model_withgroups.summary()
# Model summary
curfew_model_withgroups.plot_group_treatment_effects(
treatment_group_only=True
)
# Plot of group-specific treatment effects
```
See the /tests directory for usage examples of most of the included functions.
## Literature
- Baker AC, Larcker DF, Wang CCY (2022) How much should we trust staggered difference-in-differences estimates? *Journal of Financial Economics* 144(2): 370-395. [10.1016/j.jfineco.2022.01.004](https://doi.org/10.1016/j.jfineco.2022.01.004)
- Card D, Krueger AD (1994) Minimum Wages and Employment: A Case Study of the Fast Food Industry in New Jersey and Pennsylvania. *The American Economic Review* 84(4): 772-793. [JSTOR](https://www.jstor.org/stable/2677856)
- de Haas S, Götz G, Heim S (2022) Measuring the effect of COVID‑19‑related night curfews in a bundled intervention within Germany. *Scientific Reports* 12: 19732. [10.1038/s41598-022-24086-9](https://doi.org/10.1038/s41598-022-24086-9)
- Goodman-Bacon A (2021) Difference-in-differences with variation in treatment timing. *Journal of Econometrics* 225(2): 254-277. [10.1016/j.jeconom.2021.03.014](https://doi.org/10.1016/j.jeconom.2021.03.014)
- Greene WH (2012) *Econometric Analysis*.
- Goldfarb A, Tucker C, Wang Y (2022) Conducting Research in Marketing with Quasi-Experiments. *Journal of Marketing* 86(3): 1-19. [10.1177/00222429221082977](https://doi.org/10.1177/00222429221082977)
- Isphording IE, Lipfert M, Pestel N (2021) Does re-opening schools contribute to the spread of SARS-CoV-2? Evidence from staggered summer breaks in Germany. *Journal of Public Economics* 198: 104426. [10.1016/j.jpubeco.2021.104426](https://doi.org/10.1016/j.jpubeco.2021.104426)
- Li KT, Luo L, Pattabhiramaiah A (2024) Causal Inference with Quasi-Experimental Data. *IMPACT at JMR* November 13, 2024. [AMA](https://www.ama.org/marketing-news/causal-inference-with-quasi-experimental-data/)
- Olden A (2018) What do you buy when no one's watching? The effect of self-service checkouts on the composition of sales in retail. Discussion paper FOR 3/18, Norwegian School of Economics, Norway. [http://hdl.handle.net/11250/2490886](http://hdl.handle.net/11250/2490886)
- Olden A, Moen J (2022) The triple difference estimator. *The Econometrics Journal* 25(3): 531-553. [10.1093/ectj/utac010](https://doi.org/10.1093/ectj/utac010)
- Strassmann A, Çolak Y, Serra-Burriel M, Nordestgaard BG, Turk A, Afzal S, Puhan MA (2023) Nationwide indoor smoking ban and impact on smoking behaviour and lung function: a two-population natural experiment. *Thorax* 78(2): 144-150. [10.1136/thoraxjnl-2021-218436](https://doi.org/10.1136/thoraxjnl-2021-218436)
- Villa JM (2016) diff: Simplifying the estimation of difference-in-differences treatment effects. *The Stata Journal* 16(1): 52-71. [10.1177/1536867X1601600108](https://doi.org/10.1177/1536867X1601600108)
- von Bismarck-Osten C, Borusyak K, Schönberg U (2022) The role of schools in transmission of the SARS-CoV-2 virus: quasi-experimental evidence from Germany. *Economic Policy* 37(109): 87–130. [10.1093/epolic/eiac001](https://doi.org/10.1093/epolic/eiac001)
- Wieland T (2025) Assessing the effectiveness of non-pharmaceutical interventions in the SARS-CoV-2 pandemic: results of a natural experiment regarding Baden-Württemberg (Germany) and Switzerland in the second infection wave. *Journal of Public Health: From Theory to Practice* 33(11): 2497-2511. [10.1007/s10389-024-02218-x](https://doi.org/10.1007/s10389-024-02218-x)
- Wooldridge JM (2012) *Introductory Econometrics. A Modern Approach*.
## What's new (v2.2.6)
- Bugfixes:
- Check for correct dates in diddata.create_treatment()
- Check for valid columns in diddata.merge_data()
- Removed unnecessary old dependencies and imports
- Other:
- Changed diddata.DiffGroups.add_segmentation() to return a message rather than raising an exception when the DiffGroups object already includes a benefit group
| text/markdown | Thomas Wieland | geowieland@googlemail.com | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T17:49:41.430438 | diffindiff-2.2.6.tar.gz | 1,653,467 | df/6b/877c238fa05043551f6846ff923aba698f5a1b4b5f37d3bca46728c291fa/diffindiff-2.2.6.tar.gz | source | sdist | null | false | de7447f49b61bda4e35bc8f9dff8f872 | 2ce53df0121d12b8ec5a505ce6bf0dbae4c51d1fe7eaa9970d7d27497127f197 | df6b877c238fa05043551f6846ff923aba698f5a1b4b5f37d3bca46728c291fa | null | [] | 211 |
2.4 | pdfpress | 1.0.5 | PDF toolkit: compress, merge, and unlock PDF files | <div align="center">
<img src="logo.png" alt="pdfpress" width="512"/>
# pdfpress
[](https://www.python.org/)
[](https://pypi.org/project/pdfpress/)
[](https://opensource.org/licenses/MIT)
[](https://www.ghostscript.com/)
**🔧 Compress, merge, split, and unlock PDF files with one tool ⚡**
[Installation](#installation) · [Usage](#usage) · [How It Works](#how-it-works)
</div>
## Overview
pdfpress is a multi-command PDF toolkit that handles your most common PDF chores from the command line — compress bloated files, merge multiple PDFs, extract specific pages, and strip password protection.
## Features
- **Smart compression** - Tries 3 strategies (pikepdf, Ghostscript, combined) and picks the smallest result
- **Batch processing** - Compress multiple files with wildcards (`*.pdf`)
- **Parallel processing** - Compress multiple files concurrently with `-j`
- **Quality presets** - Choose between screen, ebook, printer, or prepress quality
- **Merge PDFs** - Combine multiple files or entire directories into one
- **Grouped merge** - Automatically merge files by name pattern (e.g. `report-1.pdf` + `report-2.pdf` → `report.merged.pdf`)
- **Split/extract pages** - Extract specific pages, ranges, odd/even, or each page individually
- **Unlock PDFs** - Remove password protection from encrypted files
- **Flexible output** - Custom filenames, directories, or in-place replacement
- **Scriptable** - Quiet mode for automation and pipelines
## Installation
### Prerequisites
Install Ghostscript (required for the compress command):
```bash
# macOS
brew install ghostscript
# Ubuntu/Debian
apt install ghostscript
# Fedora/RHEL
dnf install ghostscript
```
### Install from PyPI (recommended)
```bash
pip install pdfpress
```
Or with uv:
```bash
uv tool install pdfpress
```
### Install from source
```bash
pip install git+https://github.com/tsilva/pdfpress.git
```
## Usage
### Compress
```bash
# Compress all PDFs in current directory
pdfpress compress
# Compress a single file (creates document.compressed.pdf)
pdfpress compress document.pdf
# Specify output filename
pdfpress compress document.pdf -o small.pdf
# Batch compress to a directory
pdfpress compress *.pdf -d compressed/
# Replace original files (use with caution)
pdfpress compress -i large.pdf
# Use 4 parallel workers for batch compression
pdfpress compress *.pdf -j 4
# Use screen quality (72 DPI) for smallest size
pdfpress compress document.pdf -Q screen
# Preview compression without saving
pdfpress compress --dry-run
```
#### Compress Options
| Option | Description |
|--------|-------------|
| `-o, --output <file>` | Output filename (single file mode only) |
| `-d, --output-dir <dir>` | Output directory for compressed files |
| `-i, --in-place` | Replace original files |
| `-Q, --quality <preset>` | Quality preset: screen, ebook, printer, prepress |
| `-j, --jobs <n>` | Number of parallel jobs (0 = auto) |
| `-n, --dry-run` | Simulate compression without saving |
| `-q, --quiet` | Suppress output except errors |
#### Quality Presets
| Preset | DPI | Use Case |
|--------|-----|----------|
| `screen` | 72 | Web viewing, smallest size |
| `ebook` | 150 | E-readers and tablets (default) |
| `printer` | 300 | Office printing |
| `prepress` | 300 | Professional printing |
### Merge
```bash
# Merge all PDFs in a directory
pdfpress merge dir/
# Merge specific files into one output
pdfpress merge f1.pdf f2.pdf -o out.pdf
# Merge by filename pattern (report-1.pdf + report-2.pdf → report.merged.pdf)
pdfpress merge dir/ --grouped
# Ask before each group merge
pdfpress merge dir/ --grouped --ask
```
#### Merge Options
| Option | Description |
|--------|-------------|
| `-o, --output <file>` | Output filename |
| `-g, --grouped` | Merge by base name pattern |
| `-a, --ask` | Confirm before each merge |
| `-q, --quiet` | Suppress output except errors |
### Split
```bash
# Extract specific pages
pdfpress split document.pdf -p "1,3,5"
# Extract a page range
pdfpress split document.pdf -p "1-5"
# Extract odd or even pages
pdfpress split document.pdf -p "odd"
pdfpress split document.pdf -p "even"
# Custom output filename
pdfpress split document.pdf -p "1-5" -o out.pdf
# Export each page to a separate file
pdfpress split document.pdf -p "all" -i
# Individual files in a specific directory
pdfpress split document.pdf -p "all" -i -d out/
```
#### Split Options
| Option | Description |
|--------|-------------|
| `-p, --pages <spec>` | Pages to extract: `1,3,5-10`, `all`, `odd`, `even` |
| `-o, --output <file>` | Output filename |
| `-d, --output-dir <dir>` | Output directory |
| `-i, --individual` | Export each page to a separate file |
| `-q, --quiet` | Suppress output except errors |
### Unlock
```bash
# Unlock all PDFs in a directory (prompts for password)
pdfpress unlock dir/
# Unlock with password flag
pdfpress unlock file.pdf -p "secret"
# Custom output filename
pdfpress unlock file.pdf -o unlocked.pdf
# Unlock to a specific directory
pdfpress unlock dir/ -d unlocked/
```
#### Unlock Options
| Option | Description |
|--------|-------------|
| `-o, --output <file>` | Output filename (single file mode only) |
| `-d, --output-dir <dir>` | Output directory for unlocked files |
| `-p, --password <pass>` | Password (prompted interactively if not provided) |
| `-q, --quiet` | Suppress output except errors |
## How It Works
### Compress
The compress command tries three strategies and keeps the smallest result:
| Strategy | Method | Best For |
|----------|--------|----------|
| **pikepdf** | Linearizes and optimizes PDF object streams | Already-optimized PDFs |
| **Ghostscript** | Aggressive image downsampling | Image-heavy PDFs |
| **Combined** | Ghostscript followed by pikepdf optimization | Mixed content |
If none of the strategies produce a smaller file, the original is preserved.
### Merge
Groups files alphabetically when merging a directory. With `--grouped`, strips trailing `-N`/`_N` number suffixes to detect related files and merge each group separately.
### Split
Supports flexible page selection: individual pages (`1,3,5`), ranges (`1-5`), keywords (`all`, `odd`, `even`), and mixed combinations (`1,3,5-10,15`).
### Unlock
Uses pikepdf to open and re-save the PDF without password protection. Skips files that are not encrypted. Writes to a temp file atomically to avoid corrupting the original on failure.
## Example Results
| PDF Type | Original | Compressed | Reduction |
|----------|----------|------------|-----------|
| Scanned document | 434 KB | 38 KB | 91% |
| Digital form | 164 KB | 96 KB | 41% |
| Invoice | 32 KB | 21 KB | 33% |
Results vary depending on PDF content. Image-heavy PDFs typically see the largest reductions.
## Contributing
Found a bug or have a suggestion? Please open an issue:
[GitHub Issues](https://github.com/tsilva/pdfpress/issues)
## License
MIT License - see [LICENSE](LICENSE) for details.
## Author
**Tiago Silva** - [@tsilva](https://github.com/tsilva)
| text/markdown | null | Tiago Silva <tiago@tsilva.com> | null | null | MIT | combine, compression, ghostscript, merge, optimization, password, pdf, pikepdf, unlock | [

"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Multimedia :: Graphics :: Graphics Conversion",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pikepdf>=10.0.0",
"rich>=14.0.0",
"typer>=0.12.0",
"mypy>=1.10.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/tsilva/pdfpress",
"Repository, https://github.com/tsilva/pdfpress",
"Issues, https://github.com/tsilva/pdfpress/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:49:37.882686 | pdfpress-1.0.5.tar.gz | 352,092 | a6/0a/e69399f889750a504a69ff66895425d0971f3838776c335612e6fe4ea380/pdfpress-1.0.5.tar.gz | source | sdist | null | false | 1bfb8f41bd7f011282fde7691d4c8af4 | 604e1bacf143d4e8c526baf77b8b8261013a31d2151be1193ffac9dff7c0b2ab | a60ae69399f889750a504a69ff66895425d0971f3838776c335612e6fe4ea380 | null | [
"LICENSE"
] | 227 |
2.4 | wuzup | 0.1.6 | A project for wuzup | # wuzup
A repo for wuzup.
This is very much a beta / work in progress.
| text/markdown | null | csm10495 <csm10495@gmail.com> | null | null | Copyright 2025 Charles Machalow
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| null | [
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pytesseract",
"Pillow",
"requests",
"beautifulsoup4",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"homepage, https://github.com/csm10495/wuzup",
"repository, https://github.com/csm10495/wuzup",
"documentation, https://csm10495.github.io/wuzup"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T17:48:30.760873 | wuzup-0.1.6.tar.gz | 11,347 | 37/93/febf7786f4a03a751d6c294be6313d34459db9a84830d95c99fd48692ace/wuzup-0.1.6.tar.gz | source | sdist | null | false | c54ea20ec28b3ca0b9bab76be67bd5b6 | eaa3ade44b13743c38b4215f89d58315329f9e00c9afb77b5387ba32cb906f73 | 3793febf7786f4a03a751d6c294be6313d34459db9a84830d95c99fd48692ace | null | [
"LICENSE.md"
] | 206 |
2.4 | kanban-tui | 0.19.2 | customizable task tui powered by textual usable by agents | <!-- Icons -->
[](https://github.com/astral-sh/ruff)
[](https://pypi.org/project/kanban-tui/)
[](https://pypi.python.org/pypi/kanban-tui)
[](https://opensource.org/licenses/MIT)
[](https://pepy.tech/project/kanban-tui)
[](https://coveralls.io/github/Zaloog/kanban-tui?branch=main)
# kanban-tui
A customizable terminal-based task manager powered by [Textual][textual] with multiple backends.
Now also usable in co-op mode with AI agents (see the [CLI Interface](#cli-interface-to-manage-tasks) and [MCP Server](#mcp-server) sections for more info).
## Demo

Try `kanban-tui` instantly without installation:
```bash
uvx kanban-tui demo
```
## Features
<details><summary>following the xdg basedir convention</summary>
kanban-tui utilizes [pydantic-settings] and [xdg-base-dirs]: the config file is saved under `user_config_dir` and the sqlite database under `user_data_dir`.
You can get an overview of all file locations with `uvx kanban-tui info`
</details>
<details><summary>Customizable Board</summary>
kanban-tui comes with four default columns
(`Ready`, `Doing`, `Done`, `Archive`) but can be customized to your needs.
More columns can be created via the `Settings`-Tab, and the visibility and order of columns can be adjusted.
Deleting an existing column is only possible if it contains no tasks.
</details>
<details><summary>Multiple Backends</summary>
kanban-tui currently supports three backends.
- **sqlite** (default) | Supports all features of `kanban-tui`
- **jira** | Connect to your jira instance via api key and query tasks via jql.
Columns are defined by task transitions.
- **claude** | Read the `.json` files under `~/.claude/tasks/`. Boards are created for each session ID.
Supports only a subset of features.
</details>
<details><summary>Multi Board Support</summary>
With version v0.4.0 kanban-tui allows the creation of multiple boards.
Use `B` on the `Kanban Board`-Tab to get an overview of all boards, including
the number of columns and tasks and the earliest due date.
</details>
<details><summary>Task Management</summary>
When on the `Kanban Board`-Tab you can `create (n)`, `edit (e)`, `delete (d)`, move between columns (`H`, `L`), or reorder within a column (`J` down / `K` up).
Movement between columns and reordering within a column are also supported via mouse drag and drop.
Task dependencies can be defined, which restrict moving a task into the `Doing` (start) column.
For these restrictions to take effect, the status columns must be defined on the settings screen.
</details>
<details><summary>Task Dependencies</summary>
Tasks can have dependencies on other tasks, creating a workflow where certain tasks must be completed before others can proceed.
- **Add Dependencies**: When editing a task, use the dependency selector dropdown to add other tasks as dependencies
- **Remove Dependencies**: Select a dependency in the table and press enter to remove it
- **Blocking Prevention**: Tasks with unfinished dependencies cannot be moved to start/finish columns
- **Circular Detection**: The system prevents circular dependencies (Task A depends on Task B, Task B depends on Task A)
- **Visual Indicators**: Task cards show visual cues for dependency status:
- 🔒 "Blocked by X unfinished tasks" - Task has dependencies that aren't finished yet
- ❗ "Blocking Y tasks" - Other tasks depend on this one
- ✅ "No dependencies" - Task has no dependency relationships
- **CLI Support**: Dependencies can be managed via the CLI with the `--depends-on` flag when creating tasks, or using the `--force` flag to override blocking when moving tasks
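The circular-detection rule above boils down to a reachability check on the dependency graph. The sketch below is illustrative, not kanban-tui's actual implementation: adding "task depends on new_dep" creates a cycle exactly when new_dep already depends (transitively) on task.

```python
def would_create_cycle(deps: dict[int, set[int]], task: int, new_dep: int) -> bool:
    """Return True if adding `task depends on new_dep` would create a cycle.

    `deps` maps a task id to the set of task ids it depends on.
    """
    if task == new_dep:
        return True  # a task can never depend on itself
    # Walk new_dep's dependency chain; hitting `task` means a cycle.
    stack, seen = [new_dep], set()
    while stack:
        current = stack.pop()
        if current == task:
            return True
        if current in seen:
            continue
        seen.add(current)
        stack.extend(deps.get(current, ()))
    return False
```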
</details>
<details><summary>Database Information</summary>
The current database schema looks as follows.
The Audit table is filled automatically based on triggers.
```mermaid
erDiagram
tasks }|--o| categories: have
tasks }|--|| audits: updates
tasks ||--o{ task_dependencies: "blocks"
tasks ||--o{ task_dependencies: "blocked_by"
tasks {
INTEGER task_id PK
INTEGER column FK
INTEGER category FK
TEXT title
TEXT description
DATETIME creation_date
DATETIME start_date
DATETIME finish_date
DATETIME due_date
TEXT metadata
}
task_dependencies {
INTEGER dependency_id PK
INTEGER task_id FK
INTEGER depends_on_task_id FK
}
boards }|--o{ columns: contains
boards }|--|| audits: updates
boards {
INTEGER board_id PK
INTEGER reset_column FK
INTEGER start_column FK
INTEGER finish_column FK
TEXT name
TEXT icon
DATETIME creation_date
}
columns ||--|{ tasks: contains
columns }|--|| audits: updates
columns {
INTEGER column_id PK
INTEGER board_id FK
TEXT name
BOOLEAN visible
INTEGER position
}
categories {
INTEGER category_id PK
TEXT name
TEXT color
}
audits {
INTEGER event_id PK
DATETIME event_timestamp
TEXT event_type
TEXT object_type
INTEGER object_id
TEXT object_field
TEXT value_old
TEXT value_new
}
```
</details>
<details><summary>Visual Summary and Audit Table</summary>
To give you an overview over the amount of tasks you `created`, `started` or `finished`, kanban-tui
provides an `Overview`-Tab to show you a bar-chart on a `monthly`, `weekly` or `daily` scale.
It also can be changed to a stacked bar chart per category.
This feature is powered by the [plotext] library with help of [textual-plotext].
There is also an audit table, which tracks the creation/update/deletion of tasks/boards and columns.
</details>
## Installation
You can install `kanban-tui` with one of the following options:
```bash
uv tool install kanban-tui
```
```bash
pipx install kanban-tui
```
```bash
# not recommended
pip install kanban-tui
```
I recommend using [pipx] or [uv] to install CLI Tools into an isolated environment.
To be able to use `kanban-tui` in your browser with the `--web`-flag, the optional dependency
`textual-serve` is needed. You can add this to `kanban-tui` by installing the optional `web`-dependency
with the installer of your choice, for example with [uv]:
```bash
uv tool install 'kanban-tui[web]'
```
## Usage
kanban-tui now also supports the `kanban-tui` entrypoint in addition to `ktui`.
This was added to allow easier invocation via [uv]'s `uvx` command.
### Normal Mode
Start `kanban-tui` by just running the tool without any command. The application can be closed by pressing `ctrl+q`.
Pass the `--web` flag and follow the shown link to open `kanban-tui` in your browser.
```bash
ktui
```
### Demo Mode
Creates a temporary config and database populated with example tasks to play around with.
kanban-tui will delete the temporary config and database after closing the application.
Pass the `--clean` flag to start with an empty demo app.
Pass the `--keep` flag to tell `kanban-tui` not to delete the temporary Database and Config.
Pass the `--web` flag and follow the shown link to open `kanban-tui` in your browser.
```bash
ktui demo
```
### Clear Database and Configuration
If you want to start with a fresh database and configuration file, you can use this command to
delete your current database and configuration file.
```bash
ktui clear
```
### Create or Update Agent SKILL.md File
With version v0.11.0 kanban-tui offers a [CLI Interface](#cli-interface-to-manage-tasks) to manage tasks, boards and columns.
This is targeted mainly at agentic use, e.g. via [Claude][claude-code], because references are made by IDs only, but some commands
are also ergonomic for human use (e.g. task or board creation).
```bash
ktui skill init/update/delete
```
### CLI Interface to manage Tasks
The commands to manage tasks, boards and columns via the CLI are all built up similarly. For a detailed overview of arguments
and options, please use the `--help` flag.
Note that not every functionality is supported yet (e.g. category management, column customisation).
```bash
ktui task list/create/update/move/delete
ktui board list/create/delete/activate
ktui column list
```
### MCP Server
In addition to skills, `kanban-tui` can be run as a local MCP server, which exposes the `ktui task/board/column` commands.
This requires the optional `mcp` dependency, which can be installed via `uv tool install 'kanban-tui[mcp]'`. It utilizes [pycli-mcp]
to expose the commands directly.
Running the bare `ktui mcp` command shows the instructions for adding the `kanban-tui` MCP server to [claude-code]. The server itself is
started using the `--start-server` flag.
```bash
ktui mcp
```
### Show Location of Data, Config and Skill Files
`kanban-tui` follows the [XDG] basedir spec and uses the [xdg-base-dirs] package to determine the locations for data and config files.
You can use this command to check where the files that `kanban-tui` creates are located on your system.
```bash
ktui info
```
## Feedback and Issues
Feel free to reach out and share your feedback, or open an [Issue],
if something doesn't work as expected.
Also check the [Changelog] for new updates.
<!-- Repo Links -->
[Changelog]: https://github.com/Zaloog/kanban-tui/blob/main/CHANGELOG.md
[Issue]: https://github.com/Zaloog/kanban-tui/issues
<!-- external Links Python -->
[textual]: https://textual.textualize.io
[pipx]: https://github.com/pypa/pipx
[PyPi]: https://pypi.org/project/kanban-tui/
[plotext]: https://github.com/piccolomo/plotext
[textual-plotext]: https://github.com/Textualize/textual-plotext
[xdg-base-dirs]: https://github.com/srstevenson/xdg-base-dirs
[pydantic-settings]: https://pypi.org/project/pydantic-settings/
[pycli-mcp]: https://github.com/ofek/pycli-mcp
<!-- external Links Others -->
[XDG]: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
[uv]: https://docs.astral.sh/uv
[claude-code]: https://code.claude.com/docs/en/overview
| text/markdown | null | Zaloog <gramslars@gmail.com> | null | null | MIT | cli, kanban, mcp, python, tasks, textual, tui | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1.7",
"pydantic-settings>=2.10.1",
"pydantic>=2.11.7",
"python-dateutil>=2.9.0.post0",
"textual-jumper>=0.2.0",
"textual-plotext>=1.0.0",
"textual>=6.1.0",
"tomli-w>=1.2.0",
"tzdata>=2025.2",
"xdg-base-dirs>=6.0.2",
"atlassian-python-api>=4.0.7; extra == \"jira\"",
"pycli-mcp>=0.3.0; extra == \"mcp\"",
"textual-serve>=1.1.1; extra == \"web\""
] | [] | [] | [] | [
"Repository, https://github.com/Zaloog/kanban-tui",
"Changelog, https://github.com/Zaloog/kanban-tui/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T17:48:02.632233 | kanban_tui-0.19.2.tar.gz | 3,718,873 | 5e/8b/e1da0a0fb64c42a6d1adcf9edb03fa43c80596c2a6cd416ed785cd707870/kanban_tui-0.19.2.tar.gz | source | sdist | null | false | 4fd914867c16d6ee9459def92abd0eb0 | 05a7a31421ea26b828bf6fb2aac47a27242ace517e4c8e774cceb42c10f6d62f | 5e8be1da0a0fb64c42a6d1adcf9edb03fa43c80596c2a6cd416ed785cd707870 | null | [
"LICENSE.txt"
] | 229 |